Dataset schema (each field is a string; the viewer's min-max length statistics are kept below):

| column         | length (min-max) |
|----------------|------------------|
| id             | 10-10            |
| title          | 7-231            |
| abstract       | 3-2.43k          |
| authors        | 5-21.5k          |
| published_date | 20-20            |
| link           | 33-34            |
| markdown       | 133-1.92M        |
2309.10183
Bearing and Distance Formation Control of Rigid Bodies in SE(3) with Bearing and Distance Constraints
Rigidity of the interaction graph is a fundamental condition for achieving a desired formation, which can be defined in terms of distance or bearing constraints between agents. In this paper, both distance and bearing constraints are considered for defining the desired formation, so that the agents reach a unique formation with the same scaling and orientation as the target formation. In addition, both distance and bearing measurements are available: each agent gathers measurements with respect to the other agents in its own body frame, so the agents are coordinate-free with respect to a global reference frame. The framework is embedded in SE(3). The control signal is designed by a gradient descent method applied to an introduced cost function. First, the formation problem is considered for bearing-only constraints in the SE(3) configuration. Then, the formation control is expressed for the general case of both bearing and distance constraints. Furthermore, the essential conditions that guarantee reaching the desired formation are discussed. Finally, the validity of the proposed formation control is verified by numerical simulations.
Sara Mansourinasab, Mahdi Sojoodi, Seyed Reza Moghadasi
2023-09-18T22:20:48Z
http://arxiv.org/abs/2309.10183v1
# Bearing and Distance Formation Control of Rigid Bodies in \(SE(3)\) with Bearing and Distance Constraints ###### Abstract Rigidity of the interaction graph is a fundamental condition for achieving a desired formation, which can be defined in terms of distance or bearing constraints between agents. In this paper, both distance and bearing constraints are considered for defining the desired formation, so that the agents reach a unique formation with the same scaling and orientation as the target formation. In addition, both distance and bearing measurements are available: each agent gathers measurements with respect to the other agents in its own body frame, so the agents are coordinate-free with respect to a global reference frame. The framework is embedded in \(SE(3)\). The control signal is designed by a gradient descent method applied to an introduced cost function. First, the formation problem is considered for bearing-only constraints in the \(SE(3)\) configuration. Then, the formation control is expressed for the general case of both bearing and distance constraints. Furthermore, the essential conditions that guarantee reaching the desired formation are discussed. Finally, the validity of the proposed formation control is verified by numerical simulations. ## I Introduction Formation control is an actively studied strategy for analyzing multi-agent systems [1, 2]. In formation control, the type of data the agents have access to plays an important role in designing the control strategy. Based on the sensing capabilities for measuring relative positions between agents, formation control methods are categorized into three types: position-based, displacement-based, and distance-based. The former two methods are based on global position measurements, which necessitates, for example, the use of the global positioning system (GPS). Each agent then shares its information with other agents through wireless communication [3]. GPS is not always reliable: measurement accuracy depends strongly on the number of available satellites, and environments such as indoor spaces, deep urban canyons, dense vegetation, and cloud cover obstruct the line of sight to the satellites [4]. In the distance-based method, the distances between agents are controlled to achieve the desired formation. In this method, the sensing capability is defined only with respect to the local coordinate systems (body frames) of the agents, and no GPS is required. Local-coordinate formation control procedures include distance-based and bearing-based methods, depending on the type of measured and constrained parameters. Formation rigidity theory makes it possible to achieve the desired formation, up to scaling, translation, and coordinated rotation, from its inter-agent bearings and distances. Classically, for a given system connected by flexible linkages and hinges, rigidity quantifies the stiffness of the framework against an induced deformation [5]. In a rigid formation, convergence to the desired formation can be achieved by distance and/or bearing measurements. Moreover, the desired formation can be defined by distance and/or bearing constraints between pairs of agents; such controllers are classified as distance/bearing-based controllers. In distance-based formation control, each agent has its own body frame, which need not be aligned with the body frames of the other agents. The distance-based rigidity problem has been studied in many investigations [6, 7, 8].
In most of them, the target formation constraints are defined only as inter-agent distances. In bearing-based theory, the desired formation constraints are defined in terms of the directions of the neighboring agents. In other words, the unit vector aligned with the edge connecting two agents is taken as the inter-agent bearing between those two agents. The works [9, 10] have extensively studied multi-agent strategies based on bearing rigidity theory for converging to the desired formation, generally assuming that each agent is able to measure the bearings and distances of its nearest neighbors. Approaches that rely only on bearing measurements are posed as bearing-only formation control problems. Vision-based devices are suitable for bearing measurements; optical cameras, for instance, are low-cost, lightweight onboard sensors that act as bearing-only sensors. Since bearings carry no scale information, the resulting formation may differ in scale from the desired formation. In this paper, formation control for a multi-rigid-body system is proposed considering bearing measurements and constraints. Since the motion of a rigid body is a combination of translational and rotational movements, the notion of bearing rigidity comes into play for synchronized target formations. As an illustrative example, the satellites of a multi-satellite system orbiting the Earth must maintain specific orientations to cover a specific region on Earth. As a result, in multi-rigid-body problems, using the bearing rigidity concept is beneficial, more accurate, and low-cost. Geometrically speaking, the roto-translational motion of a rigid body evolves on the state space \(SE(3)\), which is a Lie group. Using bearing measurements for analyzing such systems is compatible with the geometry of the problem [11]. In bearing rigidity problems, the only bearing-preserving motions are translations and scalings of the entire system [12]. However, the distance-preserving motions in distance-based problems include roto-translational motions. It is clear that the
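The gradient-descent design described in this introduction can be illustrated numerically. The following Python fragment is a minimal, translation-only simplification: it descends the bearing-error cost \(\frac{1}{2}\sum_{(i,j)}\|g_{ij}-g_{ij}^{*}\|^{2}\) for single-integrator agents, ignoring the rotational part of the \(SE(3)\) dynamics and the distance terms treated in the paper. All names, gains, and the three-agent example are our assumptions, not the paper's controller.

```python
import numpy as np

def bearing_control(P, edges, g_star, k=1.0):
    """One gradient-descent step on the bearing-error cost.

    P: (N, 3) agent positions; edges: list of (i, j) index pairs;
    g_star: desired unit bearings g*_ij, one per edge."""
    u = np.zeros_like(P)
    for (i, j), g_d in zip(edges, g_star):
        delta = P[j] - P[i]
        d = np.linalg.norm(delta)
        g = delta / d                        # current bearing g_ij
        Pg = np.eye(3) - np.outer(g, g)      # projector orthogonal to g
        f = k * (Pg @ (g - g_d)) / d         # negative gradient w.r.t. p_i
        u[i] += f                            # agents i and j get opposite terms
        u[j] -= f
    return u

# Hypothetical three-agent example converging to a right-angle bearing pattern.
P = np.array([[0.0, 0.0, 0.0], [2.0, 0.5, 0.0], [0.5, 2.0, 0.0]])
edges = [(0, 1), (0, 2)]
g_star = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
for _ in range(500):
    P = P + 0.05 * bearing_control(P, edges, g_star)
```

Because the cost depends only on directions, this simplified law leaves the scale of the formation free, which is exactly why the paper adds distance constraints to pin down a unique formation.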
2309.08914
Outram: One-shot Global Localization via Triangulated Scene Graph and Global Outlier Pruning
One-shot LiDAR localization refers to the ability to estimate the robot pose from one single point cloud, which yields significant advantages in initialization and relocalization processes. In the point cloud domain, the topic has been extensively studied as a global descriptor retrieval (i.e., loop closure detection) and pose refinement (i.e., point cloud registration) problem, both in isolation and in combination. However, few have explicitly considered the relationship between candidate retrieval and correspondence generation in pose estimation, leaving them brittle to substructure ambiguities. To this end, we propose a hierarchical one-shot localization algorithm called Outram that leverages substructures of 3D scene graphs for locally consistent correspondence searching and global substructure-wise outlier pruning. Such a hierarchical process couples the feature retrieval and the correspondence extraction to resolve the substructure ambiguities by conducting a local-to-global consistency refinement. We demonstrate the capability of Outram in a variety of scenarios in multiple large-scale outdoor datasets. Our implementation is open-sourced: https://github.com/Pamphlett/Outram.
Pengyu Yin, Haozhi Cao, Thien-Minh Nguyen, Shenghai Yuan, Shuyang Zhang, Kangcheng Liu, Lihua Xie
2023-09-16T07:39:00Z
http://arxiv.org/abs/2309.08914v1
# Outram: One-shot Global Localization via Triangulated Scene Graph and Global Outlier Pruning ###### Abstract One-shot LiDAR localization refers to the ability to estimate the robot pose from one single point cloud, which yields significant advantages in initialization and relocalization processes. In the point cloud domain, the topic has been extensively studied as a global descriptor retrieval (i.e., loop closure detection) and pose refinement (i.e., point cloud registration) problem, both in isolation and in combination. However, few have explicitly considered the relationship between candidate retrieval and correspondence generation in pose estimation, leaving them brittle to substructure ambiguities. To this end, we propose a hierarchical one-shot localization algorithm called Outram that leverages substructures of 3D scene graphs for locally consistent correspondence searching and global substructure-wise outlier pruning. Such a hierarchical process couples the feature retrieval and the correspondence extraction to resolve the substructure ambiguities by conducting a local-to-global consistency refinement. We demonstrate the capability of Outram in a variety of scenarios in multiple large-scale outdoor datasets. Our implementation is open-sourced: [https://github.com/Pamphlett/Outram](https://github.com/Pamphlett/Outram). ## I Introduction LiDAR-based localization problems can be stated in the following general form. We are given a point cloud \(\mathcal{P}\) produced by a LiDAR, and a reference point cloud \(\mathcal{Q}\), which can be either another LiDAR scan [1], an accumulated submap [2], or even the entire mapping space [3]. Given a point correspondence \(i\in\mathcal{I}\), \(\mathbf{p}_{i}\in\mathcal{P}\) and \(\mathbf{q}_{i}\in\mathcal{Q}\) can be associated and represented in the residual \(r_{i}:=\|\mathbf{T}\mathbf{p}_{i}-\mathbf{q}_{i}\|\in[0,\infty)\), where \(\mathbf{T}\) is the ground truth localization result we are searching for. While the estimation problem can be relatively easy in special cases (e.g., the size of the point clouds is constrained or an approximation \(\mathbf{T}_{initial}\) is known a priori), it can be hard in general [4, 5] due to the limited descriptiveness of local features and the computational complexity. These general, prior-free cases are what we encounter in relocalization or global localization problems. To address this problem, prior works [6, 7, 8, 9, 10] usually break it down into a retrieval phase and a pose estimation phase, where several candidates are generated first and verified later for final pose estimation. These decoupled approaches, however, suffer from local ambiguities where the substructures in different LiDAR scans are similar, leading to false candidate retrieval. Rather than attempting to find the most appearance-similar keyframe through one single retrieval, we propose an algorithm called Outram for one-shot, accurate, and efficient global localization directly against a reference map. Different from existing works, we rely on local substructures of a 3D scene graph to generate locally consistent correspondences directly with the map, and further find the inlier correspondences that are globally consistent across substructures. This leads to the proposed local-to-global, hierarchical global localization algorithm with the following contributions: * We propose a novel representation encoding substructures of 3D scene graphs for efficient locally consistent correspondence generation.
* Together with a subsequent graph-theoretic pruning module, we propose an accurate, efficient, and one-shot global localization pipeline for large-scale outdoor environments. * Extensive experiments are conducted on publicly available datasets in a global localization setup, showing superior robustness compared with the current state of the art. We further open-source our implementation to benefit the community. Fig. 1: Illustration of the proposed global localization algorithm on the MulRan DCC dataset [11]. We generate 3D scene graphs and leverage their substructures for locally consistent correspondence generation. With these raw correspondences, we exploit a graph-theoretic outlier pruning process for globally consistent inlier extraction (green lines). The point cloud transformed by the estimated pose is shown in green in the enlarged area. The semantically segmented point cloud and the grey map point cloud are shown for visualization purposes only. ## II Related Work In the literature, LiDAR-based relocalization or global localization methods can be broadly categorized into two branches by whether a movement of the robot is needed, namely one-shot localization and iterative localization. Iterative localization methods are usually formulated in a Monte Carlo localization manner [12, 13], where the movement of the robot provides more environment observations and thus updates the weight of each particle until convergence. With accumulated submaps and generated dense segments, SegMap [14] proposed a data-driven retrieval mechanism for global localization. On the contrary, one-shot localization refers to algorithms that solve the global localization problem in a fully prior-free manner. We further divide the literature on this into two groups: loop closure detection-based methods and registration-based methods. The next subsections provide more details on each of the two groups. ### _Loop closure detection-based global localization_ Loop closure detection (LCD) methods identify previously visited places by encoding current LiDAR measurements into global descriptors and comparing them to a database constructed from historical frames. The construction process can be divided into global and local approaches. Very similar to their visual counterparts, several local methods [15, 16] detect 3D keypoints and aggregate them into a global representation, with the retrieval process arranged in a bag-of-words manner. Alternatively, global methods directly encode the whole LiDAR scan. The encoding pattern can be either handcrafted or learned. Scan Context [6] encodes the geometric information of a point cloud into a bird's-eye-view global descriptor. Following the same data structure, several variants [17, 10] have been proposed to enhance the descriptiveness by extending the original geometric-only representation with either the inherent intensity information [17] or high-level semantics [10]. Several methods leverage deep neural networks to generate global descriptors directly [18, 19]. Uy et al. [18] combined PointNet [20] and NetVLAD [18] to generate compact global representations. Chen et al. [19] proposed an LCD network that estimates the overlap ratio between point clouds. More recently, techniques exploiting the graph structure of local features have been proposed [21, 9, 22, 7]. Methods involving point cloud semantics [21, 9, 22] usually encode the scene by instances and the spatial layout in between.
LCD is accomplished by conducting similarity checks between these semantic instance graphs. Yuan et al. [7] proposed to aggregate local point features to form a triangle-based descriptor (the simplest graph). Leveraging the side lengths of each triangle as the hash table key, loop closure candidates can be found by a voting scheme. Further, the relative transformation is calculated and verified using planes in the scene. It is natural to extend LCD methods to global relocalization due to the similar retrieval mechanism [23]. Nevertheless, local ambiguities and scene changes can make descriptor-based algorithms brittle. Moreover, while geometric verification is widely employed after the retrieval process, it can be both time-consuming and inaccurate [24]. Additionally, even if a geometrically proximate candidate is retrieved from the database, LCD methods usually rely on local point features for the subsequent pose estimation, which can suffer from feature degeneracy [4, 5], making pose estimation impossible. ### _Registration-based methods_ Instead of finding one single keyframe in the pre-built LCD database, registration-based methods seek to solve the global localization problem in a point cloud registration manner. There exist two concerns [25] when leveraging point cloud registration to solve global localization problems: correspondence generation and outlier pruning. For correspondence extraction, it is not computationally feasible to directly extract correspondences at the point level. Consequently, several methods leverage high-level representations in place of geometric points. Ankenbauer et al. [26] proposed semantic object maps for global localization by formulating a global registration problem, where all-to-all correspondences are built between semantic objects within the local and global maps. In a very similar sense, all-to-all correspondences are also employed in [27] for global localization, while the semantic clusters to be registered come from different modalities. In the outlier correspondence pruning stage, the two aforementioned methods both send the prebuilt correspondences into a graph-theoretic inlier selection module, where the inlier set is modeled as the maximum clique (MC) of the consistency graph [28]. Additionally, RANSAC-based methods [8] are also ubiquitous in the outlier pruning stage, although they have been shown to be brittle to high outlier ratios [28, 29]. In comparison with these methods, we propose a novel substructure representation of a semantically segmented point cloud, rather than semantic clusters alone, for more informative correspondence extraction. We empirically demonstrate in Section IV how our proposition is more computationally tractable and accurate than the previous state of the art. ## III Methodology In this section, we first formulate the global localization problem considered herein. Next, we present our proposed one-shot global localization algorithm Outram with its two sub-modules: we first leverage local substructures in a 3D scene graph for correspondence generation, and then prune the correspondences globally with a substructure-wise consistency check.
### _Registration-based One-shot Global Localization_ The point cloud registration problem is formulated as acquiring the pose transformation \(\mathbf{T}\) of a single query LiDAR point cloud \(\mathcal{P}=\left\{\mathbf{p}_{i}\in\mathbb{R}^{3}\right\}_{i=1}^{n}\) against a prebuilt point cloud map \(\mathcal{M}=\left\{\mathbf{m}_{j}\in\mathbb{R}^{3}\right\}_{j=1}^{m}\) accumulated from a series of scans in the world frame collected during a time span \([1,t]\). The pose transformation is defined as: \[\mathbf{T}\triangleq[\mathbf{R},\mathbf{t}]\in\mathrm{SO}(3)\times\mathbb{R}^{3}, \tag{1}\] where \(\mathbf{R}\) represents the rotation and \(\mathbf{t}\) the translation. With the unknown ground truth transformation, corresponding points in the query scan and the reference map can be associated as: \[\mathbf{m}_{j}=\mathbf{R}\mathbf{p}_{i}+\mathbf{t}+\mathbf{o}_{ij}, \tag{2}\] where \(\mathbf{o}_{ij}\) is the measurement error. Finding the pose transformation typically includes three steps: find an initial data association \(\mathcal{I}\subseteq[n]\times[m]:=\{1,\ldots,n\}\times\{1,\ldots,m\}\), prune the initial correspondence set \(\mathcal{I}\) to the inlier set \(\mathcal{I}^{\star}\), and estimate the pose transformation with the inlier set \(\mathcal{I}^{\star}\) [30]. In the following sections, we detail how we leverage scene graphs for informative correspondence generation and efficient outlier pruning. We also empirically demonstrate in Section IV how our proposed method is superior in scalability to existing works [26] that leverage all-to-all correspondences for global localization problems. ### _Triangulated 3D Scene Graph_ Since point-feature-level correspondence generation is not computationally feasible in global localization problems, we leverage 3D scene graphs for correspondence extraction at the higher instance level. Different from previous works [26, 31] that build all-to-all correspondences or leverage instance-only descriptors, we present a new representation, the triangulated 3D scene graph, for informative and efficient correspondence generation. Given the query point cloud \(\mathcal{P}=\left\{\mathbf{p}_{i}\right\}_{i=1}^{n}\), we employ a state-of-the-art point cloud semantic segmentation network [32] to match a point \(\mathbf{p}_{i}\) with a semantic label \(l\in\mathcal{L}\). The network thus acts as a mapping \(\lambda(\mathbf{p}_{i})\colon\mathbb{R}^{3}\rightarrow\mathcal{L}\subset\mathbb{N}\). Hence, we can define the semantic point cloud \(\mathcal{S}\) as: \[\mathcal{S}=\left\{s_{i}\,|\,s_{i}=\left(\mathbf{p}_{i},\lambda\left(\mathbf{p}_{i}\right)\right),\forall\mathbf{p}_{i}\in\mathcal{P}\right\}. \tag{3}\] Subsequently, we leverage the projection-based clustering method [33] to generate instances from the semantic point cloud with the same label \(l\): \[\mathbf{C}^{l}=\{C_{k}\subset\mathcal{P}\,|\,k=1,\ldots,N;\ l=\lambda(\mathbf{p}_{i})=\lambda(\mathbf{p}_{j}),\ \forall\mathbf{p}_{i},\mathbf{p}_{j}\in C_{k}\}. \tag{4}\] We further enhance these semantic clusters by approximating each of them as a Gaussian distribution: \[\boldsymbol{\mu}_{k}=\frac{1}{|C_{k}|}\sum_{\mathbf{p}_{i}\in C_{k}}\mathbf{p}_{i},\qquad\boldsymbol{\Sigma}_{k}=\frac{1}{|C_{k}|}\sum_{\mathbf{p}_{i}\in C_{k}}(\mathbf{p}_{i}-\boldsymbol{\mu}_{k})(\mathbf{p}_{i}-\boldsymbol{\mu}_{k})^{\top}. \tag{5}\]
The query scan and the reference semantic map can then be represented by sets of semantic Gaussian distributions \(\mathbb{C}_{\mathcal{A}}=\left\{\mathcal{A}_{i}\sim\mathcal{N}\left(\mathbf{a}_{i},\boldsymbol{\Sigma}_{\mathcal{A}_{i}}\right)\right\}\) and \(\mathbb{C}_{\mathcal{B}}=\left\{\mathcal{B}_{j}\sim\mathcal{N}\left(\mathbf{b}_{j},\boldsymbol{\Sigma}_{\mathcal{B}_{j}}\right)\right\}\), respectively. These semantic Gaussian distributions then act as primitives for establishing correspondences and for the later pose estimation. This representation is beneficial for correspondence generation, as the covariance depicts the shape of each semantic instance, which serves as an additional metric for similarity checks. Such semantic lifting also structures the query scan as a two-layer scene graph, with the semantically segmented points at the lower level and the semantic instances as vertices at the upper level. Each edge in the scene graph encodes the spatial relationship as well as the semantic topological information between two semantic instances. ### _Correspondence Generation via Local Substructures_ We then leverage substructures of the scene graph for correspondence generation. Inspired by STD [7], we triangulate each scene graph to form a series of triangles as the minimal representation for local similarity measurement and subsequent correspondence generation. To be more specific, each anchor semantic cluster \(\mathcal{A}_{i}\sim\mathcal{N}\left(\mathbf{a}_{i},\boldsymbol{\Sigma}_{\mathcal{A}_{i}}\right)\) is associated with its \(K\) nearest clusters \(\left\{\mathcal{A}_{j}\right\}_{j=1}^{K}\). Afterward, we exhaustively select two of the neighbors, together with the anchor cluster, i.e., \(\mathcal{A}_{1}\), \(\mathcal{A}_{2}\) and \(\mathcal{A}_{3}\), to form one triangle representation of the current scene graph. By an abuse of notation, we denote it as \(\Delta\left(\mathcal{A}_{1,2,3}\right)\), which comprises the following attributes: * \(\mathbf{a}_{1}\), \(\mathbf{a}_{2}\), \(\mathbf{a}_{3}\): centroids of the semantic clusters; * \(\boldsymbol{\Sigma}_{1}\), \(\boldsymbol{\Sigma}_{2}\), \(\boldsymbol{\Sigma}_{3}\): corresponding covariance matrices; * \(d_{12}\), \(d_{23}\), \(d_{31}\): three side lengths, \(d_{12}\leq d_{23}\leq d_{31}\); * \(l_{1}\), \(l_{2}\), \(l_{3}\): three semantic labels associated with each vertex of the triangle. Similar to STD [7], we build a hash table using only the sorted side lengths \(d_{12}\), \(d_{23}\), and \(d_{31}\) as the key, due to their simplicity and permutation invariance. The other attributes are kept for verification purposes. In the searching process, we have the triangulated scene graphs of the query scan and the reference map: \[\Delta\textit{Query}=\left\{\Delta\left(\mathcal{A}_{1,2,3}^{n}\right)\right\}_{n=1}^{N},\qquad\Delta\textit{Map}=\left\{\Delta\left(\mathcal{B}_{1,2,3}^{m}\right)\right\}_{m=1}^{M}, \tag{6}\] where \(n\) and \(m\) are the indexes of triangle descriptors in the query and map scene graphs, respectively. We drop the subscript and denote \(\Delta\left(\mathcal{A}_{1,2,3}^{n}\right)\) as \(\Delta\mathcal{A}^{n}\) for clarity. As shown in Fig. 3, querying each of the triangles (e.g., \(\Delta\mathcal{A}^{1}\)) against the hash table constructed from the reference semantic scene graph will produce multiple responses \(\left\{\Delta\mathcal{B}^{q}\right\}_{q=1}^{Q}\), as similar substructures can exist throughout the whole mapping region.
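The side-length hashing just described can be prototyped compactly. The sketch below is our own simplification (the quantization step `res`, neighbor count `K`, and helper names are assumptions, not from the paper): sorting the side lengths makes the key permutation invariant, and quantization tolerates small centroid noise.

```python
import itertools
from collections import defaultdict

import numpy as np
from scipy.spatial import cKDTree

def triangle_key(a, b, c, res=0.5):
    """Sorted, quantized side lengths -> a permutation-invariant hash key."""
    sides = sorted(np.linalg.norm(p - q) for p, q in ((a, b), (b, c), (c, a)))
    return tuple(int(round(s / res)) for s in sides)

def build_table(centroids, K=5, res=0.5):
    """Hash every triangle formed by an anchor cluster and two of its K nearest neighbors."""
    table = defaultdict(list)
    tree = cKDTree(centroids)
    for i, c in enumerate(centroids):
        _, nbrs = tree.query(c, k=K + 1)   # the first hit is the anchor itself
        for j, k in itertools.combinations(nbrs[1:], 2):
            key = triangle_key(centroids[i], centroids[j], centroids[k], res)
            table[key].append((i, j, k))
    return table
```

Looking up a query triangle's key in the map table yields the multiple responses \(\{\Delta\mathcal{B}^{q}\}\) discussed above; each collision contributes raw cluster-wise correspondences to the set \(\mathcal{I}_{\text{raw}}\).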
We further leverage the semantic labels \(l\), as well as the covariance matrices \(\boldsymbol{\Sigma}\), associated with each vertex for another round of similarity checks for semantic and shape resemblance. For the semantic labels, we simply employ the equality condition; for the covariance matrices, the Wasserstein distance is applied for similarity measurement. Fig. 2: One LiDAR scan from the MulRan DCC dataset and its corresponding triangulated 3D scene graph. Colored spheres are the centroids of each semantic cluster, with cars in purple, tree trunks in brown, and poles in yellow. After querying all triangles in the query scene graph, a set of raw cluster-wise correspondences \(\mathcal{I}_{\text{raw}}\) can be naturally built, as the sorted side lengths offer a direct mapping between semantic clusters. Although the descriptor-based retrieval process presented above is very similar to the one in any ordinary LCD, we highlight that we neither solve for the pose nor produce multiple candidates here. Instead, we leverage these locally similar substructures, i.e., the triangulated scene graph, to build the coarse correspondences. This is different from both local feature-aggregation-based LCD methods [15, 16] and global feature-based LCD methods [6, 8]: local features are stacked only for retrieval purposes in the former, and correspondences are generated only after retrieval in the latter. We implicitly fold the candidate selection process into the correspondence generation stage, which guarantees local similarity while retaining the possibility of exploring scene-wise global similarity in the next stage. ### _Global Graph-theoretic Outlier Pruning_ With the prebuilt correspondence set \(\mathcal{I}_{\text{raw}}\) that associates subareas of the current scene with locally similar ones in the reference map, we seek to find an area that maximizes the number of mutually consistent correspondences while maintaining the consistency between these local structures: \[\max_{\mathcal{I}\subset\mathcal{I}_{\text{raw}}}\left|\mathcal{I}\right|\quad\text{s.t. }\mathcal{D}\left(\mathcal{I}_{i},\mathcal{I}_{j}\right)\leq\epsilon,\ \forall\mathcal{I}_{i},\mathcal{I}_{j}\in\mathcal{I}, \tag{7}\] where \(\mathcal{D}\) is a metric consistency check indicating whether two correspondences are mutually consistent and \(\epsilon\) is the threshold. Namely, for two correspondences \(\mathcal{I}_{i}\) and \(\mathcal{I}_{j}\), with their corresponding semantic clusters \(\mathcal{A}_{i},\mathcal{B}_{i}\) and \(\mathcal{A}_{j},\mathcal{B}_{j}\), the consistency check is defined as \[\mathcal{D}\left(\mathcal{I}_{i},\mathcal{I}_{j}\right)\triangleq\text{dist}\left(\mathcal{A}_{ij},\mathcal{B}_{ij}\right), \tag{8}\] with \(\mathcal{A}_{ij}:=\mathcal{A}_{i}-\mathcal{A}_{j}\) and \(\mathcal{B}_{ij}:=\mathcal{B}_{i}-\mathcal{B}_{j}\) the distribution differences between semantic clusters. It is worth noting that the consistency check \(\mathcal{D}\) can vary, from the simplest Euclidean distance [28, 26] to distribution distances [5]. Problem (7) can be solved by formulating the correspondence set as a consistency graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where each vertex in \(\mathcal{V}\) represents one correspondence and each edge in \(\mathcal{E}\) indicates that two correspondences are mutually consistent under the check \(\mathcal{D}\). Afterward, finding the inlier set is equivalent to searching for the maximum clique of the consistency graph.
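A minimal prototype of this pruning step is sketched below, using the simplest Euclidean consistency check mentioned above (inlier pairs must agree on inter-centroid distances, which rigid transforms preserve) and networkx's exact maximum-clique search standing in for the PMC solver used by the authors; all names are ours.

```python
import itertools

import numpy as np
import networkx as nx

def prune_correspondences(corr, eps=0.3):
    """corr: list of (a_centroid, b_centroid) pairs, each a length-3 np.ndarray.
    Returns indices of the largest mutually consistent subset (a maximum clique)."""
    G = nx.Graph()
    G.add_nodes_from(range(len(corr)))
    for i, j in itertools.combinations(range(len(corr)), 2):
        (ai, bi), (aj, bj) = corr[i], corr[j]
        # Rigid transforms preserve distances, so consistent inliers satisfy this.
        if abs(np.linalg.norm(ai - aj) - np.linalg.norm(bi - bj)) <= eps:
            G.add_edge(i, j)
    clique, _ = nx.max_weight_clique(G, weight=None)   # exact max-cardinality clique
    return clique
```

Exact maximum clique is exponential in the worst case; this is precisely why the size of the initial correspondence set matters, as the scalability comparison in Section IV shows.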
We invite interested readers to refer to [28] for more details. Finding a maximum clique is a classic combinatorial problem in graph theory and is NP-hard. We leverage the PMC library [34] to solve it. We will also show how a current state-of-the-art registration-based global localization algorithm [26] is brittle to the hardness of the maximum clique problem when the problem scales up, even with a powerful parallel solver. From a holistic view, such a graph-theoretic outlier pruning strategy embeds global consistency on top of the coarse correspondence set generated from locally consistent triangulated scene graphs. Such a local-to-global scheme avoids constructing a hard association between the query scan and any single scan in a keyframe database, as is usually done by voting or similarity checks. Instead, it first exploits the local structures of one scene to generate correspondences associating similar substructures. Afterward, the relationships between these local fragments are considered in search of a place in the reference map that is globally consistent with the local substructures. The process presented here is very similar to feature re-ranking methods [24] for LCD, where the pose is not computed immediately after candidate retrieval but only after another re-ranking process for frame-wise consistency verification. Our proposed method, however, works at a more informative, lower level, the substructures of scene graphs, and thus has a better chance of reaching global consistency. ### _Pose Estimation_ With the estimated inlier set \(\mathcal{I}^{\star}\), we reformulate the objective in Eq. (2) into the following truncated least squares (TLS) form to further resist potential outliers [35]: \[\hat{\mathbf{R}},\hat{\mathbf{t}}=\operatorname*{arg\,min}_{\mathbf{R}\in\mathrm{SO}(3),\mathbf{t}\in\mathbb{R}^{3}}\sum_{(i,j)\in\mathcal{I}^{\star}}\min\left(\left\|\mathbf{p}_{i}-\mathbf{R}\mathbf{q}_{j}-\mathbf{t}\right\|_{2},c_{ij}\right), \tag{9}\] with \(c_{ij}\) the truncation parameter. Eq. (9) is then solved by leveraging Black-Rangarajan duality [36] and graduated non-convexity (GNC) [35].
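For intuition, the sketch below solves a weighted version of the truncated objective by plain alternation: a closed-form weighted Kabsch fit, then hard truncation of residuals beyond \(c\). This is our own simplification, not the paper's solver; it omits Black-Rangarajan duality and GNC [35], which exist precisely to avoid the local minima such naive alternation can fall into. It uses the convention \(\mathbf{q}\approx\mathbf{R}\mathbf{p}+\mathbf{t}\) and assumes matched arrays `P`, `Q` of inlier-candidate centroids.

```python
import numpy as np

def weighted_kabsch(P, Q, w):
    """Closed-form R, t minimizing sum_i w_i * ||q_i - R p_i - t||^2."""
    w = w / w.sum()
    p0, q0 = w @ P, w @ Q                          # weighted centroids
    H = (P - p0).T @ ((Q - q0) * w[:, None])       # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # keep det(R) = +1
    R = Vt.T @ S @ U.T
    return R, q0 - R @ p0

def truncated_fit(P, Q, c=1.0, iters=10):
    """Naive alternation for a truncated least-squares registration."""
    w = np.ones(len(P))
    for _ in range(iters):
        R, t = weighted_kabsch(P, Q, w)
        r = np.linalg.norm(Q - P @ R.T - t, axis=1)
        w = (r < c).astype(float)                  # hard truncation of outliers
        if w.sum() < 3:                            # a 3D pose needs >= 3 points
            break
    return R, t
```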
## IV Experimental Results In this section, we compare our proposed method with several state-of-the-art one-shot global localization methods. All the mentioned algorithms are implemented in C++ and tested on a PC with an Intel i9-13900 and 32 GB RAM. **Experimental Setup.** We evaluate our proposed method, Outram, on six different sequences of two publicly available datasets: MulRan [11] and MCD [37]. To mimic a real global localization or relocalization scenario, different from a loop closure detection setting, we intentionally introduce temporal diversity, ranging from days to months, between the mapping (descriptor generation) session and the localization session. For each mapping sequence, we concatenate semantically annotated scans [32] using the ground truth poses to generate the semantically segmented reference map for the registration-based methods. Fig. 3: Illustration of the substructure ambiguities and the proposed correspondence generation process. One triangle representation of the query scan (with three vertices labeled as tree trunk, in brown) is shown in blue, and multiple responses from different regions of the reference map are shown in green (true location) and red (false location) due to substructure ambiguities. Correspondence generation between all these substructures ensures local consistency while also retaining the possibility of exploiting scene-wise global consistency. Three representative semantic classes are used for all semantic-related methods: pole, tree trunk, and car. For LCD-based global localization methods, the frames in the mapping sequences are encoded to form a database for retrieval using the scans in the localization sequences. Statistics of the benchmark datasets are presented in Table I. The mapping sequence is chosen as the one with the most coverage of the target area. We also highlight the time differences between the mapping and localization sessions, ranging from several days to months, which makes our setup suitable for benchmarking global localization algorithms. **Baselines.** We include a variety of state-of-the-art loop closure detection methods, STD [7], Scan Context [6], and GOSMatch [21], as well as a recently developed registration-based global localization algorithm [26], to benchmark the performance of each method. STD also leverages substructures of a scene for loop closure detection in a voting manner, and GOSMatch leverages semantic clusters for global descriptor generation. As they share certain sub-modules with our proposed method, we include them to confirm that our proposition, the local-to-global method, can exploit more structural information of a LiDAR scan and has a better chance of localizing in a one-shot manner. As most of the methods have open-source implementations [7, 21, 6], we use them directly for comparison. As to the method proposed by Ankenbauer et al. [26], since it also leverages semantic objects for global localization, we share the same semantic clusters for a fair comparison. **Metric.** We employ the ordinary relative pose error (RPE) to evaluate the accuracy of the estimated pose \(\hat{\mathbf{T}}\) with respect to the ground truth \(\mathbf{T}\): \[e_{trans}=\sqrt{\Delta x^{2}+\Delta y^{2}+\Delta z^{2}},\qquad e_{rot}=\arccos\left(\frac{\mathrm{trace}(\Delta\mathbf{R})-1}{2}\right),\] with \(\Delta\mathbf{T}=\hat{\mathbf{T}}\cdot\mathbf{T}^{-1}\) the transformation difference, \(\Delta\mathbf{R}\) its rotation part, and \(\Delta x\), \(\Delta y\), \(\Delta z\) the positional entries of \(\Delta\mathbf{T}\). We regard global localization results with \(e_{trans}<5\) meters and \(e_{rot}<10\) degrees as valid, which is generally the convergence region of local registration methods [30] for subsequent refinement. **Results.** We present the experimental results in four aspects: the successful global localization rate, error distribution analysis, runtime analysis, and storage analysis. ### _Success Rate of Global Localization_ We present the results of the LCD-based global localization methods in the upper part of Table II and the registration-based ones in the lower part. Our proposed algorithm, Outram, outperforms all other methods by a clear margin. We observed that, without the proposed triangulated scene graph for correspondence generation, the method proposed by Ankenbauer et al. [26] can hardly scale to larger problems, as it generates correspondences between all semantic clusters in the current scan and the reference map that share the same label. For smaller datasets (e.g., the reference semantic map of MCD NTU includes only 1192 clusters), it performs relatively well, since in this case the straightforward all-to-all correspondence generation guarantees full inclusion of the inlier correspondences while remaining computationally amenable.
However, when the reference map scales to a larger size (e.g., 5136 and 5103 semantic clusters in MulRan DCC and KAIST, respectively), the original method quickly becomes computationally intractable: we observed the algorithm drain all 32 GB of RAM on the test platform, causing the program to crash. In such scenarios, we modified the method to a constrained version in which we limit the number of semantic clusters in the query scan by random downsampling. However, the constrained algorithm performs poorly due to the failure of inlier inclusion. We also include a purely geometric variant of Outram as an ablation study of the semantic labels. In the implementation of this variant, we disable the semantic attributes of each semantic cluster and produce a set of triangle descriptors with purely geometric information, similar to the representation proposed in STD [7]. These purely geometric triangle descriptors are then used to build correspondences, followed by a graph-theoretic outlier pruning process, as proposed in this paper. These comparisons demonstrate the effectiveness of the proposed triangulated scene graph in terms of more informative correspondence extraction (compared with existing registration-based methods) and the superior performance of the whole registration-based pipeline (compared with LCD-based methods), which verifies our claims in Section I. ### _Error Distribution Analysis_ The average translation error (ATE) and average rotation error (ARE) in Table II are calculated using only the successfully localized frames. On average, the point-based loop closure detection method [7] has the most accurate localization results. This can also be verified in the empirical cumulative distribution function (ECDF) in Fig. 4. We observe that in the lower left corner, the state-of-the-art LCD-based global localization method, STD, surpasses all other methods in terms of translation error. This is because STD detects stable point-level features for pose estimation, while all other methods work at a higher level and rely on the centroids of semantic clusters for pose estimation. Centroids can shift with viewpoint variation, which affects the pose estimation accuracy. However, this does not prevent our proposed method from outperforming the others in robustness, which is the most important evaluation metric for global localization. ### _Runtime_ In the last column of Table II, the average runtime of each method is presented. We notice that LCD-based methods are more computationally efficient than registration-based methods due to their simple retrieval-based design. All registration-based methods leverage the maximum clique for outlier pruning, which can be time-consuming when the graph is large. To better understand each of the sub-modules of our proposed method, we analyzed the time breakdown of each module and plotted it in Fig. 5. Three main components are considered here, namely the time to generate a triangulated scene graph, the time to search for the corresponding substructures and establish correspondences, and the time to solve the subsequent maximum clique problem. Note that a logarithmic scale is used on the y-axis for better visualization. We find that solving for the maximum clique (i.e., finding the globally consistent inlier correspondences out of the prebuilt set) requires 190 ms on average and is the most computationally expensive process. The time required for this process is jointly determined by the sizes of the 3D scene graph and the reference map.
Although our system cannot run in real time, the global localization task itself usually does not require a real-time algorithm. ### _Storage Efficiency_ We further analyze the storage consumption of each method on MulRan KAIST sequence 02, containing 8941 frames, in Table III. Global descriptor-based LCD methods [21] require several MB to store the vectorized descriptors for the frames. STD [7] requires storing the planes detected in every frame for geometric verification, and thus consumes more space. The result reveals the potential of leveraging Outram for relocalization in larger datasets. ## V Conclusions In this paper, we propose Outram, a one-shot LiDAR global localization algorithm leveraging triangulated 3D scene graphs and graph-theoretic outlier pruning. Substructures of scene graphs are leveraged for locally consistent correspondence generation, and the subsequent outlier pruning process ensures global consistency between the substructures and finds the inlier correspondences. We demonstrate the effectiveness of our proposed method on various datasets, where Outram surpasses several state-of-the-art LCD-based global localization methods, albeit at the cost of real-time performance. In the future, we plan to work on a proper indicator of global localization quality and a theoretical guarantee for the localization results. Fig. 4: The empirical cumulative distribution function (ECDF) of the translation and rotation errors on sequence 13 of the MCD NTU dataset. The x-axis represents the translation and rotation error thresholds, and the y-axis is the probability of a specific method producing an estimate with a smaller error. Fig. 5: Runtime breakdown of Outram on MulRan DCC sequence 01.
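For concreteness, the RPE metric defined in the evaluation protocol above can be computed from 4x4 homogeneous pose matrices as follows (a small sketch; the function name is ours, and the rotation error is the standard geodesic angle of the rotation part of \(\Delta\mathbf{T}\)):

```python
import numpy as np

def relative_pose_error(T_est, T_gt):
    """Translation error (m) and rotation error (deg) between 4x4 pose matrices."""
    dT = T_est @ np.linalg.inv(T_gt)               # Delta T = T_hat * T^-1
    e_trans = np.linalg.norm(dT[:3, 3])            # sqrt(dx^2 + dy^2 + dz^2)
    cos_theta = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    e_rot = np.degrees(np.arccos(cos_theta))
    return e_trans, e_rot

# Per the protocol above, a localization counts as successful when
# e_trans < 5 (meters) and e_rot < 10 (degrees).
```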
2309.08900
Certain properties of generalized tracially approximated C*-algebras
We show that the following properties of the unital ${\rm C^*}$-algebras in a class $\Omega$ are inherited by unital simple ${\rm C^*}$-algebras in the class $\rm WTA\Omega$: $(1)$ uniform property $\Gamma$, $(2)$ a certain type of tracial nuclear dimension at most $n$, $(3)$ weak $(m, n)$-divisibility.
Qingzhai Fan, Jiahui Wang
2023-09-16T06:56:56Z
http://arxiv.org/abs/2309.08900v2
# Certain properties of generalized tracially approximated \(\mathrm{C}^{*}\)-algebras ###### Abstract. We show that the following properties of the unital \(\mathrm{C}^{*}\)-algebras in a class \(\Omega\) are inherited by unital simple \(\mathrm{C}^{*}\)-algebras in the class \(\mathrm{WTA}\Omega\): \((1)\) uniform property \(\Gamma\), \((2)\) a certain type of tracial nuclear dimension at most \(n\), \((3)\) weak \((m,n)\)-divisibility. **Key words**: \(\mathrm{C}^{*}\)-algebras, tracial approximation, Cuntz semigroup. 2000 _Mathematics Subject Classification_: 46L35, 46L05, 46L80. ## 1. Introduction The Elliott program for the classification of amenable \(\mathrm{C}^{*}\)-algebras might be said to have begun with the \(\mathrm{K}\)-theoretical classification of AF algebras in [9]. A major next step was the classification of simple AH algebras without dimension growth (in the real rank zero case see [12], and in the general case see [13]). This led eventually to the classification of simple separable amenable \(\mathrm{C}^{*}\)-algebras with finite nuclear dimension in the UCT class (see [30], [36], [15], [27], [28], [41], [14], [25], and [26]). A crucial intermediate step was Lin's axiomatization of Elliott-Gong's decomposition theorem for simple AH algebras of real rank zero (classified by Elliott-Gong in [12]) and Gong's decomposition theorem ([24]) for simple AH algebras (classified by Elliott-Gong-Li in [13]). For this purpose, Lin introduced the concepts of TAF and TAI ([32] and [33]). Instead of assuming an inductive limit structure, Lin started with a certain abstract (tracial) approximation property. Elliott and Niu in [16] considered this notion of tracial approximation by classes of unital \(\mathrm{C}^{*}\)-algebras other than the finite-dimensional ones for TAF and the interval algebras for TAI. Large and centrally large subalgebras were introduced in [37] and [5] by Phillips and Archey as abstractions of Putnam's orbit-breaking subalgebra of the crossed product algebra \(\mathrm{C}^{*}(X,\mathbb{Z},\sigma)\) of the Cantor set by a minimal homeomorphism in [38]. Inspired by centrally large subalgebras and tracial approximation \(\mathrm{C}^{*}\)-algebras, Elliott, Fan and Fang introduced a class of unital weakly tracially approximated \(\mathrm{C}^{*}\)-algebras in [11]; the notion generalizes both Archey and Phillips's centrally large subalgebras and the tracially approximated \(\mathrm{C}^{*}\)-algebras. One of the main results of this paper is the following. Let \(\Omega\) be a class of unital \(\mathrm{C}^{*}\)-algebras which are weakly \((m,n)\)-divisible (see Definition 2.2), and let \(A\in\mathrm{WTA}\Omega\) be a simple unital stably finite \(\mathrm{C}^{*}\)-algebra such that for any integer \(n\in\mathbb{N}\) the \(\mathrm{C}^{*}\)-algebra \(\mathrm{M}_{n}(A)\) belongs to the class \(\mathrm{WTA}\Omega\). Then \(A\) is secondly weakly \((m,n)\)-divisible (see Definition 2.3). Two related results are known. Let \(A\) be an infinite-dimensional stably finite unital simple separable \(\mathrm{C}^{*}\)-algebra, and let \(B\subseteq A\) be a centrally large subalgebra of \(A\) such that \(B\) has uniform property \(\Gamma\); then \(A\) has uniform property \(\Gamma\). This result was obtained by Fan and Zhang in [19]. Let \(\Omega\) be a class of stably finite unital \(\mathrm{C}^{*}\)-algebras such that every \(B\in\Omega\) has uniform property \(\Gamma\); then \(A\) has uniform property \(\Gamma\) for any simple unital \(\mathrm{C}^{*}\)-algebra \(A\in\mathrm{TA}\Omega\). This result was obtained by Fan, Fang, and Zhao in [17]. ## 2. Preliminaries and definitions
Let \(A\) be a \(\mathrm{C}^{*}\)-algebra, and let \(\mathrm{M}_{n}(A)\) denote the \(\mathrm{C}^{*}\)-algebra of \(n\times n\) matrices with entries in \(A\). Let \(\mathrm{M}_{\infty}(A)\) denote the algebraic inductive limit of the sequence \((\mathrm{M}_{n}(A),\phi_{n})\), where \(\phi_{n}:\mathrm{M}_{n}(A)\rightarrow\mathrm{M}_{n+1}(A)\) is the canonical embedding as the upper left-hand corner block. Let \(\mathrm{M}_{\infty}(A)_{+}\) (respectively, \(\mathrm{M}_{n}(A)_{+}\)) denote the positive elements of \(\mathrm{M}_{\infty}(A)\) (respectively, \(\mathrm{M}_{n}(A)\)). Given \(a,b\in\mathrm{M}_{\infty}(A)_{+}\), one says that \(a\) is Cuntz subequivalent to \(b\) (written \(a\precsim b\)) if there is a sequence \((v_{n})_{n=1}^{\infty}\) of elements of \(\mathrm{M}_{\infty}(A)\) such that \[\lim_{n\rightarrow\infty}\|v_{n}bv_{n}^{*}-a\|=0.\] One says that \(a\) and \(b\) are Cuntz equivalent (written \(a\sim b\)) if \(a\precsim b\) and \(b\precsim a\). We shall write \(\langle a\rangle\) for the Cuntz equivalence class of \(a\). The object \(\mathrm{Cu}(A):=(A\otimes\mathcal{K})_{+}/\sim\) is called the Cuntz semigroup of \(A\) (see [8]). Observe that any \(a,b\in\mathrm{M}_{\infty}(A)_{+}\) are Cuntz equivalent to orthogonal elements \(a^{\prime},b^{\prime}\in\mathrm{M}_{\infty}(A)_{+}\) (i.e., \(a^{\prime}b^{\prime}=0\)), and so \(\mathrm{Cu}(A)\) becomes an ordered semigroup when equipped with the addition operation \[\langle a\rangle+\langle b\rangle=\langle a+b\rangle\] whenever \(ab=0\), and the order relation \[\langle a\rangle\leq\langle b\rangle\Leftrightarrow a\precsim b.\] Let \(A\) be a unital \(\mathrm{C}^{*}\)-algebra. Recall that a positive element \(a\in A\) is called purely positive if \(a\) is not Cuntz equivalent to a projection. Let \(A\) be a stably finite \(\mathrm{C}^{*}\)-algebra and let \(a\in A\) be a positive element. Then either \(a\) is a purely positive element or \(a\) is Cuntz equivalent to a projection. Given \(a\) in \(A_{+}\) and \(\varepsilon>0\), we denote by \((a-\varepsilon)_{+}\) the element of \(\mathrm{C}^{*}(a)\) corresponding (via the functional calculus) to the function \(f(t)=\max(0,t-\varepsilon)\), \(t\in\sigma(a)\). The following facts are well known. **Theorem 2.1**.: ([3], [29], [37], [40].) _Let \(A\) be a \(\mathrm{C}^{*}\)-algebra._ \((1)\) _Let \(a,b\in A_{+}\) and \(\varepsilon>0\) be such that \(\|a-b\|<\varepsilon\). Then \((a-\varepsilon)_{+}\precsim b\)._ \((2)\) _Let \(a,p\) be positive elements in \(A\) with \(p\) a projection. If \(p\precsim a\) and \(p\) is not Cuntz equivalent to \(a\), then there is a nonzero element \(b\) in \(A\) such that \(bp=0\) and \(b+p\precsim a\)._ \((3)\) _Let \(a\) be a positive element of a \(\mathrm{C}^{*}\)-algebra \(A\) that is not Cuntz equivalent to a projection. Let \(\delta>0\), and let \(g\in C_{0}(0,1]\) be a non-negative function with \(g=0\) on \((\delta,1)\), \(g>0\) on \((0,\delta)\), and \(\|g\|=1\). Then \(g(a)\neq 0\) and \((a-\delta)_{+}+g(a)\precsim a\)._ The property of weak \((m,n)\)-divisibility was introduced by Kirchberg and Rørdam in [31].
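As a finite-dimensional illustration of the cut-down \((a-\varepsilon)_{+}\) (a numpy sketch under the assumption that a positive matrix stands in for a positive operator; the helper name is ours): the functional calculus simply applies \(f(t)=\max(0,t-\varepsilon)\) to the spectrum.

```python
import numpy as np

def eps_cutdown(a, eps):
    """(a - eps)_+ for a positive semidefinite matrix a, via functional calculus:
    apply f(t) = max(0, t - eps) to each eigenvalue, keeping the eigenvectors."""
    w, v = np.linalg.eigh(a)
    return (v * np.maximum(w - eps, 0.0)) @ v.T

a = np.diag([0.05, 0.5, 1.0])
print(np.diag(eps_cutdown(a, 0.1)))   # [0.  0.4 0.9]: the small spectral part is removed
```

In particular, \((a-\varepsilon)_{+}\) discards the part of \(a\) below the threshold \(\varepsilon\), which is why it appears in perturbation statements such as Theorem 2.1(1).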
**Definition 2.2**.: ([31].) _Let \(A\) be a unital \(\mathrm{C}^{*}\)-algebra, and let \(m,n\geq 1\) be integers. \(A\) is said to be weakly \((m,n)\)-divisible if for every \(a\in\mathrm{M}_{\infty}(A)_{+}\) and any \(\varepsilon>0\), there exist elements \(x_{1},x_{2},\cdots,x_{n}\in\mathrm{M}_{\infty}(A)_{+}\) such that \(m\left\langle x_{j}\right\rangle\leq\left\langle a\right\rangle\) for all \(j=1,2,\cdots,n\), and \(\left\langle(a-\varepsilon)_{+}\right\rangle\leq\left\langle x_{1}\right\rangle+\left\langle x_{2}\right\rangle+\cdots+\left\langle x_{n}\right\rangle\)._ **Definition 2.3**.: _Let \(A\) be a unital \(\mathrm{C}^{*}\)-algebra, and let \(m,n\geq 1\) be integers. \(A\) is said to be secondly weakly \((m,n)\)-divisible if for every \(a\in\mathrm{M}_{\infty}(A)_{+}\) and any \(\varepsilon>0\), there exist elements \(x_{1},x_{2},\cdots,x_{n}\in\mathrm{M}_{\infty}(A)_{+}\) such that \(m\left\langle x_{j}\right\rangle\leq\left\langle a\right\rangle+\left\langle a\right\rangle\) for all \(j=1,2,\cdots,n\), and \(\left\langle(a-\varepsilon)_{+}\right\rangle\leq\left\langle x_{1}\right\rangle+\left\langle x_{2}\right\rangle+\cdots+\left\langle x_{n}\right\rangle\)._ Let \(\Omega\) be a class of unital \(\mathrm{C}^{*}\)-algebras. Elliott, Fan, and Fang defined as follows the class of \(\mathrm{C}^{*}\)-algebras which can be weakly tracially approximated by \(\mathrm{C}^{*}\)-algebras in \(\Omega\), and denoted this class by \(\mathrm{WTA}\Omega\), in [11]. **Definition 2.4**.: ([11].) _A simple unital \(\mathrm{C}^{*}\)-algebra \(A\) is said to belong to the class \(\mathrm{WTA}\Omega\) if, for any \(\varepsilon>0\), any finite subset \(F\subseteq A\), and any non-zero element \(a\geq 0\), there exist a projection \(p\in A\), an element \(g\in A\) with \(0\leq g\leq 1\), and a unital \(\mathrm{C}^{*}\)-subalgebra \(B\) of \(A\) with \(g\in B,\ 1_{B}=p\), and \(B\in\Omega\), such that_ \((1)\)_\((p-g)x\in_{\varepsilon}B,\ x(p-g)\in_{\varepsilon}B\) for all \(x\in F\),_ \((2)\)_\(\|(p-g)x-x(p-g)\|<\varepsilon\) for all \(x\in F\),_ \((3)\)_\(1-(p-g)\precsim a\), and_ \((4)\)_\(\|(p-g)a(p-g)\|\geq\|a\|-\varepsilon\)._ Let \(\Omega\) be a class of unital \(\mathrm{C}^{*}\)-algebras. The class of simple unital separable \(\mathrm{C}^{*}\)-algebras which can be tracially approximated by \(\mathrm{C}^{*}\)-algebras in \(\Omega\) is denoted by \(\mathrm{TA}\Omega\). It follows from the definitions and from the proof of Theorem 4.1 of [16] that if \(A\) is a simple unital \(\mathrm{C}^{*}\)-algebra and \(A\in\mathrm{TA}\Omega\), then \(A\in\mathrm{WTA}\Omega\). Furthermore, if \(\Omega=\{B\}\) and \(B\subseteq A\) is a centrally large subalgebra of \(A\), then \(A\in\mathrm{WTA}\Omega\). Winter and Zacharias introduced the notion of nuclear dimension for \(\mathrm{C}^{*}\)-algebras in [42]. **Definition 2.5**.: ([42].)
_A \(\mathrm{C}^{*}\)-algebra \(A\) has nuclear dimension at most \(n\), denoted \(\dim_{\mathrm{nuc}}(A)\leq n\), if there exists a net \((F_{\lambda},\psi_{\lambda},\varphi_{\lambda})_{\lambda\in\Lambda}\) such that the \(F_{\lambda}\) are finite-dimensional \(\mathrm{C}^{*}\)-algebras, and such that \(\psi_{\lambda}:A\to F_{\lambda}\) and \(\varphi_{\lambda}:F_{\lambda}\to A\) are completely positive maps satisfying_ \((1)\)_\(\varphi_{\lambda}\psi_{\lambda}(a)\to a\) uniformly on finite subsets of \(A\),_ \((2)\)_\(\|\psi_{\lambda}\|\leq 1\),_ \((3)\) _for each \(\lambda\), \(F_{\lambda}\) decomposes into \(n+1\) ideals \(F_{\lambda}=F_{\lambda}^{\ 0}\oplus\cdots\oplus F_{\lambda}^{\ n}\) such that \(\varphi_{\lambda}|_{F_{\lambda}^{i}}\) is a completely positive contractive order zero map (that is, it preserves orthogonality: \(\varphi_{\lambda}(e)\varphi_{\lambda}(f)=0\) for all \(e,\ f\in F_{\lambda}^{\ i}\) with \(ef=0\)) for \(i=0,\ \cdots,\ n\)._ Inspired by Hirshberg and Orovitz's tracial \(\mathcal{Z}\)-absorption in [29], Fu introduced a notion of tracial nuclear dimension in his doctoral dissertation [22] (see also [23]). **Definition 2.6**.: ([22].) _Let \(A\) be a \(\mathrm{C}^{*}\)-algebra and let \(n\in\mathbb{N}\). \(A\) is said to have second type tracial nuclear dimension at most \(n\), denoted \(\mathrm{T}^{2}\mathrm{dim}_{\mathrm{nuc}}(A)\leq n\), if for any finite subset \(\mathcal{F}\subseteq A\) of positive elements, any \(\varepsilon>0\), and any nonzero positive element \(a\in A\), there exist a finite-dimensional \(\mathrm{C}^{*}\)-algebra \(F=F_{0}\oplus\cdots\oplus F_{n}\) and completely positive maps \(\psi:A\to F\), \(\varphi:F\to A\) such that_ \((1)\) _for any \(x\in\mathcal{F}\), there exists \(x^{\prime}\in A_{+}\) such that \(x^{\prime}\precsim a\) and \(\|x-x^{\prime}-\varphi\psi(x)\|<\varepsilon\),_ \((2)\)_\(\|\psi\|\leq 1\), and_ \((3)\)_\(\varphi|_{F_{i}}\) is a contractive completely positive order zero map for \(i=0,\cdots,n\)._ Inspired by Fu's second type of tracial nuclear dimension in [22], we introduce a new type of tracial nuclear dimension for unital \(\mathrm{C}^{*}\)-algebras. **Definition 2.7**.: _Let \(A\) be a unital \(\mathrm{C}^{*}\)-algebra and let \(n\in\mathbb{N}\). \(A\) is said to have the new type of tracial nuclear dimension at most \(n\) if for any finite subset \(\mathcal{F}\subseteq A\) of positive elements, any \(\varepsilon>0\), and any nonzero positive element \(a\in A_{+}\), there exist a finite-dimensional \(\mathrm{C}^{*}\)-algebra \(F=F_{0}\oplus\cdots\oplus F_{n}\) and completely positive maps \(\psi:A\to F\), \(\varphi:F\to A\) such that_ \((1)\) _for any \(x\in\mathcal{F}\), there exists \(x^{\prime}\in A_{+}\) such that \(\|x-x^{\prime}-\varphi\psi(x)\|<\varepsilon\),_ \((2)\)_\((1_{A}-\varphi\psi(1_{A})-\varepsilon)_{+}\precsim a\),_ \((3)\)_\(\|\psi\|\leq 1\), and_ \((4)\)_\(\varphi|_{F_{i}}\) is a contractive completely positive order zero map for \(i=0,\cdots,n\)._ Let \(A\) be a unital \(\mathrm{C}^{*}\)-algebra. It is easy to see that \(\mathrm{T}^{2}\mathrm{dim}_{\mathrm{nuc}}(A)\leq n\) implies that \(A\) has the new type of tracial nuclear dimension at most \(n\). Uniform property \(\Gamma\) was introduced by J. Castillejos, S. Evington, A. Tikuisis, S. White, and W. Winter, and was used in [6] to prove that \(\mathcal{Z}\)-stability implies finite nuclear dimension. Examples of separable nuclear \(\mathrm{C}^{*}\)-algebras with uniform property \(\Gamma\) are by now abundant.
Kerr and Szabó establish uniform property \(\Gamma\) for crossed product \(\mathrm{C}^{*}\)-algebras arising from a free action, with the small boundary property, of an infinite amenable group on a compact metrisable space (see Theorem 9.4 in [20]). We recall the equivalent local refinement of uniform property \(\Gamma\) from Proposition 2.4 of [7]. **Theorem 2.8**.: ([7].) _Let \(A\) be a separable \(\mathrm{C}^{*}\)-algebra with \(T(A)\) nonempty and compact. Then the following are equivalent:_ \((1)\)_\(A\) has uniform property \(\Gamma\)._ \((2)\) _For any finite subset \(F\subseteq A\), any \(\varepsilon>0\), and any integer \(n\in\mathbb{N}\), there exist pairwise orthogonal positive contractions \(e_{1},\cdots,e_{n}\in A\) such that for \(i=1,\cdots,n\) and \(a\in F\), we have \(\|e_{i}a-ae_{i}\|<\varepsilon\) and_ \[\sup_{\tau\in T(A)}\Big|\tau(ae_{i})-\frac{1}{n}\tau(a)\Big|<\varepsilon.\] ## 3. The main results **Theorem 3.1**.: _Let \(\Omega\) be a class of unital separable \(\mathrm{C}^{*}\)-algebras such that, for any \(B\in\Omega\), \(T(B)\) is nonempty and compact and \(B\) has uniform property \(\Gamma\). Then \(A\) has uniform property \(\Gamma\) for any simple infinite-dimensional separable unital stably finite \(\mathrm{C}^{*}\)-algebra \(A\in\mathrm{WTA}\Omega\)._ Proof.: Since \(A\) is a stably finite \(\mathrm{C}^{*}\)-algebra, \(T(A)\) is nonempty; together with the unitality of \(A\), this implies that \(T(A)\) is compact. By Theorem 2.8(2), we need to show that for a fixed finite subset \(F=\{a_{1},a_{2},\cdots,a_{k}\}\) of \(A\) (we may assume that \(\|a_{j}\|\leq 1\) for all \(j=1,\cdots,k\)), any \(\varepsilon>0\), and any integer \(n\in\mathbb{N}\), there exist pairwise orthogonal positive contractions \(e_{1},\cdots,e_{n}\in A\) such that \(\|e_{i}a_{j}-a_{j}e_{i}\|<\varepsilon\) and \[\sup_{\tau\in T(A)}\Big|\tau(a_{j}e_{i})-\frac{1}{n}\tau(a_{j})\Big|<\varepsilon,\] for \(i=1,\cdots,n\) and \(j=1,\cdots,k\). For \(\varepsilon>0\), we choose \(0<\delta<\varepsilon\) according to Lemma 2.5.11 in [34], applied with \(X=[0,1]\), \(f(t)=t^{1/2}\), and \(g(t)=(1-t)^{1/2}\). Since \(A\) is an infinite-dimensional unital simple separable \(\mathrm{C}^{*}\)-algebra, by Corollary 2.5 in [37] there exists a nonzero positive element \(a\in A\) such that \(\delta>d_{\tau}(a)=\lim_{n\to\infty}\tau(a^{1/n})\) for any \(\tau\in T(A)\). For \(F=\{a_{1},a_{2},\cdots,a_{k}\}\), this \(\delta>0\), and the nonzero \(a\in A_{+}\), since \(A\in\mathrm{WTA}\Omega\), there exist a projection \(p\in A\), an element \(g\in A\) with \(0\leq g\leq 1\), and a unital \(\mathrm{C}^{*}\)-subalgebra \(B\) of \(A\) with \(g\in B\), \(1_{B}=p\), and \(B\in\Omega\) such that \((1)\)\((p-g)x\in_{\delta}B,\;x(p-g)\in_{\delta}B\) for any \(x\in F\), \((2)\)\(\|(p-g)x-x(p-g)\|<\delta\) for any \(x\in F\), and \((3)\)\(1_{A}-(p-g)\precsim a\). By \((2)\), we have \((1)^{\prime}\left\|(1_{A}-(p-g))a_{j}-a_{j}(1_{A}-(p-g))\right\|<\delta\) for any \(j=1,\cdots,k\). By \((3)\) and Proposition 1.19 of [37], we have \[d_{\tau}(1_{A}-(p-g))\leq d_{\tau}(a)\] for any \(\tau\in T(A)\). Since \(d_{\tau}(a)<\delta\) and \(\tau(1_{A}-(p-g))\leq d_{\tau}(1_{A}-(p-g))\), we have \[\tau(1_{A}-(p-g))<\delta\] for any \(\tau\in T(A)\).
By the choice of \(\delta\), by \((2)\), \((1)^{\prime}\), and Lemma 2.5.11 in [34], one has \(\|(1_{A}-(p-g))^{1/2}a_{j}-a_{j}(1_{A}-(p-g))^{1/2}\|<\varepsilon\) and \(\|(p-g)^{1/2}a_{j}-a_{j}(p-g)^{1/2}\|<\varepsilon\). By \((1)\), there exists \(a_{j}^{\prime}\in B\) such that \((2)^{\prime}\) \(\left\|(p-g)a_{j}-a_{j}^{\prime}\right\|<\delta\). By \((1)^{\prime}\) and \((2)^{\prime}\), one has \(\|a_{j}-a_{j}^{\prime}-(1_{A}-(p-g))^{1/2}a_{j}(1_{A}-(p-g))^{1/2}\|\) \(=\|(1_{A}-(p-g))a_{j}+(p-g)a_{j}-a_{j}^{\prime}-(1_{A}-(p-g))^{1/2}a_{j}(1_{A}-(p-g))^{1/2}\|\) \(\leq\|(1_{A}-(p-g))a_{j}-(1_{A}-(p-g))^{1/2}a_{j}(1_{A}-(p-g))^{1/2}\|+\|(p-g)a_{j}-a_{j}^{\prime}\|<\varepsilon+\delta<2\varepsilon\), and \(\|(p-g)^{1/2}a_{j}(p-g)^{1/2}-a_{j}^{\prime}\|\leq\|(p-g)^{1/2}a_{j}(p-g)^{1/2}-(p-g)a_{j}\|+\|(p-g)a_{j}-a_{j}^{\prime}\|<\varepsilon+\delta<2\varepsilon\) for all \(1\leq j\leq k\). For \(\varepsilon>0\) and any integer \(n\in\mathbb{N}\), we choose \(\delta^{\prime\prime}=\delta^{\prime\prime}(\varepsilon,n)\) (with \(\delta^{\prime\prime}<\varepsilon\)) sufficiently small to satisfy Lemma 2.5.12 in [34]. For \(\delta^{\prime\prime}/2>0\), the finite subset \(\{a_{1}^{\prime},a_{2}^{\prime},\cdots,a_{k}^{\prime},(p-g),(p-g)^{1/2}\}\) of \(B\), and \(n\in\mathbb{N}\), since \(B\) has uniform property \(\Gamma\), there exist pairwise orthogonal positive contractions \(e_{1}^{\prime},\cdots,e_{n}^{\prime}\in B\) such that for \(i=1,\cdots,n\), \(j=1,\cdots,k\), one has \[\|e_{i}^{\prime}a_{j}^{\prime}-a_{j}^{\prime}e_{i}^{\prime}\|<\delta^{\prime\prime}/2,\quad\|e_{i}^{\prime}(p-g)-(p-g)e_{i}^{\prime}\|<\delta^{\prime\prime}/2,\] \[\|e_{i}^{\prime}(p-g)^{1/2}-(p-g)^{1/2}e_{i}^{\prime}\|<\delta^{\prime\prime}/2,\] and \[\sup_{\tau\in T(B)}|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|<\delta^{\prime\prime}/2.\] Since \(\|(p-g)^{1/2}e_{i}^{\prime}-e_{i}^{\prime}(p-g)^{1/2}\|<\delta^{\prime\prime}\) and \(e_{i}^{\prime}e_{j}^{\prime}=0\) for \(i\neq j\), we have \(\|(p-g)^{1/2}e_{i}^{\prime}(p-g)^{1/2}\cdot(p-g)^{1/2}e_{j}^{\prime}(p-g)^{1/2}\|\) \(\leq\|(p-g)^{1/2}e_{i}^{\prime}(p-g)^{1/2}\cdot(p-g)^{1/2}e_{j}^{\prime}(p-g)^{1/2}-(p-g)e_{i}^{\prime}(p-g)^{1/2}e_{j}^{\prime}(p-g)^{1/2}\|+\|(p-g)e_{i}^{\prime}(p-g)^{1/2}e_{j}^{\prime}(p-g)^{1/2}\|<\delta^{\prime\prime}/2+\delta^{\prime\prime}/2=\delta^{\prime\prime}\). 
Since each \((p-g)^{1/2}e_{i}^{\prime}(p-g)^{1/2}\) is a positive contraction and these elements are pairwise approximately orthogonal (within \(\delta^{\prime\prime}\)), by the choice of \(\delta^{\prime\prime}\) and the proof of Lemma 2.5.12 in [34], one can find pairwise orthogonal positive contractions \(e_{i}\) \((i=1,\cdots,n)\) such that \[\|(p-g)^{1/2}e_{i}^{\prime}(p-g)^{1/2}-e_{i}\|<\varepsilon.\] We have \(\|a_{j}e_{i}-a_{j}^{\prime}e_{i}^{\prime}\|\leq\|a_{j}e_{i}-a_{j}(p-g)^{1/2}e_{i}^{\prime}(p-g)^{1/2}\|\) \(+\|a_{j}(p-g)^{1/2}e_{i}^{\prime}(p-g)^{1/2}-a_{j}(p-g)^{1/2}(p-g)^{1/2}e_{i}^{\prime}\|\) \(+\|a_{j}(p-g)^{1/2}(p-g)^{1/2}e_{i}^{\prime}-(p-g)^{1/2}a_{j}(p-g)^{1/2}e_{i}^{\prime}\|\) \(+\|(p-g)^{1/2}a_{j}(p-g)^{1/2}e_{i}^{\prime}-a_{j}^{\prime}e_{i}^{\prime}\|\leq 4\varepsilon\). With the same argument, one has \[\|e_{i}a_{j}-e_{i}^{\prime}a_{j}^{\prime}\|<4\varepsilon\] for \(i=1,\cdots,n\), \(j=1,\cdots,k\). Since \(\|e_{i}^{\prime}a_{j}^{\prime}-a_{j}^{\prime}e_{i}^{\prime}\|<\delta^{\prime\prime}/2\), one has \(\|a_{j}e_{i}-e_{i}a_{j}\|\) \(\leq\|a_{j}e_{i}-a_{j}^{\prime}e_{i}^{\prime}\|+\|a_{j}^{\prime}e_{i}^{\prime}-e_{i}^{\prime}a_{j}^{\prime}\|+\|e_{i}^{\prime}a_{j}^{\prime}-e_{i}a_{j}\|\) \(<4\varepsilon+4\varepsilon+\delta^{\prime\prime}/2<9\varepsilon\) for \(i=1,\cdots,n\), \(j=1,\cdots,k\). 
Since \(\|a_{j}e_{i}-a_{j}^{\prime}e_{i}^{\prime}\|<4\varepsilon\), for any \(\tau\in T(A)\) one has \[|\tau(a_{j}e_{i})-\tau(a_{j}^{\prime}e_{i}^{\prime})|<4\varepsilon.\] Since \(\|a_{j}-a_{j}^{\prime}-(1_{A}-(p-g))^{1/2}a_{j}(1_{A}-(p-g))^{1/2}\|<2\varepsilon\), we have \[|\tau(a_{j})-\tau(a_{j}^{\prime})-\tau((1_{A}-(p-g))^{1/2}a_{j}(1_{A}-(p-g))^{1/2})|<2\varepsilon.\] Therefore, we have \(|\tau(a_{j}e_{i})-\frac{1}{n}\tau(a_{j})|\) \(\leq|\tau(a_{j}e_{i})-\tau(a_{j}^{\prime}e_{i}^{\prime})|+|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|+\frac{1}{n}|\tau(a_{j}^{\prime})-\tau(a_{j})|\) \(\leq 4\varepsilon+|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|+\frac{1}{n}(2\varepsilon+\tau((1_{A}-(p-g))^{1/2}a_{j}(1_{A}-(p-g))^{1/2}))\) \(\leq 4\varepsilon+|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|+\frac{1}{n}(2\varepsilon+\tau(1_{A}-(p-g)))\) \(\leq 5\varepsilon+|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|\). Therefore, one has \(\sup_{\tau\in T(A)}|\tau(a_{j}e_{i})-\frac{1}{n}\tau(a_{j})|\) \(\leq\sup_{\tau\in T(A)}|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|+5\varepsilon\) \(\leq\frac{1}{\tau(p)}\sup_{\tau\in T(B)}|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|+5\varepsilon\) \(\leq\frac{1}{1-\delta}\sup_{\tau\in T(B)}|\tau(a_{j}^{\prime}e_{i}^{\prime})-\frac{1}{n}\tau(a_{j}^{\prime})|+5\varepsilon\) \(<5\varepsilon+\frac{\delta^{\prime\prime}}{1-\delta}<6\varepsilon.\) By Theorem 2.8 (2), \(A\) has uniform property \(\Gamma\).

The following two corollaries were obtained by Fan and Zhang in [19].

**Corollary 3.2**.: _([19]) Let \(A\) be an infinite-dimensional stably finite unital simple \(\mathrm{C}^{*}\)-algebra. Let \(B\subseteq A\) be a centrally large subalgebra of \(A\) such that \(B\) has uniform property \(\Gamma\). Then \(A\) has uniform property \(\Gamma\)._

**Corollary 3.3**.: _([19]) Let \(\Omega\) be a class of stably finite unital \(\mathrm{C}^{*}\)-algebras such that for any \(B\in\Omega\), \(B\) has uniform property \(\Gamma\). Then \(A\) has uniform property \(\Gamma\) for any simple unital \(\mathrm{C}^{*}\)-algebra \(A\in\mathrm{TA}\Omega\)._

**Theorem 3.4**.: _Let \(\Omega\) be a class of unital nuclear \(\mathrm{C}^{*}\)-algebras which have the new type of tracial nuclear dimension at most \(n\) (in the sense of Definition 2.7). Then \(A\) has the new type of tracial nuclear dimension at most \(n\) for any simple unital \(\mathrm{C}^{*}\)-algebra \(A\in\mathrm{WTA}\Omega\)._

Proof.: We must show that for any finite positive subset \(\mathcal{F}=\{a_{1},a_{2},\cdots,a_{k}\}\subseteq A\), any \(\varepsilon>0\), and any nonzero positive element \(b\in A\), there exist a finite-dimensional \(\mathrm{C}^{*}\)-algebra \(F=F_{0}\oplus\cdots\oplus F_{n}\) and completely positive maps \(\psi:A\to F\), \(\varphi:F\to A\) such that \((1)\) for any \(x\in\mathcal{F}\), there exists \(\overline{x}\in A_{+}\) such that \(\|x-\overline{x}-\varphi\psi(x)\|<\varepsilon\), \((2)\) \((1_{A}-\varphi\psi(1_{A})-\varepsilon)_{+}\precsim b\), \((3)\) \(\|\psi\|\leq 1\), and \((4)\) \(\varphi|_{F_{i}}\) is a completely positive contractive order zero map for \(i=0,1,\cdots,n\). By Lemma 2.3 of [34], there exist positive elements \(b_{1},b_{2}\in A\) of norm one such that \(b_{1}b_{2}=0\), \(b_{1}\sim b_{2}\), and \(b_{1}+b_{2}\precsim b\). 
Given \(\varepsilon^{\prime}>0\), for \(H=\mathcal{F}\cup\{b_{1},b_{2}\}\), since \(A\in\mathrm{WTA}\Omega\), there exist a projection \(p\in A\), an element \(g\in A\) with \(\|g\|\leq 1\), and a unital \(\mathrm{C}^{*}\)-subalgebra \(B\) of \(A\) with \(g\in B\), \(1_{B}=p\), and \(B\in\Omega\) such that \((1)^{\prime}\) \((p-g)x\in_{\varepsilon^{\prime}}B,\ x(p-g)\in_{\varepsilon^{\prime}}B\) for any \(x\in H\), \((2)^{\prime}\) \(\|(p-g)x-x(p-g)\|<\varepsilon^{\prime}\) for any \(x\in H\), \((3)^{\prime}\) \(1_{A}-(p-g)\precsim b_{1}\sim b_{2}\), and \((4)^{\prime}\) \(\|(p-g)b_{2}(p-g)\|\geq 1-\varepsilon^{\prime}\). By \((2)^{\prime}\) and Lemma 2.5.11 of [34], with sufficiently small \(\varepsilon^{\prime}\), we can get \((5)^{\prime}\) \(\|(p-g)^{\frac{1}{2}}x-x(p-g)^{\frac{1}{2}}\|<\varepsilon\) for any \(x\in H\), and \((6)^{\prime}\) \(\|(1_{A}-(p-g))^{\frac{1}{2}}x-x(1_{A}-(p-g))^{\frac{1}{2}}\|<\varepsilon\) for any \(x\in H\). By \((1)^{\prime}\) and \((5)^{\prime}\), with sufficiently small \(\varepsilon^{\prime}\), there exist positive elements \(a_{1}^{\prime},\cdots,a_{k}^{\prime}\in B\) and a positive element \(b_{2}^{\prime}\in B\) such that \(\|(p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}}-a_{i}^{\prime}\|<\varepsilon\) for \(1\leq i\leq k\), and \(\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-b_{2}^{\prime}\|<\varepsilon\). Therefore, one has \(\|a_{i}-a_{i}^{\prime}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}\|\) \(\leq\|a_{i}-(p-g)a_{i}-(1_{A}-(p-g))a_{i}\|+\|(p-g)a_{i}-a_{i}^{\prime}\|\) \(+\|(p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}}-(p-g)a_{i}\|\) \(+\|(1_{A}-(p-g))a_{i}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}\|\) \(<\varepsilon+\varepsilon+\varepsilon+\varepsilon=4\varepsilon\) for \(1\leq i\leq k\). Since \(\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-b_{2}^{\prime}\|<\varepsilon\), by (1) of Theorem 2.1, we have \[(b_{2}^{\prime}-3\varepsilon)_{+}\precsim((p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-2\varepsilon)_{+}. \tag{3.4.1}\] By \((4)^{\prime}\), one has \[\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}\|\geq\|(p-g)b_{2}(p-g)\|\geq 1-\varepsilon.\] Therefore, we have \(\|(b_{2}^{\prime}-3\varepsilon)_{+}\|\geq\|(p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}\|-4\varepsilon\geq 1-5\varepsilon\); then, with \(0<\varepsilon^{\prime}<\varepsilon<\frac{1}{5}\), \((b_{2}^{\prime}-3\varepsilon)_{+}\neq 0\). Define a completely positive contractive map \(\varphi^{\prime\prime}:A\to A\) by \(\varphi^{\prime\prime}(a)=(1_{A}-(p-g))^{\frac{1}{2}}a(1_{A}-(p-g))^{\frac{1}{2}}\). Since \(B\) is a nuclear \(\mathrm{C}^{*}\)-algebra, by Theorem 2.3.13 of [34], there exists a contractive completely positive map \(\psi^{\prime\prime}:A\to B\) such that \(\|\psi^{\prime\prime}(p-g)-(p-g)\|<\varepsilon\) and \(\|\psi^{\prime\prime}(a_{i}^{\prime})-a_{i}^{\prime}\|<\varepsilon\) for all \(1\leq i\leq k\). 
Since \(B\in\Omega\), \(B\) has the new type of tracial nuclear dimension at most \(n\), so there exist a finite-dimensional \(\mathrm{C}^{*}\)-algebra \(F=F_{0}\oplus\cdots\oplus F_{n}\) and completely positive maps \(\psi^{\prime}:B\to F\), \(\varphi^{\prime}:F\to B\) such that \((1)^{\prime\prime}\) for any \(a_{i}^{\prime}\) (\(1\leq i\leq k\)), there exists \(\overline{a_{i}^{\prime}}\in B_{+}\) such that \(\|a_{i}^{\prime}-\overline{a_{i}^{\prime}}-\varphi^{\prime}\psi^{\prime}(a_{i}^{\prime})\|<\varepsilon\), and for \(g\in B_{+}\), there exists \(\overline{g}\in B_{+}\) such that \(\|g-\overline{g}-\varphi^{\prime}\psi^{\prime}(g)\|<\varepsilon\), (3.4.2) \((2)^{\prime\prime}\) \((p-\varphi^{\prime}\psi^{\prime}(p)-\varepsilon)_{+}\precsim(b_{2}^{\prime}-3\varepsilon)_{+}\), \((3)^{\prime\prime}\) \(\|\psi^{\prime}\|\leq 1\), and \((4)^{\prime\prime}\) \(\varphi^{\prime}|_{F_{i}}\) is a completely positive contractive order zero map for \(i=0,1,\cdots,n\). Define \(\varphi:F\to A\) by \(\varphi(a)=\varphi^{\prime}(a)\), \(\psi:A\to F\) by \(\psi(a)=\psi^{\prime}\psi^{\prime\prime}((p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}})\), and \(\overline{a_{i}}=\varphi^{\prime\prime}(a_{i})+\overline{a_{i}^{\prime}}\in A_{+}\) for \(1\leq i\leq k\). Then one has \((1_{A}-\varphi\psi(1_{A})-4\varepsilon)_{+}\) \(=(1_{A}-\varphi^{\prime}\psi^{\prime}\psi^{\prime\prime}(p-g)-4\varepsilon)_{+}\) \(\precsim 1_{A}-(\varphi^{\prime}\psi^{\prime}(p)-2\varepsilon)_{+}+\varphi^{\prime}\psi^{\prime}(g)\) \(\precsim(1_{A}-p)+(p-\varphi^{\prime}\psi^{\prime}(p)-\varepsilon)_{+}+\varphi^{\prime}\psi^{\prime}(g)\) \(\precsim(1_{A}-p)+(p-\varphi^{\prime}\psi^{\prime}(p)-\varepsilon)_{+}+\varphi^{\prime}\psi^{\prime}(g)+(\overline{g}-\varepsilon)_{+}\) \(\precsim(1_{A}-p)+g+(p-\varphi^{\prime}\psi^{\prime}(p)-\varepsilon)_{+}\) (by (3.4.2)) \(\precsim b_{1}\oplus(b_{2}^{\prime}-3\varepsilon)_{+}\) (by \((3)^{\prime}\) and \((2)^{\prime\prime}\)) \(\precsim b_{1}\oplus((p-g)^{\frac{1}{2}}b_{2}(p-g)^{\frac{1}{2}}-2\varepsilon)_{+}\) (by (3.4.1)) \(\precsim b_{1}+b_{2}\precsim b\). One also has \(\|a_{i}-\overline{a_{i}}-\varphi\psi(a_{i})\|=\|a_{i}-\varphi^{\prime\prime}(a_{i})-\overline{a_{i}^{\prime}}-\varphi^{\prime}\psi^{\prime}\psi^{\prime\prime}((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|\) \(=\|a_{i}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}-\overline{a_{i}^{\prime}}-\varphi^{\prime}\psi^{\prime}\psi^{\prime\prime}((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|\) \(\leq\|a_{i}-a_{i}^{\prime}-(1_{A}-(p-g))^{\frac{1}{2}}a_{i}(1_{A}-(p-g))^{\frac{1}{2}}\|+\|a_{i}^{\prime}-\overline{a_{i}^{\prime}}-\varphi^{\prime}\psi^{\prime}\psi^{\prime\prime}((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|\) \(\leq 3\varepsilon+\|a_{i}^{\prime}-\overline{a_{i}^{\prime}}-\varphi^{\prime}\psi^{\prime}(a_{i}^{\prime})\|\) \(+\|\varphi^{\prime}\psi^{\prime}(a_{i}^{\prime})-\varphi^{\prime}\psi^{\prime}\psi^{\prime\prime}(a_{i}^{\prime})\|+\|\varphi^{\prime}\psi^{\prime}\psi^{\prime\prime}(a_{i}^{\prime})-\varphi^{\prime}\psi^{\prime}\psi^{\prime\prime}((p-g)^{\frac{1}{2}}a_{i}(p-g)^{\frac{1}{2}})\|\) \(<3\varepsilon+2\varepsilon+2\varepsilon+2\varepsilon=9\varepsilon\). Since \(\varphi^{\prime\prime},\varphi^{\prime},\psi^{\prime},\psi^{\prime\prime}\) are completely positive contractive maps, \(\varphi\) and \(\psi\) are completely positive maps. 
By \((4)^{\prime\prime}\), \(\varphi^{\prime}|_{F_{i}}\) is a completely positive contractive order zero map for \(i=0,1,\cdots,n\), and \(\varphi(a)=\varphi^{\prime}(a)\), so \(\varphi|_{F_{i}}\) is a completely positive contractive order zero map for \(i=0,1,\cdots,n\). For any \(x\in A\), \(\|\psi(x)\|=\|\psi^{\prime}\psi^{\prime\prime}((p-g)^{\frac{1}{2}}x(p-g)^{\frac{1}{2}})\|\leq\|\psi^{\prime}\|\|\psi^{\prime\prime}\|\|x\|\), so \(\|\psi\|\leq\|\psi^{\prime}\|\|\psi^{\prime\prime}\|\leq 1\). Therefore, \(A\) has the new type of tracial nuclear dimension at most \(n\).

**Corollary 3.5**.: _Let \(A\) be a unital simple \(\mathrm{C}^{*}\)-algebra. Let \(B\subseteq A\) be a centrally large nuclear subalgebra of \(A\) such that \(B\) has the new type of tracial nuclear dimension at most \(n\). Then \(A\) has the new type of tracial nuclear dimension at most \(n\)._

**Corollary 3.6**.: _Let \(\Omega\) be a class of unital nuclear \(\mathrm{C}^{*}\)-algebras such that for any \(B\in\Omega\), \(B\) has the new type of tracial nuclear dimension at most \(n\). Then \(A\) has the new type of tracial nuclear dimension at most \(n\) for any simple unital \(\mathrm{C}^{*}\)-algebra \(A\in\mathrm{TA}\Omega\)._

**Theorem 3.7**.: _Let \(\Omega\) be a class of unital \(\mathrm{C}^{*}\)-algebras such that \(B\) is weakly \((m,n)\)-divisible (with \(n\neq m\)) (see Definition 2.2) for any \(B\in\Omega\). Let \(A\in\mathrm{WTA}\Omega\) be a simple unital stably finite \(\mathrm{C}^{*}\)-algebra such that for any integer \(k\in\mathbb{N}\) the \(\mathrm{C}^{*}\)-algebra \(\mathrm{M}_{k}(A)\) belongs to the class \(\mathrm{WTA}\Omega\). Then \(A\) is secondly weakly \((m,n)\)-divisible (with \(n\neq m\)) (see Definition 2.3)._

Proof.: Given \(a\in\mathrm{M}_{\infty}(A)_{+}\) and \(\varepsilon>0\), we may assume that \(a\in A_{+}\) and \(\|a\|=1\) (we have replaced the matrix algebra over \(A\) containing \(a\) given initially by \(A\) itself). We must show that there are \(x_{1},x_{2},\cdots,x_{n}\in\mathrm{M}_{\infty}(A)_{+}\) such that \(m\langle x_{j}\rangle\leq\langle a\rangle+\langle a\rangle\) for all \(j=1,2,\cdots,n\), and \(\langle(a-\varepsilon)_{+}\rangle\leq\langle x_{1}\rangle+\langle x_{2}\rangle+\cdots+\langle x_{n}\rangle\). For any \(\delta_{1}>0\), since \(A\in\mathrm{WTA}\Omega\), there exist a projection \(p\in A\), an element \(g\in A\) with \(0\leq g\leq 1\), and a \(\mathrm{C}^{*}\)-subalgebra \(B\) of \(A\) with \(g\in B\), \(1_{B}=p\), and \(B\in\Omega\) such that (1) \((p-g)a\in_{\delta_{1}}B\), and (2) \(\|(p-g)a-a(p-g)\|<\delta_{1}\). By (2), with sufficiently small \(\delta_{1}\), by Lemma 2.5.11 (1) of [34], we have (3) \(\|(p-g)^{\frac{1}{2}}a-a(p-g)^{\frac{1}{2}}\|<\varepsilon/3\), and (4) \(\|(1-(p-g))^{\frac{1}{2}}a-a(1-(p-g))^{\frac{1}{2}}\|<\varepsilon/3\). By (1) and (2), with sufficiently small \(\delta_{1}\), there exists a positive element \(a^{{}^{\prime}}\in B\) such that (5) \(\|(p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}}-a^{{}^{\prime}}\|<\varepsilon/3\). By (3), (4) and (5), \(\|a-a^{{}^{\prime}}-(1-(p-g))^{\frac{1}{2}}a(1-(p-g))^{\frac{1}{2}}\|\) \(\leq\|a-(p-g)a-(1-(p-g))a\|+\|(p-g)a-(p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}}\|\) \(+\|(1-(p-g))a-(1-(p-g))^{\frac{1}{2}}a(1-(p-g))^{\frac{1}{2}}\|\) \(+\|(p-g)^{\frac{1}{2}}a(p-g)^{\frac{1}{2}}-a^{{}^{\prime}}\|\) \(<\varepsilon/3+\varepsilon/3+\varepsilon/3=\varepsilon\). 
Since \(B\) is weakly \((m,n)\)-divisible and \((a^{{}^{\prime}}-2\varepsilon)_{+}\in B_{+}\), there exist \(x_{1}^{{}^{\prime}},x_{2}^{{}^{\prime}},\cdots,x_{n}^{{}^{\prime}}\in B\) such that \(\langle x_{j}^{{}^{\prime}}\rangle+\langle x_{j}^{{}^{\prime}}\rangle+\cdots+\langle x_{j}^{{}^{\prime}}\rangle\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{{}^{\prime}}\rangle\) repeats \(m\) times (for each \(j\)), and \(\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\rangle\). Since \(B\) is weakly \((m,n)\)-divisible and \((a^{{}^{\prime}}-\varepsilon)_{+}\in B_{+}\), there exist \(y_{1}^{{}^{\prime}},y_{2}^{{}^{\prime}},\cdots,y_{n}^{{}^{\prime}}\in B\) such that \(\langle y_{j}^{{}^{\prime}}\rangle+\langle y_{j}^{{}^{\prime}}\rangle+\cdots+\langle y_{j}^{{}^{\prime}}\rangle\leq\langle(a^{{}^{\prime}}-\varepsilon)_{+}\rangle\), where \(\langle y_{j}^{{}^{\prime}}\rangle\) repeats \(m\) times, and \(\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle y_{i}^{{}^{\prime}}\rangle\). Write \(a^{{}^{\prime\prime}}=(1-(p-g))^{\frac{1}{2}}a(1-(p-g))^{\frac{1}{2}}\). We divide the proof into two cases.

**Case (1)** We assume that \((a^{{}^{\prime}}-2\varepsilon)_{+}\) is Cuntz equivalent to a projection.

**(1.1)** We assume that \((a^{{}^{\prime}}-3\varepsilon)_{+}\) is Cuntz equivalent to a projection.

**(1.1.1)** We assume that \((a^{{}^{\prime}}-2\varepsilon)_{+}\) is Cuntz equivalent to \((a^{{}^{\prime}}-3\varepsilon)_{+}\).

**(1.1.1.1)** If \(x_{1}^{{}^{\prime}},x_{2}^{{}^{\prime}},\cdots,x_{n}^{{}^{\prime}}\in B\) are all Cuntz equivalent to projections and \(\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\rangle\), then, by Theorem 2.1 (2), there exist some integer \(j\) and a nonzero projection \(d\) such that \(\langle x_{j}^{{}^{\prime}}\oplus d\rangle+\langle x_{j}^{{}^{\prime}}\oplus d\rangle+\cdots+\langle x_{j}^{{}^{\prime}}\oplus d\rangle\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{{}^{\prime}}\oplus d\rangle\) repeats \(m\) times; otherwise, this contradicts the stable finiteness of \(A\) (since \(m\neq n\) and the \(\mathrm{C}^{*}\)-algebra \(A\) is stably finite). For any \(\delta_{2}>0\), since \(A\in\operatorname{WTA}\Omega\), there exist a projection \(p_{1}\in A\), an element \(g_{1}\in A\) with \(0\leq g_{1}\leq 1\), and a \(\mathrm{C}^{*}\)-subalgebra \(D_{1}\) of \(A\) with \(g_{1}\in D_{1}\), \(1_{D_{1}}=p_{1}\), and \(D_{1}\in\Omega\) such that (1\({}^{{}^{\prime}}\)) \((p_{1}-g_{1})a^{{}^{\prime\prime}}\in_{\delta_{2}}D_{1}\), (2\({}^{{}^{\prime}}\)) \(\|(p_{1}-g_{1})a^{{}^{\prime\prime}}-a^{{}^{\prime\prime}}(p_{1}-g_{1})\|<\delta_{2}\), and (3\({}^{{}^{\prime}}\)) \(1-(p_{1}-g_{1})\precsim d\). 
By (1\({}^{{}^{\prime}}\)) and (2\({}^{{}^{\prime}}\)), with sufficiently small \(\delta_{2}\), as above, via the analogues of (3), (4), and (5) for \(a^{{}^{\prime\prime}}\), \(p_{1}\), and \(g_{1}\), there exists a positive element \(a^{{}^{\prime\prime\prime}}\in D_{1}\) such that \(\|(p_{1}-g_{1})^{\frac{1}{2}}a^{{}^{\prime\prime}}(p_{1}-g_{1})^{\frac{1}{2}}-a^{{}^{\prime\prime\prime}}\|<\varepsilon/3\) and \(\|a^{{}^{\prime\prime}}-a^{{}^{\prime\prime\prime}}-(1-(p_{1}-g_{1}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{1}-g_{1}))^{\frac{1}{2}}\|<\varepsilon\). Since \(D_{1}\) is weakly \((m,n)\)-divisible and \((a^{{}^{\prime\prime\prime}}-2\varepsilon)_{+}\in D_{1}\), there exist positive elements \(x_{1}^{{}^{\prime\prime}},x_{2}^{{}^{\prime\prime}},\cdots,x_{n}^{{}^{\prime\prime}}\in D_{1}\) such that \(\langle x_{j}^{{}^{\prime\prime}}\rangle+\langle x_{j}^{{}^{\prime\prime}}\rangle+\cdots+\langle x_{j}^{{}^{\prime\prime}}\rangle\leq\langle(a^{{}^{\prime\prime\prime}}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{{}^{\prime\prime}}\rangle\) repeats \(m\) times, and \(\langle(a^{{}^{\prime\prime\prime}}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime\prime}}\rangle\). Since \(a^{{}^{\prime}}\leq a^{{}^{\prime}}+a^{{}^{\prime\prime}}\), we have \(\langle(a^{{}^{\prime}}-\varepsilon)_{+}\rangle\leq\langle(a^{{}^{\prime}}+a^{{}^{\prime\prime}}-\varepsilon)_{+}\rangle\), and since \(\|a-a^{{}^{\prime}}-a^{{}^{\prime\prime}}\|<\varepsilon\), one has \(\langle(a^{{}^{\prime}}+a^{{}^{\prime\prime}}-\varepsilon)_{+}\rangle\leq\langle a\rangle\). Therefore, one has \[\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle\leq\langle(a^{{}^{\prime}}-\varepsilon)_{+}\rangle\leq\langle a\rangle.\] Write \(x=(1-(p_{1}-g_{1}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{1}-g_{1}))^{\frac{1}{2}}\). Since \(a^{{}^{\prime\prime\prime}}\leq a^{{}^{\prime\prime\prime}}+x\), we have \(\langle(a^{{}^{\prime\prime\prime}}-2\varepsilon)_{+}\rangle\leq\langle(a^{{}^{\prime\prime\prime}}+x-\varepsilon)_{+}\rangle\), and \(\|a^{{}^{\prime\prime}}-a^{{}^{\prime\prime\prime}}-x\|<\varepsilon\), which implies that \[\langle(a^{{}^{\prime\prime\prime}}-2\varepsilon)_{+}\rangle\leq\langle a^{{}^{\prime\prime}}\rangle\leq\langle a\rangle.\] Therefore, we have \(\langle(x_{j}^{{}^{\prime}}\oplus d)\oplus x_{j}^{{}^{\prime\prime}}\rangle+\langle(x_{j}^{{}^{\prime}}\oplus d)\oplus x_{j}^{{}^{\prime\prime}}\rangle+\cdots+\langle(x_{j}^{{}^{\prime}}\oplus d)\oplus x_{j}^{{}^{\prime\prime}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle+\langle(a^{{}^{\prime\prime\prime}}-2\varepsilon)_{+}\rangle\leq\langle a\rangle+\langle a\rangle\), where \(\langle(x_{j}^{{}^{\prime}}\oplus d)\oplus x_{j}^{{}^{\prime\prime}}\rangle\) repeats \(m\) times, and \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{{}^{\prime\prime}}\rangle+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{{}^{\prime\prime}}\rangle+\cdots+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{{}^{\prime\prime}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle+\langle(a^{{}^{\prime\prime\prime}}-2\varepsilon)_{+}\rangle\leq\langle a\rangle+\langle a\rangle\), where \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{{}^{\prime\prime}}\rangle\) repeats \(m\) times for \(1\leq i\leq n\) and \(i\neq j\). 
We also have \(\langle(a-10\varepsilon)_{+}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{{}^{\prime\prime\prime}}-3\varepsilon)_{+}\rangle+\langle(1-(p_{1}-g_{1}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{1}-g_{1}))^{\frac{1}{2}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{{}^{\prime\prime\prime}}-3\varepsilon)_{+}\rangle+\langle 1-(p_{1}-g_{1})\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{{}^{\prime\prime\prime}}-3\varepsilon)_{+}\rangle+\langle d\rangle\) \(\leq\sum_{i=1,i\neq j}^{n}\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{{}^{\prime\prime}}\rangle+\langle(x_{j}^{{}^{\prime}}\oplus d)\oplus x_{j}^{{}^{\prime\prime}}\rangle\). These are the desired inequalities, with \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{{}^{\prime\prime}}\rangle\) (and \(\langle(x_{j}^{{}^{\prime}}\oplus d)\oplus x_{j}^{{}^{\prime\prime}}\rangle\) for \(i=j\)) in place of \(\langle x_{i}\rangle\), and \(10\varepsilon\) in place of \(\varepsilon\).

**(1.1.1.2)** If \(x_{1}^{{}^{\prime}},x_{2}^{{}^{\prime}},\cdots,x_{n}^{{}^{\prime}}\in B\) are projections and \(\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle<\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\rangle\), then, by Theorem 2.1 (2), there exists a nonzero projection \(e\) such that \(\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle e\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\rangle\). As in part (1.1.1.1), since \(A\in\operatorname{WTA}\Omega\), there exist a projection \(p_{2}\in A\), an element \(g_{2}\in A\) with \(0\leq g_{2}\leq 1\), and a \(\operatorname{C}^{*}\)-subalgebra \(D_{2}\) of \(A\) with \(g_{2}\in D_{2}\), \(1_{D_{2}}=p_{2}\), and \(D_{2}\in\Omega\); by \((1)^{\prime}\), there exists a positive element \(a^{4}\in D_{2}\) such that \(1-(p_{2}-g_{2})\precsim e\), \(\|(p_{2}-g_{2})^{\frac{1}{2}}a^{{}^{\prime\prime}}(p_{2}-g_{2})^{\frac{1}{2}}-a^{4}\|<\varepsilon/3\), and \(\|a^{{}^{\prime\prime}}-a^{4}-(1-(p_{2}-g_{2}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{2}-g_{2}))^{\frac{1}{2}}\|<\varepsilon\). Also as in part (1.1.1.1), we have \(\langle(a^{4}-2\varepsilon)_{+}\rangle\leq\langle a\rangle\). Since \(D_{2}\) is weakly \((m,n)\)-divisible and \((a^{4}-2\varepsilon)_{+}\in D_{2}\), there exist \(x_{1}^{4},x_{2}^{4},\cdots,x_{n}^{4}\in D_{2}\) such that \(\langle x_{j}^{4}\rangle+\langle x_{j}^{4}\rangle+\cdots+\langle x_{j}^{4}\rangle\leq\langle(a^{4}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{4}\rangle\) repeats \(m\) times, and \(\langle(a^{4}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{4}\rangle\). Therefore, we have \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{4}\rangle+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{4}\rangle+\cdots+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{4}\rangle\) \(\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle+\langle(a^{4}-2\varepsilon)_{+}\rangle\leq\langle a\rangle+\langle a\rangle\), where \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{4}\rangle\) repeats \(m\) times for \(1\leq i\leq n\). 
We also have \(\langle(a-10\varepsilon)_{+}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{4}-3\varepsilon)_{+}\rangle+\langle(1-(p_{2}-g_{2}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{2}-g_{2}))^{\frac{1}{2}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{4}-3\varepsilon)_{+}\rangle+\langle 1-(p_{2}-g_{2})\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{4}-3\varepsilon)_{+}\rangle+\langle e\rangle\) \(\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{4}\rangle\). These are the desired inequalities, with \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{4}\rangle\) in place of \(\langle x_{i}\rangle\), and \(10\varepsilon\) in place of \(\varepsilon\).

**(1.1.1.3)** We assume that one of \(x_{1}^{{}^{\prime}},\cdots,x_{n}^{{}^{\prime}}\) is purely positive, say \(x_{1}^{{}^{\prime}}\). As \(\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\rangle\), for any \(\varepsilon>0\) there exists \(\delta>0\) such that \(\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle\leq\langle(x_{1}^{{}^{\prime}}-\delta)_{+}\rangle+\sum_{i=2}^{n}\langle x_{i}^{{}^{\prime}}\rangle\). Since \(x_{1}^{{}^{\prime}}\) is a purely positive element, by Theorem 2.1 (3), there exists a nonzero positive element \(s\) such that \(\langle(x_{1}^{{}^{\prime}}-\delta)_{+}\rangle+\langle s\rangle\leq\langle x_{1}^{{}^{\prime}}\rangle\). As in part (1.1.1.1), since \(A\in\mathrm{WTA}\Omega\), there exist a projection \(p_{3}\in A\), an element \(g_{3}\in A\) with \(0\leq g_{3}\leq 1\), and a \(\mathrm{C}^{*}\)-subalgebra \(D_{3}\) of \(A\) with \(g_{3}\in D_{3}\), \(1_{D_{3}}=p_{3}\), and \(D_{3}\in\Omega\); by \((1)^{\prime}\), there exists a positive element \(a^{5}\in D_{3}\) such that \(1-(p_{3}-g_{3})\precsim s\), \(\|(p_{3}-g_{3})^{\frac{1}{2}}a^{{}^{\prime\prime}}(p_{3}-g_{3})^{\frac{1}{2}}-a^{5}\|<\varepsilon/3\), and \(\|a^{{}^{\prime\prime}}-a^{5}-(1-(p_{3}-g_{3}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{3}-g_{3}))^{\frac{1}{2}}\|<\varepsilon\). Also as in part (1.1.1.1), we have \(\langle(a^{5}-2\varepsilon)_{+}\rangle\leq\langle a\rangle\). Since \(D_{3}\) is weakly \((m,n)\)-divisible and \((a^{5}-2\varepsilon)_{+}\in D_{3}\), there exist \(x_{1}^{5},x_{2}^{5},\cdots,x_{n}^{5}\in D_{3}\) such that \(\langle x_{j}^{5}\rangle+\langle x_{j}^{5}\rangle+\cdots+\langle x_{j}^{5}\rangle\leq\langle(a^{5}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{5}\rangle\) repeats \(m\) times, and \(\langle(a^{5}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{5}\rangle\). Therefore, we have \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{5}\rangle+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{5}\rangle+\cdots+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{5}\rangle\) \(\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle+\langle(a^{5}-2\varepsilon)_{+}\rangle\leq\langle a\rangle+\langle a\rangle\), where \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{5}\rangle\) repeats \(m\) times for \(1\leq i\leq n\). 
We also have \(\langle(a-10\varepsilon)_{+}\rangle\) \(\leq\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle+\langle(a^{5}-3\varepsilon)_{+}\rangle+\langle(1-(p_{3}-g_{3}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{3}-g_{3}))^{\frac{1}{2}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle+\langle(a^{5}-3\varepsilon)_{+}\rangle+\langle 1-(p_{3}-g_{3})\rangle\) \(\leq\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle+\langle(a^{5}-3\varepsilon)_{+}\rangle+\langle s\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{5}\rangle\). These are the desired inequalities, with \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{5}\rangle\) in place of \(\langle x_{i}\rangle\), and \(10\varepsilon\) in place of \(\varepsilon\).

**(1.1.2)** We assume that there exists a nonzero projection \(r\) such that \(\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle r\rangle\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle\). As in part (1.1.1.1), since \(A\in\mathrm{WTA}\Omega\), there exist a projection \(p_{4}\in A\), an element \(g_{4}\in A\) with \(0\leq g_{4}\leq 1\), and a \(\mathrm{C}^{*}\)-subalgebra \(D_{4}\) of \(A\) with \(g_{4}\in D_{4}\), \(1_{D_{4}}=p_{4}\), and \(D_{4}\in\Omega\); by \((1)^{\prime}\), there exists a positive element \(a^{6}\in D_{4}\) such that \(1-(p_{4}-g_{4})\precsim r\), \(\|(p_{4}-g_{4})^{\frac{1}{2}}a^{{}^{\prime\prime}}(p_{4}-g_{4})^{\frac{1}{2}}-a^{6}\|<\varepsilon/3\), and \(\|a^{{}^{\prime\prime}}-a^{6}-(1-(p_{4}-g_{4}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{4}-g_{4}))^{\frac{1}{2}}\|<\varepsilon\). Also as in part (1.1.1.1), we have \(\langle(a^{6}-2\varepsilon)_{+}\rangle\leq\langle a\rangle\). Since \(D_{4}\) is weakly \((m,n)\)-divisible and \((a^{6}-2\varepsilon)_{+}\in D_{4}\), there exist \(x_{1}^{6},x_{2}^{6},\cdots,x_{n}^{6}\in D_{4}\) such that \(\langle x_{j}^{6}\rangle+\langle x_{j}^{6}\rangle+\cdots+\langle x_{j}^{6}\rangle\leq\langle(a^{6}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{6}\rangle\) repeats \(m\) times, and \(\langle(a^{6}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{6}\rangle\). Therefore, we have \(\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{6}\rangle+\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{6}\rangle+\cdots+\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{6}\rangle\) \(\leq\langle(a^{{}^{\prime}}-\varepsilon)_{+}\rangle+\langle(a^{6}-2\varepsilon)_{+}\rangle\leq\langle a\rangle+\langle a\rangle\), where \(\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{6}\rangle\) repeats \(m\) times for \(1\leq i\leq n\). We also have \(\langle(a-10\varepsilon)_{+}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{6}-4\varepsilon)_{+}\rangle+\langle(1-(p_{4}-g_{4}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{4}-g_{4}))^{\frac{1}{2}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{6}-4\varepsilon)_{+}\rangle+\langle 1-(p_{4}-g_{4})\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{6}-4\varepsilon)_{+}\rangle+\langle r\rangle\leq\sum\limits_{i=1}^{n}\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{6}\rangle\). These are the desired inequalities, with \(\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{6}\rangle\) in place of \(\langle x_{i}\rangle\), and \(10\varepsilon\) in place of \(\varepsilon\).

**(1.2)** If \((a^{{}^{\prime}}-3\varepsilon)_{+}\) is not Cuntz equivalent to a projection, then, by Theorem 2.1 (3), there is a nonzero positive element \(d\) such that \(\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle+\langle d\rangle\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle\). 
As in part (1.1.1.1), since \(A\in\operatorname{WTA}\Omega\), there exist a projection \(p_{5}\in A\), an element \(g_{5}\in A\) with \(0\leq g_{5}\leq 1\), and a \(\operatorname{C}^{*}\)-subalgebra \(D_{5}\) of \(A\) with \(g_{5}\in D_{5}\), \(1_{D_{5}}=p_{5}\), and \(D_{5}\in\Omega\); by \((1)^{\prime}\), there exists a positive element \(a^{7}\in D_{5}\) such that \(1-(p_{5}-g_{5})\precsim d\), \(\|(p_{5}-g_{5})^{\frac{1}{2}}a^{{}^{\prime\prime}}(p_{5}-g_{5})^{\frac{1}{2}}-a^{7}\|<\varepsilon/3\), and \(\|a^{{}^{\prime\prime}}-a^{7}-(1-(p_{5}-g_{5}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{5}-g_{5}))^{\frac{1}{2}}\|<\varepsilon\). Also as in part (1.1.1.1), we have \(\langle(a^{7}-2\varepsilon)_{+}\rangle\leq\langle a\rangle\). Since \(D_{5}\) is weakly \((m,n)\)-divisible and \((a^{7}-2\varepsilon)_{+}\in D_{5}\), there exist \(x_{1}^{7},x_{2}^{7},\cdots,x_{n}^{7}\in D_{5}\) such that \(\langle x_{j}^{7}\rangle+\langle x_{j}^{7}\rangle+\cdots+\langle x_{j}^{7}\rangle\leq\langle(a^{7}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{7}\rangle\) repeats \(m\) times, and \(\langle(a^{7}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{7}\rangle\). Therefore, we have \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{7}\rangle+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{7}\rangle+\cdots+\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{7}\rangle\) \(\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle+\langle(a^{7}-2\varepsilon)_{+}\rangle\leq\langle a\rangle+\langle a\rangle\), where \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{7}\rangle\) repeats \(m\) times for \(1\leq i\leq n\). We also have \(\langle(a-10\varepsilon)_{+}\rangle\) \(\leq\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle+\langle(a^{7}-4\varepsilon)_{+}\rangle+\langle(1-(p_{5}-g_{5}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{5}-g_{5}))^{\frac{1}{2}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle+\langle(a^{7}-4\varepsilon)_{+}\rangle+\langle 1-(p_{5}-g_{5})\rangle\) \(\leq\langle(a^{{}^{\prime}}-4\varepsilon)_{+}\rangle+\langle(a^{7}-4\varepsilon)_{+}\rangle+\langle d\rangle\) \(\leq\sum_{i=1}^{n}\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{7}\rangle\). These are the desired inequalities, with \(\langle x_{i}^{{}^{\prime}}\oplus x_{i}^{7}\rangle\) in place of \(\langle x_{i}\rangle\), and \(10\varepsilon\) in place of \(\varepsilon\).

**Case (2)** If \((a^{{}^{\prime}}-2\varepsilon)_{+}\) is not Cuntz equivalent to a projection, then, by Theorem 2.1 (3), there is a nonzero positive element \(d\) such that \(\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle d\rangle\leq\langle(a^{{}^{\prime}}-2\varepsilon)_{+}\rangle\). As in part (1.1.1.1), since \(A\in\operatorname{WTA}\Omega\), there exist a projection \(p_{6}\in A\), an element \(g_{6}\in A\) with \(0\leq g_{6}\leq 1\), and a \(\operatorname{C}^{*}\)-subalgebra \(D_{6}\) of \(A\) with \(g_{6}\in D_{6}\), \(1_{D_{6}}=p_{6}\), and \(D_{6}\in\Omega\); by \((1)^{\prime}\), there exists a positive element \(a^{8}\in D_{6}\) such that \(1-(p_{6}-g_{6})\precsim d\), \(\|(p_{6}-g_{6})^{\frac{1}{2}}a^{{}^{\prime\prime}}(p_{6}-g_{6})^{\frac{1}{2}}-a^{8}\|<\varepsilon/3\), and \(\|a^{{}^{\prime\prime}}-a^{8}-(1-(p_{6}-g_{6}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{6}-g_{6}))^{\frac{1}{2}}\|<\varepsilon\). Also as in part (1.1.1.1), we have \(\langle(a^{8}-2\varepsilon)_{+}\rangle\leq\langle a\rangle\). 
Since \(D_{6}\) is weakly \((m,n)\)-divisible and \((a^{8}-2\varepsilon)_{+}\in D_{6}\), there exist \(x_{1}^{8},x_{2}^{8},\cdots,x_{n}^{8}\in D_{6}\) such that \(\langle x_{j}^{8}\rangle+\langle x_{j}^{8}\rangle+\cdots+\langle x_{j}^{8}\rangle\leq\langle(a^{8}-2\varepsilon)_{+}\rangle\), where \(\langle x_{j}^{8}\rangle\) repeats \(m\) times, and \(\langle(a^{8}-3\varepsilon)_{+}\rangle\leq\sum_{i=1}^{n}\langle x_{i}^{8}\rangle\). Therefore, we have \(\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{8}\rangle+\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{8}\rangle+\cdots+\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{8}\rangle\) \(\leq\langle(a^{{}^{\prime}}-\varepsilon)_{+}\rangle+\langle(a^{8}-2\varepsilon)_{+}\rangle\leq\langle a\rangle+\langle a\rangle\), where \(\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{8}\rangle\) repeats \(m\) times for \(1\leq i\leq n\). We also have \(\langle(a-10\varepsilon)_{+}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{8}-4\varepsilon)_{+}\rangle+\langle(1-(p_{6}-g_{6}))^{\frac{1}{2}}a^{{}^{\prime\prime}}(1-(p_{6}-g_{6}))^{\frac{1}{2}}\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{8}-4\varepsilon)_{+}\rangle+\langle 1-(p_{6}-g_{6})\rangle\) \(\leq\langle(a^{{}^{\prime}}-3\varepsilon)_{+}\rangle+\langle(a^{8}-4\varepsilon)_{+}\rangle+\langle d\rangle\leq\sum_{i=1}^{n}\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{8}\rangle\). These are the desired inequalities, with \(\langle y_{i}^{{}^{\prime}}\oplus x_{i}^{8}\rangle\) in place of \(\langle x_{i}\rangle\), and \(10\varepsilon\) in place of \(\varepsilon\).

The following corollary was obtained by Fan, Fang, and Zhao in [17].

**Corollary 3.8**.: _Let \(A\) be a unital simple stably finite separable \(\operatorname{C}^{*}\)-algebra. Let \(B\subseteq A\) be a centrally large subalgebra of \(A\) such that \(B\) is weakly \((m,n)\)-divisible. Then \(A\) is secondly weakly \((m,n)\)-divisible._
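For the reader's convenience, the two divisibility notions compared in Theorem 3.7 can be placed side by side exactly as they are used in the proof above; this is only a restatement of the conditions (Definitions 2.2 and 2.3 appear earlier in the paper).

```latex
% Weak (m,n)-divisibility (used for B and D_1, ..., D_6 above):
% for every a in M_infinity(A)_+ and eps > 0 there exist x_1, ..., x_n with
\[
  m\langle x_{j}\rangle \le \langle a\rangle \quad (1 \le j \le n),
  \qquad
  \langle (a-\varepsilon)_{+}\rangle \le \sum_{i=1}^{n}\langle x_{i}\rangle.
\]
% Second weak (m,n)-divisibility (the conclusion obtained for A):
% the same, except that the first inequality is relaxed to
\[
  m\langle x_{j}\rangle \le \langle a\rangle + \langle a\rangle \quad (1 \le j \le n).
\]
```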
2310.20137
Co-evolution and Nuclear Structure in the Dwarf Galaxy POX 52 Studied by Multi-wavelength Data From Radio to X-ray
The nearby dwarf galaxy POX 52 at $z = 0.021$ hosts an active galactic nucleus (AGN) with a black-hole (BH) mass of $M_{\rm BH} \sim 10^{5-6} M_\odot$ and an Eddington ratio of $\sim$ 0.1-1. This object provides the rare opportunity to study both AGN and host-galaxy properties in a low-mass highly accreting system. To do so, we collected its multi-wavelength data from X-ray to radio. First, we construct a spectral energy distribution, and by fitting it with AGN and host-galaxy components, we constrain AGN-disk and dust-torus components. Then, while considering the AGN-disk emission, we decompose optical HST images. As a result, it is found that a classical bulge component is probably present, and its mass ($M_{\rm bulge}$) is consistent with an expected value from a local relation. Lastly, we analyze new quasi-simultaneous X-ray (0.2-30 keV) data obtained by NuSTAR and XMM-Newton. The X-ray spectrum can be reproduced by multi-color blackbody, warm and hot coronae, and disk and torus reflection components. Based on this, the spin is estimated to be $a_{\rm spin} = 0.998_{-0.814}$, which could suggest that most of the current BH mass was achieved by prolonged mass accretion. Given the presence of the bulge, POX 52 would have undergone a galaxy merger, while the $M_{\rm BH}$-$M_{\rm bulge}$ relation and the inferred prolonged accretion could suggest that AGN feedback occurred. Regarding the AGN structure, the spectral slope of the hot corona, its relative strength to the bolometric emission, and the torus structure are found to be consistent with Eddington-ratio dependencies found for nearby AGNs.
Taiki Kawamuro, Claudio Ricci, Satoshi Yamada, Hirofumi Noda, Ruancun Li, Matthew J. Temple, Alessia Tortosa
2023-10-31T03:08:52Z
http://arxiv.org/abs/2310.20137v1
Co-evolution and Nuclear Structure in the Dwarf Galaxy POX 52 Studied by Multi-wavelength Data From Radio to X-ray ###### Abstract The nearby dwarf galaxy POX 52 at \(z=0.021\) hosts an active galactic nucleus (AGN) with a black-hole (BH) mass of \(M_{\rm BH}\sim 10^{5-6}\,M_{\odot}\) and an Eddington ratio of \(\sim 0.1\)-\(1\). This object provides the rare opportunity to study both AGN and host-galaxy properties in a low-mass highly accreting system. To do so, we collected its multi-wavelength data from X-ray to radio. First, we construct a spectral energy distribution, and by fitting it with AGN and host-galaxy components, we constrain AGN-disk and dust-torus components. Then, while considering the AGN-disk emission, we decompose optical _HST_ images. As a result, it is found that a classical bulge component is probably present, and its mass (\(M_{\rm bulge}\)) is consistent with an expected value from a local relation. Lastly, we analyze new quasi-simultaneous X-ray (0.2-30 keV) data obtained by _NuSTAR_ and _XMM-Newton_. The X-ray spectrum can be reproduced by multi-color blackbody, warm and hot coronae, and disk and torus reflection components. Based on this, the spin is estimated to be \(a_{\rm spin}=0.998_{-0.814}\), which could suggest that most of the current BH mass was achieved by prolonged mass accretion. Given the presence of the bulge, POX 52 would have undergone a galaxy merger, while the \(M_{\rm BH}\)-\(M_{\rm bulge}\) relation and the inferred prolonged accretion could suggest that AGN feedback occurred. Regarding the AGN structure, the spectral slope of the hot corona, its relative strength to the bolometric emission, and the torus structure are found to be consistent with Eddington-ratio dependencies found for nearby AGNs. galaxies: active - galaxies: individual (POX 52) - X-rays: galaxies

## 1 Introduction

Supermassive black holes (SMBHs) heavier than a million solar masses are believed to be ubiquitously present at the centers of massive galaxies (\(>10^{9-10}\,M_{\odot}\), where \(M_{\odot}\) is the solar mass) (e.g., Greene, 2012; Miller et al., 2015; Chadayamburi et al., 2023). Various correlations have been found between galaxies and their SMBHs, such as the one between the bulge and SMBH masses (e.g., Magorrian et al., 1998; Gebhardt et al., 2000; Marconi and Hunt, 2003; Gultekin et al., 2009; Kormendy and Ho, 2013), and it has been considered that galaxies and SMBHs have co-evolved. Many studies have been conducted to understand the physical mechanisms responsible for this co-evolution (e.g., Fabian, 2012; King and Pounds, 2015; Harrison et al., 2018, and references therein), especially in massive systems. An often hypothesized scenario is that a galaxy merger ignites star formation (SF) and mass accretion onto an SMBH, and, at some point, the resultant active galactic nucleus (AGN) blows gas out of the system in the form of outflows, preventing further SF and SMBH growth (e.g., Hopkins et al., 2008; Hopkins and Quataert, 2010; Booth and Schaye, 2009). In this scenario, some simulations successfully reproduced the correlations that have actually been observed (e.g., Di Matteo et al., 2005; Croton et al., 2006). 
In addition to the interplay between SF and AGN, it has also been proposed that successive mergers are important, as they may reduce the scatter around correlations with the SMBH mass (Kormendy and Ho, 2013). However, it is not well un
2309.14497
Interaction-Aware Decision-Making for Autonomous Vehicles in Forced Merging Scenario Leveraging Social Psychology Factors
Understanding the intention of vehicles in the surrounding traffic is crucial for an autonomous vehicle to successfully accomplish its driving tasks in complex traffic scenarios such as highway forced merging. In this paper, we consider a behavioral model that incorporates both social behaviors and personal objectives of the interacting drivers. Leveraging this model, we develop a receding-horizon control-based decision-making strategy that estimates online the other drivers' intentions using Bayesian filtering and incorporates predictions of nearby vehicles' behaviors under uncertain intentions. The effectiveness of the proposed decision-making strategy is demonstrated and evaluated based on simulation studies in comparison with a game theoretic controller and a real-world traffic dataset.
Xiao Li, Kaiwen Liu, H. Eric Tseng, Anouck Girard, Ilya Kolmanovsky
2023-09-25T19:49:14Z
http://arxiv.org/abs/2309.14497v1
Interaction-Aware Decision-Making for Autonomous Vehicles in Forced Merging Scenario Leveraging Social Psychology Factors ###### Abstract Understanding the intention of vehicles in the surrounding traffic is crucial for an autonomous vehicle to successfully accomplish its driving tasks in complex traffic scenarios such as highway forced merging. In this paper, we consider a behavioral model that incorporates both social behaviors and personal objectives of the interacting drivers. Leveraging this model, we develop a receding-horizon control-based decision-making strategy that estimates online the other drivers' intentions using Bayesian filtering and incorporates predictions of nearby vehicles' behaviors under uncertain intentions. The effectiveness of the proposed decision-making strategy is demonstrated and evaluated based on simulation studies in comparison with a game theoretic controller and a real-world traffic dataset. ## I Introduction One of the major challenges in autonomous driving lies in ensuring safe interaction with nearby traffic, particularly in highway merging scenarios. Unlike the simple stop-and-go strategy used in urban intersections with stop signs, the on-ramp ego vehicle must cooperate with high-speed vehicles and transition itself into the highway traffic in a secure but also timely manner. Moreover, the varied social behaviors of different drivers can result in diverse responses to the merging intent. In this dynamic interaction, the ego vehicle must actively search for available space or create opportunities for merging. A cooperative driver on the highway may decelerate or change lanes to facilitate the merging process, while a self-interested driver may maintain a constant speed and disregard the merging vehicle. Consequently, understanding the driving intentions of the surrounding vehicles becomes crucial for the ego vehicle to accomplish its task successfully. Learning-based methods have been extensively explored for autonomous driving applications. Without explicit interaction behavior modeling, Reinforcement-Learning (RL) based methods have been utilized to learn end-to-end driving policies [1, 2]. Additionally, researchers have employed imitation learning to train decision-making modules [3, 4] that emulate expert behavior, such as a Model Predictive Controller [3]. A comprehensive survey of RL methods in autonomous driving research can be found in [5]. However, a significant drawback of end-to-end RL-learned policies is the lack of interpretability; furthermore, their ability to generalize may be limited by the interactive behaviors observed in the training data. To address this challenge, researchers have explored the integration of learning-based methods with planning and control techniques. The Inverse Reinforcement Learning (IRL) approach has been employed to learn the reward function of human drivers for planning purposes [6, 7]. Additionally, neural network models, such as the Social Generative Adversarial Network [8], have been implemented in trajectory prediction modules for Model Predictive Control (MPC) [9, 10]. Novel network architectures have also been designed to enhance driving motion forecasting [11, 12]. However, a common issue in these learned modules is their limited generalization capability beyond the training dataset. Game theoretic approaches have also been considered to represent interactions between agents in traffic, such as the Level-k method [13], potential games [14], and Stackelberg games [15, 16]. 
A Leader-Follower Game theoretic Controller (LFGC) has been proposed specifically for modeling pairwise leader-follower interactions in forced merging scenarios in [17, 18]. Inspired by the concepts presented in [17, 18], we adopt a pairwise interaction formulation to model vehicle cooperative behaviors, enabling better scalability for scenarios involving multiple vehicles. Differently from [17, 18], to create a more comprehensive model of human driving, we introduce a novel behavioral model that incorporates various social psychology factors. Early studies in social psychology have revealed that individuals do not always act solely to maximize their own rewards in two-person [19] or \(n\)-person experimental games [20]. Drawing inspiration from the concept of Social Value Orientation (SVO) [20, 21, 22, 23], and its application in the context of autonomous driving [24], we propose a novel behavioral model that encompasses both the drivers' inclination towards social cooperation and their individual objectives as latent parameters. Leveraging this proposed behavioral model, we can estimate the underlying driving intentions of the interacting vehicles and make appropriate decisions for the ego vehicle in forced merging scenarios. The algorithms we propose offer several potential advantages: 1. The proposed behavioral model incorporates aspects of both drivers' social cooperativeness and personal objectives and captures a rich and realistic set of behaviors. 2. The algorithm uses a Bayesian filter to infer the latent driving intent parameters, thereby handling uncertainties in the cooperation intent of interacting vehicles. 3. The derived decision-making module adopts a pairwise interaction formulation and utilizes receding-horizon optimization-based control that leads to good scalability while ensuring safety for forced merging applications. This paper is organized as follows: In Sec. II, we introduce the problem setting and the forced merging scenario. We also outline the assumptions made regarding vehicle kinematics, action space, and driver's action objectives. In Sec. III, we present our behavioral model that incorporates the cooperation intents and personal objectives of the interacting vehicles. In Sec. IV, we present the decision-making module for the ego vehicle that effectively handles uncertainties in the driving intentions of the interacting vehicles. In Sec. V, we demonstrate the ability of our behavioral model to reproduce realistic driving behaviors. Furthermore, we validate the proposed controller through simulations in comparison with the LFGC and real-world dataset evaluations. Finally, the conclusions are given in Sec. VI. ## II Problem Formulation In this paper, we focus on the design of a decision-making module for autonomous driving applications in forced merging scenarios. The decision-making module plans high-level behaviors such as acceleration, deceleration, or lane changing, and generates desired reference trajectories for the autonomous vehicle. Subsequently, a lower-level controller is assumed to be available that can control the steering and acceleration/braking of the vehicle to track the reference trajectory. As illustrated in Fig. 1, the goal is to design a behavior planner for the ego vehicle to merge into the target highway lane while accounting for interactions with multiple highway vehicles to ensure safe and effective merging. 
### _Vehicle Kinematics Model_ We use the following discrete-time model to represent the vehicle kinematics, \[\left[\begin{array}{c}x(t+1)\\ v_{x}(t+1)\\ y(t+1)\end{array}\right]=\left[\begin{array}{c}x(t)+v_{x}(t)\Delta t\\ v_{x}(t)+a(t)\Delta t\\ y(t)+v_{y}(t)\Delta t\end{array}\right]+\tilde{w}(t), \tag{1}\] where \(x\), \(v_{x}\), and \(a\) are the longitudinal position, velocity, and acceleration, respectively; \(y\) and \(v_{y}\) are the lateral position and velocity; \(\Delta t>0\) is the sampling period between discrete time instances \(t\) and \(t+1\); \(\tilde{w}(t)\in\mathbb{R}^{3}\) is a disturbance representing unmodeled dynamics. We assume all the vehicles, including both ego and highway vehicles, follow this dynamics model. For simplicity, Eq. (1) can be rewritten as \[s_{i}(t+1)=f\big{(}s_{i}(t),u_{i}(t)\big{)}+\tilde{w}_{i}(t),\ i=0,1,2,\dots, \tag{2}\] where \(s_{i}(t)=[x(t),v_{x}(t),y(t)]^{T}\) and \(u_{i}(t)=[a(t),v_{y}(t)]^{T}\) represent the state and control of the \(i\)'th vehicle at time instance \(t\), respectively. In the following context, the subscript \(i=0\) designates the ego vehicle, while \(i\in\{1,2,\dots\}\) represents another vehicle with which the ego vehicle interacts. ### _Action Space_ We assume vehicles take actions from the action set \(U\) that comprises: 1. "Maintain": keep the current lateral position and longitudinal speed; 2. "Accelerate": keep the current lateral position and accelerate at \(a\ \mathrm{m/s^{2}}\) without exceeding the upper speed limit \(v_{\text{max}}\ \mathrm{m/s}\); 3. "Decelerate": keep the current lateral position and decelerate at \(-a\ \mathrm{m/s^{2}}\) without falling below the lower speed limit \(v_{\text{min}}\ \mathrm{m/s}\); 4. "Steer to the left": keep the current longitudinal speed and steer to the left adjacent lane with a constant lateral velocity of \(\frac{w_{\text{lane}}}{T_{\text{lane}}}\ \mathrm{m/s}\); 5. "Steer to the right": keep the current longitudinal speed and steer to the right adjacent lane with a constant lateral velocity of \(-\frac{w_{\text{lane}}}{T_{\text{lane}}}\ \mathrm{m/s}\). Note that we assume a complete lane change takes \(T_{\text{lane}}\) sec to move into the adjacent lane with a lateral traveling distance of the lane width \(w_{\text{lane}}\), which is reflected by the above actions. We also note that more acceleration and deceleration levels \(|a|\) can be introduced to the action space, but we only consider one level here for simplicity. ### _Driving Objectives_ The driving objectives of each vehicle are reflected in the reward function that depends on the following four variables: 1. Traffic rules \(c\): The binary variable \(c\in\{0,1\}\) is an indicator for either getting into a collision with other vehicles or getting beyond the road boundaries. A safety bounding box is constructed that overbounds each vehicle body in the \(x-y\) plane with certain safety margins. The value of \(c=1\) indicates the overlap of two vehicles' bounding boxes or the overlap between the vehicle's bounding box and the road boundary. The value of \(c=0\) indicates that the vehicle stays within the road and is not in collision with other vehicles. The visualization of the highway road boundaries is shown in Fig. 1 using solid black lines. 2. Safety consciousness \(h\): The variable \(h\in[0,1]\) is derived from the Time-to-Collision \((TTC)\) with a vehicle ahead in the same lane: \[h=\frac{\mathrm{sat}_{[T_{\text{min}},T_{\text{max}}]}(TTC)-T_{\text{min}}}{T_{\text{max}}-T_{\text{min}}},\] 
where \(T_{\text{min}}=0.2\) sec is the minimum reaction time, \(T_{\text{max}}=3\) sec stands for an adequate time headway, and \(\operatorname{sat}_{[a,b]}\left(\cdot\right)\) is a saturation function between the minimum \(a\) and the maximum \(b\). The reward function depends on \(h\) to encourage vehicles to keep an appropriate headway distance and be conscious of potential collisions.
3. Traveling time \(\tau\): The variable \(\tau\in[0,1]\) reflects the objective of shortening the traveling time, and is a weighted summation of \[\tau_{x}=\frac{x-x_{0}}{x_{f}-x_{0}},\quad\tau_{y}=1-\frac{1}{w_{\text{lane}}}\min\left(\left|y-y_{r}\right|,w_{\text{lane}}\right),\] where \(x_{0}\) and \(x_{f}\) (see Fig. 1) are the \(x\)-coordinates of the beginning of the ramp and a goal placed a specified distance away from the end of the ramp, respectively, while \(y_{r}\) corresponds to the center of the highway lane that is next to the ramp. The reward for \(\tau_{x}\) promotes the highway vehicle reaching the end of the highway in a shorter time. A higher reward for \(\tau_{y}\) is imposed for on-ramp vehicles to encourage merging action.
4. Control effort \(e\): The reward for \(e\in[0,1]\) promotes vehicles to drive at a constant speed and to reduce acceleration/deceleration. The variable \(e\) attains the value of 1 under the action "maintain"; its value decreases if the vehicle makes speed changes or lane changes.

Fig. 1: Schematic diagram of the highway forced merging problem: an ego vehicle (red) interacts with highway vehicles (grey) to facilitate its merging.

## III Social Behavior Modeling

In this section, we introduce our behavioral model that captures drivers' interactive decision-making process during the forced merge scenario. Inspired by social psychology studies [20, 21, 22, 23] and their application in the context of autonomous driving [24], we define the SVO-based reward model in Sec. III-A. In Sec. III-B, we integrate this reward model into the interacting vehicle's decision-making process.

### _Social Value Orientation and Multi-modal Reward_

We assume each vehicle \(i\) interacts pairwise with each adjacent vehicle \(j\in A(i)\), where \(A(i)\) contains indices of all the adjacent vehicles around \(i\). We assume each driver aims to achieve their personal objectives and, to a certain extent, is cooperating with others. Hence, we model each driver's intention using a multi-modal reward function of the form \[\begin{split}& R_{i}\big{(}s(t),u(t)|\sigma_{i},w_{i}\big{)}=\frac{1}{\left|A(i)\right|}\sum_{j\in A(i)}\\ &\Big{[}\theta_{1}(\sigma_{i})\cdot r_{i}\big{(}s_{i}(t),u_{i}(t),s_{j}(t),u_{j}(t)|w_{i}\big{)}\\ &+\theta_{2}(\sigma_{i})\cdot r_{j}\big{(}s_{j}(t),u_{j}(t),s_{i}(t),u_{i}(t)|w_{j}\big{)}\Big{]},\end{split} \tag{3}\] where \(u(t)=[u_{i}^{T}(t),u_{A(i)}^{T}(t)]^{T}\) is the aggregated control vector of all vehicles and \(u_{A(i)}(t)\) is a column vector concatenating \(u_{j}(t)\) for all \(j\in A(i)\); \(s(t)=[s_{i}^{T}(t),s_{A(i)}^{T}(t)]^{T}\) reflects the state of the traffic at time \(t\); \(\left|A(i)\right|\) is the number of interacting vehicles; \(r_{i}(\cdot)\) and weights \(w_{i}\in\mathbb{R}^{3}\) model the personal reward as a weighted summation of the personal objectives defined in Sec. II-C, \[r_{i}(s_{i},u_{i},s_{j},u_{j}|w_{i})=(\neg c)\cdot w_{i}^{T}\cdot[h,\tau,e]^{T}. \tag{4}\] The symbol \(\neg\) in Eq. (4) is the logical negation operator and \(\sigma_{i}\) in Eq.
(3) takes one of four values corresponding to four SVO categories and specific values of \(\theta_{1}(\sigma_{i})\) and \(\theta_{2}(\sigma_{i})\): \[(\theta_{1},\theta_{2})=\left\{\begin{array}{rl}(0,1)&\text{if }\sigma_{i}= \text{~{}altruistic}\\ (1/2,1/2)&\text{if }\sigma_{i}=\text{~{}prosocial}\\ (1,0)&\text{if }\sigma_{i}=\text{~{}egoistic}\\ (1/2,-1/2)&\text{if }\sigma_{i}=\text{~{}competitive}\end{array}\right.. \tag{5}\] Note that in Eq. (3), \(\theta_{1}\) and \(\theta_{2}\) correspond to the weight of the self-reward \(r_{i}\) and the weight of the other drivers' net reward, respectively. In this multi-modal reward given by Eq. (3), there are two latent parameters \((\sigma_{i},w_{i})\) that represent different driving incentives: \(w_{i}\) reflects different personal goals, and \(\sigma_{i}\) represents different social behaviors or levels of cooperativeness. For instance, a driver with weights \(w_{i}=[0,1,0]^{T}\) in Eq. (4) might consider driving at full speed thereby minimizing the traveling time. As implied by Eq. (3), a "prosocial" driver has equal weights between personal objectives and other drivers' objectives; hence such drivers intend to cooperate with others in pursuing a large net reward. Note that \(w_{j}\) is the internal parameter of vehicle \(j\) and is a latent variable affecting the decision of vehicle \(i\) if \(\sigma_{i}\neq\) "egoistic" and \(j\in A(i)\). Nonetheless, an altruistic or prosocial (or competitive) driver of vehicle \(i\) is likely to improve (or diminish) other drivers' rewards in all three variables if they do not know other drivers' objectives \(w_{j}\) a priori. Therefore, we assume that during the \(i\)'th vehicle's decision-making, \(w_{j}=[1/3,1/3,1/3]\) in Eq. (3) for \(j\in A(i)\). ### _Driving Behavior Model_ In our behavior model, we assume the driver of vehicle \(i\) aims to maximize the cumulative reward, defined as \[\begin{split}& Q_{i}^{\prime}\big{(}s(t),\gamma_{i}|\sigma_{i},w_{i} \big{)}=\\ &\mathbb{E}_{\gamma_{j},j\in A(i)}\left[\sum\limits_{k=0}^{N-1} \lambda^{k}R_{i}\big{(}s(t+k),u(t+k)\big{|}\sigma_{i},w_{i}\big{)}\right], \end{split} \tag{6}\] where \(\gamma_{i}=\{u_{i}(t+k)\}_{k=0}^{N-1}\in U^{N}\) is an action sequence over a horizon of length \(N\), and \(\lambda\in[0,1]\) is a discount factor. This cumulative reward is an averaged reward over all possible action sequences \(\gamma_{j}\) of vehicles \(j\in A(i)\). Furthermore, the driver of vehicle \(i\) is assumed to adopt a receding horizon control strategy, i.e., \[u_{i}^{*}(t)=\operatorname*{argmax}_{u\in U}Q_{i}\big{(}s(t),u|\sigma_{i},w_{i} \big{)}, \tag{7}\] where \(Q_{i}(s(t),u)=\mathbb{E}_{\gamma_{i}\in\Gamma^{1}(u)}\left[Q_{i}^{\prime}(s(t), \gamma_{i}|\sigma_{i},w_{i})\right]\) and \(\Gamma^{1}(u)=\left\{\gamma_{i}=\{u_{i}(t+k)\}_{k=0}^{N-1}:u_{i}(t)=u\right\}\) contains all the action sequences with the initial action \(u\). Furthermore, considering stochasticity in the decision-making process, a policy distribution can be prescribed by adopting a softmax decision rule [25]: \[\mathbb{P}\big{(}u_{i}=u|\sigma_{i},w_{i},s(t)\big{)}\propto\exp\big{(}Q_{i}(s(t), u|\sigma_{i},w_{i})\big{)}. \tag{8}\] Based on the behavioral model defined above, the model parameters \(\sigma_{i},w_{i}\) can affect action policies to represent different driving intentions. 
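To make the pieces above concrete, the following minimal Python sketch implements the kinematics update of Eqs. (1)-(2), the SVO weights of Eq. (5), one pairwise summand of the reward in Eqs. (3)-(4), and the softmax rule of Eq. (8). It is an illustration under stated assumptions: the per-objective terms \(c\), \(h\), \(\tau\), and \(e\) are assumed to be computed elsewhere, and all function names are ours rather than from a released implementation.

```python
import numpy as np

# SVO categories and their (theta_1, theta_2) weights from Eq. (5).
SVO_WEIGHTS = {
    "altruistic": (0.0, 1.0),
    "prosocial": (0.5, 0.5),
    "egoistic": (1.0, 0.0),
    "competitive": (0.5, -0.5),
}

def kinematics_step(s, u, dt, w_tilde=None):
    """Eqs. (1)-(2): state s = [x, y, v_x], control u = [a, v_y]."""
    x, y, vx = s
    a, vy = u
    s_next = np.array([x + vx * dt, y + vy * dt, vx + a * dt])
    return s_next if w_tilde is None else s_next + w_tilde

def personal_reward(c, h, tau, e, w_i):
    """Eq. (4): r_i = (not c) * w_i^T [h, tau, e]; zero on collision."""
    return 0.0 if c else float(np.dot(w_i, [h, tau, e]))

def pairwise_svo_reward(r_self, r_other, sigma_i):
    """One summand of Eq. (3): theta_1 * r_i + theta_2 * r_j."""
    theta_1, theta_2 = SVO_WEIGHTS[sigma_i]
    return theta_1 * r_self + theta_2 * r_other

def softmax_policy(q_values, rng):
    """Eq. (8): sample an action index with P(u) proportional to exp(Q)."""
    q = np.asarray(q_values, dtype=float)
    p = np.exp(q - q.max())  # shift for numerical stability
    p /= p.sum()
    return int(rng.choice(len(q), p=p))
```

For example, an "egoistic" driver scores a joint move purely by its own \(r_{i}\), while a "competitive" one subtracts half of the neighbor's reward, matching the four categories above.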
Since other drivers' intentions are not known in a given traffic scenario, the model parameters \(\sigma_{i},w_{i}\) (i.e., drivers' intentions) need to be estimated and updated online so that the autonomous ego vehicle is able to make optimal merging decisions. ## IV Decision-Making Under Cooperation Intent Uncertainty We now develop a decision-making algorithm to facilitate the forced merging process. We first present a Bayesian filter for the ego vehicle that estimates the latent variables \(\sigma_{i},w_{i}\) of the interacting vehicles online. Considering the uncertainties in our estimation and other drivers' intentions, we use a receding-horizon control formulation to simultaneously address the safety and performance aspects of the forced merging. ### _Bayesian Inference of Latent Driving Intentions_ At each time step, we assume that the ego vehicle can observe the traffic nearby the \(i\)'th interacting vehicle where \(i\in A(0)\), and the observed traffic history is defined as \[\begin{array}{l}\xi(t)=\big{\{}s(0),s(1),\ldots,s(t),\\ u_{A(i)}(0),u_{A(i)}(1),\ldots,u_{A(i)}(t-1)\big{\}},\end{array}\] where \(s(t)=[s_{i}^{T}(t),s_{A(i)}^{T}(t)]^{T}\) and \(u_{A(i)}(t)\) is a column vector concatenating \(u_{j}(t)\) for all \(j\in A(i)\). The ego vehicle utilizes the traffic history to estimate the posterior distribution \(\mathbb{P}\left(\sigma_{i},w_{i}|\xi(t+1)\right)\) of the \(i^{\prime}\)th interacting vehicle's latent parameters using the following proposition: **Proposition 1**.: _Given a prior \(\mathbb{P}\left(\sigma_{i},w_{i}|\xi(t)\right)\) and assuming that the disturbance \(\tilde{w}_{i}(t)\sim\mathcal{N}(0,Q)\) is zero-mean Gaussian, the posterior distribution can be computed as_ \[\mathbb{P}\left(\sigma_{i},w_{i}|\xi(t+1)\right)=\frac{\Lambda_{i}\left(\sigma _{i},w_{i},s(t),s_{i}\right)}{N_{i}(t)}\cdot\mathbb{P}\left(\sigma_{i},w_{i}| \xi(t)\right), \tag{9}\] _where \(N_{i}(t)\) is a normalization factor and \(\Lambda_{i}\left(\sigma_{i},w_{i},s(t),s_{i}\right)\) admits the following form:_ \[\begin{array}{l}\Lambda_{i}\left(\sigma_{i},w_{i},s(t),s_{i} \right)=\sum\limits_{u\in U}\mathbb{P}\left(u_{i}=u|\sigma_{i},w_{i},s(t) \right)\cdot\\ \mathbb{P}\left(\tilde{w}_{i}(t)=s_{i}-f(s_{i}(t),u)\right),\end{array} \tag{10}\] _where \(\mathbb{P}\left(u_{i}=u|\sigma_{i},w_{i},s(t)\right)\) is defined in Eq. (8)._ Note that the above recursive Bayesian filter can be initialized using a uniform distribution and the covariance matrix \(Q\) is a tunable parameter. Intuitively, if we consider the current traffic state \(s(t)\) and the vehicle \(i\) is executing policy defined in Eq. (8) conditioned on parameters \(\sigma_{i}\) and \(w_{i}\), \(\Lambda_{i}\left(\sigma_{i},w_{i},s(t),s_{i}\right)\) represents the transition probability of the vehicle \(i\) moving from \(s_{i}(t)\) to \(s_{i}(t+1)=s_{i}\). 
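Computationally, Proposition 1 yields a lightweight recursive filter over a finite grid of candidate latent parameters. The sketch below is a minimal illustration, assuming the policy of Eq. (8) is available as a callable, that the weights \(w_{i}\) are stored as tuples so they can serve as dictionary keys, and that the Gaussian density plays the role of \(\mathbb{P}(\tilde{w}_{i}(t)=s_{i}-f(s_{i}(t),u))\) in Eq. (10); all names are illustrative.

```python
import numpy as np

def gaussian_density(x, Q):
    """Zero-mean multivariate normal density N(0, Q) evaluated at x."""
    k = x.shape[0]
    norm = np.sqrt(((2.0 * np.pi) ** k) * np.linalg.det(Q))
    return float(np.exp(-0.5 * x @ np.linalg.solve(Q, x)) / norm)

def bayesian_filter_step(prior, s_i_next, s_traffic, s_i, actions, policy, f, Q):
    """One recursion of Eqs. (9)-(10).

    prior:  dict mapping (sigma_i, w_i) -> P(sigma_i, w_i | xi(t))
    policy: callable, policy(u, sigma, w, s_traffic) = P(u_i = u | sigma, w, s(t))
    f:      nominal kinematics of Eq. (2)
    Q:      covariance of the disturbance w_tilde_i(t)
    """
    unnormalized = {}
    for (sigma, w), prob in prior.items():
        # Eq. (10): marginalize the action policy against the Gaussian
        # likelihood of the observed one-step transition.
        lam = sum(
            policy(u, sigma, w, s_traffic)
            * gaussian_density(s_i_next - f(s_i, u), Q)
            for u in actions
        )
        unnormalized[(sigma, w)] = lam * prob
    n_i = sum(unnormalized.values())  # normalization factor N_i(t) in Eq. (9)
    return {key: val / n_i for key, val in unnormalized.items()}
```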
The proof is presented as follows:

Proof.: Applying the Bayesian rule, the posterior admits the following form, \[\mathbb{P}\left(\sigma_{i},w_{i}|\xi(t+1)\right)=\mathbb{P}\left(\sigma_{i},w_{i}|s(t+1),u_{A(i)}(t),\xi(t)\right)\propto\mathbb{P}\left(s_{i}(t+1)|\sigma_{i},w_{i},\xi(t)\right)\cdot\mathbb{P}\left(\sigma_{i},w_{i}|\xi(t)\right),\] whereby the posterior reduces to \[\mathbb{P}\left(\sigma_{i},w_{i}|\xi(t+1)\right)\propto\mathbb{P}\left(s_{i}(t+1)|\sigma_{i},w_{i},s(t)\right)\cdot\mathbb{P}\left(\sigma_{i},w_{i}|\xi(t)\right),\] and \(\Lambda_{i}\left(\sigma_{i},w_{i},s(t),s_{i}\right)=\mathbb{P}\left(s_{i}(t+1)|\sigma_{i},w_{i},s(t)\right)\) is the transition probability conditioned on the model parameters \(\sigma_{i}\), \(w_{i}\) and the current traffic state \(s(t)\).

### _Receding-horizon Optimization-based Control_

We leverage receding horizon control to achieve safe merging. The objective of successful merging without collisions can be translated into maximizing the predictive cumulative reward, \[Q_{0}^{\prime}(s(t),\gamma_{0})=\frac{1}{|A(0)|}\sum_{i\in A(0)}\mathbb{E}_{(\sigma_{i},w_{i})\sim\mathbb{P}(\sigma_{i},w_{i}|\xi(t))}\,\mathbb{E}_{\gamma_{i}}\left[\sum_{k=0}^{N-1}\lambda^{k}\,r_{0}\big{(}s_{0}(t+k),u_{0}(t+k),s_{i}(t+k),u_{i}(t+k)|w_{0}\big{)}\right], \tag{11}\] where \(\gamma_{0}=\{u_{0}(t+k)\}_{k=0}^{N-1}\in U^{N}\) is an ego action sequence, the outer expectation is taken with respect to the posterior distribution of the latent parameters \((\sigma_{i},w_{i})\) estimated by the Bayesian filter, and the inner expectation is taken over the action sequences \(\gamma_{i}\) of the interacting vehicles. The ego vehicle then applies a receding-horizon policy, \[u_{0}^{*}(t)=\operatorname*{argmax}_{u\in U}Q_{0}(s(t),u), \tag{12}\] where \(Q_{0}(s(t),u)=\mathbb{E}_{\gamma_{0}\in\Gamma^{1}(u)}\left[Q_{0}^{\prime}(s(t),\gamma_{0})\right]\) and \(\Gamma^{1}(u)\) contains all action sequences of length \(N\) with \(u\) being their initial action.

**Remark**.: _Note that the reward computation in Eq. (11) is not reactive. Our algorithm does not predict other vehicles' reactions to the ego actions in the prediction. To address this concern, we can also adopt a game-theoretic formulation similar to [17, 18, 24]. However, such a formulation is computationally demanding in practice. Due to the formulation of pairwise interaction in Eq. (11), our algorithm can solve Eq. (12) effectively via an exhaustive search that scales linearly with the number of interacting vehicles._

## V Simulation and Experimental Results

Here, we demonstrate the effectiveness of the proposed behavioral model and the forced merging control algorithm. The behavioral model is first demonstrated by reproducing real-world driving behaviors. Then, the effectiveness of the proposed forced merging control algorithm is illustrated through simulation studies in comparison with the LFGC [17, 18] against interacting vehicles controlled by our behavioral model, and through the naturalistic High-D driving dataset [26].
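Before turning to the experiments, the sketch below illustrates how the exhaustive search of Eqs. (11)-(12) can exploit the pairwise formulation noted in the remark. The trajectory rollout and reward evaluation are abstracted behind an assumed callable, so this is a schematic outline rather than the exact implementation.

```python
import itertools
import numpy as np

def ego_decision(beliefs, actions, horizon, expected_reward):
    """Receding-horizon selection in the spirit of Eqs. (11)-(12).

    beliefs: dict vehicle id i -> {(sigma_i, w_i): posterior probability},
             produced by the Bayesian filter of Proposition 1.
    expected_reward(gamma_0, i, sigma, w): assumed callable returning the
             discounted cumulative ego reward of Eq. (11) for ego sequence
             gamma_0 against vehicle i with latent parameters (sigma, w),
             already averaged over that vehicle's action sequences.
    """
    sequences = list(itertools.product(actions, repeat=horizon))  # U^N
    best_u, best_q = None, -np.inf
    for u0 in actions:
        gamma_1 = [g for g in sequences if g[0] == u0]  # Gamma^1(u0)
        q = 0.0
        for i, belief in beliefs.items():  # pairwise: linear in #vehicles
            for (sigma, w), p in belief.items():
                q += p * np.mean([expected_reward(g, i, sigma, w)
                                  for g in gamma_1])
        q /= len(beliefs)  # the 1/|A(0)| factor of Eq. (11)
        if q > best_q:
            best_u, best_q = u0, q
    return best_u  # u_0^*(t) of Eq. (12)
```

With \(|U|=5\) actions and horizon \(N=3\), the inner enumeration covers only 125 sequences per evaluation, which is what makes the exhaustive search tractable.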
### _Reproducing Real-world Traffic_

We leverage a drone-recorded naturalistic traffic dataset called the High-D dataset [26]. It contains 60 traffic recordings among which three recordings/scenes (58-60) have merging ramps. We first calibrate the values of the model parameters \(a\) and \(T_{\text{lane}}\) from the High-D dataset (see Fig. 3). The lane width \(w_{\text{lane}}=3.5\;\mathrm{m}\) in High-D. We choose \(|a|=6\;\mathrm{m}/\mathrm{s}^{2}\) because the longitudinal accelerations and decelerations of High-D vehicles are within an interval of \([-6,6]\;\mathrm{m}/\mathrm{s}^{2}\). We select \(T_{\text{lane}}=4\;\mathrm{sec}\) since the majority of the vehicles in the High-D dataset take \(4\sim 6\) sec to change lanes. We aim to reproduce the real-world driving behavior (see Fig. 2) in the recording segments using our behavioral model with certain parameters \(\sigma_{i},w_{i}\). Specifically, to reproduce the behavior of a target vehicle \(i\), we initialize our behavioral model with its actual initial state \(s_{i}(t=0)\) from the recording segment. At each time step of 2 seconds, we assume a virtual vehicle \(i\) is controlled by our behavioral model, and can observe the surrounding traffic \(s(t)=[s_{i}(t)^{T},s_{A(i)}(t)^{T}]^{T}\). With prescribed parameters \(\sigma_{i},w_{i}\), the virtual vehicle \(i\) updates its control decision every time step using the behavioral model defined in Eq. (7) with MPC prediction horizon \(N=3\) spanning 6 seconds. The kinematics of the virtual vehicle obeys Eq. (1) with a sampling rate of 25 Hz. Furthermore, to introduce realistic lateral behaviors, the underlying lane change trajectories are modeled by 5th-order polynomials as proposed in [27]. As shown in Fig. 2(a), the virtual truck \(14\) in green is controlled by our behavioral model with parameters \(\sigma_{14}=\) egoistic, \(w_{14}=[0,2/3,1/3]^{T}\). This combination of parameters implies that the virtual vehicle cares solely about the self-reward, per the formulation in Eq. (3) and Eq. (5), and tries to minimize the traveling time because it has the largest weight \(2/3\) for \(\tau\) in Eq. (4). As a result, the virtual truck merges into the highway, and the trajectory in the green boxes matches the actual target vehicle in the red boxes. As shown in Fig. 2(b), in front of the virtual vehicle \(13\), there are two vehicles \(12\) and \(15\) driving slowly. With parameters \(\sigma_{13}=\) competitive, \(w_{13}=[0,1,0]^{T}\), the behavioral model controls the virtual vehicle to compete with the two vehicles while minimizing its traveling time. This interaction results in an overtaking behavior for the virtual vehicle, which qualitatively matches the actual traffic recording. These examples provide evidence of our behavioral model being able to capture realistic driving behaviors. Note that there are quantitative mismatches in positions between the reproduced and the actual results in Fig. 2. Such mismatches partially result from the model assumption in Eq. (1); improvements to the kinematic model are left as future work.

### _Forced Merging in Simulation Compared with LFGC_

We build up a highway with five vehicles (see Fig. 4) to simulate a forced merging scenario. The ego vehicle interacts with the surrounding traffic using the proposed controller in Eq. (12). In this example, each time step corresponds to one second and the lane change takes two time steps. Other vehicles are controlled using our behavioral model with different parameters \(\sigma_{i},w_{i}\).
All the traffic vehicles are modeled as "egoistic" drivers, i.e., \(\sigma_{i}=\)"egoistic" for all \(i=1,\dots,4\). We assign vehicles \(i=1,3,4\) the weights \(w_{i}=[0,0,1]\) to encourage minimization of the control effort \(e\) such that the three vehicles will keep a constant speed. Vehicle 2 with weights \(w_{2}=[1,0,0]\) tries to search for a larger headway space \(h\) and therefore changes to the inner lane from \(t=0\) to \(t=2\) seconds. As shown in Fig. 4, we also deploy the LFGC to control the ego vehicle in the same highway traffic setting, and the results are plotted overlaid with ours in comparison. Note that the parameters \(\sigma_{i}\) and \(w_{i}\) of all vehicles 1-4 are not available to ego vehicle 0. As a result, the ego vehicle needs to interact with other vehicles and estimate their intentions online to facilitate its merging. During the simulation, our ego vehicle successfully infers the cooperative intents of vehicles \(1,2,3\) from their behaviors, and it decides to first accelerate to surpass vehicle \(1\) from \(t=0\) to \(t=1\), and merges into the gap created by the lane changing of vehicle \(2\). We also provide our Bayesian filter estimation results (see Fig. 5) at time step \(t=1\). Here, we confine the domain of the weights \(w_{i}\) to a finite subset \(W\subset[0,1]^{3}\) as follows \[W=\left\{\begin{array}{cc}[0,0,1],\,[0,1,1]/2,\,[0,1,0],\,[1,1,1]/3,\\ [1,0,1]/2,\,[1,1,0]/2,\,[1,0,0]\end{array}\right\}, \tag{13}\] where each weight case is a normalized combination of zeros and ones. Moreover, as shown in Fig. 5, the \(\sigma_{i}=\)"altruistic" category stands alone and is not correlated with the reward weights \(w_{i}\) because the "altruistic" driver does not care about the self-reward, as modeled in Eqs. (3), (4) and (5). Notably, the actual parameters \(\sigma_{i},w_{i}\) are among the ones of the highest probability. Meanwhile, for vehicle \(2\), the cases with the highest probability mostly emphasize traveling time \(\tau\) and headway distance \(h\), while those of vehicles \(1,3\) emphasize control effort \(e\). Using these probability distributions, the ego vehicle can predict the driving and cooperation intent of vehicles \(1,2,3\) and plan its task accordingly, as demonstrated in Fig. 4. In comparison, the ego vehicle controlled by the LFGC estimates the probability of vehicles being a leader or being a follower, i.e., \[\mathbb{P}\left(i=\text{``leader''}\right)=1-\mathbb{P}\left(i=\text{``follower''}\right),\ i=1,2,3,4.\] However, due to the lane-changing behavior, the LFGC cannot distinguish between vehicle \(2\) being a "leader" and it being a "follower", namely, \(\mathbb{P}\left(i=\text{``leader''}\right)=\mathbb{P}\left(i=\text{``follower''}\right)=0.5\). This results in a deceleration decision of the LFGC from \(t=0\) to \(t=1\). Seeing the constant-speed vehicle \(1\) as a "leader", the LFGC further keeps a low speed from \(t=1\) to \(t=2\) and decides to merge after vehicle \(1\) at \(t=2\), while the ego vehicle controlled by our algorithm is in the middle of a lane change. This provides evidence that, compared to the LFGC, our behavioral model captures a richer and more realistic set of behaviors and the controller integrated with our behavioral model can achieve faster merging in certain cases.

Fig. 4: Forced merging comparison in a simulation environment: Ego vehicle controlled by our algorithm in the red box is interacting with the adjacent vehicles in colored boxes. The LFGC is tested using the same highway traffic setting, and the results are plotted overlaid using grey boxes. Four subfigures demonstrate the interactions at \(t=0,1,2,3\) seconds, respectively. After accelerating to surpass vehicle 1, our ego vehicle properly merges into the highway at \(t=3\) seconds.

Fig. 3: Histogram of vehicle driving statistics in the High-D dataset: (a) Longitudinal acceleration/deceleration (y-axis in log scale). (b) Time duration for a complete lane change.

Fig. 2: **Examples of reproducing real-world highway merging and overtaking behaviors**: In each example, the target vehicle is represented by a green box, and interacts with the traffic. All traffic vehicles are visualized as boxes of different edge colors filled with grey. The trajectory of vehicles is shown in dashed lines with vehicles' positions every second marked as boxes in green (for the target vehicle) and circles filled with grey (for other traffic vehicles). The virtual vehicles' trajectories are visualized using red dashed lines and red boxes and match the actual ones closely. a) A traffic of 12 seconds is sampled from frames 1-300 in scene 59 from the High-D dataset. The trajectory of vehicle 14 is reproduced using our behavioral model with \(\sigma_{14}=\text{egoistic}\), \(w_{14}=[0,2/3,1/3]^{T}\), i.e., vehicle 14 is an egoistic driver, and minimizes the traveling time by merging to the highway. b) A traffic of 14 seconds is sampled from frames 1-350 in scene 59 from the High-D dataset. The trajectory of vehicle 13 is reproduced using our behavioral model with \(\sigma_{13}=\text{competitive}\), \(w_{13}=[0,1,0]^{T}\), i.e., vehicle 13 is a competitive driver, and minimizes the traveling time by overtaking the leading vehicles.

### _Validation on Real-world Dataset_

To validate our controller in real-world traffic, we consider traffic segments (see Fig. 6) that contain merging vehicles from the High-D dataset. Similar to Sec. V-A, we use \(w_{\text{lane}}=3.5\ \text{m}\), \(|a|=6\ \text{m}/\text{s}^{2}\), \(T_{\text{lane}}=4\ \text{sec}\), and an MPC prediction horizon \(N=3\) spanning 6 seconds. We initialize our virtual ego vehicle using the initial state of the merging vehicle on the ramp. Afterward, we control the virtual ego vehicle with the proposed controller. We use the finite weight set \(W\) in Eq. (13) for parameter estimation in the Bayesian filter. As shown in Fig. 6, the virtual ego vehicle interacts with two trucks \(1\) and \(2\) that drive approximately at constant speeds. Thus, the ego vehicle first accelerates to create adequate merge space and speed advantage, then successfully merges between the two trucks. Another example is provided in Fig. 7, where the ego vehicle keeps a constant speed for 2 seconds to create merge space after vehicle 2. Notably, in both examples, we can observe that the ego vehicle's trajectories are similar to the ones of the actual target vehicles, shown in green boxes. Meanwhile, in the High-D dataset, there are in total 75 merging vehicles in scenes 58-60. For each merging vehicle, we repeat the aforementioned procedures to set up the test environment and control the virtual ego vehicle to merge onto the highway. The test results are presented in Table I. We consider a test case a success if the ego vehicle successfully merges without collisions. A failure case implies that the ego vehicle either collides with other vehicles or fails to merge by the end of the ramp.
Our algorithm can achieve a \(100\%\) success rate among the \(75\) test cases, and properly merges the virtual vehicles into the naturalistic traffic.

## VI Conclusions

In this paper, we proposed a social-behavior-aware decision-making algorithm for autonomous vehicles in forced merging scenarios that achieves the merging objective and ensures driving safety. Finally, we demonstrate the effectiveness of the proposed behavioral model and the forced merging control algorithm by reproducing real-world trajectories and evaluating the merging performance in simulations in comparison with the LFGC and real-world dataset evaluations.
2309.15789
Large Language Model Routing with Benchmark Datasets
There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets to compare them. While some models dominate these benchmarks, no single model typically achieves the best accuracy in all tasks and use cases. In this work, we address the challenge of selecting the best LLM out of a collection of models for new tasks. We propose a new formulation for the problem, in which benchmark datasets are repurposed to learn a "router" model for this LLM selection, and we show that this problem can be reduced to a collection of binary classification tasks. We demonstrate the utility and limitations of learning model routers from various benchmark datasets, where we consistently improve performance upon using any single model for all tasks.
Tal Shnitzer, Anthony Ou, Mírian Silva, Kate Soule, Yuekai Sun, Justin Solomon, Neil Thompson, Mikhail Yurochkin
2023-09-27T17:08:40Z
http://arxiv.org/abs/2309.15789v1
# Large Language Model Routing with Benchmark Datasets

###### Abstract

There is a rapidly growing number of open-source Large Language Models (LLMs) and benchmark datasets to compare them. While some models dominate these benchmarks, no single model typically achieves the best accuracy in all tasks and use cases. In this work, we address the challenge of selecting the best LLM out of a collection of models for new tasks. We propose a new formulation for the problem, in which benchmark datasets are repurposed to learn a "router" model for this LLM selection, and we show that this problem can be reduced to a collection of binary classification tasks. We demonstrate the utility and limitations of learning model routers from various benchmark datasets, where we consistently improve performance upon using any single model for all tasks.

## 1 Introduction

Large Language Models (LLMs) have demonstrated ground-breaking abilities to solve diverse tasks across a variety of NLP domains (Devlin et al., 2018; Brown et al., 2020). Today, researchers in both academia and industry are releasing new LLMs _daily_.1 These models perform tasks ranging from text classification to question-answering, summarization, and dialogue.

Footnote 1: Hugging Face currently hosts 22,482 models for text generation.

The popularity and influx of open-source LLMs and the diversity of their potential use cases made it crucial to develop comprehensive benchmarks, i.e., collections of datasets representing different tasks and domains to compare LLMs. For example, HELM (Liang et al., 2022) consists of 42 scenarios covering a variety of uses, MMLU (Hendrycks et al., 2020) is a multiple-choice question answering benchmark with 57 tasks organized by topics, Open LLM Leaderboard (Beeching et al., 2023) combines MMLU with other question-answering datasets, and LM Evaluation Harness (Gao et al., 2021) supports over 200 tasks. While there always will be an LLM that is the best _on average_ across benchmarks, there is unlikely to ever be a model that is strictly the best _on each_ of the hundreds of datasets comprising various benchmarks. Meanwhile, a practitioner typically wants to know what is the best model for their specific use case and is less concerned about average performance on a plethora of other datasets. In this paper, we study the problem of identifying the best LLM for a new task. To learn about the strengths and weaknesses of candidate LLMs we use benchmark datasets that give insights into the performance of LLMs across tasks and domains. For example, suppose the new task is answering math questions. In that case, it is more intuitive to consider models that do well on other STEM question-answering datasets and discount performance on, e.g., sociology or toxicity detection.

Figure 1: We learn the strengths of candidate LLMs (marked with corresponding colors) on various tasks (emojis: QA, reasoning, summarization, etc.) and domains (4 sections within each box: finance, legal, general knowledge, etc.) from benchmark datasets. We accomplish this by training a binary classifier per LLM (upper part of the figure). For a new task, we score each LLM with these binary classifiers and recommend an LLM for the user (lower part).
We make this idea precise by casting the learning of model strengths as a binary supervised learning task, where the features are input embeddings of samples across tasks and the labels are whether the model "did well" on the corresponding inputs, e.g., generated the correct class label, answered a question correctly, or followed input instructions sufficiently well. See Figure 1 for an illustration. Such information is collected during benchmark evaluations and can be reused for training model routers without having to run expensive LLM inference again. The resulting router is also efficient at test time, as it only requires calling the chosen LLM. Our contributions are summarized below:

* We formalize the problem of learning the strengths and weaknesses of LLMs for downstream _routing_, i.e., selecting the best model, as a collection of binary classification problems. The goal of each classification problem is to predict whether a given LLM will be "correct" on an input.
* We propose three scores for selecting LLMs for a new task using these correctness predictors. Our third score is designed to account for mistakes a correctness predictor can make on the (out-of-distribution) data from a new task that is likely to be different from the datasets in benchmarks used for training the correctness predictors. We establish connections to meta-learning to obtain theoretical insights into the efficacy of these scores.
* We verify the efficiency of our model routing scores empirically on 29 datasets from HELM (Liang et al., 2022), representing scenarios like question answering, text classification, knowledge, and reasoning, and on MixInstruct (Jiang et al., 2023), a collection of datasets for evaluating the instruction-following capabilities of LLMs.
* We discuss and empirically investigate questions concerning the efficacy and utility of learning LLM routers from benchmarks: generalization of correctness predictors to new tasks, the importance of a larger pool of benchmarks, and the potential of routing smaller LLMs to reduce costs.

## 2 Related work

**Benchmarking.** Comparing models or algorithms across various tasks is a standard practice in ML and AI literature. Prior to Foundation Models (Bommasani et al., 2021), it was typical to apply _the same learning algorithm_ to train a model on each of the datasets and compare the performance against other learning algorithms. The UCI Machine Learning Repository (Kelly et al., 2023) is one prominent example of such a collection of datasets often used to compare learning algorithms. With the emergence of Foundation Models, i.e., models with billions of parameters trained on massive datasets using large compute clusters, the paradigm changed to evaluating _the same model_ (or a few-shot tuned version of it) on a variety of tasks (Bojar et al., 2014; Goyal et al., 2019; Li et al., 2022). In the context of Large Language Models, many benchmarks (Wang et al., 2018, 2019; Hendrycks et al., 2020; Gao et al., 2021; Srivastava et al., 2022; Liang et al., 2022; Beeching et al., 2023; Jiang et al., 2023) were proposed to help determine the most capable LLM. Benchmarks typically average the performance of models across tasks and provide a final ranking, discarding the rest of the information. In this work, we use the byproducts of benchmark evaluations, i.e., the per-sample performance of various LLMs across tasks, to learn about their individual strengths and identify the best LLM for a new task.
Model selectionSelecting the best model, or model selection, is a classical topic in statistics and ML (Bishop and Nasrabadi, 2006; Hastie et al., 2009; Raschka, 2018). However, the typical problem setting is quite different: classical methods like cross-validation aim to estimate the population error of a model trained on samples from the population distribution. In other words, the goal is to find the best model for in-distribution test data, i.e., data sampled from the same distribution as the train data. The notion of "train" data is quite elusive for LLMs, as they are usually trained on massive datasets with trillions of tokens with a simple task of next token prediction (Radford et al., 2019; Brown et al., 2020). However, the tasks we evaluate them on are often more structured, e.g., classification and question-answering, and are specific to domains that may or may not be sufficiently represented in the train data. In addition, techniques like \(k\)-fold cross-validation require training the model multiple times, which is infeasible for LLMs. Out-of-distribution model selectionRecognizing the limitations of the model selection methods for in-distribution test data (Gulrajani and Lopez-Paz, 2021; Koh et al., 2021), recent work has proposed a variety of methods to select models when deployed on data that may differ from the train data. These methods rely on ideas such as bootstrapping (Xu and Tibshirani, 2022), reweighing (Chen et al., 2021; Maity et al., 2023), agreement of models or ensembles (Jiang et al., 2021; Chen et al., 2021; Ng et al., 2023), or aligning model accuracy in-distribution with a confidence threshold (Guillory et al., 2021; Garg et al., 2022; Yu et al., 2022). Most of these methods are nontrivial to extend to generation use-cases of LLMs; some require training multiple models, and some need well-defined in-distribution data related to the new task. Routing LLMsPrior work on selecting LLMs primarily considers choosing one that produces the best generation for a given input. Liu and Liu (2021); Ravaut et al. (2022); Jiang et al. (2023) train dedicated scoring or ranking models that can be applied to model generations. Unlike our work, these approaches require generating outputs with _every_ candidate LLM to make a decision, which can be computationally prohibitive with a large pool of candidate LLMs. FrugalGPT (Chen et al., 2023) calls LLMs sequentially until a dedicated scoring model deems the generation acceptable. Prior works in this group require training data sufficiently representative of each of the tasks and domains of interest to train the corresponding ranking and scoring models. In this paper, instead, we use data from benchmarks to learn the strengths and weaknesses of LLMs across tasks and domains. The resulting model router requires generating outputs only with the chosen LLM at test time. ## 3 Learning from Benchmarks We start by introducing notation to describe the majority of NLP benchmarks. Let \(\{x_{1}^{d},\ldots,x_{n_{d}}^{d}\}_{d=1}^{D}\) be a collection of inputs across \(D\) tasks. Each input text \(x_{i}^{d}\) corresponds to a reference answer \(r_{i}^{d}\), i.e., an ideal generation for the corresponding input. Finally, there is a metric \(F_{d}(x,o,r)\) that can be task-dependent and measures how well a response \(o\) for an input \(x\) corresponds to the reference \(r\). 
To test an \(\text{LLM}_{m}\), \(m\in\{1,\ldots,M\}\), on the benchmark, for each task \(d=1,\ldots,D\), its responses are generated \(\{o_{im}^{d}=\text{LLM}_{m}(x_{i}^{d})\}_{i=1}^{n_{d}}\) and compared to the corresponding references to obtain performance metrics \(\{f_{im}^{d}=F_{d}(x_{i}^{d},o_{im}^{d},r_{i}^{d})\}_{i=1}^{n_{d}}\). At this point, the majority of the benchmark studies will take a (weighted) average of the performance metrics and report a single score for every LLM to rank them in performance. Instead, we reuse these evaluation results to formulate a supervised learning problem to better understand the strengths and weaknesses of various LLMs based on their performance on data points and tasks.

**Supervised learning from benchmarks.** Our goal is to learn a simple routing function \(g_{m}(x)\) for each LLM, \(m=1,\ldots,M\), that can predict \(\{f_{im}^{d}\}_{i=1}^{n_{d}}\), i.e., the performance of the corresponding LLM on a new task \(d^{\prime}\). Then it is trivial to select the best LLM for this task. For efficiency at test time, we restrict the routers \(\{g_{m}\}_{m=1}^{M}\) to only depend on the input \(x\). This is in contrast to the majority of prior works on LLM routing that first obtain generations with every candidate LLM and then use them to choose the best model (Liu and Liu, 2021; Ravaut et al., 2022; Jiang et al., 2023). With thousands of open-source LLMs, it is simply infeasible to obtain generations with every LLM for every input at test time. To complete the problem formulation, we denote the "correctness" of model \(m\) on an input \(x\) by \(y(x,m)\in\{0,1\}\). Correctness is evaluated as follows: generate a response \(o_{im}^{d}\) with LLM \(m\) on input \(x_{i}^{d}\), compare it to the corresponding reference \(r_{i}^{d}\), and output \(1\) if the model's response is good enough, i.e., \(f_{im}^{d}>\eta_{d}\), and \(0\) otherwise, where \(\eta_{d}\) is some threshold that can be task and/or metric specific. For tasks like classification or multiple-choice QA, \(y(x_{i}^{d},m)=f_{im}^{d}\), while for various evaluation metrics used in summarization and instruction following tasks (Zhang et al., 2020; Sellam et al., 2020; Yuan et al., 2021), the notion of correctness can help to account for the heterogeneity of popular metrics and task difficulty levels. In Section 5.2, we also present results with raw metrics instead of correctness. To train a predictor of an LLM's correctness, for each LLM, \(m=1,\ldots,M\), we solve the following optimization problem: \[\min_{g_{m}}\sum_{d=1}^{D}\sum_{i=1}^{n_{d}}\ell(g_{m}(x_{i}^{d}),y(x_{i}^{d},m)), \tag{1}\] where we choose \(\ell\) to be a binary cross-entropy loss and \(g_{m}\) is any standard probabilistic classifier, i.e., \(g_{m}(x)\) estimates \(P(y(x,m)=1|x)\). An important consideration when training correctness predictors is their ability to generalize to out-of-distribution (OOD) data, since our goal is to estimate LLM performance on a new task \(d^{\prime}\) that has not been seen during training. Training predictors given data from multiple domains that need to generalize to unseen domains is indeed an active area of research in the ML literature. For example, Sun and Saenko (2016); Arjovsky et al. (2019) proposed methods for improving OOD generalization when training on data from multiple domains, while Koh et al. (2021) proposed a benchmark for OOD generalization demonstrating the challenging nature of the problem in various applications.
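Concretely, the training data behind equation 1 comes directly from benchmark evaluation byproducts. The snippet below is a minimal sketch with a hypothetical data layout; it forms the labels \(y(x_{i}^{d},m)=1\) exactly when \(f_{im}^{d}>\eta_{d}\).

```python
import numpy as np

def correctness_labels(metrics_per_task, thresholds):
    """Binary labels y(x_i^d, m) = 1 if f_im^d > eta_d else 0, per task.

    metrics_per_task[d]: array of per-sample metrics f_im^d for one LLM m.
    thresholds[d]:       task- (and metric-) specific threshold eta_d.
    """
    return {d: (np.asarray(f) > thresholds[d]).astype(int)
            for d, f in metrics_per_task.items()}

# For classification or multiple-choice QA, eta_d can be chosen so that the
# label equals the 0/1 metric itself, i.e., y(x_i^d, m) = f_im^d.
```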
In this work, we use a simple model for the correctness predictor: we embed all inputs with a sentence transformer (Reimers and Gurevych, 2019) and use a \(k\)-nearest neighbors classifier (Cover and Hart, 1967) as \(\{g_{m}\}_{m=1}^{M}\). kNN is a simple non-parametric classifier that allows us to fit a potentially complicated decision boundary of an LLM's correctness across multiple tasks without extensive hyperparameter tuning. We choose this approach for learning correctness predictors to emphasize the utility of learning from benchmarks even with a basic method and instead focus on the question specific to our problem that has not been studied in prior works on OOD generalization: _Can we improve the quality of LLM routing with an imperfect correctness predictor_? ## 4 LLM routing with (imperfect) correctness predictors The goal of LLM routing is to identify an LLM that will have the highest frequency of being correct on a new task \(d^{\prime}\), given the inputs \(\{x_{i}^{d^{\prime}}\}_{i=1}^{n_{d^{\prime}}}\) from this task: \[\arg\max_{m}\tilde{S}(m,d^{\prime}),\,\text{where}\,\,\tilde{S}(m,d^{\prime}) =\tfrac{1}{n^{d^{\prime}}}\sum_{i=1}^{n_{d^{\prime}}}y(x_{i}^{d^{\prime}},m). \tag{2}\] Here, \(\tilde{S}(m,d^{\prime})\) is the "oracle" score that we want to estimate. The most intuitive estimator is simply using the correctness predictor \[S_{1}(m,d^{\prime})=\tfrac{1}{n^{d^{\prime}}}\sum_{i=1}^{n_{d^{\prime}}}g_{m}( x_{i}^{d^{\prime}}), \tag{3}\] but prior work has shown that accurately estimating \(P(y|x)\), i.e., calibration, is challenging on OOD data (Ovadia et al., 2019). Meanwhile, \(g_{m}\) may still produce accurate predictions after thresholding the predicted probability even if the class probabilities are not estimated well, which is often the case with neural networks (Guo et al., 2017). This motivates another score: \[S_{2}(m,d^{\prime})=\tfrac{1}{n^{d^{\prime}}}\sum_{i=1}^{n_{d^{\prime}}}\bar{ g}_{m}(x_{i}^{d^{\prime}}),\,\text{where}\,\,\bar{g}_{m}(x_{i}^{d^{\prime}})= \mathbb{I}(g_{m}(x_{i}^{d^{\prime}})>t), \tag{4}\] where \(t\in(0,1)\) is some threshold, e.g., \(t=0.5\), \(\mathbb{I}\) is an indicator function, and \(\bar{g}_{m}(x)\in\{0,1\}\) can be interpreted as the prediction of \(g_{m}\) on \(x\). This score, however, does not take into account the potential "imperfection" of \(g_{m}\), i.e., lower accuracy on OOD data from task \(d^{\prime}\). To address this issue, we model the out-of-distribution confidence of the predictions \(\bar{g}_{m}\). A simple OOD confidence modelWe model LLM correctness as follows: \[y(x,m)|x,d^{\prime}=\begin{cases}\bar{g}_{m}(x)&\text{with probability }p(d^{\prime},m)\\ 1-\bar{g}_{m}(x)&\text{with probability }1-p(d^{\prime},m),\end{cases} \tag{5}\] i.e., \(p(d^{\prime},m)\in[0,1]\) is the probability that \(\bar{g}_{m}\) is the correct prediction on a data point from task \(d^{\prime}\). The above model can be condensed as follows: \[y(x,m)|x,d^{\prime}\sim\text{Bern}(\bar{g}_{m}(x)p(d^{\prime},m)+(1-\bar{g}_{ m}(x))(1-p(d^{\prime},m))). \tag{6}\] In this simplistic (and approximate) model, we assume that \(p(d^{\prime},m)\) does not depend on the input \(x\) after conditioning on the task \(d^{\prime}\). The assumption is analogous to the homoscedastic error term assumption in linear regression models and allows us to interpret \(p(d^{\prime},m)\) as the marginal/overall accuracy of \(\bar{g}_{m}\) on data from the task \(d^{\prime}\). 
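A sketch of this router follows, with an assumed sentence-transformer checkpoint and scikit-learn's kNN classifier standing in for \(g_{m}\) (we use \(k=5\) below, matching the experiments in Section 5.1), together with the scores \(S_{1}\) and \(S_{2}\) of equations 3 and 4.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.neighbors import KNeighborsClassifier

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

def fit_router(train_inputs, train_labels, k=5):
    """Correctness predictor g_m of equation 1 for one LLM."""
    X = encoder.encode(train_inputs)  # benchmark inputs across all D tasks
    return KNeighborsClassifier(n_neighbors=k).fit(X, train_labels)

def score_s1(g_m, new_inputs):
    """Equation 3: mean predicted correctness probability on the new task."""
    X_new = encoder.encode(new_inputs)
    # Column 1 is the "correct" class; assumes both labels occur in training.
    return g_m.predict_proba(X_new)[:, 1].mean()

def score_s2(g_m, new_inputs, t=0.5):
    """Equation 4: mean thresholded prediction."""
    X_new = encoder.encode(new_inputs)
    return (g_m.predict_proba(X_new)[:, 1] > t).mean()

# Routing: recommend the LLM whose router scores highest on the new task.
# best_m = int(np.argmax([score_s1(g, new_inputs) for g in routers]))
```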
Prior work has studied the problem of estimating OOD accuracy given the inputs from a new task, but existing methods are challenging to combine with our approach. For example, Garg et al. (2022) learn a threshold on model confidence, which is hard to apply when using kNN classifiers, and Ng et al. (2023) require data augmentations that can be challenging to identify given the diversity of tasks in benchmarks. Prior methods also do not take into account the partition of the train data into tasks inherent in our problem setting. We treat the problem of estimating \(p(d^{\prime},m)\) as a supervised learning task, taking advantage of the task partition. Specifically, we assign a task descriptor \(u(d)\in\mathbb{R}_{+}\) to every task that measures the distance of the data from task \(d\) to the other available tasks combined. Then we collect the values of \(p(d,m)\), i.e., the accuracy of \(\bar{g}_{m}\) on \(d\), and fit a non-parametric regression model to predict \(p(d,m)\) from \(u(d)\). At test time, we compute \(u(d^{\prime})\) for a new task \(d^{\prime}\) based on the inputs \(\{x_{i}^{d^{\prime}}\}_{i=1}^{n_{d^{\prime}}}\) and predict \(p(d^{\prime},m)\) using the fitted regression model. In general, one can consider more sophisticated, higher-dimensional task descriptors \(u(d)\), but here, for simplicity, we keep it 1-dimensional and use a Gaussian kernel smoother (also known as the Nadaraya-Watson estimator) as the non-parametric regressor. We provide details in Appendix A. Finally, given the model of LLM correctness 6, \(\tilde{\mathbf{S}}(m,d^{\prime})\) is a random variable (corresponding to \(\tilde{S}(m,d^{\prime})\)) distributed as a (scaled) sum of two Bernoulli random variables. To arrive at our final score for LLM routing, we take its expected value: \[S_{3}(m,d^{\prime})=S_{2}(m,d^{\prime})p(d^{\prime},m)+(1-S_{2}(m,d^{\prime})) (1-p(d^{\prime},m)). \tag{7}\] When selecting an LLM with \(S_{3}\), we consider an alternative to the \(\arg\max\) criterion based on our correctness model 6, which defaults to the best model on average across benchmark datasets when we are not sufficiently confident that a candidate model will be better: \[\begin{cases}m_{3}&\text{if }P(\tilde{\mathbf{S}}(m_{3},d^{\prime})>\tilde{ \mathbf{S}}(m^{*},d^{\prime}))>\eta\\ m^{*}&\text{otherwise,}\end{cases} \tag{8}\] where \(m_{3}=\arg\max_{m}S_{3}(m,d^{\prime})\), i.e., the best LLM for the new task according to \(S_{3}\), and \(m^{*}=\arg\max_{m}\sum_{d=1}^{D}\tilde{S}(m,d)\), i.e., the best LLM across the benchmark datasets. In the experiments, we set \(\eta=0.6\). We summarize our LLM routing procedures in Appendix A. ### Connection to meta-learning The OOD confidence model in equation 6 is a meta-model of routing across multiple tasks, and fitting it entails a form of meta-learning. Consider the meta-learning problem \[\min_{g_{m},p(\cdot,m)}\sum_{d=1}^{D}\sum_{i=1}^{n_{d}}\ell(\bar{g}_{m}(x_{i} ^{d})p(d,m)+(1-\bar{g}_{m}(x_{i}^{d}))(1-p(d,m)),y(x_{i}^{d},m)), \tag{9}\] where \(\bar{g}_{m}\) and \(p(\cdot,m)\) are meta-parameters and adaptation step \(\bar{g}_{m}\rightarrow\bar{g}_{m}(\cdot)p(\cdot,m)\) adaptively shrinks the router output towards ambiguity. We exploit this connection to theoretically demonstrate the potential advantages of routing LLMs using \(S_{3}\) over \(S_{2}\). 
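Before turning to that comparison, we note that the adaptive score and decision rule reduce to a few lines in code. The sketch below estimates \(p(d^{\prime},m)\) with a Gaussian kernel smoother over 1-dimensional task descriptors, forms \(S_{3}\) via equation 7, and approximates the comparison probability in equation 8 by Monte Carlo under the correctness model of equation 6; the bandwidth and the sampling scheme are our assumptions.

```python
import numpy as np

def estimate_p(u_new, u_tasks, p_tasks, bandwidth=0.1):
    """Nadaraya-Watson estimate of p(d', m) from held-out task descriptors."""
    k = np.exp(-0.5 * ((np.asarray(u_tasks) - u_new) / bandwidth) ** 2)
    return float(k @ np.asarray(p_tasks) / k.sum())

def score_s3(s2, p):
    """Equation 7: S3 = S2 * p + (1 - S2) * (1 - p)."""
    return s2 * p + (1.0 - s2) * (1.0 - p)

def prefer_candidate(s2_m3, p_m3, s2_star, p_star, n, eta=0.6, draws=10_000):
    """Equation 8: pick m3 over the benchmark-best m* only if
    P(S~(m3, d') > S~(m*, d')) exceeds eta under the model of equation 6."""
    rng = np.random.default_rng(0)

    def sample_scores(s2, p):
        n1 = int(round(n * s2))  # inputs the router predicts to be correct
        # Predicted-correct inputs are right w.p. p, the rest w.p. 1 - p.
        return (rng.binomial(n1, p, draws)
                + rng.binomial(n - n1, 1.0 - p, draws)) / n

    return (sample_scores(s2_m3, p_m3)
            > sample_scores(s2_star, p_star)).mean() > eta
```

If the Monte Carlo comparison fails, the router defaults to \(m^{*}\), the best model on average across the benchmark datasets, mirroring equation 8.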
In expectation/in the population, equation 9 fits a larger model class than equation 1, so the risk of the adaptively shrunken router is at most that of the non-adaptive router: \[\begin{split}&\sum_{d=1}^{D}\mathbf{E}\big{[}\ell(\bar{g}_{m}(X^{d} )p(d,m)+(1-\bar{g}_{m}(X^{d}))(1-p(d,m)),y(X^{d},m))\big{]}\\ &\leq\sum_{d=1}^{D}\mathbf{E}\big{[}\ell(\bar{g}_{m}(X^{d}),y(X^ {d},m))\big{]}.\end{split} \tag{10}\] This suggests (subject to standard assumptions on the loss function) that adaptive shrinkage routing leads to better approximations of the oracle router. Lemma 4.1 confirms this intuition. **Lemma 4.1**.: _Let \(\ell(y_{1},y_{2})=\rho(y_{1}-y_{2})\) for some subadditive \(\rho:\mathbf{R}\rightarrow\mathbf{R}\) (e.g. \(\rho(x)=\frac{1}{2}x^{2}\) for the square loss). We have_ \[\ell(S_{2},\widetilde{S}) \leq\mathbf{E}\big{[}\ell(\bar{g}_{m}(X^{d}),y(X^{d},m))\big{]},\] \[\ell(S_{3},\widetilde{S}) \leq\mathbf{E}\big{[}\ell(p(d,m)\bar{g}_{m}(X^{d})+(1-p(d,m))(1- \bar{g}_{m}(X^{d})),y(X^{d},m))\big{]}).\] We present the proof in Appendix D. Combining equation 10 and Lemma 4.1, we expect the adaptive router based on \(S_{3}\) to outperform its non-adaptive counterpart based on \(S_{2}\). That said, it is unclear whether adaptive shrinkage will improve the performance of the adaptive router in finite samples: the expected performance of the adaptive router may be offset by the inflation in variance from fitting the larger (adaptive) model class. Fortunately, our empirical results show that task-specific adaption, i.e., using \(S_{3}\) as a score for routing, generally improves performance. The two-step method for fitting \(\bar{g}_{m}\) and \(p\) in Section 4 approximately minimizes equation 9 with a single Gauss-Seidel pass through the decision variables. ## 5 Experiments ### Model routing on HELM We explore the benefits and challenges of learning from benchmarks using the HELM (Liang et al., 2022) benchmark. DataWe select 29 datasets representing scenarios such as question answering (including a subset of MMLU (Hendrycks et al., 2020)), text classification, language, knowledge, and reasoning, among others. We present additional information about these datasets in Table 3. ModelsWe evaluate 18 open-source models ranging in size from 3B to 70B, including base and chat variations of Llama 2 in different sizes. All models are summarized in Table 4. Model routingThe best model on average (BMA) across the 29 considered HELM datasets is llama-2-70b (followed by llama-2-70b-chat). Our goal is to show that learning model routers from benchmark data can simultaneously outperform BMA and reduce inference costs by recommending smaller LLMs for tasks where they can perform well. We compare models selected with the three scores, \(S_{1},S_{2}\), and \(S_{3}\), presented in Section 4 to the performance of llama-2-70b, i.e., the BMA. All correctness predictors \(g_{m}\)s are kNN classifiers with \(k=5\). We also report the performance of the best model according to the "oracle" score \(\tilde{S}\), which is the upper bound on what can be achieved with model routing, and \(\tilde{S}_{3}\), which corresponds to \(S_{3}\) with the true \(p(d^{\prime},m)\), i.e., the accuracy of (an imperfect) \(g_{m}\) on \(d^{\prime}\). Finally, we compare to scoring LLMs with the average log-likelihood (LL) (or negative perplexity) of the response they generate on the inputs from the task of interest. 
This last baseline requires producing generations with _every_ LLM at test time to make a selection, while all of our scores only require generating with the chosen LLM.

**Results.** We conduct 29 sets of experiments, each time selecting 28 of the datasets as the benchmark data for training the LLM routers and using the remaining task as the new task \(d^{\prime}\) for evaluating the quality of the LLM selection for this task. In Table 1 we report averages across experiments for the performance of the selected model (Acc.), the ratio of this performance to the performance of the best model for the corresponding new task (Ratio to Best), Pearson and Spearman rank correlations between model accuracies and model scores, the number of parameters of the selected model (# Params), and the rank of the selected model out of the 18 considered (Rank). We also report the fraction of times the BMA is selected by a method (% BMA). Best results are highlighted with bold and second best with an underline (excluding Oracle). First, we notice that accounting for imperfections of the correctness predictors (their average accuracy is 0.59) has clear benefits: when we have access to the true accuracy of correctness predictors, the corresponding score, \(S_{3}\) true \(p\), noticeably outperforms all other scores. Our simple kernel smoothing estimator of this accuracy (MAE\(=0.116\)) allows us to obtain a practical model routing score \(S_{3}\) that outperforms BMA (llama-2-70b) while choosing smaller models for some of the tasks (as evident from the average number of parameters of the chosen models). \(S_{2}\) sacrifices some accuracy but chooses even smaller performant models. Overall, learning from benchmarks allows us to obtain LLM routers that can improve overall performance while utilizing smaller models where appropriate. Finally, we note that log-likelihood (LL) also performs well; however, routing with it requires passing each test input through _each_ candidate LLM, which together have 347B parameters.

\begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & Acc. & Ratio to Best & Pearson & Spearman & \% BMA & \# Params & Rank \\ \hline \(S_{1}\) eq. 3 & 0.662 & 0.855 & 0.685 & 0.465 & 0.17 & 40.3B & 6.172 \\ \(S_{2}\) eq. 4 & 0.676 & 0.868 & 0.636 & 0.468 & 0.10 & 44.3B & 5.897 \\ \(S_{3}\) eq. 7, 8 & 0.694 & 0.898 & 0.727 & 0.492 & 0.48 & 49.8B & 5.310 \\ \(S_{3}\) true \(p\) & **0.735** & **0.944** & **0.799** & **0.596** & 0.22 & **33.8B** & **3.800** \\ LL & 0.684 & 0.869 & 0.714 & 0.459 & 0.10 & — & 6.517 \\ BMA & 0.688 & 0.884 & — & — & 1.00 & 70.0B & 6.069 \\ \hline Oracle & 0.773 & 1.000 & — & — & 0.21 & 29.1B & 1.000 \\ \hline \hline \end{tabular} \end{table}
Table 1: LLM routing on HELM: Comparison of various model scores for LLM routing with the Oracle model selection and performance of the best model on average (BMA).

**Reducing the OOD gap.** The average accuracy of correctness predictors across tasks and models for the experiments in Table 1 is 0.59. It is a fairly low accuracy for binary classification, which we attribute to the diversity of tasks in the HELM benchmark leading to substantial distribution shifts when predicting the correctness of LLMs on held-out tasks. We investigate the quality of model routing when we reduce this OOD gap. A simple strategy to reduce this gap is to collect a small number of labeled in-distribution samples.
This can be accomplished by asking a practitioner to provide reference answers (\(r_{i}^{d^{\prime}}\)s) for a small number of inputs from their task, allowing us to evaluate the correctness of candidate LLMs on these in-distribution inputs and use them to improve correctness predictors. We simulate this scenario by moving \(\min(\alpha n^{d^{\prime}},50)\) samples from the data from a new task \(d^{\prime}\) to the data for training the correctness predictors. The upper limit of 50 samples is to maintain practical utility while accounting for varying dataset sizes (see Table 3). We conduct 29 sets of experiments, repeating each one 10 times to obtain standard deviations (randomness is due to the random selection of data points from a new task for reducing the OOD gap). We summarize the average accuracy of models selected with various routing scores for varying \(\alpha\) in Figure 2 (\(\alpha=0\) corresponds to Table 1). Results for Pearson correlation are in Figure 6(a). We see that even a small number of in-distribution samples (\(\alpha=0.05\)) can reduce the OOD gap (the corresponding average accuracy of correctness predictors is 0.65; see Figure 6(b)) and noticeably improves the model routing performance of all three of our scores. When the number of in-distribution samples further increases, \(S_{1}\) starts to outperform \(S_{3}\). We attribute this observation to kNN being well-calibrated in-distribution, i.e., the correctness predictors provide reliable estimates of their own confidence \(P(y|x)\), which are used by \(S_{1}\) in equation 3. Finally, we note a fairly large variance in the results due to the random selection of the in-distribution training samples from \(d^{\prime}\), suggesting that active learning (Settles, 2009) can help to further improve LLM routing.

Figure 2: Using \(\min(\alpha n^{d^{\prime}},50)\) training samples from \(d^{\prime}\) to reduce the OOD gap.

### Model Routing on MixInstruct

We now consider a different setting and task type, the MixInstruct benchmark dataset (Jiang et al., 2023). The dataset is composed of instruction-following tasks, divided into train/validation/test sets of 100K/5K/5K samples, and includes evaluations of \(N=11\) open-source LLMs using common metrics, e.g., BERTScore (Zhang et al., 2020), BARTScore (Yuan et al., 2021), and BLEURT (Sellam et al., 2020). In Jiang et al. (2023), this benchmark was used to compare different LLM ranking methods in per-instance model selection. We follow the same setting and apply our score \(S_{1}(m,d^{\prime})\) to the test set, per instance, where we use the 100K-sample train set as the benchmark data for training our LLM router. See Appendices A and C for details on the score computation and the experiment parameters, respectively. Due to the per-instance setting, and since the test set was constructed from in-distribution data, we focus on our simplest router model \(S_{1}\), equation 3. We compare our approach with the scoring methods examined by Jiang et al. (2023), as well as with scoring based on the average log-likelihood (LL) of the model responses to the inputs. Additionally, we present the metrics for the best models on average (BMA), Open-Assistant (LAION-AI, 2023) and Vicuna (Chiang et al., 2023).

Figure 3: Average metrics on subsets of the MixInstruct test set, defined by limiting the maximal average distance between test instances and their closest neighbors in the reference (train) set.
We report the results of BERTScore, BARTScore and BLEURT in Table 2, along with the number of model calls per instance (MCPI) performed during inference time. All compared methods require model generations for every point in the test set, by each of the examined LLMs, whereas our approach requires only one model generation and one call to some general embedding function. In addition, all methods, except for LL, require training auxiliary language models, whereas our approach is a simple kNN classifier on the embedded inputs. While our approach does not consistently outperform the compared methods, these results demonstrate the potential of using benchmark datasets for model routing with significantly better inference-time efficiency.

**Effect of benchmark dataset sparsity.** To highlight the potential of our approach in this setting, we examine the effect of the reference benchmark data sparsity. We apply our method to different subsets of the test set, \(X_{\text{test}}\), where the subsets are defined by limiting the maximal average distance of each test set point to the closest points from the reference (train) set, denoted by \(\text{NN}_{\text{train}}\), i.e., \(X_{C}^{\prime}=\left\{x^{\prime}\in X_{\text{test}}\,\middle|\,\frac{1}{|\text{NN}_{\text{train}}(x^{\prime})|}\sum_{x\in\text{NN}_{\text{train}}(x^{\prime})}\text{dist}(x^{\prime},x)<C\right\}\), where \(C\) is the maximal average distance and \(X_{C}^{\prime}\) is the resulting subset of the test set. Figure 3 presents the metric scores for the different subsets using our method, the oracle (best possible choices), and LL scoring. We also report the percentage of the test set that is used in each subset. This figure depicts that our predictor approaches the oracle metrics as the average distance to the reference points decreases. This suggests that adding more benchmark datasets, to reduce the sparsity of the reference space, may lead to better LLM selections with our approach.

## 6 Discussion and Conclusion

**How useful are smaller LLMs?** While a given LLM may work best on average, these models tend to be the biggest and therefore most expensive to run. Practitioners can achieve gains in cost, compute, and latency if we can successfully predict whether a smaller LLM can be adequate for a given task. Identifying good smaller models for tasks of interest will also redefine the cost/benefit tradeoff behind automating certain tasks, potentially incentivizing the automation of new tasks that were previously cost-prohibitive to automate with larger LLMs. To evaluate the potential of smaller LLMs we revisit our HELM experiment in Figure 2. In Figure 4, we perform LLM routing using _only models with \(\leq\) 13B parameters_ and compare it to the performance of Llama 2 70B.
\begin{table} \begin{tabular}{l c c c c} \hline \hline & BERTScore \(\uparrow\) & BARTScore \(\uparrow\) & BLEURT \(\uparrow\) & MCPI \\ \hline Random & 66.36 & -3.76 & -0.77 & - \\ LL & 65.83 & -4.12 & -0.96 & \(N\) \\ BMA: Open-Assistant & 74.68 & -3.45 & -0.39 & - \\ BMA: Vicuna & 69.60 & -3.44 & -0.61 & - \\ MLM-Scoring (Salazar et al., 2020) & 64.77 & -4.03 & -0.88 & \(N\) \\ SimCLS (Liu and Liu, 2021) & 73.14 & -3.22 & -0.38 & \(N\) \\ SummaReranker (Ravaut et al., 2022) & 71.60 & -3.25 & -0.41 & \(N\) \\ PairRanker (Jiang et al., 2023) & 72.97 & **-3.14** & **-0.37** & \(N\) \\ Ours & **74.75** & -3.40 & -0.38 & **2** \\ \hline Oracle & 77.67 & -2.87 & -0.15 & \(N\) \\ \hline \hline \end{tabular} \end{table} Table 2: Average metrics for per-instance LLM selection on the test set of MixInstruct. MCPI denotes model calls per instance, for \(N\) models. Best results are highlighted with bold and second best with an underline (excluding Oracle).

Figure 4: LLM routing with \(\leq\) 13B parameter models compared to Llama 2 70B.

Oracle's performance demonstrates that it is conceptually possible to outperform a large model by routing smaller LLMs. Results with our scores \(S_{1}\) and \(S_{2}\) (see Figure 7 for breakdown by scores) demonstrate that it is also practically feasible to match the performance of the 70B model by combining learning from benchmarks with a small number (\(\alpha=0.04\), i.e., 2-40 samples) of labeled samples from a new task that a practitioner can provide to save on the inference costs in their LLM application.

**Learning from more benchmarks.** We anticipate learning LLM routers from benchmarks to be the most effective when new tasks are similar to the benchmark tasks, thus reducing the OOD gap without any labeling burden for a practitioner. To empirically investigate this hypothesis, in Figure 5 we visualize the relation between the quality of model routing with \(S_{3}\), measured with Pearson correlation between model scores and accuracies of candidate LLMs, and the distance \(u(d^{\prime})\) from a new task \(d^{\prime}\) to the available benchmark data for training the routers. In this experiment, we aggregate results across different \(\alpha\) values from Figure 2. For smaller distance values the correlation approaches 1, while for large distances it sometimes deteriorates. Results for other scores demonstrate a similar trend and are presented in Appendix B.2 along with additional details. This experiment and the benchmark dataset sparsity analysis presented in Figure 3 for MixInstruct illustrate that learning with _more benchmarks_ can improve the efficacy and reliability of the LLM routers, as new tasks are more likely to be close to a large collection of datasets.

**Future work.** Our work demonstrates the potential of learning from benchmarks for LLM routing and investigates three model scores in the context of OOD generalization when routing LLMs for new tasks. We summarize potential next steps for improving the quality and efficacy of LLM routers. The major challenge of LLM routing is OOD generalization of correctness predictors. Thus, using more benchmarks and modern methods for improving OOD generalization to learn correctness predictors is a promising next step. A practitioner can also provide labels for a few samples from their task, possibly guided by active learning techniques, to adapt or fine-tune correctness predictors.
Even when reducing the OOD gap is too challenging, our score accounting for the (potentially low) accuracy of correctness predictors demonstrated strong results when this accuracy, \(p(d^{\prime},m)\), is known for a new task, thus encouraging the development of methods for estimating it better. We also anticipate that routing "expert" LLMs fine-tuned for a specific domain can improve the results. Regions of the sample space where such models are "correct" should mostly align with the domains of their expertise (recall Figure 1), making it easier to learn the corresponding correctness predictors, and simplifying LLM routing when a new task is from a specific domain. Our experiments in Figure 4 demonstrate the utility of LLM routing with _smaller_ models, which can reduce costs and facilitate the use of LLMs in a broader set of domains. Thus, we want to explore modifications to our scores that will encourage the selection of smaller LLMs when their anticipated performance is comparable to the larger, more reliable models. Prior work on frugal API selection (Chen et al., 2020, 2023) provides a good starting point to explore this direction.
2310.20651
The Quantum Decoding Problem
One of the founding results of lattice based cryptography is a quantum reduction from the Short Integer Solution problem to the Learning with Errors problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry that this reduction can be made more powerful by replacing the learning with errors problem with a quantum equivalent, where the errors are given in quantum superposition. In the context of codes, this can be adapted to a reduction from finding short codewords to a quantum decoding problem for random linear codes. We therefore consider in this paper the quantum decoding problem, where we are given a superposition of noisy versions of a codeword and we want to recover the corresponding codeword. When we measure the superposition, we get back the usual classical decoding problem for which the best known algorithms are in the constant rate and error-rate regime exponential in the codelength. However, we will show here that when the noise rate is small enough, then the quantum decoding problem can be solved in quantum polynomial time. Moreover, we also show that the problem can in principle be solved quantumly (albeit not efficiently) for noise rates for which the associated classical decoding problem cannot be solved at all for information theoretic reasons. We then revisit Regev's reduction in the context of codes. We show that using our algorithms for the quantum decoding problem in Regev's reduction matches the best known quantum algorithms for the short codeword problem. This shows in some sense the tightness of Regev's reduction when considering the quantum decoding problem and also paves the way for new quantum algorithms for the short codeword problem.
André Chailloux, Jean-Pierre Tillich
2023-10-31T17:21:32Z
http://arxiv.org/abs/2310.20651v1
# The Quantum Decoding Problem

###### Abstract

One of the founding results of lattice based cryptography is a quantum reduction from the Short Integer Solution problem to the Learning with Errors problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry that this reduction can be made more powerful by replacing the learning with errors problem with a quantum equivalent, where the errors are given in quantum superposition. In the context of codes, this can be adapted to a reduction from finding short codewords to a quantum decoding problem for random linear codes. We therefore consider in this paper the quantum decoding problem, where we are given a superposition of noisy versions of a codeword and we want to recover the corresponding codeword. When we measure the superposition, we get back the usual classical decoding problem for which the best known algorithms are in the constant rate and error-rate regime exponential in the codelength. However, we will show here that when the noise rate is small enough, then the quantum decoding problem can be solved in quantum polynomial time. Moreover, we also show that the problem can in principle be solved quantumly (albeit not efficiently) for noise rates for which the associated classical decoding problem cannot be solved at all for information theoretic reasons. We then revisit Regev's reduction in the context of codes. We show that using our algorithms for the quantum decoding problem in Regev's reduction matches the best known quantum algorithms for the short codeword problem. This shows in some sense the tightness of Regev's reduction when considering the quantum decoding problem and also paves the way for new quantum algorithms for the short codeword problem.

###### Contents

* 1 Introduction
  * 1.1 General context
  * 1.2 Regev's quantum reduction and follow-up work
  * 1.3 Contributions
    * 1.3.1 Using USD as a means of improving quantum algorithms for QDP
    * 1.3.2 Determining exactly the tractability of the quantum decoding problem
    * 1.3.3 Using our algorithms in Regev's reduction
* 2 Preliminaries
  * 2.1 Notations and basic probabilities
  * 2.2 Random linear codes
    * 2.2.1 Basic properties
    * 2.2.2 Classical and quantum decoding problems
    * 2.2.3 Punctured codes and Prange's algorithm
  * 2.3 Distinguishing quantum states
  * 2.4 The classical and quantum Fourier transform on \(\mathbb{F}_{q}^{n}\)
* 3 Algorithms for the binary quantum decoding problem
  * 3.1 Quantum polynomial time algorithm using unambiguous state discrimination
  * 3.2 Reduction between quantum decoding problems in the binary setting
    * 3.2.1 Partial unambiguous state discrimination
    * 3.2.2 Proof of Theorem 5
    * 3.2.3 Interpretation of the above as changing the noise model
* 4 Polynomial time algorithm for QDP in the \(q\)-ary setting
  * 4.1 Unambiguous state discrimination in the \(q\)-ary setting
  * 4.2 Quantum polynomial time algorithm for QDP in the \(q\)-ary setting
* 5 (In)tractability of the quantum decoding problem
  * 5.1 Computing the PGM associated to the quantum decoding problem
  * 5.2 (In)tractability results
    * 5.2.1 First computations and probabilistic arguments on random codes
    * 5.2.2 Tractability
    * 5.2.3 Intractability
* 6 From the quantum decoding problem to the short codeword problem
  * 6.1 Regev's reduction for codes
  * 6.2 The quantum reduction with unambiguous state discrimination
  * 6.3 The Quantum reduction with the Pretty Good Measurement
    * 6.3.1 A counterexample that shows complete failure
    * 6.3.2 A measurement that works
* A General phases
## 1 Introduction

### 1.1 General context

Error correcting codes, which appeared first as the fundamental tool to transmit information reliably through a noisy channel [14], have found their way outside this kind of application, for instance in average case complexity [10], or when locally testable codes were found to be the combinatorial core of probabilistically checkable proofs (PCP) [15]. Another important application domain for error correction is cryptography, with Shamir's secret sharing scheme [14], authentication protocols [21], pseudorandom generators [22], signature schemes [23], or public-key encryption schemes [13, 15, 16]. Contrary to the applications in reliable communication, data storage, or complexity theory, where finding suitable families of structured codes is the problem that has to be addressed, many of these applications in cryptography deal with _random linear_ codes, and more precisely take advantage of the hardness of decoding a generic linear code. The decoding problem corresponds to decoding the \(k\)-dimensional vector space \(\mathscr{C}\) (_i.e._, the code) generated by the rows of a randomly generated \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) (which is called a _generating matrix_ of the code):

\[\mathscr{C}\stackrel{{\triangle}}{{=}}\left\{\boldsymbol{u}\mathbf{G}\colon\boldsymbol{u}\in\mathbb{F}_{q}^{k}\right\}. \tag{1}\]

Here \(\mathbb{F}_{q}\) denotes the finite field with \(q\) elements. In the decoding problem, we are given the noisy codeword \(\boldsymbol{c}+\boldsymbol{e}\) where \(\boldsymbol{c}\) belongs to \(\mathscr{C}\) and we are asked to find the original codeword \(\boldsymbol{c}\).

**Problem 1** (\(\mathrm{DP}(q,n,k,f)\)).: _The decoding problem with positive integer parameters \(q,n,k\) and a probability distribution \(f\) on \(\mathbb{F}_{q}^{n}\) is defined as:_

* _Given:_ \((\mathbf{G},\boldsymbol{c}+\boldsymbol{e})\)_, where_ \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) _and_ \(\boldsymbol{u}\in\mathbb{F}_{q}^{k}\) _are sampled uniformly at random, which generates a random codeword_ \(\boldsymbol{c}=\boldsymbol{u}\mathbf{G}\)_, and_ \(\boldsymbol{e}\) _is sampled from the distribution_ \(f\)_._
* _Goal: from_ \((\mathbf{G},\boldsymbol{c}+\boldsymbol{e})\)_, find_ \(\boldsymbol{c}\)_._

This problem for random codes has been studied for a long time and, despite many efforts on this issue, the best algorithms are exponential in the codelength \(n\) for natural noise distributions \(f\) in the regime where \(k\) is linear in \(n\) and the rate \(R\stackrel{{\triangle}}{{=}}\frac{k}{n}\) is bounded away from \(0\) and \(1\) [11, 12, 13, 1, 1, 1, 1]. The most common noise distribution studied in this context is the uniform distribution over the errors of fixed Hamming weight \(t\), but there are also other distributions, like in the binary case (\(q=2\)) the i.i.d. Bernoulli distribution model which is frequently found in the Learning Parity with Noise problem (LPN) [17]. When the number of samples of the LPN problem is fixed, this is exactly the decoding problem defined above where \(n\) is equal to the number of available LPN samples. When the number of samples in LPN is unlimited, this can be viewed as a decoding problem where we may add on the fly as many columns to \(\mathbf{G}\) as we need (and as many corresponding positions in \(\boldsymbol{u}\mathbf{G}+\boldsymbol{e}\)). The LWE problem in its standard form [21] is a slight variation on the input alphabet: it is \(\mathbb{Z}_{q}\) rather than the finite field \(\mathbb{F}_{q}\) and, as in LPN, the number of samples is often assumed to be unlimited. The noise distribution is frequently the discrete Gaussian distribution in this case.
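As a concrete illustration of Problem 1, here is a minimal sketch of sampling a \(\mathrm{DP}(q,n,k,f)\) instance for prime \(q\) (so that \(\mathbb{F}_{q}\) arithmetic is just arithmetic modulo \(q\)) and for the i.i.d. noise model discussed above; the helper name and the toy parameters are illustrative only.

```python
# A hedged sketch: sample (G, y = uG + e, c = uG) over F_q (q prime), where
# each coordinate of e is a uniform nonzero element of F_q w.p. p, else 0.
import numpy as np

def sample_dp_instance(q, n, k, p, rng):
    G = rng.integers(0, q, size=(k, n))      # random generating matrix
    u = rng.integers(0, q, size=k)           # random message
    c = (u @ G) % q                          # codeword c = uG
    e = np.where(rng.random(n) < p, rng.integers(1, q, size=n), 0)
    return G, (c + e) % q, c                 # a solver sees only (G, y)

q = 3
rng = np.random.default_rng(1)
G, y, c = sample_dp_instance(q, n=20, k=8, p=0.1, rng=rng)
print("number of corrupted positions:", int(np.count_nonzero((y - c) % q)))
```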
The fact that in LPN, \(n\) can grow without limit with a fixed value of \(k\) and a fixed noise distribution can only make the problem simpler than the decoding problem. Interestingly enough, there are now algorithms solving the LPN problem, like the Blum-Kalai-Wasserman algorithm [1], which solve the problem with only subexponential complexity of the form \(2^{O\left(\frac{k}{\log k}\right)}\), whereas no algorithm with such a complexity is known for \(n=O\left(k\right)\) (all known algorithms have exponential complexity in this case). Note that as soon as \(n=\Omega\left(k^{1+\varepsilon}\right)\) for any absolute constant \(\varepsilon>0\), the best known algorithm [10] is somewhat in between, namely \(2^{O\left(\frac{k}{\log\log k}\right)}\), and consists in building many new LPN samples from the original pool of samples. In terms of the decoding problem given above, this consists in artificially adding new columns to the generator matrix \(\mathbf{G}\) by summing a small number of columns of \(\mathbf{G}\) (together with the relevant positions of \(\boldsymbol{u}\mathbf{G}+\boldsymbol{e}\)) to enlarge the value of \(n\), and then solving the new decoding problem for this larger matrix. In our work, we will only be interested in the linear regime, _i.e._ \(k=\Theta(n)\). It should be added here that the LWE problem has proved much more versatile than LPN for building cryptographic primitives. Indeed, it not only allows to build cryptosystems [11], but also to obtain advanced cryptographic functionalities such as fully homomorphic encryption [1] or attribute-based encryption [14]. It should also be mentioned that three out of the four signature schemes, public key encryption schemes or key establishment protocols designed to resist a quantum computer that were selected by the NIST for standardization are based on the hardness of this problem (see [https://csrc.nist.gov/Projects/post-quantum-cryptography](https://csrc.nist.gov/Projects/post-quantum-cryptography)). While the security of many code-based cryptosystems relies on the hardness of the decoding problem, it can also be based on finding a "short" codeword (as in [17] or in [1, 2] to build collision resistant hash functions), a problem which is stated as follows.

**Problem 2** (\(\mathrm{SCP}(q,n,k,w)\)).: _The short codeword problem with parameters \(q,n,k,w\in\mathbb{N}\) is defined as:_

* _Given:_ \(\mathbf{H}\in\mathbb{F}_{q}^{(n-k)\times n}\) _which is sampled uniformly at random,_
* _Find:_ \(\boldsymbol{c}\in\mathbb{F}_{q}^{n}\setminus\{\mathbf{0}\}\) _such that_ \(\mathbf{H}\boldsymbol{c}^{\intercal}=\mathbf{0}\) _and the weight_ \(|\boldsymbol{c}|\) _of_ \(\boldsymbol{c}\) _satisfies_ \(|\boldsymbol{c}|\leq w\)_._

Here we are looking for a non-zero codeword \(\boldsymbol{c}\) of weight \(\leq w\) in the \(k\)-dimensional code \(\mathscr{C}\) defined by the so-called parity-check matrix \(\mathbf{H}\), namely\({}^{1}\):
\[\mathscr{C}\stackrel{{\triangle}}{{=}}\left\{\boldsymbol{c}\in\mathbb{F}_{q}^{n}\colon\mathbf{H}\boldsymbol{c}^{\intercal}=\mathbf{0}\right\}.\]

Footnote 1: The short codeword problem is usually defined by picking a random parity-check matrix \(\mathbf{H}\in\mathbb{F}_{q}^{(n-k)\times n}\) and not a random generating matrix \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\), but the differences are minor (see for example [1]) and one could also define this problem via the generating matrix of a code, as we did for the decoding problem.

The weight function which is generally used here is the Hamming weight, _i.e._ for a vector \(\boldsymbol{x}=(x_{1},\cdots,x_{n})\in\mathbb{F}_{q}^{n}\), its Hamming weight is defined as

\[|\boldsymbol{x}|\stackrel{{\triangle}}{{=}}\#\{i\in\llbracket 1,n\rrbracket:x_{i}\neq 0\}.\]

We will only deal with this weight here. The lattice version of this problem is called the Short Integer Solution (SIS) problem. It consists in replacing the finite field \(\mathbb{F}_{q}\) by \(\mathbb{Z}_{q}\) and using as weight function the euclidean weight \(\sqrt{\sum_{i=1}^{n}x_{i}^{2}}\) (by representing the elements in \(\mathbb{Z}_{q}\) as \(\{-\lfloor(q-1)/2\rfloor,\cdots,0,\cdots,\lceil(q-1)/2\rceil\}\)). It was introduced in the seminal work [1] to build a family of one-way functions based on the difficulty of this problem. What made this problem so attractive is that it was shown there to be as hard on average as a worst case short lattice vector problem. Decoding and looking for short codewords are problems that have been conjectured to be extremely close. They have been studied for a long time, and the best algorithms for solving these two problems are the same, namely Information Set Decoding algorithms [10, 11, 1, 2, 3, 4]. A reduction from decoding to the problem of finding short codewords is known, but in an LPN context [1, 2, 3, 4]. However, no reduction was known in the other direction, even in an LPN context, until recently, when [10] gave a quantum reduction from SCP to DP, following the path of the breakthrough result of [14] which reduced the problem of sampling short lattice vectors to LWE. Note that the reduction [14] was not classical but quantum. Later on, it was shown in [11] that the quantum reduction technique of Regev allows to reduce quantumly SIS to LWE, and this kind of reduction also applies to structured versions of these problems, namely Ideal-SIS can be reduced to Ideal-LWE. There is a fundamental difficulty in reducing the search of low weight codewords to decoding a linear code, which is due to the fact that the nature of these two problems is very different. Decoding concentrates on a region of parameters where there is typically just one solution, whereas finding low weight codewords concentrates on a region of parameters where there are many solutions (and typically an exponential number of solutions). This makes these problems inherently very different. This was also the case for the reduction of SIS to LWE, and the fact that we can have a reduction from one to the other by looking for quantum reductions instead of classical reductions was really a breakthrough at that time. It is also worthwhile to notice that all these problems, DP, LPN, LWE, SCP, SIS, are widely believed to be hard also for a quantum computer. The best quantum algorithms for solving these problems have not changed the picture much: the complexity exponent gets essentially only reduced by a constant factor when compared to the best classical algorithms achieving this task, see for instance [1, 10, 11, 12, 13, 14]. Indeed, as explained above, most public-key cryptosystems and digital signature schemes that are being standardized right now by the NIST are based on the presumed hardness of LWE, and there are also alternate fourth round finalists of the competition [1, 1, 12] which are based on the hardness of binary DP.
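To make Problem 2 concrete, the following minimal sketch checks whether a candidate \(\boldsymbol{c}\) is a valid solution of \(\mathrm{SCP}(q,n,k,w)\) for prime \(q\); the function names and the toy parity-check matrix are ours, chosen only for illustration.

```python
# A hedged sketch of an SCP solution checker: c must be nonzero, lie in the
# kernel of H over F_q, and have Hamming weight at most w.
import numpy as np

def hamming_weight(x):
    return int(np.count_nonzero(x))

def is_scp_solution(H, c, w, q):
    in_code = not np.any((H @ c) % q)        # H c^T = 0 over F_q
    return in_code and 0 < hamming_weight(c) <= w

# Toy usage over F_2: H is a 2 x 4 parity-check matrix.
H = np.array([[1, 1, 1, 0],
              [0, 1, 1, 1]])
print(is_scp_solution(H, np.array([1, 1, 1, 0]), w=3, q=2))  # False: H c^T != 0
print(is_scp_solution(H, np.array([1, 0, 1, 1]), w=3, q=2))  # True: weight-3 codeword
```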
### 1.2 Regev's quantum reduction and follow-up work

Regev's quantum reduction [14] is at the core of complexity reductions for these problems and, together with [1], essentially started lattice-based cryptography. His approach, when rephrased in the coding context, is based on the following observation. Suppose that we were able to construct a quantum superposition of noisy codewords of a code \(\mathcal{C}\) of dimension \(k\) over \(\mathbb{F}_{q}\), namely \(\frac{1}{\sqrt{Z}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle\) for a normalization factor \(Z\). If we applied the quantum Fourier transform to such a state, then because of its periodicity we would get a superposition concentrating solely on the codewords of the dual \(\mathcal{C}^{\perp}\) of \(\mathcal{C}\), that is \(\frac{1}{\sqrt{Z}}\sum_{\boldsymbol{c}^{\perp}\in\mathcal{C}^{\perp}}\sqrt{\widehat{f}(\boldsymbol{c}^{\perp})}|\boldsymbol{c}^{\perp}\rangle\). Here \(\widehat{f}\) is the (classical) Fourier transform of \(f\) that we will properly define in the technical part of the paper. Recall that the dual code is defined as

**Definition 1** (dual code).: _Let \(\mathcal{C}\) be a linear code over \(\mathbb{F}_{q}\), i.e. a \(k\)-dimensional subspace of \(\mathbb{F}_{q}^{n}\) for some \(k\) and \(n\). The dual code \(\mathcal{C}^{\perp}\) is an \((n-k)\)-dimensional subspace of \(\mathbb{F}_{q}^{n}\) defined by_

\[\mathcal{C}^{\perp}\stackrel{{\triangle}}{{=}}\{\boldsymbol{d}\in\mathbb{F}_{q}^{n}:\boldsymbol{d}\cdot\boldsymbol{c}=0,\;\forall\boldsymbol{c}\in\mathcal{C}\},\]

_where \(\boldsymbol{x}\cdot\boldsymbol{y}=\sum_{i}x_{i}y_{i}\) stands for the inner product between the vectors \(\boldsymbol{x}\) and \(\boldsymbol{y}\)._

Now, we can expect that if \(f\) concentrates on fairly small weights, then \(\widehat{f}\) would also concentrate on rather small weights, and therefore we would have a way of sampling low weight (dual) codewords and solving SCP for the dual code. The point is now that \(\frac{1}{\sqrt{Z}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle\) could be obtained by solving the DP problem on states that are easy to construct. This is the main idea of Regev's reduction. More precisely, the whole algorithm works as follows.

Step 1. Creation of the tensor product of a uniform superposition of codewords and a quantum superposition of noise

\[|\phi_{1}\rangle=\sqrt{\frac{1}{q^{k}}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}\rangle\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{e}\rangle.\]

Step 2. Entangling the codeword with the noise by adding the first register to the second one and then swapping the two registers

\[|\phi_{2}\rangle=\sqrt{\frac{1}{q^{k}}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle|\boldsymbol{c}\rangle.\]

Step 3. Disentangling the two registers by decoding \(\boldsymbol{c}+\boldsymbol{e}\) and therefore finding \(\boldsymbol{c}\), which allows to erase the second register

\[|\phi_{3}\rangle=\sqrt{\frac{1}{Z}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle|\boldsymbol{0}\rangle.\]

(The different normalizing factor \(Z\) arises when the above decoding procedure is imperfect and we condition on measuring \(\boldsymbol{0}\) in the last register.)
Step 4. Applying the quantum Fourier transform on the first register to get

\[\frac{1}{\sqrt{Z}}\sum_{\boldsymbol{d}\in\mathcal{C}^{\perp}}\sqrt{\widehat{f}(\boldsymbol{d})}|\boldsymbol{d}\rangle|\boldsymbol{0}\rangle.\]

Step 5. Measuring the first register to get some \(\boldsymbol{d}\) in \(\mathcal{C}^{\perp}\).

This approach is at the heart of the quantum reductions obtained in [10, 11, 12]. It is also a crucial ingredient in the paper [10] proving verifiable quantum advantage by constructing, among other things, one-way functions that are collision resistant even against classical adversaries but are easily invertible quantumly. In [10, 11, 12], the crucial erasing/disentangling step is performed with the help of a _classical_ decoding algorithm. Indeed, any (classical or quantum) algorithm that can recover \(\boldsymbol{c}\) from \(\boldsymbol{c}+\boldsymbol{e}\) can be applied coherently to erase the last register in Step 3\({}^{2}\).

Footnote 2: Indeed, having such an algorithm means we can construct the unitary \(U:|\boldsymbol{c}+\boldsymbol{e}\rangle|0\rangle\to|\boldsymbol{c}+\boldsymbol{e}\rangle|\boldsymbol{c}\rangle\). Applying the inverse of this unitary will give the erasure operation.

A key insight observed in [10] is that it is actually enough to recover \(|\boldsymbol{c}\rangle\) from the state \(\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle\), so we are given a superposition of all the noisy codewords \(\boldsymbol{c}+\boldsymbol{e}\) and not a fixed one. This means we have to solve the following problem.

**Problem 3** (\(\mathrm{QDP}(q,n,k,f)\)).: _The quantum decoding problem with positive integer parameters \(q,n,k\) and a probability distribution \(f\) on \(\mathbb{F}_{q}^{n}\) is defined as:_

* _Input: Take_ \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) _and_ \(\boldsymbol{u}\in\mathbb{F}_{q}^{k}\) _sampled uniformly at random over their domain. Let_ \(\boldsymbol{c}=\boldsymbol{u}\mathbf{G}\) _and_ \(|\psi_{\boldsymbol{c}}\rangle\stackrel{{\triangle}}{{=}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle\)_. The (quantum) input to this problem is_ \((\mathbf{G},|\psi_{\boldsymbol{c}}\rangle)\)_._
* _Goal: given_ \((\mathbf{G},|\psi_{\boldsymbol{c}}\rangle)\)_, find_ \(\boldsymbol{c}\)_._

It is not clear a priori whether this is helpful or not. If one measures the state \(|\psi_{\boldsymbol{c}}\rangle\), then one recovers a noisy codeword and we are back to the classical decoding problem. However, improvements obtained by directly solving the (LWE variant of the) above problem have been proposed in [10], where a polynomial time quantum algorithm based on Regev's approach is given for solving SIS in the \(l_{\infty}\) norm (and not the euclidean norm as is standard there) for extremely high rate codes. There, the decoding problem is handled by measuring the qudits in an appropriate basis allowing to rule out certain values for the code-symbols; the Arora-Ge algorithm [1] is then used to recover the codeword completely by solving an algebraic system which, for the parameters considered there, is of polynomial complexity. Despite the fact that the parameters of the SIS problem are highly degenerate, no efficient classical algorithm performing this task is known. This paper puts forward the S-LWE and the C-LWE problems. Informally, the first problem is the one we solve in Step 3 above and the second one is just to create directly the uniform superposition of noisy codewords obtained at Step 3.

### 1.3 Contributions

Our work has two starting points.
First, the quantum reduction of [13] between the short codeword problem and the decoding problem in the regime relevant for code-based cryptography, _i.e._ a constant code rate \(\frac{k}{n}\) and constant error rate. Second, the key insight of [10] that one requires to solve the quantum decoding problem in the above reduction, which can make it more efficient. Instead of focusing too much on the reduction, our aim is first to study the quantum decoding problem for its own sake. Indeed, the problem is already interesting as a quantum generalization of the decoding problem, and the fact that it is used in the above reduction creates strong motivation for studying it. In this work, we focus only on the Bernoulli noise of parameter \(p\). This means we consider the error function

\[f(\boldsymbol{e})=(1-p)^{n-|\boldsymbol{e}|}\left(\frac{p}{q-1}\right)^{|\boldsymbol{e}|},\]

which in turn means that for any \(\boldsymbol{c}=(c_{1},\ldots,c_{n})\in\mathbb{F}_{q}^{n}\), we can rewrite

\[|\psi_{\boldsymbol{c}}\rangle\stackrel{{\triangle}}{{=}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle=\bigotimes_{i=1}^{n}\left(\sqrt{1-p}|c_{i}\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\sqrt{\frac{p}{q-1}}|c_{i}+\alpha\rangle\right).\]

For this Bernoulli noise with parameter \(p\), the associated quantum decoding problem is written \(\mathrm{QDP}(q,n,k,p)\). We show that the complexity of the quantum decoding problem indeed significantly differs from its classical counterpart. Our contributions can be summarized as follows.

**A polynomial time algorithm for \(\mathrm{QDP}\) when the noise is low enough (but still of constant rate).** We will show that the quantum problem \(\mathrm{QDP}\) defined here is probably much easier than its classical counterpart \(\mathrm{DP}\). Indeed, for a fixed rate \(R\stackrel{{\triangle}}{{=}}\frac{k}{n}\), only exponential-time algorithms are known for \(\mathrm{DP}\) for natural noise models, for instance the i.i.d. Bernoulli model where \(q=2\) and \(\Pr(e_{i}=1)=p\), for which all known algorithms are exponential for \(p\) in \((0,1)\). This is not the case for the associated \(\mathrm{QDP}\) problem, where we will show that by using Unambiguous State Discrimination (USD) together with linear algebra we can solve the problem in _polynomial time_ up to some limiting value of \(p\) which is strictly between \(0\) and \(1\) for a fixed rate \(R\). We generalize this result to any \(q\) by generalizing existing bounds on USD, and we also present an algorithm for partial binary unambiguous state discrimination which could be of independent interest.

**A problem which can be solved above capacity.** There is an information theoretic limit for _any algorithm_ solving, classically or quantumly, the classical decoding problem DP. When the rate exceeds the capacity of the noisy channel specified by \(f\) (for an i.i.d. noise), it is simply impossible to solve the decoding problem with non-vanishing success probability, because there are exponentially many candidates at least as likely as the right candidate. The problem becomes intractable for this reason alone. For instance, in the Bernoulli model above, the rate \(R=k/n\) has to be smaller than \(1-h(p)\), where \(h(x)\) is the binary entropy function, \(h(x)=-x\log_{2}(x)-(1-x)\log_{2}(1-x)\). Somewhat surprisingly, it turns out that we can go above the Shannon capacity for the QDP problem. Moreover, with the help of the Pretty Good Measurement (PGM) we can fully characterize the noise range where the problem is tractable.
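To make the state \(|\psi_{\boldsymbol{c}}\rangle\) above concrete, here is a brute-force numeric sketch (binary case, real amplitudes, tiny \(n\) only) which builds the tensor product state and checks that measuring it in the computational basis reproduces the classical Bernoulli noise; this is an illustration of the definition, not one of the paper's algorithms.

```python
# A hedged sketch: build |psi_c> = ⊗_i ( sqrt(1-p)|c_i> + sqrt(p)|1-c_i> )
# as a 2^n state vector, then sample one computational-basis measurement,
# which yields c + e with e i.i.d. Bernoulli(p), i.e. a classical DP input.
import numpy as np

def psi_c(c, p):
    state = np.array([1.0])
    for ci in c:
        qubit = np.zeros(2)
        qubit[ci] = np.sqrt(1 - p)               # amplitude on the correct value
        qubit[1 - ci] = np.sqrt(p)               # amplitude on the flipped value
        state = np.kron(state, qubit)            # tensor product, size doubles
    return state

c = np.array([1, 0, 1, 1, 0])
p = 0.2
probs = psi_c(c, p) ** 2                         # Born rule (real amplitudes)
rng = np.random.default_rng(2)
y = rng.choice(len(probs), p=probs)              # one measurement outcome
e = np.array(list(np.binary_repr(y, len(c))), dtype=int) ^ c
print("measured error pattern e =", e)           # each bit of e is 1 w.p. p
```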
**Applying QDP solvers in Regev's reduction.** Both algorithms (the one using USD and the other one based on the PGM) can be applied to sample small weight dual codewords and solve SCP. By applying the quantum reduction steps above, together with our polynomial time algorithm solving QDP, we obtain non-zero codewords of relative weight \(\omega\stackrel{{\triangle}}{{=}}w/n\) satisfying \(\omega\leq\frac{(q-1)(1-R)}{q}\). Interestingly enough, this is precisely the smallest weight that can be reached by the best known polynomial time algorithm, namely a minor variant of the Prange algorithm [14]. On the other hand, we will show that there is no hope to have a proper general reduction of SCP to QDP, by providing examples showing that we can solve QDP in a certain noise regime and still get nothing useful for SCP after using it in Regev's reduction. However, we can adapt the PGM to still obtain some small codewords up to the tractability bound. Our examples really show that we have to analyze the state that we have at Step 3 of the reduction on a case by case basis. We now give a detailed description of our contributions.

#### 1.3.1 Using USD as a means of improving quantum algorithms for QDP

**The binary setting.** Our first idea, which naturally extends the work of [13], is to apply USD to the quantum decoding problem. We first consider the binary setting, _i.e._ \(q=2\). This means the states \(|\Psi_{\boldsymbol{c}}\rangle=\sum_{\boldsymbol{e}\in\mathbb{F}_{2}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle\), for which we want to recover \(\boldsymbol{c}\), are of the form

\[|\Psi_{\boldsymbol{c}}\rangle=\bigotimes_{i=1}^{n}\left(\sqrt{1-p}|c_{i}\rangle+\sqrt{p}|1-c_{i}\rangle\right).\]

Consider a fixed coordinate \(i\) for which we have the state \(\sqrt{1-p}|c_{i}\rangle+\sqrt{p}|1-c_{i}\rangle\), which we call \(|\psi_{c_{i}}^{p}\rangle\). By measuring this state in the computational basis we get \(c_{i}\) w.p. \(1-p\) and \(1-c_{i}\) w.p. \(p\). This measurement is actually the one that best distinguishes \(|\psi_{0}\rangle\) and \(|\psi_{1}\rangle\). Another measurement of interest is unambiguous state discrimination. Here, the goal is not to distinguish optimally between \(|\psi_{0}^{p}\rangle\) and \(|\psi_{1}^{p}\rangle\) but to make sure that our guess is always correct, while allowing for some abort. In this setting, we have the following

**Proposition 1** (Unambiguous state discrimination).: _For any \(p\in[0,1]\), there exists a quantum measurement that on input \(|\psi_{c_{i}}^{p}\rangle\) outputs \(c_{i}\) w.p. \(1-2\sqrt{p(1-p)}\) and outputs \(\bot\) otherwise._

Using this measurement, the probability of guessing \(c_{i}\) correctly is always smaller than \(1-p\) for \(p\leq\frac{1}{2}\). However, we know exactly when we succeed in guessing \(c_{i}\). This will be extremely useful for decoding. Indeed, if we recover \(k\) values of \(c_{i}\), we recover the complete codeword \(\boldsymbol{c}\) with good probability by linear algebra, using the fact that \(\boldsymbol{c}=\boldsymbol{m}\mathbf{G}\) with \(\mathbf{G}\in\mathbb{F}_{2}^{k\times n}\). This leads to

**Theorem 1**.: _Let \(R\in[0,1]\). For any \(p<\left(\frac{R}{2}\right)^{\perp}\), there exists a quantum algorithm that solves \(\mathrm{QDP}(2,n,\lfloor Rn\rfloor,p)\) w.p. \(1-2^{-\Omega(n)}\) in time \(\mathrm{poly}(n)\). Here for a real number \(x\in[0,1]\), \(x^{\perp}\) stands for \(\frac{1-2\sqrt{x(1-x)}}{2}\)._
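The classical mechanism behind Theorem 1 can be illustrated with a small simulation: each coordinate is revealed w.p. \(1-2\sqrt{p(1-p)}\) (the USD success probability of Proposition 1) or erased, and \(\boldsymbol{c}\) is recovered by linear algebra over \(\mathbb{F}_{2}\) once the revealed columns of \(\mathbf{G}\) have rank \(k\). We only simulate the USD outcome statistics, not the quantum measurement itself.

```python
# A hedged sketch of USD-plus-linear-algebra decoding over F_2.
import numpy as np

def solve_gf2(A, b):
    """One solution x of A x = b over F_2 (A: m x k with full column rank k)."""
    m, k = A.shape
    M = np.concatenate([A % 2, (b % 2).reshape(-1, 1)], axis=1).astype(np.uint8)
    for col in range(k):                         # Gaussian elimination over F_2
        piv = next((r for r in range(col, m) if M[r, col]), None)
        if piv is None:
            raise ValueError("columns of A are not independent over F_2")
        M[[col, piv]] = M[[piv, col]]
        for r in range(m):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:k, k].astype(int)

def usd_decode(G, c, p, rng):
    """Simulate coordinate-wise USD on |psi_c>, then decode by linear algebra.

    USD on coordinate i reveals c_i w.p. 1 - 2*sqrt(p(1-p)), else outputs ⊥."""
    k, n = G.shape
    J = np.flatnonzero(rng.random(n) < 1 - 2 * np.sqrt(p * (1 - p)))
    try:
        m = solve_gf2(G[:, J].T, c[J])           # solve m G_J = c_J over F_2
    except ValueError:
        return None                              # not enough coordinates revealed
    return (m @ G) % 2

rng = np.random.default_rng(3)
k, n, p = 10, 60, 0.05                           # p below the (R/2)^perp regime
G = rng.integers(0, 2, size=(k, n))
c = (rng.integers(0, 2, size=k) @ G) % 2
print(np.array_equal(usd_decode(G, c, p, rng), c))  # True with high probability
```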
**Interpretation as changing the noise channel and partial unambiguous state discrimination.** A nice interpretation of the above algorithm is that when the error is in quantum superposition, one can use quantum measurements to change the noise model. For example, if we are given \(|\psi_{c_{i}}\rangle=\sqrt{1-p}|c_{i}\rangle+\sqrt{p}|1-c_{i}\rangle\), then

* One can measure in the computational basis to obtain \(c_{i}\) that has been flipped w.p. \(p\).
* One can use unambiguous state discrimination, in which case \(c_{i}\) has been erased w.p. \(2\sqrt{p(1-p)}\).

What we show in Theorem 1 is that the second strategy is actually much more powerful for recovering the codeword \(\boldsymbol{c}\). A natural question to ask is whether this can be further generalized to other measurements. In this work, we actually generalize Unambiguous State Discrimination as follows: given \(|\psi_{c_{i}}\rangle\), the measurement will sometimes output \(\bot\), but it can also fail with some small probability. We prove the following

**Proposition 2** (Partial Unambiguous State Discrimination).: _Let \(p,s\in[0,\frac{1}{2})\) with \(s\leq p\) and let \(u=\frac{p^{\perp}}{s^{\perp}}\). There exists a quantum measurement that when applied to \(|\psi_{c_{i}}\rangle=\sqrt{1-p}|c_{i}\rangle+\sqrt{p}|1-c_{i}\rangle\) outputs \(c_{i}\) w.p. \(u(1-s)\), \((1-c_{i})\) w.p. \(us\) and \(\bot\) w.p. \(1-u\)._

Notice that this generalizes both the standard measurement (by taking \(s=p\)) and unambiguous state discrimination (by taking \(s=0\), which gives \(u=2p^{\perp}\)). This seems a very natural way of generalizing Unambiguous State Discrimination but is not something we have found in the literature, and it could be of independent interest. We use this measurement not to provide a new polynomial time algorithm but rather to give a reduction between different Quantum Decoding problems, which we detail in the full text.

**The general setting.** The unambiguous state discrimination approach works in the \(q\)-ary setting as well. A difficulty here is that optimal unambiguous state discrimination is not known in general for more than 2 states, but in certain situations where we have a symmetric set of states [1] we know how to perform optimal USD. This applies in our case when \(q\) is prime. We have slightly generalized the approach of [1] to be able to apply it to any finite field size \(q\). We finally get a result very similar to the binary case

**Theorem 2**.: _Let \(R\in[0,1]\). For any \(p<\left(\frac{(q-1)R}{q}\right)^{\perp}\), there exists a quantum algorithm that solves \(\mathrm{QDP}(q,n,\lfloor Rn\rfloor,p)\) w.p. \(1-2^{-\Omega(n)}\) in time \(\mathrm{poly}(n)\)._

Here we have used a notation which "generalizes" the \(p^{\perp}\) notation used in the binary setting.

**Notation 1**.: _For a real number \(x\in[0,1]\), \(x^{\perp}\) stands for \(\frac{\left(\sqrt{(1-x)(q-1)}-\sqrt{x}\right)^{2}}{q}\)._

This quantity depends on \(q\), which will be clear from the context. Note that when \(q=2\) we get \(\frac{1-2\sqrt{x(1-x)}}{2}\), which coincides with the expression given in the binary case.

#### 1.3.2 Determining exactly the tractability of the quantum decoding problem

We are now interested in the tractability of \(\operatorname{QDP}(q,n,k,p)\), meaning: when is it possible, from an information theoretic perspective, to solve this problem?
In order to study this problem, a fundamental quantity is \(\delta_{\min}(R)\) defined below, sometimes referred to as the Gilbert-Varshamov distance.

**Notation 2**.: _Let \(R\in[0,1]\). We define \(\delta_{\min}(R)\stackrel{{\triangle}}{{=}}h_{q}^{-1}(1-R)\), where \(h_{q}(x)\stackrel{{\triangle}}{{=}}-(1-x)\log_{q}(1-x)-x\log_{q}\left(\frac{x}{q-1}\right)\). \(h_{q}\) is a bijection from \(x\in\left[0,\frac{q-1}{q}\right]\) to \([0,1]\) and we define \(h_{q}^{-1}:[0,1]\to\left[0,\frac{q-1}{q}\right]\) s.t. \(h_{q}^{-1}(h_{q}(x))=x\) for \(x\in\left[0,\frac{q-1}{q}\right]\)._

In the classical setting, it is well understood that \(\operatorname{DP}(q,n,k,p)\) is not tractable when \(p>\delta_{\min}(\frac{k}{n})\), meaning that even an unbounded algorithm will solve the problem w.p. \(o(1)\). We would now like to understand what happens in the quantum setting. Techniques based on (partial) unambiguous state discrimination will not work in the regime \(p>\delta_{\min}(R)\). Since we are only interested in the tractability of the problem, we can consider optimal quantum algorithms for discriminating between the states \(|\Psi_{\boldsymbol{c}}\rangle=\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{f(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle\), where \(f\) accounts for the Bernoulli noise of parameter \(p\). This problem can be addressed by using the _Pretty Good Measurement_ (PGM), which has turned out to be a very useful tool in quantum information. If we define \(\operatorname{P_{PGM}}\) as the probability that the pretty good measurement succeeds in solving our problem, and \(\operatorname{P_{OPT}}\) as the maximal probability that any measurement succeeds, we have [1, 1]

\[\operatorname{P_{OPT}}^{2}\leq\operatorname{P_{PGM}}\leq\operatorname{P_{OPT}}.\]

This means that if the problem is tractable then \(\operatorname{P_{OPT}}=\Omega(1)\), which implies \(\operatorname{P_{PGM}}=\Omega(1)\). On the other hand, if the problem is intractable then \(\operatorname{P_{OPT}}=o(1)\), which implies \(\operatorname{P_{PGM}}=o(1)\). In conclusion, in order to study the tractability of the quantum decoding problem, it is enough to look at the PGM associated with the problem of distinguishing the states \(\{|\Psi_{\boldsymbol{c}}\rangle\}\). We show the following

**Theorem 3**.: _Let \(R\in(0,1)\)._

* _For_ \(p<\left(\delta_{\min}(1-R)\right)^{\perp}\)_,_ \(\operatorname{QDP}(q,n,\lfloor Rn\rfloor,p)\) _can be solved using the PGM w.p._ \(\operatorname{P_{PGM}}=\Omega(1)\)_, hence the problem is tractable._
* _For_ \(p>\left(\delta_{\min}(1-R)\right)^{\perp}\)_, the probability that the PGM solves_ \(\operatorname{QDP}(q,n,\lfloor Rn\rfloor,p)\) _is_ \(\operatorname{P_{PGM}}=o(1)\)_, hence the problem is intractable._

The pretty good measurement associated to this distinguishing problem actually has a lot of structure. It is a projective measurement in an orthonormal basis which can be seen as a Fourier basis involving the shifted dual codes of the code \(\mathcal{C}\) we are working on.

**Comparing the complexity of the decoding problem and the quantum decoding problem.** With this full characterization, we compare the hardness and tractability of the classical and quantum decoding problems. For \(p=0\), we have of course a polynomial time algorithm to solve \(\operatorname{DP}(q,n,\lfloor Rn\rfloor,0)\). For \(0<p\leq\delta_{\min}(R)\), the problem is tractable and the best known classical or quantum algorithms run in time \(2^{\Omega(n)}\).
For \(p>\delta_{\min}(R)\), we know the problem is intractable. In the quantum setting, we obtain a very different picture. A comparison of these results is presented in Figures 1 and 2, where we use the following terminology:

* Easy: there exists an algorithm that runs in time \(\mathrm{poly}(n)\).
* Hard: the best known algorithm runs in time \(2^{\Omega(n)}\), but there could potentially be more efficient algorithms.
* Intractable: we know that any (even unbounded) algorithm can solve the problem w.p. at most \(o(1)\).

Figure 1: Hardness and tractability of the decoding problem \(\text{DP}(q,n,\lfloor Rn\rfloor,p)\), for any fixed \(R\in[0,1]\), as a function of \(p\).

Figure 2: Hardness and tractability of the quantum decoding problem \(\text{QDP}(q,n,\lfloor Rn\rfloor,p)\), for any fixed \(R\in[0,1]\), as a function of \(p\).

This gives a proper characterization of the difficulty of the Quantum Decoding Problem. In our next contribution, we apply these results in Regev's quantum reduction in order to derive some results for the short codeword problem. As we will show, the results from Figure 2 will match exactly our knowledge of the short codeword problem.

#### 1.3.3 Using our algorithms in Regev's reduction

We are now interested in solving the short codeword problem using Regev's reduction and the algorithms we described in the previous section. The known hardness of the short codeword problem is summarized in Figure 3. For our coding context, the only known reduction is the following

**Proposition 3** ([14], informal).: _Fix integers \(n,q\geq 2\) as well as parameters \(R,p\in(0,1)\) s.t. \(p\leq\delta_{\min}(R)\). From any quantum algorithm that solves \(\text{DP}(q,n,\lceil(1-R)n\rceil,p)\) with high probability, there exists a quantum algorithm that solves \(\text{SCP}(q,n,\lfloor Rn\rfloor,p^{\perp})\) with high probability, where recall that \(p^{\perp}=\frac{\left(\sqrt{(1-p)(q-1)}-\sqrt{p}\right)^{2}}{q}\)._

How can we characterize the efficiency of this reduction? Let us consider the best algorithms for \(\text{DP}(q,n,\lceil(1-R)n\rceil,p^{\perp})\) and see what algorithms they give for \(\text{SCP}(q,n,\lfloor Rn\rfloor,p)\). We obtain the following result, summarized in Figure 4. We can see that the obtained algorithm for the short codeword problem is significantly worse\({}^{3}\) than the best known algorithm for this problem (see Figure 3). But in the light of our previous results, this is understandable: Regev's reduction actually requires to solve the quantum decoding problem, and we just showed that it is much simpler than the decoding problem. If we could directly use the above proposition with our algorithms, we would obtain the results summarized in Figure 5. Here, if we could apply Proposition 3 with our algorithms, we would recover exactly the same complexities as the best known algorithms for \(\mathrm{SCP}^{4}\). However, it is not clear whether this is the case. What we do is, for each of our algorithms, try to perform Regev's reduction and see what we obtain. We show the following:

* If we take our polynomial time algorithms involving unambiguous state discrimination for the quantum decoding problem in Regev's reduction, we can find in quantum polynomial time small codewords down to Prange's bound, _i.e._ down to \(\frac{(1-R)(q-1)}{q}\) (the Easy zone in Figure 3).
The bound \(\left(\frac{(q-1)(1-R)}{q}\right)^{\perp}\) comes from bounds on quantum unambiguous state discrimination, and it is quite remarkable that after the quantum reduction it corresponds exactly to Prange's bound, below which the short codeword problem is easy.

* If we use our algorithm involving the Pretty Good Measurement in Regev's reduction, the following happens:

1. If we apply the PGM directly, we will most often be in regimes where we measure \(\mathbf{0}\) in the final step, so we will not be able to solve the Short Codeword problem.
2. We can slightly tweak the PGM so that we can solve the corresponding short codeword problem for all the regimes where it is tractable (the Hard zone in Figure 3).
3. We also show another example where we can slightly tweak the PGM but where the reduction utterly fails, meaning that the state we obtain after Step 4 is \(|\bot\rangle\), so measuring will give absolutely no information about a small dual codeword. This shows that there is no hope to perform a generic reduction (_i.e._ a generalization of Proposition 3) between the quantum decoding problem and the short codeword problem with this method.

Figure 4: On the top, best known (classical or quantum) algorithms for \(\mathrm{DP}(q,n,\lceil(1-R)n\rceil,p)\). On the bottom, complexity of a quantum algorithm for \(\mathrm{SCP}(q,n,\lceil Rn\rceil,p)\) that uses the best algorithm for \(\mathrm{DP}(q,n,\lceil(1-R)n\rceil,p)\) and then uses Proposition 3.

Figure 5: On the top, our quantum algorithms for \(\mathrm{QDP}(q,n,\lceil(1-R)n\rceil,p)\). On the bottom, complexity of a quantum algorithm for \(\mathrm{SCP}(q,n,\lceil Rn\rceil,p)\) that would use our algorithms for \(\mathrm{QDP}(q,n,\lceil(1-R)n\rceil,p)\) and then Proposition 3 when applicable.

These results show that, while it is impossible to have a generic reduction from QDP to SCP with this method, it is, at least for our examples, possible to find algorithms for QDP that will give results according to Figure 5, and recover the areas where the problem is easy and where it is tractable. This can be seen as quite a surprise, since our bounds on QDP essentially come from information theory, while the best known bounds on SCP come from classical coding theory, and the two seem unrelated at first.

## 2 Preliminaries

### 2.1 Notations and basic probabilities

**Sets, finite field.** The finite field with \(q\) elements is denoted by \(\mathbb{F}_{q}\). \(\mathbb{Z}_{q}\) denotes the ring of integers modulo \(q\). The cardinality of a finite set \(\mathcal{E}\) is denoted by \(|\mathcal{E}|\). The set of integers \(\{a,a+1,\cdots,b\}\) between the integers \(a\) and \(b\) is denoted by \(\llbracket a,b\rrbracket\). For a positive integer \(n\), \([n]\) denotes \(\llbracket 1,n\rrbracket\). \(x\gets S\) means that \(x\) is sampled uniformly from the set \(S\).

**Vectors and matrices.** For a Hermitian matrix \(\mathbf{M}\) we write \(\mathbf{M}\succeq 0\) when \(\mathbf{M}\) is positive semi-definite. Vectors are row vectors, as is standard in the coding community, and \(\boldsymbol{x}^{\intercal}\) denotes the transpose of a vector or a matrix. In particular, vectors will always be denoted by bold small letters and matrices by bold capital letters. For a subset \(J\subseteq[n]\) of positions of the vector \(\boldsymbol{x}=(x_{1},\ldots,x_{n})\), \(\boldsymbol{x}_{J}=(x_{j})_{j\in J}\) denotes the vector formed by the entries indexed by \(J\).
For a matrix \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) and a subset of columns \(J\subseteq[n]\), \(\mathbf{G}_{J}\in\mathbb{F}_{q}^{k\times|J|}\) denotes the submatrix formed by its columns indexed by \(J\).

**Lemma 1** (Hoeffding's inequality).: _Let \(X_{1},\ldots,X_{n}\) be independent random Bernoulli variables with parameter \(p\). We have_

\[\Pr\left[\sum_{i=1}^{n}X_{i}\leq pn-\alpha\sqrt{n}\right]\leq 2^{-2\alpha^{2}}.\]

### 2.2 Random linear codes

#### 2.2.1 Basic properties

For a vector \(\boldsymbol{x}=(x_{1},\ldots,x_{n})\in\mathbb{F}_{q}^{n}\), we define the Hamming weight \(|\boldsymbol{x}|=|\{i:x_{i}\neq 0\}|\). For \(q,n,w\in\mathbb{N}^{*}\) with \(q\geq 2\), we define the (Hamming) sphere of radius \(w\) as \(S_{w}^{q,n}=\{\boldsymbol{x}\in\mathbb{F}_{q}^{n}:|\boldsymbol{x}|=w\}\). A code \(\mathcal{C}\) can be specified by a generating matrix \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\), in which case \(\mathcal{C}=\{\boldsymbol{u}\mathbf{G}:\boldsymbol{u}\in\mathbb{F}_{q}^{k}\}\), or via a parity-check matrix \(\mathbf{H}\in\mathbb{F}_{q}^{(n-k)\times n}\), in which case \(\mathcal{C}=\{\boldsymbol{y}:\mathbf{H}\boldsymbol{y}^{\intercal}=\mathbf{0}\}\).

**Definition 2** (\(q\)-ary entropy).: _We define the \(q\)-ary entropy \(h_{q}:[0,1]\rightarrow[0,1]\) s.t. \(h_{q}(x)=-x\log_{q}\left(\frac{x}{q-1}\right)-(1-x)\log_{q}(1-x)\) if \(x\in(0,1)\) and \(h_{q}(0)=h_{q}(1)=0\)._

\(h_{q}\) is increasing for \(x\in[0,\frac{q-1}{q}]\) and decreasing for \(x\in[\frac{q-1}{q},1]\). Moreover, \(h_{q}(\frac{q-1}{q})=1\).

**Definition 3** (Inverse \(q\)-ary entropy).: \(h_{q}\) _is a bijection from \(\left[0,\frac{q-1}{q}\right]\) to \([0,1]\) and we define \(h_{q}^{-1}:[0,1]\rightarrow\left[0,\frac{q-1}{q}\right]\) s.t. \(h_{q}^{-1}(h_{q}(x))=x\) for \(x\in\left[0,\frac{q-1}{q}\right]\)._

**Definition 4** (Relative Gilbert-Varshamov distance).: _The (relative) Gilbert-Varshamov distance for \(q\)-ary codes \(\delta_{\min}(R,q)\) corresponding to the rate \(R\) is defined as \(\delta_{\min}(R,q)=h_{q}^{-1}(1-R)\)._

**Definition 5** (Relative maximum weight).: _The (relative) maximum weight for \(q\)-ary codes \(\delta_{\max}(R,q)\) is defined as the unique solution \(x\) in \([\frac{q-1}{q},1]\) of \(h_{q}(x)=R\) if it exists. If such an \(x\) does not exist, we just write \(\delta_{\max}(R,q)=\bot\)._

\(\delta_{\min}(R,q)\) corresponds to the typical asymptotic relative minimum distance of a random linear code over \(\mathbb{F}_{q}\) of rate \(R\), whereas the second quantity (when it is not \(\bot\)) is equal to the typical asymptotic relative maximum distance. Generally \(q\) will be clear from the context and we will drop the dependency on \(q\) and simply write \(\delta_{\min}(R)\) and \(\delta_{\max}(R)\).

**Definition 6** (Inverse of a full rank matrix).: _Let \(k<n\) and let \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) be a matrix of full rank \(k\). We define the pseudo-inverse \(\mathbf{G}^{-1}\in\mathbb{F}_{q}^{n\times k}\) as a matrix satisfying \(\forall\boldsymbol{u}\in\mathbb{F}_{q}^{k},\ (\boldsymbol{u}\mathbf{G})\cdot\mathbf{G}^{-1}=\boldsymbol{u}\)._

**Proposition 4**.: _Let \(m\geq k\) and let \(\mathbf{G}\leftarrow\mathbb{F}_{q}^{k\times m}\). We have_

\[\Pr[\mathrm{rank}(\mathbf{G})=k]\geq 1-q^{k-m}.\]

**Proposition 5**.: _Let \(\boldsymbol{c}=\boldsymbol{s}\mathbf{G}\) for some \(\boldsymbol{s}\in\mathbb{F}_{q}^{k}\) and \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\). Let \(J\subseteq[n]\) s.t. \(\mathbf{G}_{J}\) is of rank \(k\)._
_Then we have \(\boldsymbol{c}=\boldsymbol{c}_{J}\mathbf{G}_{J}^{-1}\mathbf{G}\)._

Proof.: Notice that \(\boldsymbol{c}_{J}=\boldsymbol{s}\mathbf{G}_{J}\). If \(\mathbf{G}_{J}\) is of full rank \(k\), then \(\mathbf{G}_{J}^{-1}\) is well defined and \(\boldsymbol{c}_{J}\mathbf{G}_{J}^{-1}=\boldsymbol{s}\mathbf{G}_{J}\mathbf{G}_{J}^{-1}=\boldsymbol{s}\). From there, we conclude \(\boldsymbol{c}_{J}\mathbf{G}_{J}^{-1}\mathbf{G}=\boldsymbol{s}\mathbf{G}=\boldsymbol{c}\).

#### 2.2.2 Classical and quantum decoding problems

Before defining our coding problems, we define the Bernoulli error distributions that we will use.

**Definition 7**.: _For \(q\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(\omega\in[0,1]\), we define the Bernoulli probability function \(b_{q}:\mathbb{F}_{q}\rightarrow\mathbb{R}\) satisfying \(b_{q}(0)=1-\omega\) and \(b_{q}(i)=\frac{\omega}{q-1}\) for \(i\neq 0\)._

**Definition 8**.: _For \(q\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(\omega\in[0,1]\), we define the distribution \(\mathcal{B}(q,\omega)\) sampled as follows: pick \(x\) w.p. \(b_{q}(x)\), return \(x\)._

We now define the Bernoulli distribution on vectors of \(\mathbb{F}_{q}^{n}\) where each coordinate is taken according to \(\mathcal{B}(q,\omega)\).

**Definition 9**.: _For \(q,n\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(\omega\in[0,1]\), we define the distribution \(\mathcal{B}(q,n,\omega)\) sampled as follows: for \(i\in\{1,\ldots,n\}\), \(x_{i}\leftarrow\mathcal{B}(q,\omega)\), return \(\boldsymbol{x}=(x_{1},\ldots,x_{n})\). Notice that sampling from \(\mathcal{B}(q,n,\omega)\) is equivalent to the following sampling procedure: pick \(\boldsymbol{x}\) w.p. \(\left(\frac{\omega}{q-1}\right)^{|\boldsymbol{x}|}(1-\omega)^{n-|\boldsymbol{x}|}\), return \(\boldsymbol{x}\)._

What we are interested in here is the decoding problem as it arises in cryptography, but we will describe it using the language of information theory. We have a message \(\boldsymbol{m}\in\mathbb{F}_{q}^{k}\) which is encoded via a generating matrix \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\). The encoded message \(\boldsymbol{m}\mathbf{G}\) is sent through a channel and an error \(\boldsymbol{e}\) occurs. The receiver gets the message \(\boldsymbol{m}\mathbf{G}+\boldsymbol{e}\) and his goal is to recover \(\boldsymbol{m}\). Notice that the receiver also knows the generating matrix \(\mathbf{G}\), so his goal is, given \(\mathbf{G}\) and \(\boldsymbol{y}=\boldsymbol{m}\mathbf{G}+\boldsymbol{e}\), to recover \(\boldsymbol{m}\). The way we model the error is that \(\boldsymbol{e}\) is sampled from the Bernoulli distribution \(\mathcal{B}(q,n,\omega)\) for some chosen \(\omega\). Note that there are other choices for the error model that can be of interest, which we discuss in the next section. We first define the distribution of input/solution pairs of our decoding problem.

**Definition 10**.: _For \(q,n,k\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(\omega\in[0,1]\), we define the distribution \(\boxed{\mathcal{D}(q,n,k,\omega)}\) sampled as follows: \(\mathbf{G}\leftarrow\mathbb{F}_{q}^{k\times n},\ \boldsymbol{m}\leftarrow\mathbb{F}_{q}^{k},\ \boldsymbol{c}=\boldsymbol{m}\mathbf{G},\ \boldsymbol{e}\leftarrow\mathcal{B}(q,n,\omega),\ \boldsymbol{y}=\boldsymbol{c}+\boldsymbol{e}\), return \((\mathbf{G},\boldsymbol{y},\boldsymbol{c})\)._

We can now define our classical decoding problem.

**Definition 11**.: _For \(q,n,k\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(\omega\in[0,1]\), the decoding problem \(\boxed{\text{DP}(q,n,k,\omega)}\) is the following._
_We sample \((\mathbf{G},\boldsymbol{y},\boldsymbol{c})\leftarrow\mathcal{D}(q,n,k,\omega)\) and the goal is, given only \((\mathbf{G},\boldsymbol{y})\), to recover \(\boldsymbol{c}\)._

Another problem of interest is finding short codewords.

**Definition 12**.: _For \(q,n,k\in\mathbb{N}\) with \(q\geq 2\) and \(\omega\in(0,1)\), the short codeword problem \(\boxed{\text{SCP}(q,n,k,\omega)}\) is the following. We sample \(\mathbf{H}\leftarrow\mathbb{F}_{q}^{(n-k)\times n}\) and the goal is, given \(\mathbf{H}\), to find \(\boldsymbol{c}\in\mathbb{F}_{q}^{n}\backslash\{\boldsymbol{0}\}\) s.t. \(\mathbf{H}\boldsymbol{c}^{\intercal}=\boldsymbol{0}\) and \(|\boldsymbol{c}|\leq\omega n\)._

We now consider the quantum decoding problem. Instead of choosing a random error \(\boldsymbol{e}\) from \(\mathcal{B}(q,n,\omega)\) and constructing \(\boldsymbol{y}=\boldsymbol{c}+\boldsymbol{e}\), we construct a quantum state that is a superposition of all these noisy codewords. This motivates the following definition for the input/solution distribution.

**Definition 13**.: _For \(q,n,k\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(\omega\in[0,1]\), we define the distribution \(\boxed{\mathcal{D}_{\mathcal{Q}}(q,n,k,\omega)}\) sampled as follows: \(\mathbf{G}\leftarrow\mathbb{F}_{q}^{k\times n},\ \boldsymbol{m}\leftarrow\mathbb{F}_{q}^{k},\ \boldsymbol{c}=\boldsymbol{m}\mathbf{G},\ |\psi_{\boldsymbol{c}}\rangle=\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{\left(\frac{\omega}{q-1}\right)^{|\boldsymbol{e}|}(1-\omega)^{n-|\boldsymbol{e}|}}|\boldsymbol{c}+\boldsymbol{e}\rangle\), return \((\mathbf{G},|\psi_{\boldsymbol{c}}\rangle,\boldsymbol{c})\)._

From there, we define our quantum decoding problem.

**Definition 14**.: _For \(q,n,k\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(\omega\in[0,1]\), the decoding problem \(\boxed{\text{QDP}(q,n,k,\omega)}\) is the following. We sample \((\mathbf{G},|\psi_{\boldsymbol{c}}\rangle,\boldsymbol{c})\leftarrow\mathcal{D}_{\mathcal{Q}}(q,n,k,\omega)\) and the goal is, given only \((\mathbf{G},|\psi_{\boldsymbol{c}}\rangle)\), to recover \(\boldsymbol{c}\)._

The above definition can be generalized to any probability function \(P:\mathbb{F}_{q}^{n}\rightarrow\mathbb{R}\) by considering the state \(|\psi_{\boldsymbol{c}}\rangle=\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}\sqrt{P(\boldsymbol{e})}|\boldsymbol{c}+\boldsymbol{e}\rangle\). Moreover, and this is specific to the quantum setting, this can be generalized to any function \(f:\mathbb{F}_{q}^{n}\rightarrow\mathbb{C}\) with \(\|f\|_{2}=1\) by considering the state \(|\psi_{\boldsymbol{c}}\rangle=\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}+\boldsymbol{e}\rangle\). This is what motivates the following definitions.

**Definition 15**.: _For \(q,n,k\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(f:\mathbb{F}_{q}^{n}\rightarrow\mathbb{C}\) with \(\|f\|_{2}=1\), we define the distribution \(\boxed{\mathcal{D}_{\mathcal{Q}}(q,n,k,f)}\) sampled as follows: \(\mathbf{G}\leftarrow\mathbb{F}_{q}^{k\times n},\ \boldsymbol{m}\leftarrow\mathbb{F}_{q}^{k},\ \boldsymbol{c}=\boldsymbol{m}\mathbf{G},\ |\psi_{\boldsymbol{c}}\rangle=\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}+\boldsymbol{e}\rangle\), return \((\mathbf{G},|\psi_{\boldsymbol{c}}\rangle,\boldsymbol{c})\)._

**Definition 16**.: _For \(q,n,k\in\mathbb{N}^{*}\) with \(q\geq 2\) and \(f:\mathbb{F}_{q}^{n}\rightarrow\mathbb{C}\) with \(\|f\|_{2}=1\), the decoding problem \(\boxed{\text{QDP}(q,n,k,f)}\) is the following._
We sample \((\mathbf{G},|\psi_{\mathbf{c}}\rangle,\mathbf{c})\leftarrow\mathcal{D}_{\mathcal{Q}}(q,n,k,f)\) and the goal is, given only \((\mathbf{G},|\psi_{\mathbf{c}}\rangle)\), to recover \(\mathbf{c}\)._

#### 2.2.3 Punctured codes and Prange's algorithm

We will use in what follows the notion of punctured and shortened code.

**Definition 17** (Punctured and shortened code).: _Let \(\mathcal{C}\subseteq\mathbb{F}_{q}^{n}\) be a linear code over \(\mathbb{F}_{q}\) of length \(n\). Let \(J\subseteq[n]\) be a subset of code positions. The punctured code \(\mathcal{C}_{J}\) with respect to \(J\) is defined as \(\mathcal{C}_{J}=\{\boldsymbol{c}_{J}:\boldsymbol{c}\in\mathcal{C}\}\). The shortened code \(\mathcal{C}^{J}\) with respect to \(J\) is defined as \(\mathcal{C}^{J}=\{\boldsymbol{c}_{J}:\boldsymbol{c}\in\mathcal{C},\ \boldsymbol{c}_{[n]\setminus J}=\boldsymbol{0}\}\) (i.e. the set of codewords of \(\mathcal{C}\) where we keep only the positions in \(J\) and which are zero outside \(J\))._

It is readily seen that these two operations commute when taking the dual:

**Lemma 2**.: _For any linear code \(\mathcal{C}\) and any subset \(J\) of positions of this code_

\[\left(\mathcal{C}_{J}\right)^{\perp} = \left(\mathcal{C}^{\perp}\right)^{J}\]

\[\left(\mathcal{C}^{J}\right)^{\perp} = \left(\mathcal{C}^{\perp}\right)_{J}.\]

A variation of the Prange algorithm. We recall here a result which is essentially folklore in coding theory, namely that there is a probabilistic polynomial time algorithm for finding short codewords in a random linear code of dimension \(k\) and length \(n\) over \(\mathbb{F}_{q}\) which produces short codewords of weight \(\left\lfloor\frac{(q-1)(n-k)}{q}\right\rfloor\). It simply uses linear algebra. For this, consider a parity-check matrix \(\mathbf{H}\in\mathbb{F}_{q}^{(n-k)\times n}\) of \(\mathcal{C}\) and run \(\Theta\left(\sqrt{n}\right)\) times the following procedure:

1. Choose uniformly at random a subset \(J\) of \(k\) positions of \(\mathcal{C}\). Let \(\bar{J}=[n]\setminus J\).
2. If \(\mathbf{H}_{\bar{J}}\) is not of rank \(n-k\), abort; else choose \(\boldsymbol{c}_{J}\) as a random vector of Hamming weight \(1\).
3. Find the remaining entries of \(\boldsymbol{c}\) by solving the linear system \[\mathbf{H}_{\bar{J}}\boldsymbol{c}_{\bar{J}}^{\intercal}=-\mathbf{H}_{J}\boldsymbol{c}_{J}^{\intercal}.\]
4. If \(|\boldsymbol{c}|=\left\lfloor\frac{(q-1)(n-k)}{q}\right\rfloor\), output \(\boldsymbol{c}\).

The rationale behind this algorithm is that the expected weight of such a \(\boldsymbol{c}\) is \(1+\frac{(q-1)(n-k)}{q}\) and that it can be proved that it takes the right weight with probability \(\Omega\left(\frac{1}{\sqrt{n}}\right)\). Note that all the known (be they classical or quantum) algorithms that produce asymptotically relative weights \(\omega<\frac{(q-1)(1-R)}{q}\), where \(R=\frac{k}{n}\) is the code rate, have exponential complexity.

### Distinguishing quantum states

**Proposition 6** (Helstrom's measurement).: _Let \(|\psi_{0}\rangle,|\psi_{1}\rangle\) be \(2\) quantum pure states s.t. \(|\langle\psi_{0}|\psi_{1}\rangle|=u\). There exists a quantum projective measurement \(\Pi=\{\Pi_{0},\Pi_{1}\}\) s.t. \(\forall c\in\{0,1\},\ \mathrm{tr}(\Pi_{c}|\psi_{c}\rangle\langle\psi_{c}|)=\frac{1}{2}+\frac{\sqrt{1-u^{2}}}{2}\)._

In the above measurement, the measurement gives the correct answer w.p. \(\frac{1}{2}+\frac{\sqrt{1-u^{2}}}{2}\) and gives the opposite answer w.p. \(\frac{1}{2}-\frac{\sqrt{1-u^{2}}}{2}\).
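Going back to the Prange-style procedure described above, it is easy to experiment with numerically. The following is a minimal sketch for the binary case \(q=2\) (target weight \(\lfloor(n-k)/2\rfloor\)), assuming a dense uniformly random \(\mathbf{H}\); the helper names are ours and GF(2) linear algebra is done by hand via Gaussian elimination.

```python
import numpy as np

rng = np.random.default_rng(1)

def solve_gf2(A, b):
    """Solve A x = b over GF(2) for a square A; return None if A is singular."""
    n = A.shape[0]
    M = np.concatenate([A % 2, b.reshape(-1, 1) % 2], axis=1).astype(np.int64)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r, col]), None)
        if piv is None:
            return None                     # H_{J-bar} not of full rank: abort this try
        M[[col, piv]] = M[[piv, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]              # eliminate mod 2
    return M[:, n]

def prange_short_codeword(H, k, tries=1000):
    """Prange-style search for a codeword of weight floor((n-k)/2) when q = 2."""
    _, n = H.shape
    target = (n - k) // 2                   # floor((q-1)(n-k)/q) with q = 2
    for _ in range(tries):
        J = rng.choice(n, size=k, replace=False)
        Jbar = np.setdiff1d(np.arange(n), J)
        cJ = np.zeros(k, dtype=np.int64)
        cJ[rng.integers(k)] = 1             # random weight-1 vector on J
        # Over GF(2), -H_J c_J^T = H_J c_J^T
        cJbar = solve_gf2(H[:, Jbar], (H[:, J] @ cJ) % 2)
        if cJbar is None:
            continue
        c = np.zeros(n, dtype=np.int64)
        c[J], c[Jbar] = cJ, cJbar
        if c.sum() == target:               # keep only the right weight
            return c
    return None

n, k = 60, 30
H = rng.integers(0, 2, size=(n - k, n))
c = prange_short_codeword(H, k)
# Succeeds with overwhelming probability for these parameters:
assert c is not None and c.sum() == (n - k) // 2 and not (H @ c % 2).any()
```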
Another measurement of interest is the one arising in the context of unambiguous state discrimination. Here we allow the measurement to answer "I don't know" (which corresponds to outcome \(2\)). What we require from the measurement is that if the measurement does not answer \(2\) then it always answers the correct value. The optimal unambiguous measurement is given by the proposition below.

**Proposition 7** (Unambiguous State Discrimination).: _Let \(|\psi_{0}\rangle,|\psi_{1}\rangle\) be \(2\) quantum pure states s.t. \(|\langle\psi_{0}|\psi_{1}\rangle|=u\). There exists a POVM \(F=\{F_{0},F_{1},F_{2}\}\) s.t. \(\forall c\in\{0,1\},\operatorname{tr}(F_{c}|\psi_{c}\rangle\langle\psi_{c}|)=1-u\) and \(\operatorname{tr}(F_{2}|\psi_{c}\rangle\langle\psi_{c}|)=u\)._

The optimal unambiguous measurement is not known when there are more than \(2\) states. We present a detailed analysis of USD with \(q\) states in Section 4.1, where we give known results and also provide some new ones. The final measurement of interest is the Pretty Good Measurement, which is a generic measurement to distinguish \(n\) quantum states.

**Definition 18** (Pretty Good Measurement).: _Consider an ensemble \(\{|\psi_{i}\rangle\}_{i\in[n]}\) of \(n\) quantum pure states. The Pretty Good Measurement associated to this ensemble is the POVM \(\{M_{i}\}_{i\in[n]}\) with_

\[M_{i}=\rho^{-\frac{1}{2}}|\psi_{i}\rangle\langle\psi_{i}|\rho^{-\frac{1}{2}}\quad\text{ given }\quad\rho=\sum_{i\in[n]}|\psi_{i}\rangle\langle\psi_{i}|.\]

_One can easily check that each \(M_{i}\succcurlyeq 0\) and that \(\sum_{i}M_{i}=\rho^{-\frac{1}{2}}\rho\rho^{-\frac{1}{2}}=I\)._

**Proposition 8**.: _[2, 11]_ _Consider an ensemble \(\{|\psi_{i}\rangle\}_{i\in[n]}\) of \(n\) quantum pure states and \(\{M_{i}\}_{i\in[n]}\) the associated pretty good measurement. We consider the setting where \(i\) is chosen at random and we want to recover \(i\) from \(|\psi_{i}\rangle\). Let \(P_{\text{PGM}}\) be the probability of success using the PGM and \(P_{\text{OPT}}\) be the optimal success probability. This means_

\[P_{\text{PGM}} =\frac{1}{n}\sum_{i}\operatorname{tr}\left(|\psi_{i}\rangle\langle\psi_{i}|M_{i}\right)\]

\[P_{\text{OPT}} =\max_{\{N_{i}\}}\frac{1}{n}\sum_{i}\operatorname{tr}(|\psi_{i}\rangle\langle\psi_{i}|N_{i})\]

_where the maximum is over all POVMs \(\{N_{i}\}_{i\in[n]}\). We have_

\[P_{\text{PGM}}\leq P_{\text{OPT}}\leq\sqrt{P_{\text{PGM}}}.\]

### The classical and quantum Fourier transform on \(\mathbb{F}_{q}^{n}\)

In this article, we will use the quantum Fourier transform on \(\mathbb{F}_{q}^{n}\), where \(\mathbb{F}_{q}\) is the finite field with \(q\) elements.

Definition and basic properties. It is based on the characters of the group \((\mathbb{F}_{q}^{n},+)\) which are defined as follows (for more details see [13, Chap 5, S1], in particular a description of the characters in terms of the trace function is given in [13, Ch. 5, S1, Th. 5.7]).

**Definition 19**.: _Fix \(q=p^{s}\) for a prime integer \(p\) and an integer \(s\geq 1\). The characters of \(\mathbb{F}_{q}\) are the functions \(\chi_{y}:\mathbb{F}_{q}\to\mathbb{C}\) indexed by elements \(y\in\mathbb{F}_{q}\) defined as follows_

\[\chi_{y}(x) \stackrel{\triangle}{=} e^{\frac{2i\pi\operatorname{tr}(x\cdot y)}{p}},\quad\text{with}\]

\[\operatorname{tr}(a) \stackrel{\triangle}{=} a+a^{p}+a^{p^{2}}+\cdots+a^{p^{s-1}},\]

_where the product \(x\cdot y\) corresponds to the product of elements in \(\mathbb{F}_{q}\).
We extend the definition to vectors \(\mathbf{x},\mathbf{y}\in\mathbb{F}_{q}^{n}\) as follows:_

\[\chi_{\mathbf{y}}(\mathbf{x})\stackrel{\triangle}{=}\Pi_{i=1}^{n}\chi_{y_{i}}(x_{i}).\]

When \(q\) is prime, we have \(\chi_{y}(x)=e^{\frac{2i\pi xy}{q}}\). In the case where \(q\) is not prime, the above definition is not necessarily easy to handle for computations. Fortunately, characters have many desirable properties that we can use for our calculations.

**Proposition 9**.: _The characters \(\chi_{\mathbf{y}}:\mathbb{F}_{q}^{n}\to\mathbb{C}\) have the following properties_

1. _(Group Homomorphism)._ \(\forall\mathbf{y}\in\mathbb{F}_{q}^{n}\)_,_ \(\chi_{\mathbf{y}}\) _is a group homomorphism from_ \((\mathbb{F}_{q}^{n},+)\) _to_ \((\mathbb{C},\cdot)\) _meaning that_ \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{F}_{q}^{n}\)_,_ \(\chi_{\mathbf{y}}(\mathbf{x}+\mathbf{x}^{\prime})=\chi_{\mathbf{y}}(\mathbf{x})\cdot\chi_{\mathbf{y}}(\mathbf{x}^{\prime})\)_._
2. _(Symmetry)._ \(\forall\mathbf{x},\mathbf{y}\in\mathbb{F}_{q}^{n},\ \chi_{\mathbf{y}}(\mathbf{x})=\chi_{\mathbf{x}}(\mathbf{y})\)__
3. _(Orthogonality of characters). The characters are orthogonal functions meaning that_ \(\forall\mathbf{x},\mathbf{x}^{\prime}\in\mathbb{F}_{q}^{n}\)_,_ \(\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{y}}(\mathbf{x})\overline{\chi_{\mathbf{y}}(\mathbf{x}^{\prime})}=q^{n}\delta_{\mathbf{x},\mathbf{x}^{\prime}}\)_. In particular_ \(\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}|\chi_{\mathbf{y}}(\mathbf{x})|^{2}=q^{n}\) _and_ \(\forall\mathbf{x}\in\mathbb{F}_{q}^{n}\setminus\{\mathbf{0}\},\ \sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{y}}(\mathbf{x})=0\)_._

Notice that these imply some other properties on characters. For instance \(\chi_{\mathbf{y}}(\mathbf{0})=1\) or \(|\chi_{\mathbf{y}}(\mathbf{x})|=1\) for any \(\mathbf{x},\mathbf{y}\in\mathbb{F}_{q}^{n}\). The orthogonality of characters allows us to define a unitary transform which is nothing but the classical or the quantum Fourier transform on \(\mathbb{F}_{q}^{n}\).
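For prime \(q\) (where the trace is the identity and \(\chi_{y}(x)=e^{2i\pi xy/q}\)), the properties of Proposition 9 are easy to sanity-check numerically with \(n=1\). A small sketch (all variable names are ours):

```python
import numpy as np

q = 7                                            # prime, so chi_y(x) = exp(2*pi*i*x*y/q)
x = np.arange(q)
chi = np.exp(2j * np.pi * np.outer(x, x) / q)    # chi[y, x] = chi_y(x)

# Group homomorphism: chi_y(a + b) = chi_y(a) * chi_y(b) (addition mod q)
a, b, y = 3, 5, 2
assert np.isclose(chi[y, (a + b) % q], chi[y, a] * chi[y, b])

# Orthogonality: sum_y conj(chi_y(a)) * chi_y(b) = q * delta(a, b)
gram = chi.conj().T @ chi                        # gram[a, b] = sum_y conj(chi_y(a)) chi_y(b)
assert np.allclose(gram, q * np.eye(q))
```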
**Definition 20**.: _For a function \(f:\mathbb{F}_{q}^{n}\to\mathbb{C}\), we define the (classical) Fourier transform \(\hat{f}\) as_

\[\widehat{f}(\mathbf{x})=\frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{x}}(\mathbf{y})f(\mathbf{y}).\]

_The quantum Fourier transform \(\mathrm{QFT}\) on \(\mathbb{F}_{q}^{n}\) is the quantum unitary satisfying \(\forall\mathbf{x}\in\mathbb{F}_{q}^{n}\),_

\[\mathrm{QFT}\left|\mathbf{x}\right> = \frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{x}}(\mathbf{y})|\mathbf{y}\rangle.\]

_We will also write \(|\widehat{\psi}\rangle\stackrel{\triangle}{=}\mathrm{QFT}\left|\psi\right>\)._

Note that when \(|\psi\rangle=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}f(\mathbf{x})|\mathbf{x}\rangle\) we have

\[|\widehat{\psi}\rangle=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}\widehat{f}(\mathbf{x})|\mathbf{x}\rangle.\]

The Fourier transform can also be viewed as expressing the coefficients of a state in the Fourier basis \(\left\{|\widehat{\mathbf{x}}\rangle,\mathbf{x}\in\mathbb{F}_{q}^{n}\right\}\) as shown by

**Fact 1**.: _Let \(|\psi\rangle=\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}f(\mathbf{y})|\mathbf{y}\rangle\), then_

\[|\psi\rangle=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}\widehat{f}(-\mathbf{x})|\widehat{\mathbf{x}}\rangle.\]

This follows on the spot from the fact that if \(|\psi\rangle=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}c_{\mathbf{x}}|\widehat{\mathbf{x}}\rangle\), then

\[c_{\mathbf{x}}=\langle\widehat{\mathbf{x}}|\psi\rangle=\frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\overline{\chi_{\mathbf{x}}(\mathbf{y})}f(\mathbf{y})=\widehat{f}(-\mathbf{x}).\]

Translations amount to multiplication by a phase in the Fourier basis. It will be convenient for what follows to bring in the shift and phase operators which are defined by

**Definition 21** (shift and phase operators).: _For \(\mathbf{b}\) in \(\mathbb{F}_{q}^{n}\), let \(X_{\mathbf{b}}\) be the shift operator \(X_{\mathbf{b}}|\mathbf{x}\rangle=|\mathbf{x}+\mathbf{b}\rangle\) and \(Z_{\mathbf{b}}\) be the phase operator \(Z_{\mathbf{b}}|\mathbf{x}\rangle=\chi_{\mathbf{x}}(\mathbf{b})|\mathbf{x}\rangle\)._

The main properties of the Fourier transform follow from the fact that the characters are the common eigenbasis of all shift operators (and therefore all convolution operators). In the quantum setting, this amounts to the fact that the quantum states \(\{|\widehat{\mathbf{x}}\rangle,\mathbf{x}\in\mathbb{F}_{q}^{n}\}\) form an eigenbasis of the shift operators as shown by

**Proposition 10**.: _We have for all \(\mathbf{b}\) in \(\mathbb{F}_{q}^{n}\) that \(|\widehat{\mathbf{x}}\rangle\) is an eigenstate of \(X_{\mathbf{b}}\) associated to the eigenvalue \(\chi_{\mathbf{x}}(-\mathbf{b})\) and_

\[X_{\mathbf{b}}\cdot\mathrm{QFT} = \mathrm{QFT}\!\cdot\!Z_{-\mathbf{b}} \tag{2}\]

\[\mathrm{QFT}\!\cdot\!X_{\mathbf{b}} = Z_{\mathbf{b}}\cdot\mathrm{QFT}\,. \tag{3}\]

Proof.: Let \(\mathbf{x}\in\mathbb{F}_{q}^{n}\).
We observe that

\[X_{\mathbf{b}}\cdot\mathrm{QFT}\,|\mathbf{x}\rangle = \frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{x}}(\mathbf{y})|\mathbf{y}+\mathbf{b}\rangle\]

\[= \frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{x}}(\mathbf{y}-\mathbf{b})|\mathbf{y}\rangle\]

\[= \chi_{\mathbf{x}}(-\mathbf{b})\frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{x}}(\mathbf{y})|\mathbf{y}\rangle\]

\[= \mathrm{QFT}\!\cdot\!Z_{-\mathbf{b}}|\mathbf{x}\rangle.\]

This computation shows that \(|\widehat{\mathbf{x}}\rangle=\mathrm{QFT}\,|\mathbf{x}\rangle\) is an eigenstate of the shift operator \(X_{\mathbf{b}}\) associated to the eigenvalue \(\chi_{\mathbf{x}}(-\mathbf{b})\). The other equality follows from this and the symmetry property 2 of Proposition 9 which implies that

\[\mathrm{QFT}^{\dagger}=\overline{\mathrm{QFT}} \tag{4}\]

where by \(\overline{M}\) we mean the (complex) conjugate operator of the operator \(M\) which is defined by \(\overline{M}\stackrel{\triangle}{=}\sum_{x,y}\overline{M_{x,y}}|x\rangle\langle y|\) when \(M=\sum_{x,y}M_{x,y}|x\rangle\langle y|\). Indeed, (2) implies that

\[\mathrm{QFT}^{\dagger}\!\cdot\!X_{\mathbf{b}}^{\dagger}=Z_{-\mathbf{b}}^{\dagger}\cdot\mathrm{QFT}^{\dagger}.\]

This in turn means that

\[\overline{\mathrm{QFT}}\cdot X_{-\mathbf{b}}=Z_{\mathbf{b}}\cdot\overline{\mathrm{QFT}},\]

or equivalently

\[\overline{\overline{\mathrm{QFT}}\cdot X_{-\mathbf{b}}}=\overline{Z_{\mathbf{b}}\cdot\overline{\mathrm{QFT}}}\]

which gives

\[\mathrm{QFT}\!\cdot\!X_{-\mathbf{b}}=Z_{-\mathbf{b}}\cdot\mathrm{QFT},\]

proving (3) after substituting \(-\mathbf{b}\) for \(\mathbf{b}\).

We will focus on the following quantum states \(|\psi\rangle=\sqrt{1-\tau}|0\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\tau}{q-1}}|\alpha\rangle\) associated to a \(q\)-ary channel of crossover probability \(\tau\). Indeed, when we measure such a quantum state, we get an element of \(\mathbb{F}_{q}\) which can be viewed as a sample of an error output by such a channel. The quantum Fourier transform applied to such states yields a state of the same form, since it is readily verified that

**Lemma 3**.: _Let \(\tau\in[0,\frac{q-1}{q}]\) and \(|\psi\rangle=\sqrt{1-\tau}|0\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\tau}{q-1}}|\alpha\rangle\). We have_

\[\mathrm{QFT}\,|\psi\rangle=\sqrt{1-\tau^{\perp}}|0\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\tau^{\perp}}{q-1}}|\alpha\rangle\]

_with \(\tau^{\perp}=\frac{\left(\sqrt{(q-1)(1-\tau)}-\sqrt{\tau}\right)^{2}}{q}\)._

Proof.: We write

\[\mathrm{QFT}\,|\psi\rangle =\sqrt{\frac{1-\tau}{q}}\sum_{y\in\mathbb{F}_{q}}|y\rangle+\sqrt{\frac{\tau}{q(q-1)}}\sum_{y\in\mathbb{F}_{q}}\sum_{\alpha\in\mathbb{F}_{q}^{*}}\chi_{\alpha}(y)|y\rangle\]

\[=\left(\sqrt{\frac{1-\tau}{q}}+\sqrt{\frac{(q-1)\tau}{q}}\right)|0\rangle+\sum_{y\in\mathbb{F}_{q}^{*}}\left(\sqrt{\frac{1-\tau}{q}}-\sqrt{\frac{\tau}{q(q-1)}}\right)|y\rangle\]

where in the last equality we used the fact that for \(y\neq 0\), we have \(\sum_{\alpha\in\mathbb{F}_{q}}\chi_{\alpha}(y)=\sum_{\alpha\in\mathbb{F}_{q}}\chi_{y}(\alpha)=0\) (by using first the symmetry property and then the orthogonality property of characters of Proposition 9). This implies that \(\sum_{\alpha\in\mathbb{F}_{q}^{*}}\chi_{\alpha}(y)=-\chi_{0}(y)=-1\).
In order to conclude, notice that

\[\sqrt{\frac{\tau^{\perp}}{q-1}}=\frac{\sqrt{(q-1)(1-\tau)}-\sqrt{\tau}}{\sqrt{q(q-1)}}=\sqrt{\frac{1-\tau}{q}}-\sqrt{\frac{\tau}{q(q-1)}}\]

which means we can rewrite \(\mathrm{QFT}\,|\psi\rangle=\sqrt{1-\tau^{\perp}}|0\rangle+\sum_{y\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\tau^{\perp}}{q-1}}|y\rangle\).

We will also need to describe how the quantum Fourier transform acts on shifts of \(|\psi\rangle\):

**Lemma 4**.: _Let \(\tau\in[0,\frac{q-1}{q}]\), \(b\in\mathbb{F}_{q}\) and denote by \(|\psi_{b}\rangle\) the state \(X_{b}|\psi\rangle\) where \(|\psi\rangle\stackrel{\triangle}{=}\sqrt{1-\tau}|0\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\tau}{q-1}}|\alpha\rangle\). We have_

\[|\psi_{b}\rangle = \sqrt{1-\tau}|b\rangle+\sum_{\alpha\neq b}\sqrt{\frac{\tau}{q-1}}|\alpha\rangle\]

\[\mathrm{QFT}\,|\psi_{b}\rangle = \sqrt{1-\tau^{\perp}}|0\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\chi_{\alpha}(b)\sqrt{\frac{\tau^{\perp}}{q-1}}|\alpha\rangle.\]

Proof.: The first point follows right away from the definition of these quantities, whereas the second point follows on the spot from Proposition 10 and the previous lemma:

\[\mathrm{QFT}\,|\psi_{b}\rangle = \mathrm{QFT}\,\cdot X_{b}|\psi\rangle\]

\[= Z_{b}\cdot\mathrm{QFT}\,|\psi\rangle\ \ \ (\mathrm{by\ Proposition\ 10})\]

\[= Z_{b}\left(\sqrt{1-\tau^{\perp}}|0\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\tau^{\perp}}{q-1}}|\alpha\rangle\right)\ \ (\mathrm{by\ Lemma\ 3})\]

\[= \sqrt{1-\tau^{\perp}}|0\rangle+\sum_{\alpha\in\mathbb{F}_{q}^{*}}\chi_{\alpha}(b)\sqrt{\frac{\tau^{\perp}}{q-1}}|\alpha\rangle.\]

Applying the quantum Fourier transform on periodic states. Regev's reduction applies to states which are periodic. In our case, they will be of the form \(\frac{1}{\sqrt{Z}}\sum_{\mathbf{c}\in\mathcal{C}}\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})|\mathbf{c}+\mathbf{e}\rangle\) where \(Z\) is some normalizing constant, \(\mathcal{C}\) some linear code of length \(n\) over \(\mathbb{F}_{q}\) and \(f\) some function from \(\mathbb{F}_{q}^{n}\) to \(\mathbb{C}\). This state can be written as \(\frac{1}{\sqrt{Z}}\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}g(\mathbf{x})|\mathbf{x}\rangle\) where \(g(\mathbf{x})=\sum_{\mathbf{c}\in\mathcal{C}}f(\mathbf{x}-\mathbf{c})\). We clearly have in this case \(g(\mathbf{x}+\mathbf{c})=g(\mathbf{x})\) for any \(\mathbf{x}\in\mathbb{F}_{q}^{n}\) and any \(\mathbf{c}\in\mathcal{C}\). For such states, we have the following

**Proposition 11**.: _Consider a function \(f:\mathbb{F}_{q}^{n}\to\mathbb{C}\).
We have for all linear codes \(\mathcal{C}\subseteq\mathbb{F}_{q}^{n}\):_

\[\mathrm{QFT}\left(\frac{1}{\sqrt{Z}}\sum_{\mathbf{c}\in\mathcal{C}}\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})|\mathbf{c}+\mathbf{e}\rangle\right)=\frac{|\mathcal{C}|}{\sqrt{Z}}\sum_{\mathbf{y}\in\mathcal{C}^{\perp}}\widehat{f}(\mathbf{y})|\mathbf{y}\rangle\]

_where \(Z\) is some normalizing constant._

Proof.: The proposition follows from the following computation

\[\mathrm{QFT}\left(\frac{1}{\sqrt{Z}}\sum_{\mathbf{c}\in\mathcal{C}}\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})|\mathbf{c}+\mathbf{e}\rangle\right) = \frac{1}{\sqrt{Z}}\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})\sum_{\mathbf{c}\in\mathcal{C}}\frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{y}}(\mathbf{c}+\mathbf{e})|\mathbf{y}\rangle\]

\[= \frac{1}{\sqrt{q^{n}Z}}\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{y}}(\mathbf{e})|\mathbf{y}\rangle\sum_{\mathbf{c}\in\mathcal{C}}\chi_{\mathbf{y}}(\mathbf{c})\]

\[= \frac{|\mathcal{C}|}{\sqrt{Z}}\sum_{\mathbf{y}\in\mathcal{C}^{\perp}}\frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{y}}(\mathbf{e})f(\mathbf{e})|\mathbf{y}\rangle \tag{5}\]

\[= \frac{|\mathcal{C}|}{\sqrt{Z}}\sum_{\mathbf{y}\in\mathcal{C}^{\perp}}\widehat{f}(\mathbf{y})|\mathbf{y}\rangle\]

where (5) follows from a slight generalization of (3) of Proposition 9, namely that

\[\sum_{\mathbf{c}\in\mathcal{C}}\chi_{\mathbf{y}}(\mathbf{c}) = \begin{cases}|\mathcal{C}|&\text{if }\mathbf{y}\in\mathcal{C}^{\perp}\\ 0&\text{otherwise,}\end{cases}\]

which follows by a similar reasoning by noticing that \(\mathcal{C}^{\perp}\) can be viewed as the set of trivial characters acting on \(\mathcal{C}\):

\[\{\mathbf{y}\in\mathbb{F}_{q}^{n}:\chi_{\mathbf{y}}(\mathbf{c})=1,\;\forall\mathbf{c}\in\mathcal{C}\}=\mathcal{C}^{\perp}.\]

## 3 Algorithms for the binary quantum decoding problem

### Quantum polynomial time algorithm using unambiguous state discrimination

We present our first quantum algorithm that directly uses unambiguous state discrimination.

**Theorem 4**.: _Let \(R\in(0,1)\). For any \(\omega<\left(\frac{R}{2}\right)^{\perp}\stackrel{\triangle}{=}\frac{1}{2}-\sqrt{\frac{R}{2}(1-\frac{R}{2})}\), there exists a quantum algorithm that solves \(\mathrm{QDP}(2,n,\lfloor Rn\rfloor,\omega)\) w.p. \(1-2^{-\Omega(n)}\)._

Proof.: We fix \(R,\omega\), as well as \(n\in\mathbb{N}\) and \(k=\lfloor Rn\rfloor\). We consider an instance of \(\mathrm{QDP}(2,n,k,\omega)\) so we have a random matrix \(\mathbf{G}\leftarrow\{0,1\}^{k\times n}\), \(\boldsymbol{c}=\boldsymbol{m}\mathbf{G}\) for a randomly chosen \(\boldsymbol{m}\leftarrow\{0,1\}^{k}\) and the state \(|\Psi_{\boldsymbol{c}}\rangle=\bigotimes_{i=1}^{n}|\psi_{c_{i}}^{\omega}\rangle\) where \(|\psi_{c_{i}}^{\omega}\rangle=\sqrt{1-\omega}|c_{i}\rangle+\sqrt{\omega}|1-c_{i}\rangle\). We consider the following algorithm for solving our Quantum Decoding Problem.

Quantum algorithm for QDP using USD

Start from \(|\Psi_{\boldsymbol{c}}\rangle=\bigotimes_{i=1}^{n}|\psi_{c_{i}}^{\omega}\rangle\). Notice that \(|\langle\psi_{0}^{\omega}|\psi_{1}^{\omega}\rangle|=2\sqrt{\omega(1-\omega)}\). Perform the optimal unambiguous measurement from Proposition 7 on each qubit of \(|\Psi_{\boldsymbol{c}}\rangle\) in order to guess \(c_{i}\), which can be done w.p. \(p=1-|\langle\psi_{0}^{\omega}|\psi_{1}^{\omega}\rangle|=1-2\sqrt{\omega(1-\omega)}\stackrel{\triangle}{=}2\omega^{\perp}\). Let \(J\subseteq[n]\) be the set of indices where this measurement succeeds.
The algorithm recovers here \(\boldsymbol{c}_{J}\). If \(\mathbf{G}_{J}\in\{0,1\}^{k\times|J|}\) is of rank \(k\), recover \(\boldsymbol{c}\) from \(\boldsymbol{c}_{J}\) by computing \(\boldsymbol{c}_{J}\mathbf{G}_{J}^{-1}\mathbf{G}\).

Let \(p=2\omega^{\perp}\). Since \(\omega<\left(\frac{R}{2}\right)^{\perp}\), we have \(2\omega^{\perp}>R\) and there exists an absolute constant \(\gamma>0\) s.t. \(p=R+\gamma\). Let \(X_{i}\) be the indicator random variable of the event \(i\in J\), i.e. \(X_{i}=1\) if \(i\in J\) and \(X_{i}=0\) otherwise. The \(X_{i}\) are independent random Bernoulli variables with parameter \(p\). Using Hoeffding's inequality, we first compute

\[P_{1}=\Pr\left[|J|\geq k+\frac{\gamma n}{2}\right]\geq\Pr\left[\sum_{i=1}^{n}X_{i}\geq pn-\frac{\gamma n}{2}\right]\geq 1-2^{-\frac{\gamma^{2}n}{2}}.\]

Then, using Proposition 4 we compute

\[P_{2}=\Pr\left[\text{rank}(\mathbf{G}_{J})=k\,\Big{|}\,\,|J|\geq k+\frac{\gamma n}{2}\right]\geq 1-2^{-\gamma n}.\]

Notice that the algorithm recovers \(\boldsymbol{c}_{J}\), so from Proposition 5, if \(\text{rank}(\mathbf{G}_{J})=k\) then the algorithm successfully recovers \(\boldsymbol{c}\). If we define \(P_{\text{succ}}\) to be the probability of success of the algorithm, we therefore have

\[P_{\text{succ}}\geq\Pr[\text{rank}(\mathbf{G}_{J})=k]\geq P_{1}P_{2}\geq 1-2^{-\Omega(n)}.\]

Using complex phases. It is also possible to put complex phases in front of the error. This means we consider the states

\[|\Psi_{\boldsymbol{c}}\rangle=\bigotimes_{i=1}^{n}\left(\sqrt{1-\omega}|c_{i}\rangle+\sqrt{\omega}e^{i\theta}|1-c_{i}\rangle\right).\]

Interesting phenomena appear and we refer to Appendix A for a full analysis.

### Reduction between quantum decoding problems in the binary setting

The above algorithm is interesting as it presents a polynomial time algorithm for the quantum decoding problem in a regime where its classical counterpart requires - with our current knowledge - an exponential classical or quantum algorithm. However, it completely fails when \(\omega>\left(\frac{R}{2}\right)^{\perp}\) and the best algorithm for \(\text{QDP}(2,n,\lfloor Rn\rfloor,\omega)\) is then still obtained by first measuring and then solving \(\text{DP}(2,n,\lfloor Rn\rfloor,\omega)\). Is there a way to improve the best algorithms for \(\text{QDP}(2,n,\lfloor Rn\rfloor,\omega)\) by using ideas of the previous section? The answer is yes. Instead of using USD, we use what we call partial Unambiguous State Discrimination. Our measurement will still abort with some probability, but when it does not abort, we allow a small failure probability which will typically be smaller than if we used Helstrom's measurement. With this technique we can actually show a general reduction theorem for QDP.

**Theorem 5**.: _Let \(R\in(0,1)\). Let \(\omega\in[0,\frac{1}{2})\) and \(\omega^{\prime}\in[0,\frac{1}{2})\) satisfying: \(\omega^{\prime}\leq\omega\) and \(\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}>R\). Let \(p\) be any constant with \(R<p<\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}\). Let also \(k=\lfloor Rn\rfloor\). Then \(\mbox{\rm QDP}(2,n,k,\omega)\preccurlyeq\mbox{\rm QDP}(2,\lfloor pn\rfloor,k,\omega^{\prime})\) meaning that if we have an algorithm that solves \(\mbox{\rm QDP}(2,\lfloor pn\rfloor,k,\omega^{\prime})\), we can use it to solve \(\mbox{\rm QDP}(2,n,k,\omega)\)._

In order to prove our theorem, we first present our partial unambiguous state discrimination protocol. As a special case, we obtain our previous algorithm by taking \(\omega^{\prime}=0\) (the theorem can then be applied when \(R<2\omega^{\perp}\)).
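To get a feel for the parameter ranges in Theorem 5, here is a small numerical sketch in the binary setting; the function names and the sample values of \(R\) and \(\omega\) are ours. It picks an \(\omega\) just above the threshold of Theorem 4, where plain USD (\(\omega^{\prime}=0\)) no longer qualifies, and finds the range of \(\omega^{\prime}\) for which the reduction still applies.

```python
import numpy as np

def w_perp(w):
    """Binary case: w_perp = 1/2 - sqrt(w * (1 - w))."""
    return 0.5 - np.sqrt(w * (1.0 - w))

R = 0.3
thr = w_perp(R / 2)                  # (R/2)^perp: Theorem 4 needs w below this
w = 0.15                             # slightly above the Theorem 4 threshold
assert w > thr

# Theorem 5 asks for w' <= w with w_perp(w) / w_perp(w') > R.  Here w' = 0
# (plain USD) fails, but intermediate values of w' still give a reduction.
wps = np.linspace(0.0, w, 1001)
feasible = wps[w_perp(w) / w_perp(wps) > R]
print(f"(R/2)^perp = {thr:.4f}")
print(f"feasible w' range: [{feasible.min():.4f}, {feasible.max():.4f}]")
```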
#### 3.2.1 Partial unambiguous state discrimination

We define \(|\psi_{b}^{\omega}\rangle=\sqrt{1-\omega}|b\rangle+\sqrt{\omega}|1-b\rangle\). Recall that \(\langle\psi_{0}^{\omega}|\psi_{1}^{\omega}\rangle=2\sqrt{\omega(1-\omega)}=1-2\omega^{\perp}\). Fix \(\omega,\omega^{\prime}\in(0,\frac{1}{2})\) with \(\omega^{\prime}\leq\omega\). We use the following lemma.

**Lemma 5**.: _Let \(\alpha=\sqrt{\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}}\) and \(\beta=\sqrt{1-\alpha^{2}}\). There exists a unitary \(U\) operation acting on \(\mbox{\rm span}\{|0\rangle,|1\rangle,|2\rangle\}\) s.t._

\[U|\psi_{0}^{\omega}\rangle=\alpha|\psi_{0}^{\omega^{\prime}}\rangle+\beta|2\rangle\]

\[U|\psi_{1}^{\omega}\rangle=\alpha|\psi_{1}^{\omega^{\prime}}\rangle+\beta|2\rangle\]

Proof.: With the choice of \(\alpha\) that was made, the hermitian product between the images \(\alpha|\psi_{b}^{\omega^{\prime}}\rangle+\beta|2\rangle\) is the same as the one between the original states \(|\psi_{b}^{\omega}\rangle\). As a matter of fact

\[\langle\psi_{0}^{\omega}|\psi_{1}^{\omega}\rangle=2\sqrt{\omega(1-\omega)}=1-2\omega^{\perp}. \tag{6}\]

Now, if we let \(|\psi_{b}^{\prime}\rangle\stackrel{\triangle}{=}\alpha|\psi_{b}^{\omega^{\prime}}\rangle+\beta|2\rangle\) for \(b\in\{0,1\}\), then we have

\[\langle\psi_{0}^{\prime}|\psi_{1}^{\prime}\rangle = |\alpha|^{2}\langle\psi_{0}^{\omega^{\prime}}|\psi_{1}^{\omega^{\prime}}\rangle+|\beta|^{2}\]

\[= |\alpha|^{2}(1-2\left(\omega^{\prime}\right)^{\perp})+|\beta|^{2}\quad\mbox{(by (\ref{eq:1}) applied with $\omega^{\prime}$)}\]

\[= 1-2|\alpha|^{2}\left(\omega^{\prime}\right)^{\perp}\quad\mbox{(by using $|\beta|^{2}=1-|\alpha|^{2}$)}\]

\[= 1-2\omega^{\perp}\qquad\mbox{(with our choice of $\alpha$)}.\]

By definition of \(\beta\), \(|\psi_{0}^{\prime}\rangle\) and \(|\psi_{1}^{\prime}\rangle\) are both of norm \(1\). This together with the equality \(\langle\psi_{0}^{\omega}|\psi_{1}^{\omega}\rangle=\langle\psi_{0}^{\prime}|\psi_{1}^{\prime}\rangle\) we just proved shows that \(U\) as defined above preserves the hermitian product on \(\mbox{\rm span}\{\psi_{0}^{\omega},\psi_{1}^{\omega}\}=\mbox{\rm span}\{|0\rangle,|1\rangle\}\). It suffices to choose \(U|2\rangle\) of norm \(1\) and orthogonal to both \(|\psi_{0}^{\prime}\rangle\) and \(|\psi_{1}^{\prime}\rangle\) to obtain a unitary transform since by construction it preserves the hermitian product on \(\mbox{\rm span}\{|0\rangle,|1\rangle,|2\rangle\}\).

**Proposition 12**.: _Let \(\omega,\omega^{\prime}\in(0,\frac{1}{2})\) with \(\omega^{\prime}<\omega\). There exists a quantum measurement s.t. when it is applied on \(|\psi_{b}^{\omega}\rangle\), the resulting state is \(|\psi_{b}^{\omega^{\prime}}\rangle\) w.p. \(\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}\) and \(|2\rangle\) w.p. \(1-\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}\)._

Proof.: Start from \(|\psi_{b}^{\omega}\rangle\) and apply the unitary \(U\) from Lemma 5. Then, perform the two-outcome projective measurement \(\{(|0\rangle\langle 0|+|1\rangle\langle 1|)\,,|2\rangle\langle 2|\}\) on the state \(U|\psi_{b}^{\omega}\rangle=\alpha|\psi_{b}^{\omega^{\prime}}\rangle+\beta|2\rangle\). We obtain the first outcome w.p. \(|\alpha|^{2}=\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}\), in which case the resulting state is \(|\psi_{b}^{\omega^{\prime}}\rangle\), and the second outcome w.p. \(|\beta|^{2}\), in which case the resulting state is \(|2\rangle\).
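Lemma 5 only constrains \(U\) on \(\mbox{\rm span}\{|\psi_{0}^{\omega}\rangle,|\psi_{1}^{\omega}\rangle\}\), so one concrete way to realize it numerically is to complete the two target vectors with any orthogonal unit vector. A minimal sketch checking Proposition 12 (all names and the sample values of \(\omega,\omega^{\prime}\) are ours):

```python
import numpy as np

w, wp = 0.15, 0.05                      # omega and omega', with omega' < omega < 1/2
perp = lambda t: 0.5 - np.sqrt(t * (1.0 - t))
alpha = np.sqrt(perp(w) / perp(wp))
beta = np.sqrt(1.0 - alpha**2)

def psi(t, b):                          # |psi_b^t> embedded in span{|0>,|1>,|2>}
    v = np.zeros(3)
    v[b], v[1 - b] = np.sqrt(1.0 - t), np.sqrt(t)
    return v

e2 = np.array([0.0, 0.0, 1.0])
t0 = alpha * psi(wp, 0) + beta * e2     # desired image of |psi_0^w>
t1 = alpha * psi(wp, 1) + beta * e2     # desired image of |psi_1^w>
u = np.linalg.svd(np.vstack([t0, t1]))[2][-1]   # unit vector orthogonal to t0, t1

src = np.column_stack([psi(w, 0), psi(w, 1), e2])
tgt = np.column_stack([t0, t1, u])
U = tgt @ np.linalg.inv(src)            # linear map with U psi_b^w = t_b, U|2> = u

assert np.allclose(U @ U.T, np.eye(3))  # inner products match, so U is orthogonal
out = U @ psi(w, 0)
# Proposition 12: the {|0>,|1>} outcome occurs w.p. perp(w)/perp(wp)
assert np.isclose(out[0]**2 + out[1]**2, perp(w) / perp(wp))
```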
Unambiguous state discrimination can be seen as a special case of this operation by taking \(\omega^{\prime}=0\), which gives \(\alpha=\sqrt{2\omega^{\perp}}\); the probability of success is then \(\alpha^{2}=2\omega^{\perp}=1-\langle\psi_{0}^{\omega}|\psi_{1}^{\omega}\rangle\).

#### 3.2.2 Proof of Theorem 5

In order to prove Theorem 5, one can just apply the algorithm of Section 3.1 in a similar fashion. We take any \(\omega,\omega^{\prime}\in(0,\frac{1}{2})\) with \(\omega^{\prime}\leq\omega\) and \(\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}>R\). We also fix \(p\) with \(R<p<\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}\). We want to solve \(\text{QDP}(2,n,k,\omega)\) using an algorithm that solves \(\text{QDP}(2,\lfloor pn\rfloor,k,\omega^{\prime})\). We start from \(\mathbf{G}\in\mathbb{F}_{2}^{k\times n}\) as well as \(|\psi_{\mathbf{c}}\rangle=\bigotimes_{i=1}^{n}|\psi_{c_{i}}^{\omega}\rangle\). We consider the following algorithm.

Quantum algorithm for QDP using partial USD

1. Perform the quantum measurement of Proposition 12 on each register of \(|\psi_{\mathbf{c}}\rangle\). Let \(J\subseteq[n]\) be the set of indices where this measurement succeeds _i.e._ where we obtain \(|\psi_{c_{i}}^{\omega^{\prime}}\rangle\). By discarding the indices not in \(J\), we obtain \[|\phi_{\mathbf{c}_{J}}\rangle=\bigotimes_{i\in J}|\psi_{c_{i}}^{\omega^{\prime}}\rangle.\]
2. Notice that \(\mathbf{c}_{J}\in\mathcal{C}_{J}\) and recovering \(\mathbf{c}_{J}\) from \(|\phi_{\mathbf{c}_{J}}\rangle\) is a quantum decoding problem on \(\mathcal{C}_{J}\), more precisely an instance of \(\text{QDP}(2,|J|,k,\omega^{\prime})\). As long as \(|J|\geq\lfloor pn\rfloor\), we use our algorithm for \(\text{QDP}(2,\lfloor pn\rfloor,k,\omega^{\prime})\) (potentially removing excess coordinates if \(|J|>\lfloor pn\rfloor\)) to recover \(\mathbf{c}_{J}\).
3. We recover \(\mathbf{c}\) from \(\mathbf{c}_{J}\) by computing \(\mathbf{c}_{J}\mathbf{G}_{J}^{-1}\mathbf{G}\).

By construction, this procedure recovers \(\mathbf{c}_{J}\); we just have to bound the probability that it recovers \(\mathbf{c}\). Notice that in Step 1, we have from Proposition 12 that the measurement succeeds w.p. \(\frac{\omega^{\perp}}{(\omega^{\prime})^{\perp}}>p>R\) for each index. As in Section 3.1, this implies that with overwhelming probability \(|J|\geq\lfloor pn\rfloor\), which in turn implies that we can recover \(\mathbf{c}\) from \(\mathbf{c}_{J}\) with overwhelming probability.

#### 3.2.3 Interpretation of the above as changing the noise model

In this section, we show how performing (partial) unambiguous state discrimination on a state \(|\psi_{b}\rangle=\sqrt{1-\omega}|b\rangle+\sqrt{\omega}|1-b\rangle\) can be seen as a way to change the noise model applied on the bit \(b\). We first define different notions of noisy channels in the binary setting.

**Definition 22**.: _For a bit \(b\), an error probability \(\omega\) and abort probability \(p\), we define the distributions of the Binary Symmetric Channel BSC\((b,\omega)\), of the Binary Erasure Channel \(BEC(b,p)\) and of the Binary Symmetric with Errors and Erasures Channel BSEEC\((b,\omega,p)\) sampled as follows:_

\[BSC(b,\omega):\text{ return }b\text{ wp. }(1-\omega)\text{ and }(1-b)\text{ wp. }\omega.\]

\[BEC(b,p):\text{ return }b\text{ wp. }(1-p)\text{ and }\bot\text{ wp. }p.\]

\[BSEEC(b,\omega,p):\text{ return }b\text{ wp. }(1-p)(1-\omega),(1-b)\text{ wp. }(1-p)\omega\text{ and }\bot\text{ wp. }p.\]

For a bit \(b\), flipping it w.p.
\(\omega\) can be seen as passing \(b\) through the binary symmetric channel \(BSC(b,\omega)\). Having this error in superposition means that we have access to the quantum state \(|\psi_{b}\rangle\). Our results can be interpreted as follows

**Proposition 13**.: _From \(|\psi_{b}\rangle=\sqrt{1-\omega}|b\rangle+\sqrt{\omega}|1-b\rangle\) it is possible to:_

1. _Generate_ \(y\gets BSC(b,\omega)\) _simply by measuring_ \(|\psi_{b}\rangle\)_._
2. _Generate_ \(y\gets BEC(b,1-2\omega^{\perp})\) _by performing unambiguous state discrimination on_ \(|\psi_{b}\rangle\)_._
3. _Generate_ \(y\gets BSEEC(b,(\frac{\omega^{\perp}}{1-p})^{\perp},p)\) _for any abort probability_ \(p\in[0,1-2\omega^{\perp}]\)_, by performing partial unambiguous state discrimination on_ \(|\psi_{b}\rangle\)_._

Notice that the third case generalizes the first two cases by respectively taking \(p=0\) and \(p=1-2\omega^{\perp}\). This shows the advantage of having the noise in quantum superposition: it is possible to change the noise from the one coming from a Binary Symmetric Channel to the one coming from a Binary Erasure Channel or a Binary Symmetric with Errors and Erasures Channel.

## 4 Polynomial time algorithm for \(\mathrm{QDP}\) in the \(q\)-ary setting

As we saw in the previous section, unambiguous state discrimination is crucial for polynomial time algorithms for \(\mathrm{QDP}\). While this task is very well understood in the binary case, we do not have any general formula in the \(q\)-ary setting. Fortunately, the states we consider have enough structure so that we can fully characterize the optimal unambiguous state discrimination algorithm. We first present this characterization, which is essentially a generalization of the work of [1]. We then use this unambiguous state discrimination in the \(q\)-ary setting to derive our quantum algorithm for \(\mathrm{QDP}\) in the \(q\)-ary setting, in the same spirit as what we did in Section 3.1.
### Unambiguous state discrimination in the \(q\)-ary setting

**Definition 23**.: _An unambiguous state discrimination measurement associated to some states \(|\psi_{0}\rangle,\ldots,|\psi_{N-1}\rangle\) is a POVM \(\{E_{0},\ldots,E_{N-1},E_{F}\}\) (where \(E_{F}\) stands for the failure outcome) s.t._

\[\forall i,j\neq i\in\llbracket 0,N-1\rrbracket,\ \mathrm{tr}(E_{i}|\psi_{j}\rangle\langle\psi_{j}|)=0.\]

_To such a POVM, we associate the quantities \(P_{j}\stackrel{\triangle}{=}\mathrm{tr}(E_{j}|\psi_{j}\rangle\langle\psi_{j}|)\) (the probability of correctly guessing \(j\) when given \(|\psi_{j}\rangle\)), as well as the average success probability \(\overline{P_{D}}\stackrel{\triangle}{=}\frac{1}{N}\sum_{j=0}^{N-1}P_{j}\)._

The optimal unambiguous measurement is not known when there are more than \(2\) states; however, it is known in a case where the states we want to distinguish are linearly independent, have the same _a priori_ probabilities and are symmetric in the following sense [1]:

**Definition 24** (symmetric states).: _A set \(\{|\psi_{0}\rangle,\cdots,|\psi_{N-1}\rangle\}\) in a Hilbert space \(\mathcal{H}\) of dimension \(N\) is symmetric if and only if there exists a unitary transformation \(U\) of order \(N\) on \(\mathcal{H}\) such that for any \(i\) and \(j\) in \(\llbracket 0,N-1\rrbracket\) we have \(|\psi_{j}\rangle=U^{j-i}|\psi_{i}\rangle\)._

In such a case, the optimal unambiguous measurement is known [1]:

**Proposition 14** (Unambiguous State Discrimination of Symmetric States).: _Let \(\{|\psi_{0}\rangle,\cdots,|\psi_{N-1}\rangle\}\) be a set of \(N\) symmetric states associated to a unitary transform \(U\). Let \(\{E_{0},\ldots,E_{N-1},E_{F}\}\) be any unambiguous state discrimination measurement associated to these states and let \(P_{j}\) and \(\overline{P_{D}}\) be the associated success probabilities. \(\overline{P_{D}}\) always satisfies_

\[\overline{P_{D}}\leq N\min_{r\in\llbracket 0,N-1\rrbracket}|c_{r}|^{2}, \tag{7}\]

_where \(c_{r}\) are the coordinates of \(|\psi_{0}\rangle\) in the eigenbasis \(\{|\gamma_{r}\rangle,r\in\llbracket 0,N-1\rrbracket\}\) of \(U\), i.e. \(|\psi_{0}\rangle=\sum_{r=0}^{N-1}c_{r}|\gamma_{r}\rangle\). There is a POVM which meets (7) with equality._

A corollary of this result is obtained by taking the Hilbert space to be of prime dimension \(p\) and taking \(U\) to be the shift operator \(U|x\rangle=|x+1\rangle\) where addition is performed in \(\mathbb{F}_{p}\). It is easy to verify that in this case, the maximal average probability of discrimination \(\overline{P_{D}}^{\max}\) is given by

\[\overline{P_{D}}^{\max}=p\min_{r\in\mathbb{F}_{p}}|\widehat{f}(r)|^{2}\]

when \(|\psi_{0}\rangle=\sum_{e\in\mathbb{F}_{p}}f(e)|e\rangle\). This is a consequence of the fact that an eigenbasis of \(U\) is given by \(\{|\widehat{x}\rangle,x\in\mathbb{F}_{p}\}\) (this is implied by Proposition 10) and from

\[|\psi_{0}\rangle=\sum_{x\in\mathbb{F}_{p}}\widehat{f}(-x)|\widehat{x}\rangle.\]

The last equation follows from Fact 1. We will actually use and prove a slightly more general result, where in particular the dimension of the Hilbert space is not prime anymore (in which case we cannot apply Proposition 14)

**Proposition 15**.: _Let \(|\psi\rangle=\sum_{\mathbf{y}\in\mathbb{F}_{q}^{n}}f(\mathbf{y})|\mathbf{y}\rangle\) for some function \(f:\mathbb{F}_{q}^{n}\to\mathbb{C}\) s.t. \(\|f\|_{2}=1\) and, for \(\mathbf{b}\in\mathbb{F}_{q}^{n}\), let \(|\psi_{\mathbf{b}}\rangle\stackrel{\triangle}{=}X_{\mathbf{b}}|\psi\rangle\).
When the states \(|\psi_{\mathbf{b}}\rangle\) are all linearly independent, unambiguous state discrimination of the states \(\{|\psi_{\mathbf{b}}\rangle,\ \mathbf{b}\in\mathbb{F}_{q}^{n}\}\) is possible and has a maximal average probability of discrimination given by_

\[\overline{P_{D}}^{\max}=q^{n}\min_{\mathbf{x}\in\mathbb{F}_{q}^{n}}|\widehat{f}(\mathbf{x})|^{2}.\]

The proof of this statement borrows many ideas from [1]. Before giving it, we have to recall a few points (see [1, 2] for more details) about unambiguous state discrimination.

Unambiguous state discrimination of linearly independent states. Let \(\mathcal{H}\) be the Hilbert space spanned by the \(|\psi_{\mathbf{b}}\rangle\)'s for \(\mathbf{b}\) ranging over \(\mathbb{F}_{q}^{n}\). An optimal (leading to the maximal average probability of discrimination) POVM \(\{E_{\mathbf{b}},\mathbf{b}\in\mathbb{F}_{q}^{n}\}\cup\{E_{F}\}\) distinguishing unambiguously all the \(|\psi_{\mathbf{b}}\rangle\), where \(E_{\mathbf{b}}\) detects unambiguously \(|\psi_{\mathbf{b}}\rangle\) for all \(\mathbf{b}\) in \(\mathbb{F}_{q}^{n}\), can be chosen of the form

\[E_{\mathbf{b}}=\frac{P_{\mathbf{b}}}{|\langle\psi_{\mathbf{b}}^{\perp}|\psi_{\mathbf{b}}\rangle|^{2}}|\psi_{\mathbf{b}}^{\perp}\rangle\langle\psi_{\mathbf{b}}^{\perp}| \tag{8}\]

where \(P_{\mathbf{b}}\) is the probability of detecting \(|\psi_{\mathbf{b}}\rangle\) given that the input state was of this form, and the \(\{|\psi_{\mathbf{b}}^{\perp}\rangle,\ \mathbf{b}\in\mathbb{F}_{q}^{n}\}\) are the reciprocal states of the \(|\psi_{\mathbf{b}}\rangle\)'s: \(|\psi_{\mathbf{b}}^{\perp}\rangle\) is the state (unique up to an irrelevant phase) which belongs to \(\mathcal{H}\) and is orthogonal to all other \(|\psi_{\mathbf{a}}\rangle\) for \(\mathbf{a}\) ranging over \(\mathbb{F}_{q}^{n}\setminus\{\mathbf{b}\}\). The average probability of discrimination is then

\[\overline{P_{D}}=\frac{1}{q^{n}}\sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}P_{\mathbf{b}}.\]

Let

\[E_{D}\stackrel{\triangle}{=}\sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}E_{\mathbf{b}}.\]

Since \(E_{D}+E_{F}=\mathbb{1}\) and \(E_{F}\) should be a positive semi-definite operator, it is readily verified that an optimum POVM (i.e. one that gives the maximum average probability of discrimination) necessarily has its maximum eigenvalue \(\lambda_{\max}(E_{D})\) equal to \(1\). From these considerations, we see that if we bring in \(\mathbf{A}_{\mathbf{b}}\stackrel{\triangle}{=}\frac{1}{|\langle\psi_{\mathbf{b}}^{\perp}|\psi_{\mathbf{b}}\rangle|^{2}}|\psi_{\mathbf{b}}^{\perp}\rangle\langle\psi_{\mathbf{b}}^{\perp}|\), then the problem of maximizing \(\overline{P_{D}}\) is nothing but the problem of maximizing \(\frac{1}{q^{n}}\sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}P_{\mathbf{b}}\) given that \(0\leq P_{\mathbf{b}}\leq 1\) for all \(\mathbf{b}\) in \(\mathbb{F}_{q}^{n}\) and \(\mathbb{1}-\sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}P_{\mathbf{b}}\mathbf{A}_{\mathbf{b}}\succeq 0\) (_i.e._ is a positive semi-definite matrix). No general solution to this problem is known, with the notable exception of the symmetric states case given above and our case given in Proposition 15. An averaging argument can be used in such a case to show that in the optimal solution all the \(P_{\mathbf{b}}\) can actually be chosen to be equal, which makes the optimization trivial.
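Before turning to the averaging argument, the formula \(\overline{P_{D}}^{\max}=q\min_{r}|\widehat{f}(r)|^{2}\) from the corollary above is easy to check numerically for the Bernoulli-type state we ultimately care about, since for prime \(q\) the Fourier transform is a plain DFT. A sketch with assumed sample values (names ours; the closed form it is checked against is the one derived in Proposition 16 below):

```python
import numpy as np

q, w = 5, 0.3                                  # prime q and noise rate w (assumed values)
f = np.full(q, np.sqrt(w / (q - 1)))
f[0] = np.sqrt(1.0 - w)                        # amplitudes of |psi_0>

f_hat = np.fft.fft(f) / np.sqrt(q)             # normalized DFT = Fourier transform on Z_q
P_D = q * np.min(np.abs(f_hat) ** 2)           # q * min_r |f_hat(r)|^2

w_perp = (np.sqrt((q - 1) * (1 - w)) - np.sqrt(w)) ** 2 / q
assert np.isclose(P_D, q * w_perp / (q - 1))   # matches q * w_perp / (q-1)
```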
An averaging argument. The proof of Proposition 14 of [1] relies essentially on an averaging argument which is used to show that there is an optimal POVM that satisfies a certain kind of invariance relation and whose individual discrimination probabilities are all the same. We show that a similar result also holds in our case.

**Lemma 6**.: _Assume that an optimal POVM is \(\{E_{\mathbf{b}},\mathbf{b}\in\mathbb{F}_{q}^{n}\}\cup\{E_{F}\}\). Denote by \(\overline{P_{D}}^{\text{max}}\) its average probability of discrimination. Define for all \(\mathbf{b}\) in \(\mathbb{F}_{q}^{n}\), \(E_{\mathbf{b}}^{\text{ave}}\stackrel{\triangle}{=}\frac{1}{q^{n}}\sum_{\mathbf{a}\in\mathbb{F}_{q}^{n}}X_{\mathbf{a}}E_{\mathbf{b}-\mathbf{a}}X_{\mathbf{a}}^{\dagger}\) and \(E_{F}^{\text{ave}}\stackrel{\triangle}{=}\mathbb{1}-\sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}E_{\mathbf{b}}^{\text{ave}}\). Then \(\{E_{\mathbf{b}}^{\text{ave}},\mathbf{b}\in\mathbb{F}_{q}^{n}\}\cup\{E_{F}^{\text{ave}}\}\) is also an optimal POVM distinguishing unambiguously the \(|\psi_{\mathbf{b}}\rangle\)'s, and it satisfies \(\operatorname{tr}(E_{\mathbf{b}}^{\text{ave}}|\psi_{\mathbf{b}}\rangle\langle\psi_{\mathbf{b}}|)=\overline{P_{D}}^{\text{max}}\) for all \(\mathbf{b}\in\mathbb{F}_{q}^{n}\)._

Indeed, since \(X_{\mathbf{a}}|\psi_{\mathbf{b}-\mathbf{a}}\rangle=|\psi_{\mathbf{b}}\rangle\), we get \(\operatorname{tr}(E_{\mathbf{b}}^{\text{ave}}|\psi_{\mathbf{b}}\rangle\langle\psi_{\mathbf{b}}|)=\frac{1}{q^{n}}\sum_{\mathbf{a}}\operatorname{tr}(E_{\mathbf{b}-\mathbf{a}}|\psi_{\mathbf{b}-\mathbf{a}}\rangle\langle\psi_{\mathbf{b}-\mathbf{a}}|)=\overline{P_{D}}^{\text{max}}\), and similarly \(\operatorname{tr}(E_{\mathbf{b}}^{\text{ave}}|\psi_{\mathbf{b}^{\prime}}\rangle\langle\psi_{\mathbf{b}^{\prime}}|)=0\) for \(\mathbf{b}^{\prime}\neq\mathbf{b}\), so the averaged POVM is still unambiguous and attains the same (optimal) average probability of discrimination.

Choosing the appropriate basis. The basis which simplifies the computation a lot is the common diagonalization basis of all the \(X_{\mathbf{b}}\)'s. It is given by the "character" basis \(\{|\widehat{\mathbf{x}}\rangle,\mathbf{x}\in\mathbb{F}_{q}^{n}\}\) (see Proposition 10) and we have

\[X_{\mathbf{b}}|\widehat{\mathbf{x}}\rangle=\chi_{\mathbf{x}}(-\mathbf{b})|\widehat{\mathbf{x}}\rangle. \tag{11}\]

From this, we deduce that for all \(\mathbf{b}\) in \(\mathbb{F}_{q}^{n}\) we have

\[X_{\mathbf{b}}=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{x}}(-\mathbf{b})|\widehat{\mathbf{x}}\rangle\langle\widehat{\mathbf{x}}|. \tag{12}\]

If we express \(|\psi_{\mathbf{0}}\rangle\) in this basis, we obtain

\[|\psi_{\mathbf{0}}\rangle=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}c_{\mathbf{x}}|\widehat{\mathbf{x}}\rangle,\]

then all the other ones are given by

\[|\psi_{\mathbf{b}}\rangle=X_{\mathbf{b}}|\psi_{\mathbf{0}}\rangle=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}c_{\mathbf{x}}\chi_{\mathbf{x}}(-\mathbf{b})|\widehat{\mathbf{x}}\rangle. \tag{13}\]

It is readily verified that the reciprocal states are given by

\[|\psi_{\mathbf{b}}^{\perp}\rangle=\frac{1}{\sqrt{Z}}\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}\frac{1}{\overline{c_{\mathbf{x}}}}\chi_{\mathbf{x}}(-\mathbf{b})|\widehat{\mathbf{x}}\rangle \tag{14}\]

where \(Z=\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}|c_{\mathbf{x}}|^{-2}\).
Indeed, we observe that for any \(\mathbf{a}\) and \(\mathbf{b}\) in \(\mathbb{F}_{q}^{n}\) we have

\[\langle\psi_{\mathbf{a}}^{\perp}|\psi_{\mathbf{b}}\rangle=\frac{1}{\sqrt{Z}}\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}\overline{\chi_{\mathbf{x}}(-\mathbf{a})}\chi_{\mathbf{x}}(-\mathbf{b})=\frac{1}{\sqrt{Z}}\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{x}}(\mathbf{a}-\mathbf{b})=\frac{q^{n}}{\sqrt{Z}}\delta(\mathbf{a},\mathbf{b}) \tag{15}\]

where \(\delta(\mathbf{x},\mathbf{y})\) is the Kronecker function which is equal to \(1\) iff \(\mathbf{x}=\mathbf{y}\) and to \(0\) otherwise. We have now all the tools we need to prove Proposition 15.

Proof of Proposition 15.: From Lemma 6 we can choose the \(E_{\mathbf{b}}\) of the optimal POVM as

\[E_{\mathbf{b}}=\frac{\overline{P_{D}}}{|\langle\psi_{\mathbf{b}}^{\perp}|\psi_{\mathbf{b}}\rangle|^{2}}|\psi_{\mathbf{b}}^{\perp}\rangle\langle\psi_{\mathbf{b}}^{\perp}|. \tag{16}\]

By (15) we know that \(|\langle\psi_{\mathbf{b}}^{\perp}|\psi_{\mathbf{b}}\rangle|^{2}=\frac{q^{2n}}{Z}\), and therefore by plugging this expression in (16) and using (13) and (14) we obtain

\[E_{\mathbf{b}} = \frac{Z}{q^{2n}}\,\frac{\overline{P_{D}}}{Z}\sum_{\mathbf{x},\mathbf{y}\in\mathbb{F}_{q}^{n}}\frac{1}{\overline{c_{\mathbf{x}}}\,c_{\mathbf{y}}}\chi_{\mathbf{x}}(-\mathbf{b})\overline{\chi_{\mathbf{y}}(-\mathbf{b})}|\widehat{\mathbf{x}}\rangle\langle\widehat{\mathbf{y}}|\]

\[= \frac{\overline{P_{D}}}{q^{2n}}\sum_{\mathbf{x},\mathbf{y}\in\mathbb{F}_{q}^{n}}\frac{1}{\overline{c_{\mathbf{x}}}\,c_{\mathbf{y}}}\chi_{\mathbf{b}}(-\mathbf{x})\chi_{\mathbf{b}}(\mathbf{y})|\widehat{\mathbf{x}}\rangle\langle\widehat{\mathbf{y}}|\]

\[= \frac{\overline{P_{D}}}{q^{2n}}\sum_{\mathbf{x},\mathbf{y}\in\mathbb{F}_{q}^{n}}\frac{1}{\overline{c_{\mathbf{x}}}\,c_{\mathbf{y}}}\chi_{\mathbf{b}}(\mathbf{y}-\mathbf{x})|\widehat{\mathbf{x}}\rangle\langle\widehat{\mathbf{y}}|.\]

From this we infer that

\[E_{D} = \sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}E_{\mathbf{b}}\]

\[= \frac{\overline{P_{D}}}{q^{2n}}\sum_{\mathbf{x},\mathbf{y}\in\mathbb{F}_{q}^{n}}\frac{1}{\overline{c_{\mathbf{x}}}\,c_{\mathbf{y}}}\left(\sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{b}}(\mathbf{y}-\mathbf{x})\right)|\widehat{\mathbf{x}}\rangle\langle\widehat{\mathbf{y}}|\]

\[= \frac{\overline{P_{D}}}{q^{n}}\sum_{\mathbf{x}\in\mathbb{F}_{q}^{n}}\frac{1}{|c_{\mathbf{x}}|^{2}}|\widehat{\mathbf{x}}\rangle\langle\widehat{\mathbf{x}}|,\]

where in the last line we used that \(\sum_{\mathbf{b}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{b}}(\mathbf{y}-\mathbf{x})=q^{n}\delta(\mathbf{x},\mathbf{y})\) by Proposition 9. Since the \(|\widehat{\mathbf{x}}\rangle\langle\widehat{\mathbf{x}}|\)'s form an orthonormal set of projectors we have that \(\lambda_{\max}(E_{D})=\frac{\overline{P_{D}}}{q^{n}\min_{\mathbf{x}\in\mathbb{F}_{q}^{n}}|c_{\mathbf{x}}|^{2}}\).
From the fact that we should have \(\lambda_{\max}(E_{D})\leq 1\) in order for \(E_{F}\) to be positive semi-definite, we have

\[\overline{P_{D}}\leq q^{n}\min_{\mathbf{x}\in\mathbb{F}_{q}^{n}}|c_{\mathbf{x}}|^{2}.\]

Clearly the optimum is attained when we have equality here and therefore

\[\overline{P_{D}}^{\max} = q^{n}\min_{\mathbf{x}\in\mathbb{F}_{q}^{n}}|c_{\mathbf{x}}|^{2} = q^{n}\min_{\mathbf{x}\in\mathbb{F}_{q}^{n}}|\widehat{f}(\mathbf{x})|^{2}\]

where we used Fact 1 for the last point which gives \(c_{\mathbf{x}}=\widehat{f}(-\mathbf{x})\).

**Remark 1**.: _It is readily seen that the two crucial ingredients of the proof are that (i) we can take an "average" of an optimal solution to show that there is an optimal solution where all states are discriminated with the same probability, (ii) a basis which simplifies the computation. (i) holds in a more general case where the set of states is of the form \(\{U|\psi\rangle,U\in G\}\) where \(G\) is a finite group of unitaries. On top of that, (ii) holds for instance if the group \(G\) is Abelian: the nice basis is then provided by the common diagonalization basis of the \(U\)'s. In other words, it is straightforward to generalize Proposition 14 in the case where the set of states is of the form \(\{U|\psi\rangle,U\in G\}\) where \(G\) is a finite Abelian group._

### Quantum polynomial time algorithm for \(\mathrm{QDP}\) in the \(q\)-ary setting

The goal of the previous subsection was to extend unambiguous state discrimination to our \(q\)-ary setting. When we apply Proposition 15 in our case we obtain

**Proposition 16** (Unambiguous state discrimination, \(q\)-ary case).: _Let \(\omega\leq\frac{q-1}{q}\). For each \(a\in\mathbb{F}_{q}\), we define \(|\psi_{a}\rangle=\sqrt{1-\omega}|a\rangle+\sum_{b\neq a}\sqrt{\frac{\omega}{q-1}}|b\rangle\). There exists a POVM \(\{\{E_{a}\}_{a\in\mathbb{F}_{q}},E_{F}\}\) s.t._

\[\forall a\in\mathbb{F}_{q},\ \mathrm{tr}(E_{a}|\psi_{a}\rangle\langle\psi_{a}|)\stackrel{\triangle}{=}\mathrm{P}_{\mathrm{usd}}=\frac{q\cdot\omega^{\perp}}{q-1}\]

\[\forall a,b\neq a\in\mathbb{F}_{q},\ \mathrm{tr}(E_{b}|\psi_{a}\rangle\langle\psi_{a}|)=0.\]

Notice that since \(\{\{E_{a}\}_{a\in\mathbb{F}_{q}},E_{F}\}\) is a POVM, this implies for each \(a\in\mathbb{F}_{q}\) that \(\operatorname{tr}(E_{F}|\psi_{a}\rangle\langle\psi_{a}|)=1-\mathrm{P}_{\mathrm{usd}}=1-\frac{q\cdot\omega^{\perp}}{q-1}\).

Proof.: We define \(|\psi\rangle=\sum_{x\in\mathbb{F}_{q}}f(x)|x\rangle\) with \(f(0)=\sqrt{1-\omega}\) and \(f(x)=\sqrt{\frac{\omega}{q-1}}\) for \(x\in\mathbb{F}_{q}^{*}\). With this definition, \(|\psi_{a}\rangle=X_{a}|\psi\rangle\). As computed in Lemma 3, we have

\[\widehat{f}(0)=\sqrt{1-\omega^{\perp}}\quad;\quad\widehat{f}(y)=\sqrt{\frac{\omega^{\perp}}{q-1}}\ \text{ for }y\in\mathbb{F}_{q}^{*},\]

with \(\omega^{\perp}=\frac{\left(\sqrt{(q-1)(1-\omega)}-\sqrt{\omega}\right)^{2}}{q}.\) One can check that for \(\omega\in[0,\frac{q-1}{q}]\), we have \(\hat{f}(0)\geq\hat{f}(y)\) for \(y\in\mathbb{F}_{q}^{*}\).
We use Proposition 15 with \(n=1\) to immediately get

\[\mathrm{P}_{\mathrm{usd}}=q\cdot\min_{y}|\hat{f}(y)|^{2}=\frac{q\cdot\omega^{\perp}}{q-1}.\]

It also turns out that this operation can be implemented efficiently, in poly-logarithmic time in \(q\), as shown by

**Proposition 17**.: _Consider the unitary \(U\) acting on \(|\psi_{a}\rangle|0\rangle\) as_

\[U|\widehat{0}\rangle|0\rangle =|\widehat{0}\rangle\left(u|0\rangle+\sqrt{1-u^{2}}|1\rangle\right) \text{ with }u=\sqrt{\frac{\omega^{\perp}}{(1-\omega^{\perp})(q-1)}}\]

\[U|\widehat{\alpha}\rangle|0\rangle =|\widehat{\alpha}\rangle|0\rangle \qquad\forall\alpha\in\mathbb{F}_{q}^{*}.\]

_With our choice of function \(f\), the above unambiguous state discrimination quantum measurement can be done by applying \(U\) on \(|\psi_{\alpha}\rangle|0\rangle\) and then measuring the output state in the computational basis. This can be done in time \(O(\mathsf{polylog}(q))\)._

Proof.: Let us start the proof by writing \(|\psi_{\alpha}\rangle\) in the Fourier basis \(\{|\widehat{x}\rangle,\ x\in\mathbb{F}_{q}\}\). This can be done by observing that

\[|\psi_{\alpha}\rangle = X_{\alpha}|\psi\rangle = X_{\alpha}\cdot\mathrm{QFT}\cdot\mathrm{QFT}^{\dagger}\left|\psi\right\rangle\]

\[= X_{\alpha}\cdot\mathrm{QFT}\left(\sqrt{1-\omega^{\perp}}|0\rangle+\sum_{\gamma\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\omega^{\perp}}{q-1}}|\gamma\rangle\right)\ \text{(by Lemma 3 and (4), the amplitudes being real)}\]

\[= \mathrm{QFT}\cdot Z_{-\alpha}\left(\sqrt{1-\omega^{\perp}}|0\rangle+\sum_{\gamma\in\mathbb{F}_{q}^{*}}\sqrt{\frac{\omega^{\perp}}{q-1}}|\gamma\rangle\right)\ \text{(by Proposition 10)}\]

\[= \sqrt{1-\omega^{\perp}}|\widehat{0}\rangle+\sum_{\gamma\in\mathbb{F}_{q}^{*}}\chi_{-\alpha}(\gamma)\sqrt{\frac{\omega^{\perp}}{q-1}}|\widehat{\gamma}\rangle.\]

Applying \(U\) on \(|\psi_{\alpha}\rangle|0\rangle\), we obtain

\[|\phi^{\prime}_{\alpha}\rangle:=U|\psi_{\alpha}\rangle|0\rangle=\left(u\sqrt{1-\omega^{\perp}}|\widehat{0}\rangle+\sum_{\gamma\in\mathbb{F}_{q}^{*}}\chi_{-\alpha}(\gamma)\sqrt{\frac{\omega^{\perp}}{q-1}}|\widehat{\gamma}\rangle\right)|0\rangle+\sqrt{1-u^{2}}\sqrt{1-\omega^{\perp}}|\widehat{0}\rangle|1\rangle.\]

Notice that \(u\sqrt{1-\omega^{\perp}}=\sqrt{\frac{\omega^{\perp}}{q-1}}\) and that \(|\alpha\rangle=\frac{1}{\sqrt{q}}\sum_{\gamma\in\mathbb{F}_{q}}\chi_{-\alpha}(\gamma)|\widehat{\gamma}\rangle\) (by Fact 1). From there, we can rewrite

\[|\phi^{\prime}_{\alpha}\rangle=\sqrt{\frac{q\omega^{\perp}}{q-1}}|\alpha\rangle|0\rangle+\sqrt{1-u^{2}}\sqrt{1-\omega^{\perp}}|\widehat{0}\rangle|1\rangle.\]

We now measure both registers in the computational basis. If the last register is \(0\), the measurement outputs the value \(\alpha\) in the first register. If the last register is \(1\), we output Fail. The measurement succeeds and outputs the correct value \(\alpha\) w.p. \(\frac{q\omega^{\perp}}{q-1}\). The time to perform \(U\) is essentially the time to perform two Quantum Fourier Transforms, so \(U\) can be efficiently computed in time \(O(\mathsf{polylog}(q))\); the whole measurement can therefore be done in time \(O(\mathsf{polylog}(q))\).

We can now present our polynomial time algorithm in the \(q\)-ary setting:

**Theorem 6**.: _Let \(R>0\) and \(\omega\in(0,\frac{q-1}{q})\) satisfying \(\frac{q\cdot\omega^{\perp}}{q-1}>R\). There exists a quantum algorithm that solves \(\mathrm{QDP}(q,n,\lfloor Rn\rfloor,\omega)\) in time \(\mathsf{poly}(n,\log(q))\)._

Proof.: We fix \(R>0\), \(k=\lfloor Rn\rfloor\) and \(\omega\in(0,\frac{q-1}{q})\) satisfying \(\frac{q\omega^{\perp}}{q-1}>R\). We are given a random generating matrix \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) with associated code \(\mathcal{C}\) as well as a state \(|\psi_{\mathbf{c}}\rangle=\bigotimes_{i=1}^{n}|\psi_{c_{i}}\rangle\) for a randomly chosen \(\mathbf{c}\in\mathcal{C}\), where

\[|\psi_{c_{i}}\rangle=\sqrt{1-\omega}|c_{i}\rangle+\sum_{x\neq c_{i}}\sqrt{\frac{\omega}{q-1}}|x\rangle.\]

As in Section 3.1, we consider the following algorithm.

Quantum algorithm for QDP using \(q\)-ary USD

1. Perform the optimal unambiguous measurement given in Proposition 16 (implemented as in Proposition 17) on each register \(i\) in order to guess \(c_{i}\), which succeeds w.p. \(\mathrm{P}_{\mathrm{usd}}=\frac{q\omega^{\perp}}{q-1}\). Let \(J\subseteq[n]\) be the set of indices where this measurement succeeds. The algorithm recovers here \(\mathbf{c}_{J}\).
2. If \(\mathbf{G}_{J}\in\mathbb{F}_{q}^{k\times|J|}\) is of rank \(k\), recover \(\mathbf{c}\) from \(\mathbf{c}_{J}\) by computing \(\mathbf{c}_{J}\mathbf{G}_{J}^{-1}\mathbf{G}\).

By our choice of \(\omega\), we have \(\mathrm{P}_{\mathrm{usd}}>R\), which means that there exists an absolute constant \(\gamma>0\) s.t. \(\mathrm{P}_{\mathrm{usd}}=R+\gamma\).
The bound \(\mathrm{P}_{\mathrm{usd}}=R+\gamma\) in turn implies that the success probability of this algorithm is \(1-o(1)\), using the same arguments as in Section 3.1.

## 5 (In)tractability of the quantum decoding problem

In this section we provide a full characterization of the tractability of \(\mathrm{QDP}(q,n,k,\omega)\). We show that the problem is tractable, _i.e._ there exists a quantum algorithm that solves the problem w.p. \(1-o(1)\) (as \(n\to+\infty\) and \(q=\Omega(1)\)), for any absolute constant \(\omega<\left(\delta_{\min}(q,1-k/n)\right)^{\perp}\). We will simplify the notation as alluded to in Subsection 2.1 and write from now on just \(\delta_{\min}(1-k/n)\) instead of \(\delta_{\min}(q,1-k/n)\). Moreover, \(R\) denotes in the whole section the rate \(\frac{k}{n}\) of the code we decode: \[R\mathop{=}^{\triangle}k/n.\] Notice that we do not put any restriction here on the running time of the algorithm. On the other hand, we show that the problem is intractable, _i.e._ all quantum algorithms solve the problem w.p. at most \(o(1)\), for any absolute constant \(\omega>\left(\delta_{\min}(1-R)\right)^{\perp}\).

Recall that in the quantum decoding problem, we have to recover \(\mathbf{c}\) from \(|\psi_{\mathbf{c}}\rangle=\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}\sqrt{(\frac{\omega}{q-1})^{|\mathbf{e}|}(1-\omega)^{n-|\mathbf{e}|}}|\mathbf{c}+\mathbf{e}\rangle\). In order to prove our results, we will focus on a single quantum algorithm: the one performing the pretty good measurement on the Fourier transforms of these states. For the tractability result, we show that the PGM recovers \(\mathbf{c}\) w.p. \(1-o(1)\). For the intractability result, we show that the PGM recovers \(\mathbf{c}\) w.p. \(o(1)\). But we know from Proposition 8 that this implies that any quantum algorithm will recover \(\mathbf{c}\) w.p. \(o(1)\), hence the intractability result. We first study the PGM for any error function \(f\) and then apply our results to \(f(\mathbf{e})=\sqrt{(\frac{\omega}{q-1})^{|\mathbf{e}|}(1-\omega)^{n-|\mathbf{e}|}}\) in order to show our (in)tractability results.

### Computing the PGM associated to the quantum decoding problem

We fix a generating matrix \(\mathbf{G}\) and an associated code \(\mathcal{C}\). In order to study our PGM, we define the shifted dual codes of \(\mathcal{C}\) \[\mathcal{C}_{\mathbf{s}}^{\perp}\mathop{=}^{\triangle}\{\mathbf{x}\in\mathbb{F}_{q}^{n}:\mathbf{G}\mathbf{x}^{\intercal}=\mathbf{s}\}\] Notice that \(\mathcal{C}_{\mathbf{0}}^{\perp}=\mathcal{C}^{\perp}\) where \(\mathcal{C}^{\perp}\) is the dual code of \(\mathcal{C}\). For each shifted dual code \(\mathcal{C}_{\mathbf{s}}^{\perp}\), we fix an element \(\mathbf{u}_{\mathbf{s}}\in\mathcal{C}_{\mathbf{s}}^{\perp}\). We have \(\mathcal{C}_{\mathbf{s}}^{\perp}=\{\mathbf{u}_{\mathbf{s}}+\mathbf{d}:\mathbf{d}\in\mathcal{C}^{\perp}\}\). This means that for all \(\mathbf{c}\) in \(\mathcal{C}\) and all \(\mathbf{y}\) in \(\mathcal{C}_{\mathbf{s}}^{\perp}\) \[\chi_{\mathbf{c}}(\mathbf{y})=\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})\] Moreover, for any \(\mathbf{s},\mathbf{s}^{\prime}\) in \(\mathbb{F}_{q}^{k}\) s.t. \(\mathbf{s}^{\prime}\neq\mathbf{s}\), since \(\mathbf{u}_{\mathbf{s}}-\mathbf{u}_{\mathbf{s}^{\prime}}\notin\mathcal{C}^{\perp}\), we have \[\sum_{\mathbf{c}\in\mathcal{C}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}}-\mathbf{u}_{\mathbf{s}^{\prime}})=0 \tag{17}\] Now fix any error function \(f:\mathbb{F}_{q}^{n}\to\mathbb{C}\) s.t.
\(\|f\|_{2}=1\), and consider the states \(|\psi_{\mathbf{c}}\rangle=\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})|\mathbf{c}+\mathbf{e}\rangle\). The goal is to recover \(\mathbf{c}\). Actually, we will start from \(|\widehat{\psi_{\mathbf{c}}}\rangle=\mathrm{QFT}\left|\psi_{\mathbf{c}}\right\rangle\) instead of \(|\psi_{\mathbf{c}}\rangle\) and apply the Pretty Good Measurement on the ensemble of states \(\{|\widehat{\psi_{\mathbf{c}}}\rangle\}\). The distinguishing problem is equivalent since applying QFT is a unitary operation. We first define the states \[|W_{\mathbf{s}}\rangle\mathop{=}^{\triangle}\sum_{\mathbf{y}\in\mathcal{C}_{\mathbf{s}}^{\perp}}\hat{f}(\mathbf{y})|\mathbf{y}\rangle\quad\text{ (not normalized)}\] \[|\widetilde{W}_{\mathbf{s}}\rangle\mathop{=}^{\triangle}\frac{|W_{\mathbf{s}}\rangle}{\||W_{\mathbf{s}}\rangle\|}\] as well as \(n_{\mathbf{s}}\mathop{=}^{\triangle}\||W_{\mathbf{s}}\rangle\|=\sqrt{\sum_{\mathbf{y}\in\mathcal{C}_{\mathbf{s}}^{\perp}}|\hat{f}(\mathbf{y})|^{2}}\). We first write \(|\widehat{\psi_{\mathbf{c}}}\rangle\) in the \(\{|W_{\mathbf{s}}\rangle\}\) basis.

**Lemma 7**.: \(|\widehat{\psi_{\mathbf{c}}}\rangle=\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|W_{\mathbf{s}}\rangle\)_._

Proof.: We write

\[|\widehat{\psi_{\mathbf{c}}}\rangle =\frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{y},\mathbf{e}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{c}+\mathbf{e}}(\mathbf{y})f(\mathbf{e})|\mathbf{y}\rangle =\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\sum_{\mathbf{x}\in\mathcal{C}_{\mathbf{s}}^{\perp}}\chi_{\mathbf{c}}(\mathbf{x})\frac{1}{\sqrt{q^{n}}}\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}\chi_{\mathbf{e}}(\mathbf{x})f(\mathbf{e})|\mathbf{x}\rangle =\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})\sum_{\mathbf{x}\in\mathcal{C}_{\mathbf{s}}^{\perp}}\hat{f}(\mathbf{x})|\mathbf{x}\rangle=\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|W_{\mathbf{s}}\rangle.\]

We can now make explicit the PGM associated to the states \(|\widehat{\psi_{\mathbf{c}}}\rangle\).

**Proposition 18**.: _The PGM associated to the ensemble of states \(\{|\widehat{\psi_{\mathbf{c}}}\rangle\}_{\mathbf{c}\in\mathcal{C}}\) is the projective measurement \(\{|Y_{\mathbf{c}}\rangle\langle Y_{\mathbf{c}}|\}_{\mathbf{c}\in\mathcal{C}}\) where \(|Y_{\mathbf{c}}\rangle=\frac{1}{\sqrt{q^{k}}}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|\widetilde{W}_{\mathbf{s}}\rangle\)._

Proof.: We write the PGM \(\{M_{\mathbf{c}}\}\) associated to the states \(|\widehat{\psi_{\mathbf{c}}}\rangle\) using Definition 18. \[M_{\mathbf{c}}=\rho^{-1/2}|\widehat{\psi_{\mathbf{c}}}\rangle\langle\widehat{\psi_{\mathbf{c}}}|\rho^{-1/2}\quad\text{ given }\rho=\sum_{\mathbf{c}\in\mathcal{C}}|\widehat{\psi_{\mathbf{c}}}\rangle\langle\widehat{\psi_{\mathbf{c}}}|\] We now write \[\rho=\sum_{\mathbf{c}\in\mathcal{C}}|\widehat{\psi_{\mathbf{c}}}\rangle\langle\widehat{\psi_{\mathbf{c}}}|=\sum_{\mathbf{c}\in\mathcal{C}}\sum_{\mathbf{s},\mathbf{s}^{\prime}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}}-\mathbf{u}_{\mathbf{s}^{\prime}})|W_{\mathbf{s}}\rangle\langle W_{\mathbf{s}^{\prime}}|=q^{k}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}|W_{\mathbf{s}}\rangle\langle W_{\mathbf{s}}|\] where we use Equation 17 as well as \(\chi_{\mathbf{c}}(\mathbf{0})=1\) for the last equality.
Using the fact that the \(|W_{\mathbf{s}}\rangle\) are pairwise orthogonal (since they have disjoint support in the computational basis), the \(|\widetilde{W}_{\mathbf{s}}\rangle\langle\widetilde{W}_{\mathbf{s}}|\)'s are pairwise orthogonal projectors and we have \[\rho=q^{k}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s}}^{2}|\widetilde{W}_{\mathbf{s}}\rangle\langle\widetilde{W}_{\mathbf{s}}|\ \ \text{hence}\ \ \rho^{-1/2}=\frac{1}{\sqrt{q^{k}}}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\frac{1}{n_{\mathbf{s}}}|\widetilde{W}_{\mathbf{s}}\rangle\langle\widetilde{W}_{\mathbf{s}}|\] and \[\rho^{-1/2}|\widehat{\psi_{\mathbf{c}}}\rangle=\frac{1}{\sqrt{q^{k}}}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|\widetilde{W}_{\mathbf{s}}\rangle:=|Y_{\mathbf{c}}\rangle. \tag{18}\] Here \(|Y_{\mathbf{c}}\rangle\) is a pure state of norm 1. Also, notice that these states are pairwise orthogonal. So \(M_{\mathbf{c}}=|Y_{\mathbf{c}}\rangle\langle Y_{\mathbf{c}}|\) and the PGM is just the projective measurement \(\{|Y_{\mathbf{c}}\rangle\langle Y_{\mathbf{c}}|\}_{\mathbf{c}\in\mathcal{C}}\). Finally, we can make explicit the probability that the PGM succeeds on the states \(|\widehat{\psi_{\mathbf{c}}}\rangle\).

**Proposition 19**.: _The PGM succeeds to recover \(\mathbf{c}\) from \(|\widehat{\psi_{\mathbf{c}}}\rangle\) w.p. \(\frac{1}{q^{k}}\left(\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s}}\right)^{2}\)._

Proof.: From the previous proposition, the PGM we use is the projective measurement \(\{|Y_{\mathbf{c}}\rangle\langle Y_{\mathbf{c}}|\}_{\mathbf{c}\in\mathcal{C}}\) with \(|Y_{\mathbf{c}}\rangle=\frac{1}{\sqrt{q^{k}}}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|\widetilde{W}_{\mathbf{s}}\rangle\). For each \(\mathbf{c}\in\mathcal{C}\), using Lemma 7 as well as the expression of \(|Y_{\mathbf{c}}\rangle\), we write the probability \(p_{\mathbf{c}}\) that this measurement succeeds:

\[p_{\mathbf{c}}\mathop{=}^{\triangle}|\langle Y_{\mathbf{c}}|\widehat{\psi_{\mathbf{c}}}\rangle|^{2}=\frac{1}{q^{k}}\left|\sum_{\mathbf{s}}\langle\widetilde{W}_{\mathbf{s}}|W_{\mathbf{s}}\rangle\right|^{2}=\frac{1}{q^{k}}\left(\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s}}\right)^{2} \tag{19}\]

which immediately gives us the result.

**Remark 2**.: _Since \(|\widehat{\psi_{\mathbf{c}}}\rangle\) is of norm \(1\), we have immediately from Lemma 7 that \(\sum_{\boldsymbol{s}\in\mathbb{F}_{q}^{k}}n_{\boldsymbol{s}}^{2}=1\). In the case where all the norms are equal, we have \(n_{\boldsymbol{s}}=\sqrt{q^{-k}}\), which gives indeed \(\mathrm{P}_{\mathrm{PGM}}=1\). On the other hand, if these norms are highly unbalanced, the probability that the PGM succeeds is very low._

### (In)tractability results

#### 5.2.1 First computations and probabilistic arguments on random codes

We go back to our quantum decoding problem.
Our error function corresponds to the \(q\)-ary symmetric channel, \(f(\boldsymbol{e})=\left(\sqrt{1-\omega}\right)^{n-|\boldsymbol{e}|}\left(\sqrt{\frac{\omega}{q-1}}\right)^{|\boldsymbol{e}|}\), so we have (see Lemma 3) \[\hat{f}(\boldsymbol{y})=(\sqrt{1-\omega^{\perp}})^{n-|\boldsymbol{y}|}\left(\sqrt{\frac{\omega^{\perp}}{q-1}}\right)^{|\boldsymbol{y}|},\] with \(\omega^{\perp}=\frac{\left(\sqrt{(q-1)(1-\omega)}-\sqrt{\omega}\right)^{2}}{q}.\) For a fixed \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) and associated code \(\mathcal{C}\) (we will not make this dependency explicit in the notation to simplify it), we define \[n_{\boldsymbol{s},\mathcal{C}} \stackrel{{\triangle}}{{=}} \left\|\sum_{\boldsymbol{y}\in\mathcal{C}_{\boldsymbol{s}}^{\perp}}\hat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle\right\|\] \[a_{\boldsymbol{s},\mathcal{C}}(t) \stackrel{{\triangle}}{{=}} \left|\left\{\boldsymbol{y}\in\mathcal{C}_{\boldsymbol{s}}^{\perp}:|\boldsymbol{y}|=t\right\}\right|\] We also define \(S(t)\triangleq\frac{(q-1)^{t}\binom{n}{t}}{q^{k}}\). Notice that \(n_{\boldsymbol{s},\mathcal{C}}\) corresponds exactly to \(n_{\boldsymbol{s}}\) defined in the previous section but we made the dependency in \(\mathcal{C}\) explicit. Our goal is to compute the success probability of the PGM on average over \(\mathbf{G}\), so using Proposition 19, we want to bound the quantity \[\mathrm{P}_{\mathrm{PGM}}=\mathsf{E}_{\mathbf{G}}\left[\frac{1}{q^{k}}\left(\sum_{\boldsymbol{s}\in\mathbb{F}_{q}^{k}}n_{\boldsymbol{s},\mathcal{C}}\right)^{2}\right].\] We first write \[n_{\boldsymbol{s},\mathcal{C}}^{2}=\sum_{\boldsymbol{y}\in\mathcal{C}_{\boldsymbol{s}}^{\perp}}|\hat{f}(\boldsymbol{y})|^{2}=\sum_{t=0}^{n}a_{\boldsymbol{s},\mathcal{C}}(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t} \tag{20}\] and recall from Remark 2 that \(\sum_{\boldsymbol{s}\in\mathbb{F}_{q}^{k}}n_{\boldsymbol{s},\mathcal{C}}^{2}=1\). We see that to compute \(\mathrm{P}_{\mathrm{PGM}}\), we have to say something about the terms \(a_{\boldsymbol{s},\mathcal{C}}(t)\). We first have the following, which was proven for example in [10]:

**Proposition 20**.: \(\forall t\neq 0,\ \mathsf{E}_{\mathbf{G}}\left[a_{\boldsymbol{s},\mathcal{C}}(t)\right]=S(t).\)__

But the expected value will not be enough; we will need concentration bounds coming from the second moment technique.

**Proposition 21** (Second moment technique, Proposition 3 from [10]).: _Fix any \(\boldsymbol{s}\in\mathbb{F}_{q}^{k}\) and \(t\in\llbracket 1,n\rrbracket\). For any \(\varepsilon>0\), we have_ \[\Pr_{G}\left[|a_{\boldsymbol{s},\mathcal{C}}(t)-S(t)|\geq\varepsilon S(t)\right]\leq\frac{q-1}{\varepsilon^{2}S(t)}.\] _In particular, taking \(\varepsilon=S(t)^{-1/3}\), we have_ \[\Pr_{G}\left[a_{\boldsymbol{s},\mathcal{C}}(t)\leq S(t)(1-\frac{1}{S(t)^{1/3}})\right]\leq\frac{q-1}{S(t)^{1/3}}.\]

We can now observe two things: 1. From the above proposition combined with Equation 20, we have that when \(S(t)\) is exponential, which happens when \(t=\gamma n\) with \(\gamma\in(\delta_{\min}(1-R),\delta_{\max}(1-R))\), \[n_{\boldsymbol{s},\mathcal{C}}^{2}\approx\sum_{t=\lfloor\delta_{\min}(1-R)n\rfloor}^{\lceil\delta_{\max}(1-R)n\rceil}S(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}.\] (21) 2.
In order to estimate the above sum, first notice that \[\sum_{t=0}^{n}S(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}=\frac{1}{q^{k}}\sum_{t=0}^{n}\binom{n}{t}(\omega^{\perp})^{t}(1-\omega^{\perp})^{n-t}=\frac{1}{q^{k}}.\] (22) But the above sum is actually the cumulative sum of the binomial distribution with parameters \(n\) and \(\omega^{\perp}\). It concentrates around the weight \(n\omega^{\perp}\). This is formalized by the following proposition.

**Proposition 22**.: _For any absolute constant \(\varepsilon>0\),_ \[\sum_{t=\lfloor(\omega^{\perp}-\varepsilon)n\rfloor}^{\lceil(\omega^{\perp}+\varepsilon)n\rceil}S(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}=\frac{1}{q^{k}}\left(1-o(1)\right). \tag{23}\]

We now have all the tools for our (in)tractability proofs. The main idea is the following: when \(\omega<(\delta_{\min}(1-R))^{\perp}\), we have \(\omega^{\perp}\in(\delta_{\min}(1-R),\delta_{\max}(1-R))\) and so we can combine Equations 21 and 23 to show that for most \(\mathbf{G}\), \(n_{\mathbf{s},\mathcal{C}}^{2}=\frac{1}{q^{k}}(1-o(1))\). On the other hand, when \(\omega>(\delta_{\min}(1-R))^{\perp}\), we have \(\omega^{\perp}\notin(\delta_{\min}(1-R),\delta_{\max}(1-R))\) and so we can combine Equations 21, 22 and 23 to show that for most \(\mathbf{G}\), \(n_{\mathbf{s},\mathcal{C}}^{2}=o(1)\). The next sections will make these arguments formal and show how this allows us to conclude.

#### 5.2.2 Tractability

We use the notations previously defined in Section 5.2.1. \(\omega\) will be considered as a fixed constant in \((0,1)\). Our main claim is the following

**Proposition 23**.: _If \(\omega<(\delta_{\min}(1-R))^{\perp}\) then \(\mathrm{P}_{\mathrm{PGM}}=\frac{1}{q^{k}}\mathsf{E}_{\mathbf{G}}\left[\left(\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s},\mathcal{C}}\right)^{2}\right]=1-o(1).\)_

Proof.: Using (20) we know that \(n_{\mathbf{s},\mathcal{C}}^{2}=\sum_{t=0}^{n}a_{\mathbf{s},\mathcal{C}}(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}.\) Since \(\omega<(\delta_{\min}(1-R))^{\perp}\), we have \(\omega^{\perp}>\delta_{\min}(1-R)\), so we fix \(\delta>0\) s.t. \(\omega^{\perp}-\delta>\delta_{\min}(1-R).\) We therefore write \[n_{\mathbf{s},\mathcal{C}}^{2}\geq\sum_{t=\lfloor(\omega^{\perp}-\delta)n\rfloor}^{\lfloor(\omega^{\perp}+\delta)n\rfloor}a_{\mathbf{s},\mathcal{C}}(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}.\] We define \(t_{0}=\lfloor(\omega^{\perp}-\delta)n\rfloor\) and \(t_{1}=\lfloor(\omega^{\perp}+\delta)n\rfloor\). Recall that \(a_{\mathbf{s},\mathcal{C}}(t)\) is typically close to \(S(t)\) as shown by Proposition 21. This gives for \(t\in\llbracket t_{0},t_{1}\rrbracket\) \[\Pr_{G}\left[a_{\mathbf{s},\mathcal{C}}(t)\leq S(t)(1-\frac{1}{S(t_{0})^{1/3}})\right] \leq\Pr_{G}\left[a_{\mathbf{s},\mathcal{C}}(t)\leq S(t)(1-\frac{1}{S(t)^{1/3}})\right]\leq\frac{q-1}{S(t)^{1/3}}\leq\frac{q-1}{S(t_{0})^{1/3}}\] and \[\Pr_{G}\left[\forall t\in\llbracket t_{0},t_{1}\rrbracket,\ a_{\mathbf{s},\mathcal{C}}(t)\geq S(t)(1-\frac{1}{S(t_{0})^{1/3}})\right]\geq 1-\frac{(q-1)(t_{1}-t_{0}+1)}{S(t_{0})^{1/3}}\] This implies \[\Pr_{G}\left[n_{\mathbf{s},\mathcal{C}}^{2}\geq\sum_{t=t_{0}}^{t_{1}}S(t)(1-S(t_{0})^{-1/3})\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}\right]\geq 1-\frac{(q-1)(t_{1}-t_{0}+1)}{S(t_{0})^{1/3}}. \tag{24}\]
We have \[\sum_{t=t_{0}}^{t_{1}}S(t)(1-S(t_{0})^{-1/3})\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t} =(1-S(t_{0})^{-1/3})\sum_{t=t_{0}}^{t_{1}}S(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}=\frac{(1-S(t_{0})^{-1/3})}{q^{k}}K(n)\] where \(K(n)\stackrel{{\triangle}}{{=}}\sum_{t=t_{0}}^{t_{1}}\binom{n}{t}\left(\omega^{\perp}\right)^{t}(1-\omega^{\perp})^{n-t}\) and \(K(n)=1-o(1)\) as \(n\) tends to infinity by Proposition 22. By plugging this equality in the left-hand side of (24) we obtain \[\Pr_{G}\left[n_{\mathbf{s},\mathcal{C}}\geq\sqrt{\frac{1-S(t_{0})^{-1/3}}{q^{k}}K(n)}\right]\geq 1-\frac{(q-1)(t_{1}-t_{0}+1)}{S(t_{0})^{1/3}}.\] Since \(t_{0}=\lfloor(\omega^{\perp}-\delta)n\rfloor\) with \(\delta_{\min}(1-R)<(\omega^{\perp}-\delta)<\delta_{\max}(1-R)\), we have \(S(t_{0})=q^{\Omega(n)}\), which implies \[\mathsf{E}_{G}[n_{\mathbf{s},\mathcal{C}}] \geq\sqrt{\frac{1-S(t_{0})^{-1/3}}{q^{k}}K(n)}\cdot\Pr_{G}\left[n_{\mathbf{s},\mathcal{C}}\geq\sqrt{\frac{1-S(t_{0})^{-1/3}}{q^{k}}K(n)}\right] \geq\sqrt{\frac{1-S(t_{0})^{-1/3}}{q^{k}}K(n)}\cdot\left(1-\frac{(q-1)(t_{1}-t_{0}+1)}{S(t_{0})^{1/3}}\right) =\frac{1}{\sqrt{q^{k}}}\left(1-o(1)\right)\] which gives \[\mathsf{E}_{G}[\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s},\mathcal{C}}]\geq\sqrt{q^{k}}(1-o(1)) \tag{25}\] In order to conclude, we use Jensen's inequality \(\mathsf{E}_{G}(X^{2})\geq(\mathsf{E}_{G}(X))^{2}\) and Equation 25 to get \[\mathrm{P}_{\mathrm{PGM}}=\frac{1}{q^{k}}\mathsf{E}_{G}\left[\left(\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s},\mathcal{C}}\right)^{2}\right]\geq\frac{1}{q^{k}}\left(\mathsf{E}_{G}\left[\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s},\mathcal{C}}\right]\right)^{2}\geq 1-o(1)\]

#### 5.2.3 Intractability

Again, we use the same notation as in the previous sections with a fixed \(\omega\in(0,1)\). We show that if \(\omega\) is too large then \(P\mathop{\stackrel{{\triangle}}{{=}}}\mathsf{E}_{\mathbf{G}}(\mathrm{P}_{\mathrm{PGM}})\) is \(o(1)\), as shown by

**Theorem 7**.: _Let \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\) be a random generating matrix and \(\mathcal{C}\) be the associated code. Let \(\omega>\left(\delta_{\min}(1-R)\right)^{\perp}\) and let the states \(|\psi_{\mathbf{c}}\rangle=\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})|\mathbf{c}+\mathbf{e}\rangle\) with_ \[f(\mathbf{e})=\sqrt{\left(\frac{\omega}{q-1}\right)^{|\mathbf{e}|}(1-\omega)^{n-|\mathbf{e}|}}.\] _The pretty good measurement distinguishes the states \(|\psi_{\mathbf{c}}\rangle\) w.p. \(P=o(1)\)._

Again, we will heavily build on the expression of \(n_{\boldsymbol{s},\mathcal{C}}\) given by (20) in terms of the \(a_{\boldsymbol{s},\mathcal{C}}(t)\)'s. The proof is based on the following steps.

Step 1. Let us start by giving an upper-bound on \(a_{\boldsymbol{s},\mathcal{C}}(t)\) which holds with probability close to \(1\) for large values of \(K\).

**Lemma 8**.: _For any \(K>0\), any \(t\) in \(\llbracket 1,n\rrbracket\) and any \(\boldsymbol{s}\in\mathbb{F}_{q}^{k}\), we have \(\Pr_{\mathbf{G}}\left[a_{\boldsymbol{s},\mathcal{C}}(t)\geq K\cdot S(t)\right]\leq\frac{1}{K}\), which directly implies_ \[\Pr_{\mathbf{G}}\left[a_{\boldsymbol{s},\mathcal{C}}(t)\leq K\cdot S(t)\right]\geq 1-\frac{1}{K}. \tag{26}\]
Proof.: This is just Markov's inequality \(\Pr_{\mathbf{G}}\left[a_{\boldsymbol{s},\mathcal{C}}(t)\geq K\mathsf{E}_{\mathbf{G}}(a_{\boldsymbol{s},\mathcal{C}}(t))\right]\leq\frac{1}{K}\), recalling that \(\mathsf{E}_{\mathbf{G}}(a_{\boldsymbol{s},\mathcal{C}}(t))=S(t)\).

A rather immediate corollary of this result is

**Corollary 1**.: _For any \(\delta>0\), \(\boldsymbol{s}\) in \(\mathbb{F}_{q}^{k}\) and \(t\) in \(\llbracket 1,n\rrbracket\setminus[(\delta_{\min}(1-R)-\delta)n,(\delta_{\max}(1-R)+\delta)n]\), we have with probability \(1-q^{-\Omega(n)}\) that \(a_{\boldsymbol{s},\mathcal{C}}(t)=0\)._

Proof.: In such a case we have \(S(t)=q^{-\Omega(n)}\), since \(S(t)\leq q^{n(h_{q}(t/n)-k/n)}\), and we use Lemma 8 with \(K=\frac{1}{\sqrt{S(t)}}\) to obtain \[\Pr_{\mathbf{G}}\left[a_{\boldsymbol{s},\mathcal{C}}(t)\leq\sqrt{S(t)}\right]\geq 1-\sqrt{S(t)}=1-q^{-\Omega(n)}.\] We can conclude by using the fact that \(a_{\boldsymbol{s},\mathcal{C}}(t)\) is a non-negative integer, so if \(a_{\boldsymbol{s},\mathcal{C}}(t)\leq\sqrt{S(t)}<1\) then necessarily \(a_{\boldsymbol{s},\mathcal{C}}(t)=0\).

Step 2. The previous results allow us to show

**Lemma 9**.: _Let \(\omega>\left(\delta_{\min}(1-R)\right)^{\perp}\). There exists an \(\varepsilon>0\) such that for any non-zero \(\boldsymbol{s}\in\mathbb{F}_{q}^{k}\) we have_ \[\Pr_{\mathbf{G}}\left[n_{\boldsymbol{s},\mathcal{C}}\geq q^{-k/2-\varepsilon n}\right]=q^{-\Omega(n)}.\]

Proof.: Let us recall (20) \[n_{\boldsymbol{s},\mathcal{C}}^{2}=\sum_{\boldsymbol{y}\in\mathcal{C}_{\boldsymbol{s}}^{\perp}}|\hat{f}(\boldsymbol{y})|^{2}=\sum_{t=0}^{n}a_{\boldsymbol{s},\mathcal{C}}(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}.\] By using Corollary 1 and \(a_{\mathbf{s},\mathcal{C}}(0)=0\) for \(\mathbf{s}\neq 0\), we obtain that for any absolute constant \(\delta>0\), \[\forall\mathbf{s}\in\mathbb{F}_{q}^{k}\backslash\{\mathbf{0}\},\ \Pr_{\mathbf{G}}\left[n_{\mathbf{s},\mathcal{C}}^{2}=\sum_{t=\lfloor(\delta_{\min}-\delta)n\rfloor}^{\lceil(\delta_{\max}+\delta)n\rceil}a_{\mathbf{s},\mathcal{C}}(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}\right]\geq 1-q^{-\Omega(n)}, \tag{27}\] where to simplify notation we simply write \(\delta_{\min}\) and \(\delta_{\max}\) for \(\delta_{\min}(1-R)\) and \(\delta_{\max}(1-R)\) respectively. Then by using Lemma 8 we also deduce that for any \(\delta,\delta^{\prime}>0\), \[\forall\mathbf{s}\in\mathbb{F}_{q}^{k}\backslash\{\mathbf{0}\},\ \Pr_{\mathbf{G}}\left[n_{\mathbf{s},\mathcal{C}}^{2}\leq\sum_{t=\lfloor(\delta_{\min}-\delta)n\rfloor}^{\lceil(\delta_{\max}+\delta)n\rceil}q^{\delta^{\prime}n}S(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}\right]\geq 1-q^{-\Omega(n)}\] We observe now that \[\sum_{t=\lfloor(\delta_{\min}-\delta)n\rfloor}^{\lceil(\delta_{\max}+\delta)n\rceil}q^{\delta^{\prime}n}S(t)\left(\frac{\omega^{\perp}}{q-1}\right)^{t}(1-\omega^{\perp})^{n-t}=q^{\delta^{\prime}n-k}\sum_{t=\lfloor(\delta_{\min}-\delta)n\rfloor}^{\lceil(\delta_{\max}+\delta)n\rceil}p(t) \tag{28}\] where \(p(t)\stackrel{{\triangle}}{{=}}\binom{n}{t}(\omega^{\perp})^{t}(1-\omega^{\perp})^{n-t}\) is the probability that a binomial variable of parameters \(n\) and \(\omega^{\perp}\) takes the value \(t\).
By using the fact that \(\omega^{\perp}\leq\delta_{\min}-\delta^{\ast}\) for some \(\delta^{\ast}>0\) and the Hoeffding inequality (see Lemma 1), we deduce that for \(\delta=\delta^{\ast}/2\), it holds that \(\sum_{t=\lfloor(\delta_{\min}-\delta)n\rfloor}^{\lceil(\delta_{\max}+\delta)n\rceil}p(t)\leq q^{-\delta^{\prime\prime\prime}n}\) for some \(\delta^{\prime\prime\prime}>0\). By choosing \(\delta^{\prime}<\delta^{\prime\prime\prime}\), we obtain that \(n_{\mathbf{s},\mathcal{C}}\) is less than \(q^{-k/2-\frac{\delta^{\prime\prime\prime}-\delta^{\prime}}{2}n}\) with probability \(1-q^{-\Omega(n)}\). We just have to choose \(\varepsilon=(\delta^{\prime\prime\prime}-\delta^{\prime})/2\) to finish the proof.

We are now ready to prove Theorem 7.

Proof of Theorem 7.: For \(\mathbf{s}\in\mathbb{F}_{q}^{k}\), let \(G_{\varepsilon}(\mathbf{s})=\{\mathbf{G}\in\mathbb{F}_{q}^{k\times n}:n_{\mathbf{s},\mathcal{C}}\geq q^{-k/2-\varepsilon n}\}\). Also, for \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\), let \(S_{\varepsilon}(\mathbf{G})=\{\mathbf{s}\neq 0:n_{\mathbf{s},\mathcal{C}}\geq q^{-k/2-\varepsilon n}\}\). The previous lemma tells us that \(\forall\mathbf{s}\neq\mathbf{0},|G_{\varepsilon}(\mathbf{s})|=o(|G|)\) where \(|G|=q^{nk}\) is the total number of possible matrices \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\). Now, notice that \[\sum_{\mathbf{s}\neq 0}|G_{\varepsilon}(\mathbf{s})|=\left|\{(\mathbf{G},\mathbf{s}):n_{\mathbf{s},\mathcal{C}}\geq q^{-k/2-\varepsilon n}\}\right|=\sum_{\mathbf{G}}|S_{\varepsilon}(\mathbf{G})|.\] This implies that \(\sum_{\mathbf{G}}|S_{\varepsilon}(\mathbf{G})|=o(|G|q^{k})\) and \(\mathsf{E}_{\mathbf{G}}[|S_{\varepsilon}(\mathbf{G})|]=o(q^{k})\). Now fix \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\). We write \[\left(\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s},\mathcal{C}}\right)^{2} =\left(n_{\mathbf{0},\mathcal{C}}+\sum_{\mathbf{s}\in S_{\varepsilon}(\mathbf{G})}n_{\mathbf{s},\mathcal{C}}+\sum_{\mathbf{s}\notin S_{\varepsilon}(\mathbf{G}),\mathbf{s}\neq\mathbf{0}}n_{\mathbf{s},\mathcal{C}}\right)^{2}\leq 3n_{\mathbf{0},\mathcal{C}}^{2}+3\left(\sum_{\mathbf{s}\in S_{\varepsilon}(\mathbf{G})}n_{\mathbf{s},\mathcal{C}}\right)^{2}+3\left(\sum_{\mathbf{s}\notin S_{\varepsilon}(\mathbf{G}),\mathbf{s}\neq\mathbf{0}}n_{\mathbf{s},\mathcal{C}}\right)^{2} \tag{29}\] \[\leq 3n_{\mathbf{0},\mathcal{C}}^{2}+3|S_{\varepsilon}(\mathbf{G})|\sum_{\mathbf{s}\neq\mathbf{0}}n_{\mathbf{s},\mathcal{C}}^{2}+3\left(q^{k/2-\varepsilon n}\right)^{2}\] (30) \[\leq 3|S_{\varepsilon}(\mathbf{G})|+o(q^{k}) \tag{31}\] Here (29) follows from the inequality \((x+y+z)^{2}\leq 3x^{2}+3y^{2}+3z^{2}\) (which can be proved by noticing that \(3x^{2}+3y^{2}+3z^{2}-(x+y+z)^{2}=(x-y)^{2}+(y-z)^{2}+(x-z)^{2}\)). (30) follows from the Cauchy-Schwarz inequality, and (31) is a consequence of \(\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s},\mathcal{C}}^{2}=1\), which also gives \(n_{\mathbf{0},\mathcal{C}}^{2}\leq 1\). In order to conclude, we write \[P=\mathsf{E}_{\mathbf{G}}\left[\frac{1}{q^{k}}\left(\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s},\mathcal{C}}\right)^{2}\right]\leq\frac{1}{q^{k}}\left(3\mathsf{E}_{\mathbf{G}}[|S_{\varepsilon}(\mathbf{G})|]+o(q^{k})\right)=o(1).\]

## 6 From the quantum decoding problem to the short codeword problem

In this section, we show how to apply our algorithm for the quantum decoding problem within Regev's reduction in order to obtain quantum algorithms for the short codeword problem. We fix \(n,k^{\prime},q\in\mathbb{N}\) with \(q\geq 2\) as well as \(\omega^{\prime}\in(0,1)\).
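Throughout this section we will repeatedly use that \(\omega\mapsto\omega^{\perp}\) is a decreasing involution on \((0,\frac{q-1}{q})\), so that \(\omega^{\perp}=\omega^{\prime}\) for the choice of \(\omega\) made below. A quick numeric sanity check of this fact (a sketch; the tested values are arbitrary):

```python
# Check numerically that w -> w_perp is a decreasing involution on
# (0, (q-1)/q), using the formula for omega_perp from Section 5.2.1.
import math

def perp(w, q):
    return (math.sqrt((q - 1) * (1 - w)) - math.sqrt(w)) ** 2 / q

for q in (2, 3, 7, 101):
    for w in (0.01, 0.1, 0.3, (q - 1) / q - 0.01):
        assert abs(perp(perp(w, q), q) - w) < 1e-9   # (w_perp)_perp = w
        assert perp(w, q) < perp(w / 2, q)           # decreasing in w
print("perp is a decreasing involution for all tested values")
```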
We start from a random instance \(\mathbf{G}^{\prime}\in\mathbb{F}_{q}^{k^{\prime}\times n}\) of \(\mathrm{SCP}(q,n,k^{\prime},\omega^{\prime})\). Let \(\mathcal{C}^{\prime}\) be the code associated to \(\mathbf{G}^{\prime}\) and \(\mathcal{C}=(\mathcal{C}^{\prime})^{\perp}\) the dual code of \(\mathcal{C}^{\prime}\). The idea will be to solve a quantum decoding problem associated to \(\mathcal{C}\), _i.e._ from the state \(|\psi_{\boldsymbol{c}}\rangle\stackrel{{\triangle}}{{=}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}+\boldsymbol{e}\rangle\) where \(\boldsymbol{c}\) belongs to \(\mathcal{C}\), we want to recover \(\boldsymbol{c}\). Then we apply the quantum Fourier transform and measure in the computational basis to obtain a short codeword of \(\mathcal{C}^{\prime}\). We also define \(k=n-k^{\prime}\) and \[\omega\stackrel{{\triangle}}{{=}}(\omega^{\prime})^{\perp}=\frac{\left(\sqrt{(q-1)(1-\omega^{\prime})}-\sqrt{\omega^{\prime}}\right)^{2}}{q}\quad;\quad f(\boldsymbol{e})\stackrel{{\triangle}}{{=}}\left(\sqrt{\frac{\omega}{q-1}}\right)^{|\boldsymbol{e}|}\left(\sqrt{1-\omega}\right)^{n-|\boldsymbol{e}|}\] Recall also, using \(\omega^{\perp}=\omega^{\prime}\), that \[\widehat{f}(\boldsymbol{y})=\left(\sqrt{\frac{\omega^{\prime}}{q-1}}\right)^{|\boldsymbol{y}|}\left(\sqrt{1-\omega^{\prime}}\right)^{n-|\boldsymbol{y}|}.\]

Remark. We use the notation \(k^{\prime},\omega^{\prime}\) so that the problem we reduce to is a \(\mathrm{QDP}(q,n,k,\omega)\) with a generating matrix \(\mathbf{G}\in\mathbb{F}_{q}^{k\times n}\). This allows us to keep notation consistent with the previous section, but be aware that the Short Codeword problem we are solving is on \(\mathcal{C}^{\prime}=\mathcal{C}^{\perp}\).

### 6.1 Regev's reduction for codes

We now describe Regev's reduction for codes. As we will see, this does not necessarily give a reduction from the short codeword problem to the quantum decoding problem because of the small error in the quantum decoding algorithm. We consider the formulation of this reduction from [10], adapted in [11] in the context of codes. We first construct \[|\Omega_{0}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}\rangle|\boldsymbol{e}\rangle\] and add \(\boldsymbol{c}\) to the second register to obtain \[|\Omega_{1}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}\rangle|\psi_{\boldsymbol{c}}\rangle,\qquad\qquad\text{where }|\psi_{\boldsymbol{c}}\rangle=\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}+\boldsymbol{e}\rangle.\] The idea is then to recover \(\boldsymbol{c}\) from \(|\psi_{\boldsymbol{c}}\rangle\) using an algorithm for the quantum decoding problem. If this can be done perfectly, we can actually use this algorithm to erase the first register and obtain the state \[|\Omega_{2}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\psi_{\boldsymbol{c}}\rangle.\] We then apply the Quantum Fourier Transform on this state to get \[|\Omega_{3}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\widehat{\psi_{\boldsymbol{c}}}\rangle=\sqrt{|\mathcal{C}|}\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime}}\widehat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle.\] This follows from Proposition 11. Finally, we measure this state in the computational basis and hope to find a small codeword.
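Before the summary below, the weight distribution induced by this ideal final state can be made concrete on a toy instance: measuring \(\sqrt{|\mathcal{C}|}\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime}}\widehat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle\) outputs each \(\boldsymbol{y}\in\mathcal{C}^{\prime}\) with probability proportional to \(\widehat{f}(\boldsymbol{y})^{2}\). The following sketch (parameters purely illustrative) enumerates a small code \(\mathcal{C}^{\prime}\) and prints this distribution over weights; it concentrates around \(\omega^{\prime}n\) here, while for \(\omega^{\prime}\) below roughly \(1-q^{-k/n}\) the zero codeword would dominate instead, an issue we come back to in Section 6.3:

```python
# Toy illustration of the ideal end state of Regev's reduction:
# Pr[measure y] is proportional to f_hat(y)^2 over y in C'.
from itertools import product
from collections import Counter

q, n, kp = 3, 8, 3                       # C' generated by a k' x n matrix G'
omega_p = 0.6                            # omega', above 1 - q^(-k/n) ~ 0.50
Gp = [[1, 0, 0, 1, 2, 0, 1, 2],
      [0, 1, 0, 2, 1, 1, 0, 1],
      [0, 0, 1, 1, 0, 2, 2, 1]]          # an arbitrary full-rank example

def f_hat_sq(w):                         # |f_hat(y)|^2 for a word of weight w
    return (omega_p / (q - 1)) ** w * (1 - omega_p) ** (n - w)

codewords = set()                        # enumerate C' = row span of G'
for coeffs in product(range(q), repeat=kp):
    codewords.add(tuple(sum(a * g for a, g in zip(coeffs, col)) % q
                        for col in zip(*Gp)))

mass = Counter()                         # a(t) * f_hat(t)^2, cf. Eq. (33) below
for y in codewords:
    t = sum(v != 0 for v in y)
    mass[t] += f_hat_sq(t)

total = sum(mass.values())
for t in sorted(mass):
    print(f"weight {t}: Pr = {mass[t] / total:.4f}")
print(f"omega' * n = {omega_p * n:.1f}")
```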
The algorithm can be summarized by

**Algorithm of the quantum reduction.**

\[\begin{array}{ll}\text{Initial state preparation}:&|\Omega_{0}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}\rangle|\boldsymbol{e}\rangle\\ \text{adding }\boldsymbol{c}\text{ to }\boldsymbol{e}:&|\Omega_{1}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}\rangle|\boldsymbol{c}+\boldsymbol{e}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}\rangle|\psi_{\boldsymbol{c}}\rangle\\ \text{decoding and erasing 1st register}:&|\Omega_{2}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\mathbf{0}\rangle|\psi_{\boldsymbol{c}}\rangle\\ \text{QFT on the 2nd register}:&|\Omega_{3}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\mathbf{0}\rangle|\widehat{\psi_{\boldsymbol{c}}}\rangle=\sqrt{|\mathcal{C}|}\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime}}\widehat{f}(\boldsymbol{y})|\mathbf{0}\rangle|\boldsymbol{y}\rangle\\ \text{measuring the whole state}:&|\mathbf{0}\rangle|\boldsymbol{y}\rangle\ \ (\text{where }\boldsymbol{y}\in\mathcal{C}^{\prime}=\mathcal{C}^{\perp})\end{array}\]

There are two issues that can make the above algorithm not work as we want:

* The quantum decoding procedure used in order to go from \(|\Omega_{1}\rangle\) to \(|\Omega_{2}\rangle\) does not work perfectly in many cases. Even if we have an algorithm which works w.p. \(1-o(1)\), this can greatly change the state \(|\Omega_{3}\rangle\) that we have at the end5. This also means we have to make explicit each time our quantum decoding procedure and analyze thoroughly the resulting state. Footnote 5: This seems counterintuitive at first as we would expect the final state to be \(\varepsilon\)-close to the ideal state if the quantum decoding succeeds w.p. \(1-\varepsilon\). However, we are in regimes where an ideal quantum decoder does not exist so such continuity arguments will not hold. As it will appear in our analysis, it is possible to slightly tweak the measurements used in the Quantum Decoding Problem and greatly change the outcome state.
* Even if we obtain exactly the state \(\sqrt{|\mathcal{C}|}\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime}}\widehat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle\) we want, for values of \(\omega^{\prime}\) which are too small, this algorithm will actually almost always output \(\boldsymbol{y}=\boldsymbol{0}\), but we want a small non-zero codeword, so our algorithm will not work.

In this section, we show that our algorithms (or slight variants of our algorithms) can be successfully used in Regev's reduction in order to solve the Short Codeword Problem despite the above shortcomings. We show the following:

1. If we take our polynomial time algorithms for the quantum decoding problem (Section 4), we can find in quantum polynomial time small codewords down to Prange's bound, _i.e._ down to weight \(\frac{(n-k^{\prime})(q-1)}{q}=\frac{k(q-1)}{q}\). Notice however, that our algorithm obtains a state very far from the theoretical state \(|\Omega_{2}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\psi_{\boldsymbol{c}}\rangle\), but we still show how to obtain a small codeword in \(\mathcal{C}^{\perp}\) after performing the QFT and measuring.
2. If we consider the tractability regime and if we take the Pretty Good Measurement associated to the states \(|\widehat{\psi_{\boldsymbol{c}}}\rangle\), we show that we actually exactly get the state \(|\Omega_{2}\rangle\) we are looking for (up to a normalization factor). We then look at this PGM and 2 variants: 1.
If we finish the analysis with the PGM, we will most often be in regimes where we measure \(\mathbf{0}\) in the final step, so we will not be able to solve the Short Codeword problem. 2. We can slightly tweak the PGM so that it will give us a short codeword down to the tractability bound. 3. We also show another example where we can slightly tweak the PGM but where the reduction utterly fails, meaning that the state we obtain before measuring is \(|\perp\rangle\). This shows that there is no hope to perform a generic reduction between the quantum decoding problem and the short codeword problem with this method.

### 6.2 The quantum reduction with unambiguous state discrimination

We now show how to use our quantum polynomial time algorithms for the quantum decoding problem in Regev's reduction. We first construct \[|\Omega_{1}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle|\psi_{\mathbf{c}}\rangle,\qquad\qquad\text{where }|\psi_{\mathbf{c}}\rangle=\sum_{\mathbf{e}\in\mathbb{F}_{q}^{n}}f(\mathbf{e})|\mathbf{c}+\mathbf{e}\rangle.\] We then apply the unambiguous state discrimination measurement on \(|\psi_{\mathbf{c}}\rangle=\bigotimes_{i=1}^{n}|\psi_{c_{i}}\rangle\). Recall that by using the version of unambiguous state discrimination presented in Proposition 17 for each \(i\), we perform a unitary \(U\) on the \(i^{th}\) register of \(|\Omega_{1}\rangle\) that does the following for each \(c_{i}\in\mathbb{F}_{q}\): \[U|\psi_{c_{i}}\rangle|0\rangle=\sqrt{\mathrm{p_{usd}}}|c_{i}\rangle|0\rangle+\sqrt{1-\mathrm{p_{usd}}}|\widehat{0}\rangle|1\rangle.\] Here \[\mathrm{p_{usd}}=q\cdot\frac{\omega^{\perp}}{q-1}=\frac{q}{q-1}\omega^{\prime}. \tag{32}\] After applying this (coherent) USD, we obtain the state \[|\Omega_{2}\rangle =\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}\left(|\mathbf{c}\rangle\otimes\bigotimes_{i=1}^{n}\left(\sqrt{\mathrm{p_{usd}}}|c_{i}\rangle|0\rangle+\sqrt{1-\mathrm{p_{usd}}}|\widehat{0}\rangle|1\rangle\right)\right) =\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle\sum_{J\subseteq[n]}\beta_{J}|\widetilde{\mathbf{c}}_{J}\rangle.\] where \[|\widetilde{\mathbf{c}}_{J}\rangle=\bigotimes_{i=1}^{n}|\gamma_{i}\rangle\quad\text{ with }\left\{\begin{array}{ll}|\gamma_{i}\rangle=|c_{i}\rangle|0\rangle&\text{ if }i\in J\\ |\gamma_{i}\rangle=|\widehat{0}\rangle|1\rangle&\text{ otherwise }\end{array}\right.\] and \(\beta_{J}=\sqrt{(1-\mathrm{p_{usd}})^{n-|J|}(\mathrm{p_{usd}})^{|J|}}\). \(J\) here corresponds to the set of indices where the USD succeeded. Notice that one can efficiently recover \(J\) from \(|\widetilde{\mathbf{c}}_{J}\rangle\) by looking at the outcome registers, so we can add it to obtain the state \[|\Omega_{3}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle\sum_{J\subseteq[n]}\beta_{J}|\widetilde{\mathbf{c}}_{J}\rangle|J\rangle.\] We now measure \(J\) to obtain the state \[|\Omega_{4}(J)\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle|\widetilde{\mathbf{c}}_{J}\rangle.\] Notice that \(|J|\) follows the distribution \(D(\mathrm{p_{usd}})\) with \(\mathrm{p_{usd}}>R\), so there exists an absolute constant \(\varepsilon>0\) s.t. \((R+\varepsilon)n\leq|J|\leq\mathrm{p_{usd}}\cdot n\) w.p. at least \(\frac{1}{2}-o(1)\) (the probability that \(|J|\leq\mathrm{p_{usd}}\cdot n\) is at least \(\frac{1}{2}\)). Moreover, using the same argument as in Section 3.1, we can recover \(\mathbf{c}\) from \(\mathbf{c}_{J}\) w.p.
\(1-o(1)\). This means we can erase the register \(\mathbf{c}\) in \(|\Omega_{4}(J)\rangle\) to get \[|\Omega_{5}(J)\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}_{J}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}_{J}}|\mathbf{c}\rangle.\] We apply the Fourier transform on this state to get \[|\widehat{\Omega_{5}(J)}\rangle=\frac{1}{\sqrt{|(\mathcal{C}_{J})^{\perp}|}}\sum_{\boldsymbol{y}\in(\mathcal{C}_{J})^{\perp}}|\boldsymbol{y}\rangle.\] By measuring this state, we get a vector \(\boldsymbol{y}\in(\mathcal{C}_{J})^{\perp}\) of weight at most \(\frac{(q-1)|J|}{q}\leq\frac{(q-1)\mathrm{p_{usd}}\,n}{q}=\omega^{\prime}n\) w.p. \(\Theta(1)\). Here we used (32) for the last equality. The crux is that by Lemma 2 we have \[(\mathcal{C}_{J})^{\perp}=(\mathcal{C}^{\perp})^{J}.\] In other words, we get words in \(\mathcal{C}^{\perp}\) shortened at \(J\), meaning dual codewords that are \(0\) outside \(J\). We have therefore constructed a word \(\boldsymbol{z}\in\mathcal{C}^{\perp}\) s.t. \(\boldsymbol{z}_{j}=\boldsymbol{y}_{j}\) if \(j\in J\) and \(\boldsymbol{z}_{j}=0\) otherwise. In conclusion, we just proved the following

**Theorem 8**.: _The above algorithm, which performs Regev's reduction and uses unambiguous state discrimination for the quantum decoding problem, solves in polynomial time \(\mathrm{SCP}(q,n,k^{\prime},\omega^{\prime})\) for \(\omega^{\prime}n>\frac{(q-1)k}{q}=\frac{(q-1)(n-k^{\prime})}{q}\) w.p. \(\Theta(1)\)._

Notice that we can repeat this algorithm to amplify the success probability. Our algorithm can go down to Prange's bound \(\frac{(q-1)(n-k^{\prime})}{q}\) on the weight, which is the best known bound for polynomial time algorithms for the short codeword problem. The whole algorithm is summarized by:

**Algorithm of the quantum reduction in the case of USD.**

Initial state preparation: \[|\Omega_{0}\rangle = \frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}\rangle|\boldsymbol{e}\rangle\] adding \(\boldsymbol{c}\) to \(\boldsymbol{e}\): \[\mapsto |\Omega_{1}\rangle = \frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}\sum_{\boldsymbol{e}\in\mathbb{F}_{q}^{n}}f(\boldsymbol{e})|\boldsymbol{c}\rangle|\boldsymbol{c}+\boldsymbol{e}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}\rangle|\psi_{\boldsymbol{c}}\rangle\] applying coherent USD: \[\mapsto |\Omega_{2}\rangle = \frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}\left(|\boldsymbol{c}\rangle\otimes\bigotimes_{i=1}^{n}\left(\sqrt{\mathrm{p_{usd}}}|c_{i}\rangle|0\rangle+\sqrt{1-\mathrm{p_{usd}}}|\widehat{0}\rangle|1\rangle\right)\right)=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}\rangle\sum_{J\subseteq[n]}\beta_{J}|\widetilde{\boldsymbol{c}}_{J}\rangle\] put \(J\) in the last register using \(|\widetilde{\boldsymbol{c}}_{J}\rangle\): \[\mapsto |\Omega_{3}\rangle = \frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}\rangle\sum_{J\subseteq[n]}\beta_{J}|\widetilde{\boldsymbol{c}}_{J}\rangle|J\rangle\] measure \(J\): \[\mapsto |\Omega_{4}\rangle = \frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}\rangle|\widetilde{\boldsymbol{c}}_{J}\rangle\] erase
\(\boldsymbol{c}\): \[\mapsto |\Omega_{5}\rangle = \frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}}|\boldsymbol{c}_{J}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\boldsymbol{c}\in\mathcal{C}_{J}}|\boldsymbol{c}\rangle\] QFT: \[\mapsto |\Omega_{6}\rangle = \frac{1}{\sqrt{|(\mathcal{C}_{J})^{\perp}|}}\sum_{\boldsymbol{y}\in(\mathcal{C}_{J})^{\perp}}|\boldsymbol{y}\rangle\] measuring the whole state: \[\mapsto |\boldsymbol{y}\rangle\ \ (\text{where }\boldsymbol{y}\in(\mathcal{C}_{J})^{\perp}=(\mathcal{C}^{\perp})^{J}\subset\mathcal{C}^{\perp})\]

### 6.3 The quantum reduction with the Pretty Good Measurement

We now study Regev's reduction when we use the PGM for the quantum decoding problem. We consider the basis \(\{|Y_{\mathbf{c}}\rangle\}_{\mathbf{c}\in\mathcal{C}}\) described in the previous section, associated to the states \(\{|\widehat{\psi_{\mathbf{c}}}\rangle\}_{\mathbf{c}\in\mathcal{C}}\). We showed that \[\forall\mathbf{c}\in\mathcal{C},\ \langle\widehat{\psi_{\mathbf{c}}}|Y_{\mathbf{c}}\rangle=\sqrt{\mathrm{P}_{\mathrm{PGM}}}\] where \(\mathrm{P}_{\mathrm{PGM}}\) is the probability that the Pretty Good Measurement succeeds. We now unfold Regev's reduction. We start from the state \(|\Omega_{1}\rangle\) with a slight change: namely, we immediately apply the QFT on the second register to get \[|\Omega_{1}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle|\widehat{\psi_{\mathbf{c}}}\rangle\] with \(|\psi_{\mathbf{c}}\rangle=\sum_{\mathbf{e}}f(\mathbf{e})|\mathbf{c}+\mathbf{e}\rangle\). We then perform coherently the PGM on the second register and write the output on the third register. This means that if we write each \(|\widehat{\psi_{\mathbf{c}}}\rangle=\sum_{\mathbf{c}^{\prime}\in\mathcal{C}}\alpha_{\mathbf{c},\mathbf{c}^{\prime}}|Y_{\mathbf{c}^{\prime}}\rangle\), we obtain \[|\Omega_{2}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle\sum_{\mathbf{c}^{\prime}\in\mathcal{C}}\alpha_{\mathbf{c},\mathbf{c}^{\prime}}|Y_{\mathbf{c}^{\prime}}\rangle|\mathbf{c}^{\prime}\rangle\] We then subtract the value of the third register from the first register to get \[|\Omega_{3}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c},\mathbf{c}^{\prime}\in\mathcal{C}}\alpha_{\mathbf{c},\mathbf{c}^{\prime}}|\mathbf{c}-\mathbf{c}^{\prime}\rangle|Y_{\mathbf{c}^{\prime}}\rangle|\mathbf{c}^{\prime}\rangle\] Finally, we reverse the PGM between registers 2 and 3 to obtain the state \[|\Omega_{4}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c},\mathbf{c}^{\prime}\in\mathcal{C}}\alpha_{\mathbf{c},\mathbf{c}^{\prime}}|\mathbf{c}-\mathbf{c}^{\prime}\rangle|Y_{\mathbf{c}^{\prime}}\rangle|\mathbf{0}\rangle\] From the discussion at the beginning of this section, we have that for any \(\mathbf{c}\in\mathcal{C}\), \(\alpha_{\mathbf{c},\mathbf{c}}=\sqrt{\mathrm{P}_{\mathrm{PGM}}}\).
This means we can rewrite the above state as \[|\Omega_{4}\rangle =\frac{1}{\sqrt{|\mathcal{C}|}}\left(\sum_{\mathbf{c}^{\prime}\in\mathcal{C}}\sqrt{\mathrm{P}_{\mathrm{PGM}}}|\mathbf{0}\rangle|Y_{\mathbf{c}^{\prime}}\rangle+\sum_{\mathbf{c},\mathbf{c}^{\prime}\neq\mathbf{c}}\alpha_{\mathbf{c},\mathbf{c}^{\prime}}|\mathbf{c}-\mathbf{c}^{\prime}\rangle|Y_{\mathbf{c}^{\prime}}\rangle\right) =\sqrt{\mathrm{P}_{\mathrm{PGM}}}|\mathbf{0}\rangle\left(\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|Y_{\mathbf{c}}\rangle\right)+\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c},\mathbf{c}^{\prime}\neq\mathbf{c}}\alpha_{\mathbf{c},\mathbf{c}^{\prime}}|\mathbf{c}-\mathbf{c}^{\prime}\rangle|Y_{\mathbf{c}^{\prime}}\rangle.\] The next step of the reduction is to measure the first register of \(|\Omega_{4}\rangle\). Since the states \(|Y_{\mathbf{c}^{\prime}}\rangle\) are orthogonal and of norm 1, we measure \(\mathbf{0}\) w.p. \(\mathrm{P}_{\mathrm{PGM}}\) in the first register and the second register becomes \[|\Omega_{5}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|Y_{\mathbf{c}}\rangle=|\widetilde{W}_{0}\rangle=\frac{1}{n_{0}}\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime}}\widehat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle\] where \(n_{0}\stackrel{{\triangle}}{{=}}\|\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime}}\widehat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle\|\). We measure this final state, hoping to obtain a small codeword. Let \(a(t)=|\{\boldsymbol{y}\in\mathcal{C}^{\prime}:|\boldsymbol{y}|=t\}|\). Note that this quantity corresponds to \(a_{\mathbf{0},\mathcal{C}^{\prime}}(t)\) as defined in Subsection 5.2.1. The probability \(p(t)\) that the above algorithm finds a word of weight \(t\) in \(\mathcal{C}^{\prime}\) is \[p(t)=\frac{1}{n_{0}^{2}}a(t)|\widehat{f}(t)|^{2}, \tag{33}\] where we overload the notation \(\widehat{f}\) to mean that \(\widehat{f}(t)=\widehat{f}(\boldsymbol{y})\) for any \(\boldsymbol{y}\) s.t. \(|\boldsymbol{y}|=t\) (recall that \(\widehat{f}(\boldsymbol{y})\) is constant over these \(\boldsymbol{y}\)). The issue here is that for \(\omega^{\prime}\) small enough, we will almost always measure \(\mathbf{0}\). Indeed, \[p_{0}=\frac{|\widehat{f}(0)|^{2}}{n_{0}^{2}}=\frac{|\widehat{f}(0)|^{2}}{\sum_{t}a(t)|\widehat{f}(t)|^{2}}=\frac{|\widehat{f}(0)|^{2}}{|\widehat{f}(0)|^{2}+\sum_{t\neq 0}a(t)|\widehat{f}(t)|^{2}}.\] Here, notice that \[|\widehat{f}(0)|^{2}=(1-\omega^{\prime})^{n}\] \[\Pr_{G}\left[\sum_{t\neq 0}a(t)|\widehat{f}(t)|^{2}\leq\frac{2}{q^{k}}\right]\geq 1-o(1)\] where for the last inequality, we use the concentration bounds for \(a(t)=a_{\mathbf{0},\mathcal{C}^{\prime}}(t)\) of Section 5.2.1. So when \(\omega^{\prime}<1-q^{-\frac{k}{n}}\), we measure \(\mathbf{0}\) with high probability. This unfortunately happens quite often, and it is a problem because in our short codeword problem, we want to find a small non-zero vector.

#### 6.3.1 A counterexample that shows complete failure

We show that things can go even worse when slightly changing the measurement used. We show that instead of measuring \(|\mathbf{0}\rangle\), we can measure some given state \(|\bot\rangle\) orthogonal to all the \(|\widetilde{W}_{\mathbf{s}}\rangle\). Recall from Proposition 18 that \[|Y_{\mathbf{c}}\rangle=\frac{1}{\sqrt{q^{k}}}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|\widetilde{W}_{\mathbf{s}}\rangle\] Also, the state resulting from Regev's reduction is the state \(\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|Y_{\mathbf{c}}\rangle=|\widetilde{W}_{0}\rangle\).
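Both this collapse onto \(|\widetilde{W}_{0}\rangle\) and the constant overlap \(\langle\widehat{\psi_{\mathbf{c}}}|Y_{\mathbf{c}}\rangle=\sqrt{\mathrm{P}_{\mathrm{PGM}}}\) used above can be verified numerically on a toy instance. The following sketch (all parameters are illustrative) builds the \(|W_{\mathbf{s}}\rangle\) and \(|Y_{\mathbf{c}}\rangle\) bases explicitly for a small code:

```python
# Toy numerical check of Lemma 7, Proposition 19 and the collapse
# (1/sqrt|C|) sum_c |Y_c> = |W_tilde_0>  (sketch; parameters illustrative).
import numpy as np
from itertools import product

q, n, k, omega = 3, 4, 2, 0.25
G = np.array([[1, 0, 1, 2],
              [0, 1, 2, 1]])                      # arbitrary rank-k example
pts = [np.array(x) for x in product(range(q), repeat=n)]
idx = {tuple(x): i for i, x in enumerate(pts)}
wt = lambda x: int(np.count_nonzero(x))
f = np.array([(omega / (q - 1)) ** (wt(x) / 2)
              * (1 - omega) ** ((n - wt(x)) / 2) for x in pts])

F1 = np.array([[np.exp(2j * np.pi * a * b / q) for b in range(q)]
               for a in range(q)]) / np.sqrt(q)
QFT = F1
for _ in range(n - 1):
    QFT = np.kron(QFT, F1)                        # QFT on n qudits
f_hat = QFT @ f

syn = [tuple(G @ x % q) for x in pts]             # syndromes G x^T = s
synd = sorted(set(syn))
n_s, Wt = {}, {}
for s in synd:                                    # |W_s>: f_hat on C_s^perp
    v = np.where([t == s for t in syn], f_hat, 0)
    n_s[s] = np.linalg.norm(v)
    Wt[s] = v / n_s[s]
u = {s: pts[syn.index(s)] for s in synd}          # coset representatives u_s

chi = lambda a, b: np.exp(2j * np.pi * int(np.dot(a, b) % q) / q)
code = [tuple(np.array(m) @ G % q) for m in product(range(q), repeat=k)]
P_pgm = sum(n_s.values()) ** 2 / q ** k
acc = np.zeros(q ** n, dtype=complex)
for c in code:
    psi_c = np.array([f[idx[tuple((x - np.array(c)) % q)]] for x in pts])
    Y_c = sum(chi(np.array(c), u[s]) * Wt[s] for s in synd) / np.sqrt(q ** k)
    assert abs(abs(np.vdot(Y_c, QFT @ psi_c)) ** 2 - P_pgm) < 1e-9
    acc += Y_c
assert np.allclose(acc / np.sqrt(q ** k), Wt[synd[0]])  # synd[0] is the 0 coset
print(f"P_PGM = {P_pgm:.4f} on this toy instance")
```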
Our modified measurement can give an extra outcome which will be an \(\mathbf{u}\in\mathbb{F}_{q}^{n}\backslash\mathcal{C}\) and we define \(\mathsf{S}=\mathcal{C}\cup\{\mathbf{u}\}\). Let \[|Z_{\mathbf{c}}\rangle \stackrel{{\triangle}}{{=}}\frac{1}{\sqrt{q^{k}}}\left(|\bot\rangle+\sum_{\mathbf{s}\neq\mathbf{0}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|\widetilde{W}_{\mathbf{s}}\rangle\right)\quad\forall\mathbf{c}\in\mathcal{C}\] \[|Z_{\mathbf{u}}\rangle \stackrel{{\triangle}}{{=}}|\widetilde{W}_{0}\rangle\] Notice that the \(|Z_{\mathbf{y}}\rangle\) are pairwise orthogonal and \(\text{span}(\{|Z_{\mathbf{y}}\rangle\}_{\mathbf{y}\in\mathsf{S}})=\text{span}(\{|\widetilde{W}_{\mathbf{s}}\rangle\}_{\mathbf{s}\in\mathbb{F}_{q}^{k}},|\bot\rangle)=\text{span}(\{|\widehat{\psi_{\mathbf{c}}}\rangle\}_{\mathbf{c}\in\mathcal{C}},|\bot\rangle)\). This means the measurement \(\{|Z_{\mathbf{y}}\rangle\}_{\mathbf{y}\in\mathsf{S}}\) will be complete when measuring any \(|\widehat{\psi_{\mathbf{c}}}\rangle\). Recall that \(|\widehat{\psi_{\mathbf{c}}}\rangle=\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|W_{\mathbf{s}}\rangle\), so we have \[\forall\mathbf{c}\in\mathcal{C},\ \langle\widehat{\psi_{\mathbf{c}}}|Z_{\mathbf{c}}\rangle=\frac{1}{\sqrt{q^{k}}}\sum_{\mathbf{s}\neq 0}n_{\mathbf{s}}=\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{n_{0}}{\sqrt{q^{k}}}\geq\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{1}{\sqrt{q^{k}}}\] where we use in the second equality that \(\sqrt{\mathrm{P}_{\mathrm{PGM}}}=\frac{1}{\sqrt{q^{k}}}\sum_{\mathbf{s}\in\mathbb{F}_{q}^{k}}n_{\mathbf{s}}\) and in the last inequality that \(n_{0}\leq 1\). This means the above measurement solves the quantum decoding problem w.p. \(\left(\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{n_{0}}{\sqrt{q^{k}}}\right)^{2}\geq\left(\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{1}{\sqrt{q^{k}}}\right)^{2}\), which is \(1-o(1)\) as long as \(\mathrm{P}_{\mathrm{PGM}}=1-o(1)\). Now, we perform the reduction presented in Section 6.3. We just rewrite the states of the reduction \[|\Omega_{1}\rangle =\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle|\widehat{\psi_{\mathbf{c}}}\rangle\] \[|\Omega_{2}\rangle =\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|\mathbf{c}\rangle\sum_{\mathbf{y}\in\mathsf{S}}\beta_{\mathbf{c},\mathbf{y}}|Z_{\mathbf{y}}\rangle|\mathbf{y}\rangle\qquad\text{where }\beta_{\mathbf{c},\mathbf{y}}=\langle\widehat{\psi_{\mathbf{c}}}|Z_{\mathbf{y}}\rangle\] \[|\Omega_{3}\rangle =\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C},\mathbf{y}\in\mathsf{S}}\beta_{\mathbf{c},\mathbf{y}}|\mathbf{c}-\mathbf{y}\rangle|Z_{\mathbf{y}}\rangle|\mathbf{y}\rangle\] \[|\Omega_{4}\rangle =\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C},\mathbf{y}\in\mathsf{S}}\beta_{\mathbf{c},\mathbf{y}}|\mathbf{c}-\mathbf{y}\rangle|Z_{\mathbf{y}}\rangle|\mathbf{0}\rangle=\left(\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{n_{0}}{\sqrt{q^{k}}}\right)|\mathbf{0}\rangle\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|Z_{\mathbf{c}}\rangle+\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}\sum_{\mathbf{y}\in\mathsf{S},\mathbf{y}\neq\mathbf{c}}\beta_{\mathbf{c},\mathbf{y}}|\mathbf{c}-\mathbf{y}\rangle|Z_{\mathbf{y}}\rangle\] where in the last equality we dropped the third register and used that \(\beta_{\mathbf{c},\mathbf{c}}=\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{n_{0}}{\sqrt{q^{k}}}\) for each \(\mathbf{c}\in\mathcal{C}\). This means that when we measure the first register, we obtain \(\mathbf{0}\) w.p.
\(\left(\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{n_{0}}{\sqrt{q^{k}}}\right)^{2}\), and the resulting state is \(|\Omega_{5}\rangle=\frac{1}{\sqrt{|\mathcal{C}|}}\sum_{\mathbf{c}\in\mathcal{C}}|Z_{\mathbf{c}}\rangle=|\bot\rangle\), which shows that the reduction entirely fails in this case.

#### 6.3.2 A measurement that works

Finally, we show a measurement that will make the reduction work when \(\mathrm{P}_{\mathrm{PGM}}=1-o(1)\). The idea is similar to the one of Section 6.3.1. We add an extra outcome \(\mathbf{u}\in\mathbb{F}_{q}^{n}\backslash\mathcal{C}\) and define \(\mathsf{S}=\mathcal{C}\cup\{\mathbf{u}\}\). We now define \[|U_{0}\rangle =\frac{\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime},\boldsymbol{y}\neq\boldsymbol{0}}\widehat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle}{\|\sum_{\boldsymbol{y}\in\mathcal{C}^{\prime},\boldsymbol{y}\neq\boldsymbol{0}}\widehat{f}(\boldsymbol{y})|\boldsymbol{y}\rangle\|}\] \[\forall\mathbf{c}\in\mathcal{C},\ |Z_{\mathbf{c}}\rangle =\frac{1}{\sqrt{q^{k}}}\left(|U_{0}\rangle+\sum_{\mathbf{s}\neq\mathbf{0}}\chi_{\mathbf{c}}(\mathbf{u}_{\mathbf{s}})|\widetilde{W}_{\mathbf{s}}\rangle\right)\] \[|Z_{\mathbf{u}}\rangle =|\mathbf{0}\rangle\] The \(|Z_{\mathbf{c}}\rangle\) are exactly the states \(|Y_{\mathbf{c}}\rangle\) of the pretty good measurement except that we removed the \(|\mathbf{0}\rangle\) component of \(|\widetilde{W}_{0}\rangle\). As in the previous subsection, the \(|Z_{\mathbf{y}}\rangle\) are orthogonal. In order to make the measurement complete, we added the extra basis element \(|Z_{\mathbf{u}}\rangle=|\mathbf{0}\rangle\). We therefore have a projective measurement \(\{|Z_{\mathbf{y}}\rangle\}_{\mathbf{y}\in\mathsf{S}}\). Again, we have \(\langle\widehat{\psi_{\mathbf{c}}}|Z_{\mathbf{c}}\rangle\geq\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{1}{\sqrt{q^{k}}}\), independently of \(\mathbf{c}\), so w.p. at least \(\left(\sqrt{\mathrm{P}_{\mathrm{PGM}}}-\frac{1}{\sqrt{q^{k}}}\right)^{2}\), we get the state \(|U_{0}\rangle\). Then, if we measure this state \(|U_{0}\rangle\), we will get a codeword of weight \(t\) w.p. \[p(t)=\frac{a(t)|\widehat{f}(t)|^{2}}{\sum_{t^{\prime}\neq 0}a(t^{\prime})|\widehat{f}(t^{\prime})|^{2}},\ \forall t\neq 0\] and \(p(0)=0\), where \(a(t)\) is the number of codewords of weight \(t\) in \(\mathcal{C}^{\prime}\). Recall that \[\widehat{f}(t)=\left(\sqrt{\frac{\omega^{\prime}}{q-1}}\right)^{t}\left(\sqrt{1-\omega^{\prime}}\right)^{n-t}\] and we are in the regime where \(\mathrm{P}_{\mathrm{PGM}}=1-o(1)\), which means that \(\omega<(\delta_{\min}(1-\frac{k}{n}))^{\perp}\) and hence \(\omega^{\prime}>\delta_{\min}(\frac{n-k}{n})=\delta_{\min}(\frac{k^{\prime}}{n})\). Using Proposition 22 and the expression of \(\widehat{f}(t)\), as well as concentration bounds for \(a(t)\), we have that for any absolute constant \(\varepsilon>0\), \(\sum_{t=\lfloor(\omega^{\prime}-\varepsilon)n\rfloor}^{\lceil(\omega^{\prime}+\varepsilon)n\rceil}p(t)=1-o(1)\). This means we will measure a word of weight approximately \(\omega^{\prime}n\) in \(\mathcal{C}^{\prime}\).

Discussion. These three examples show that it is very easy to slightly modify the algorithm for solving the quantum decoding problem and drastically change the result after Regev's reduction. We therefore cannot have proper reduction theorems between the quantum decoding problem and the short codeword problem, but we have to analyze on a case-by-case basis whether an algorithm for the quantum decoding problem can be used for finding a short codeword. On the positive side of the reduction, we can summarize our results as follows:

**Proposition 24**.: _Let \(q,n,k\in\mathbb{N}\) with \(q\geq 2\) and \(\omega\in(0,1)\)._
_Let also \(R=\frac{k}{n}\), \(\omega^{\prime}=\omega^{\perp}\) and \(k^{\prime}=n-k\)._

* _For_ \(\omega<\left(\frac{(q-1)R}{q}\right)^{\perp}\)_, there exists a quantum algorithm running in time_ \(\mathsf{poly}(n,\log(q))\) _that solves_ \(\mathrm{QDP}(q,n,k,\omega)\) _w.p._ \(1-o(1)\) _(Theorem_ 6_). Moreover, this algorithm can be used within Regev's reduction to solve_ \(\mathrm{SCP}(q,n,k^{\prime},\omega^{\prime})\) _in time_ \(\mathsf{poly}(n,\log(q))\) _(Section_ 6.2_)._
* _For_ \(\omega<(\delta_{\min}(1-R))^{\perp}\)_, there exists a quantum algorithm (for which we don't specify the running time, and which could be exponential in_ \(n\)_) that solves_ \(\mathrm{QDP}(q,n,k,\omega)\) _w.p._ \(1-o(1)\) _(Proposition_ 23_). This algorithm can be slightly tweaked (with success probability still_ \(1-o(1)\)_) and used in Regev's reduction to solve_ \(\mathrm{SCP}(q,n,k^{\prime},\omega^{\prime})\) _w.p._ \(\Theta(1)\) _(Section_ 6.3_)._
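To put numbers on these two regimes, both thresholds can be computed explicitly. The following sketch assumes, consistently with the bound \(S(t)\leq q^{n(h_{q}(t/n)-k/n)}\) used in Section 5, that \(\delta_{\min}(1-R)\) is the smaller root of \(h_{q}(\delta)=R\); the chosen values of \(q\) and \(R\) are illustrative:

```python
# Numeric illustration of the two regimes of Proposition 24 (sketch).
# Assumption: delta_min(1-R) is the smaller root of h_q(delta) = R,
# where h_q is the q-ary entropy (base q).
import math

def h_q(d, q):
    return (d * math.log(q - 1, q) - d * math.log(d, q)
            - (1 - d) * math.log(1 - d, q))

def perp(w, q):                                  # the omega -> omega^perp map
    return (math.sqrt((q - 1) * (1 - w)) - math.sqrt(w)) ** 2 / q

def delta_min(R, q):                             # bisection, rising branch of h_q
    lo, hi = 1e-12, (q - 1) / q
    for _ in range(200):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if h_q(mid, q) < R else (lo, mid)
    return lo

q = 3
for R in (0.2, 0.5, 0.8):
    poly = perp((q - 1) * R / q, q)              # Theorem 6 (polynomial time)
    info = perp(delta_min(R, q), q)              # Proposition 23 (unbounded time)
    print(f"R = {R}: poly-time for omega < {poly:.3f}, "
          f"PGM-tractable for omega < {info:.3f}")
```

As expected, the second threshold is always at least as large as the first, since \(\delta_{\min}(1-R)\leq\frac{(q-1)R}{q}\) and the map \(\omega\mapsto\omega^{\perp}\) is decreasing.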
2309.10829
Comparative study of Deep Learning Models for Binary Classification on Combined Pulmonary Chest X-ray Dataset
CNN-based deep learning models for disease detection have become popular recently. We compared the binary classification performance of eight prominent deep learning models: DenseNet 121, DenseNet 169, DenseNet 201, EffecientNet b0, EffecientNet lite4, GoogleNet, MobileNet, and ResNet18 for their binary classification performance on combined Pulmonary Chest Xrays dataset. Despite the widespread application in different fields in medical images, there remains a knowledge gap in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined Shenzhen, China (CH) and Montgomery, USA (MC) data. We trained our model for binary classification, calculated different parameters of the mentioned models, and compared them. The models were trained to keep in mind all following the same training parameters to maintain a controlled comparison environment. End of the study, we found a distinct difference in performance among the other models when applied to the pulmonary chest Xray image dataset, where DenseNet169 performed with 89.38 percent and MobileNet with 92.2 percent precision. Keywords: Pulmonary, Deep Learning, Tuberculosis, Disease detection, Xray
Shabbir Ahmed Shuvo, Md Aminul Islam, Md. Mozammel Hoque, Rejwan Bin Sulaiman
2023-09-16T11:58:04Z
http://arxiv.org/abs/2309.10829v2
Comparative study of Deep Learning Models for Binary Classification on Combined Pulmonary Chest X-Ray dataset

###### Abstract

CNN-based deep learning models for disease detection have become popular recently. We compared eight prominent deep learning models: DenseNet121, DenseNet169, DenseNet201, EfficientNet-B0, EfficientNet-Lite4, GoogleNet, MobileNet and ResNet18, for their binary classification performance on a combined pulmonary chest X-ray dataset. Despite their widespread application to different kinds of medical images, there remains a knowledge gap in determining their relative performance when applied to the same dataset, a gap this study aimed to address. The dataset combined Shenzhen, China (CH) and Montgomery, USA (MC) data. We trained each model for binary classification, calculated different performance parameters of the mentioned models, and compared them. All models were trained with the same training parameters to maintain a controlled comparison environment. At the end of the study, we found a distinct difference in performance among the models when applied to the pulmonary chest X-ray image dataset, where DenseNet169 performed with 89.38% and MobileNet with 92.2% precision.

Pulmonary, Deep Learning, Tuberculosis, Disease detection, X-ray

## I Introduction

Tuberculosis (TB) continues to pose a significant threat to global public health, affecting a considerable number of individuals worldwide. According to data presented by the World Health Organisation (WHO), the prevalence of tuberculosis rose to around 10.6 million individuals in 2021, an increase from the 10.1 million cases documented in 2020. Unfortunately, in 2021 there were 1.6 million fatalities due to tuberculosis, among which 187,000 individuals were HIV-positive. This is a notable rise compared to 2020, when 1.5 million fatalities were documented, encompassing 214,000 individuals who tested positive for HIV [1]. Despite significant progress in efforts to reduce tuberculosis (TB), it continues to be the leading cause of mortality caused by a single infectious agent, surpassing the burden of HIV/AIDS [2]. This alarming toll highlights the urgent need for accurate and timely TB diagnosis to improve patient outcomes and reduce transmission rates. Pulmonary chest X-rays have long been a cornerstone in TB diagnosis, offering a non-invasive method to visualize lung abnormalities associated with the disease [3]. However, manual interpretation of these X-rays by radiologists can be time-consuming and subjective, leading to variations in diagnostic accuracy and potential delays in treatment initiation. Moreover, with the increasing number of TB cases worldwide, the burden on healthcare systems to handle this high volume of radiological data poses additional challenges in providing prompt and accurate diagnoses. In recent years, researchers have turned to machine learning as a potential solution to enhance TB diagnosis from pulmonary chest X-rays. Machine learning algorithms can learn patterns from vast datasets, and when applied to medical imaging, they have the potential to aid radiologists in detecting subtle features indicative of TB with high accuracy [4-7].
Machine learning can expedite diagnosis, facilitate early detection, and improve overall patient outcomes by automating and optimizing the diagnostic process. In this study, we conducted a comprehensive analysis by combining two diverse datasets, namely the Shenzhen and Montgomery datasets, to create a larger and more robust dataset for tuberculosis (TB) diagnosis. Using this enriched dataset, we performed a comparative performance evaluation of eight deep transfer learning models: DenseNet121, DenseNet169, DenseNet201, EfficientNet B0, EfficientNet Lite4, GoogleNet, MobileNet, and ResNet18. Our objective was to identify the most effective models for diagnosing TB accurately. To achieve this, we employed various performance metrics to assess the models' capabilities. Through rigorous evaluation, we ranked the models based on their performance, presenting a clear hierarchy from the best-performing to the lowest. Furthermore, this study delves into the reasons behind the varying responses of these models. By analyzing the strengths and weaknesses of each model, we gained valuable insights into their suitability for TB diagnosis. The main contributions of our research are as follows: * Fusion Dataset: By combining two datasets from two geographical regions, we created a larger and more representative dataset, improving the generalization and robustness of our analysis. * Comparative Performance Analysis: We rigorously compared eight deep transfer learning models, providing a comprehensive understanding of their effectiveness in TB diagnosis. * Performance Ranking: Our study presents a clear ranking of the models, helping clinicians and researchers identify the most promising models for practical implementation. * Insightful Analysis: By examining the reasons behind the models' performance, we offer valuable guidance for model selection and potential areas of improvement. Overall, our research contributes to advancing TB diagnosis using deep transfer learning techniques, ultimately supporting efforts to combat this global health challenge. Section II presents the literature review, Section III describes the materials and methodology of this study, Section IV explains the result analysis and discussion, and Section V presents the conclusion and future work. ## II Literature review Tuberculosis is a global threat. Multidrug-resistant bacteria and opportunistic infections in immunocompromised HIV/AIDS patients have made tuberculosis diagnosis harder, and untreated TB patients have high mortality rates. Diagnostics still rely on century-old methods that are usually slow and unreliable. To reduce disease prevalence, the authors of [8] propose an automated method for detecting tuberculosis in conventional posteroanterior chest radiographs. Graph cut segmentation first extracts the lung region; texture and shape features are then computed for this region to classify X-rays as normal or abnormal. Their approach is tested using two datasets: one from the county health department's tuberculosis control program in the US and one from Shenzhen Hospital in China. Their field-ready computer-aided tuberculosis screening system performs comparably to human specialists. The first set has an area under the ROC curve (AUC) of 87% (78.3% accuracy) and the second set 90% (84% accuracy). They also compare their system to radiologists: the radiologists' false positive rate is half that of their technology, and their accuracy is 82% [8].
Deep learning algorithms have improved anomaly identification in radiological images, paving the way for their implementation in CAD systems. However, CAD systems for pulmonary tuberculosis (TB) diagnosis lack high-quality training data of adequate quantity and variety, as well as fine-region annotations [28]. The process of diagnosing digital chest X-rays (CXR) requires the detection of lung areas. This is achieved by a robust lung segmentation method that utilises nonrigid registration and image retrieval techniques; the method employs patient-specific adaptive lung models to accurately identify the boundaries of the lungs. The accuracy rates achieved on two chest X-ray (CXR) datasets [17, 18] and [19], from Montgomery County, Maryland, United States of America, and India, respectively, were 94.1% and 91.7%. The authors computed lung-area symmetry using multi-scale shape characteristics, incorporating both local and global representations of the lung regions; edge- and texture-based features were also considered, taking into account the interior content of the regions. The researchers provided evidence that their feature representation is suitable for chest X-ray screening to detect pulmonary problems. They achieved encouraging results on two benchmark datasets of chest X-ray (CXR) images from the National Institutes of Health (NIH) and India by employing a voting-based ensemble approach that combines three distinct classifiers: random forest (RF), multilayer perceptron (MLP) neural networks, and Bayesian network (BN). The greatest abnormality detection accuracy (ACC) was found to be 91.0% with an area under the receiver operating characteristic curve (AUC) of 0.96, while the cross-population test yielded a maximum abnormality detection accuracy (ACC) of 89.0% with an AUC of 0.96. In another investigation, a modified version of the AlexNet architecture, referred to as MAN, is introduced for the purpose of detecting lung abnormalities in biomedical imaging; the proposed approach involves the comparison of lung CT scans with chest X-rays [21]. During the preliminary assessment, the MAN categorises the chest X-ray images into two classes: normal and pneumonia. The deep learning (DL) strategy recommended in this study demonstrates a higher level of accuracy (>96%) compared to the other DL strategies that were evaluated. The classification of lung CT images into malignant or benign categories is determined by the analysis of the MAN architecture, both with and without the inclusion of EfficientNet transfer learning (EFT). The classification accuracy of the recommended MAN with Support Vector Machine (SVM) classifier, 86.45%, is noticeably lower than that of a comparable DL framework with EFT, which exceeds 97%; this comparison nonetheless highlights that the MAN framework exhibits satisfactory performance when applied to image datasets. The data utilised in another study was obtained from the CXR arm of the PLCO dataset, a comprehensive lung cancer screening trial; the dataset consisted of a vast collection of roughly 198,000 chest X-rays (CXRs), annotated with image-based abnormality information, gathered from various clinical sites around the United States [23].
The terms Hierarchical Label Conditional Probability (HLCP) and Hierarchical Label Unconditional Probability (HLUP), generated from the chain rule, are introduced in reference [23]. The HLUP has been modified by incorporating the Cross-Entropy (CE) loss function and applying the chain rule. Table I summarizes the relevant works for pulmonary CXR detection. ## III Materials and Methodology ### Dataset Collection Two publicly available chest X-ray datasets have been produced for the purpose of diagnosing tuberculosis: the Shenzhen dataset [32] and the Montgomery County dataset [33]. Shenzhen serves as the primary digital image database for tuberculosis. This resource was produced through a collaboration between the National Library of Medicine in Maryland, United States, and the Shenzhen No. 3 People's Hospital, Guangdong Medical College, in Shenzhen, China. The dataset comprises 662 instances: 336 exhibiting manifestations of tuberculosis and 326 classified as normal. The Montgomery County X-ray database comprises X-ray images acquired from the tuberculosis control program administered by the Department of Health and Human Services in Montgomery County, Maryland, United States of America. The dataset has a total of 138 thoracic X-rays: 80 classified as normal and 58 exhibiting manifestations indicative of tuberculosis. Both datasets are focused on the identification of tuberculosis in chest X-ray images. The "China Set - The Shenzhen set" comprises a greater quantity of X-ray images than the "Montgomery County X-ray Set," which, although smaller, still contains a significant number of images. Each dataset includes both normal cases and tuberculosis-affected patients, rendering them suitable for training and assessing machine-learning models designed for tuberculosis detection. We have merged the two datasets with some preprocessing. This ensured that the final dataset is consistent, balanced, and ready for use in training a tuberculosis detection model. Proper data preparation and management are crucial to achieving accurate and reliable results when applying machine-learning techniques to medical image analysis tasks [34]. ### Dataset pre-processing In training our deep learning models, we utilized a series of systematic transformations, carefully aligned to our specific needs given the dataset's characteristics. The first important step of our dataset preprocessing was to prepare the labels. Our dataset comprises X-ray images in one folder and labels in another folder in text format. The labels were text files containing the diagnosis as a string, and the string varied from sample to sample. As we used binary classification for this work, we first cleaned the labels of information unnecessary for this study, labeling normal diagnoses as normal and diseased samples as TB (tuberculosis), as sketched below. Only then did we proceed with the next steps of our data pre-processing. As our dataset, composed of X-ray images, was already preprocessed and converted into the universally recognized PNG image format, we could bypass extensive image-level preprocessing steps.
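A minimal sketch of this label-cleaning step is given below. The directory layout and file naming (one free-text label file per image) are illustrative assumptions, not the exact structure of the released datasets.

```python
import os

# Hypothetical layout: images/xxx.png with a matching labels/xxx.txt
# holding a free-text diagnosis string that varies between samples.
LABEL_DIR = "labels"

def clean_label(raw: str) -> str:
    """Map a free-text diagnosis string onto the two study classes."""
    text = raw.strip().lower()
    # Any mention of normality maps to 'normal'; everything else is
    # treated as a diseased (TB) sample for binary classification.
    return "normal" if "normal" in text else "TB"

labels = {}
for fname in os.listdir(LABEL_DIR):
    if fname.endswith(".txt"):
        with open(os.path.join(LABEL_DIR, fname)) as f:
            labels[fname.replace(".txt", ".png")] = clean_label(f.read())
```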
TABLE I: Summary Table of Relevant Works for Pulmonary CXR Detection.

| Refs | Dataset | Models | Accuracy (%) | Note, Year |
|---|---|---|---|---|
| [8] | Montgomery County | SVM classifier | 78.3 | AUC 66.59% and sensitivity 99%, 2014 |
| — | Shenzhen Hospital | SVM (linear), SVM (PK), SVM (GBF), KNN, NN, ADT, LLR | 82.5 / 76.4 / 76.4 / 80.7 / 82.6 / 84.1 / — | AUC approx. 89% for set A and 82.5% for set B, 2014 |
| [10] | JSRT image database, Montgomery County, India | Lung boundary segmentation method | 95.4 / 94.1 / 91.7 | X-rays from analog imaging systems; Montgomery and India CXR sets |
| [13] | ChestX-ray14 | Knowledge distillation (deep) | 82.6 (avg.) | DenseNet-121 (AUC 0.97), ResNet-152 (79.01%), VGG-19 (76.17%), ResNet-50 (71.66%), MobileNet v1 (67.109%), ResNet-32 (66.05%), 2020 |
| [16] | MC (USA), CH (China), India | BN, RF, MLP | 95 (RF, CH) | Combination of the three classifiers yields higher accuracy, 2017 |
| [20] | Chest X-ray, LIDC-IDRI database | Modified AlexNet (MAN)-SVM | 97.27 | Randomized and learned features give maximum accuracy over VGG-16/19, AlexNet, ResNet-50, MAN-SoftMax, MAN-CNN, MAN-RF, 2020 |
| [22] | CXR arm of PLCO dataset | HLCP features | 88.7 | Compared with HLUP and related label-dependency variants, 2020 |
| [23] | CXR | ML-rank | 88.3 | Lung and heart segmentation, spatial labels, adaptive normalization strategy, re-labelling (94.5), 2019 |
| [25] | Municipal Hospital, Czech Republic (CXR) | DLAD | 86.6 | Carebot AI CXR v1.22, 2023 |
| [26] | Six different datasets | CovTPhNet | 99.76 (TB) | Proposed model CovTPhNet used for TB, pneumonia, COVID-19, 2022 |
| [27] | Bounding-box dataset from CXR | RetinaNet | 77 | Precision 0.89, recall 0.57, specificity 0.94, 2020 |
| [30] | 1007 posteroanterior chest radiographs | AlexNet, GoogLeNet | 99 | DCNN, 2017 |
| [32] | MC, Shenzhen, JSRT | TransUNet | 98.36 | Other models used: AESe, TransM, Medical Transformer (MedT), TransUNet, UNext, 2023 |

Nevertheless, we faced the challenge of a relatively small sample size for the training of deep learning models, a common concern in many machine learning applications. To mitigate this challenge, we employed data augmentation, specifically 224x224 random resized crops of the X-ray images. This method effectively expanded our sample size and introduced a level of variability that would enable our models to generalize better to unseen data. Alongside the random resized crops, we also applied a random horizontal flip, a common practice for further increasing the variability of the data and enhancing the robustness of the model against different orientations of the same image. Before the transformed images were inputted into our deep learning models, we converted them into tensors; this standard step transforms the image data into a numerical format that the deep learning model can process efficiently. Finally, it's important to note that we used models pre-trained on ImageNet.
To ensure consistency and transferability of learned features from the pre-trained models, we normalized our images using the standard ImageNet normalization parameters: mean [0.485, 0.456, 0.406] and standard deviation [0.229, 0.224, 0.225]. Ultimately, we split our dataset into training, validation, and test sets: the training set contained 60% of the total data samples, while the validation and test sets each contained 20%. This comprehensive preprocessing and transformation approach allowed us to optimize the limited resources of our dataset while preparing it for an effective deep-learning training process. ### Workflow and model preparation In our work, we used a collection of strong, pre-built deep learning models chosen for the reliable feature extraction abilities they developed on large image datasets; such models are widely used in classification tasks due to their consistently high performance. Our selected models are: DenseNet121, DenseNet169, DenseNet201, EfficientNet-b0, EfficientNet-Lite4, GoogLeNet, MobileNet, and ResNet18. In this study, we worked with binary classification and followed the workflow shown in Fig. 1. As discussed before, we first selected eight models pre-trained on the ImageNet dataset. As these models have learned the general characteristics of natural images, their weights are a good starting point for our model training. But as we are working with X-ray medical images, whose characteristics differ considerably from natural images, we decided to train the full models without freezing any layers. As mentioned in the previous dataset preparation section, we split the total dataset into training, validation, and test sets. We trained each model on the training set and evaluated it on the validation set; when training was complete, we tested the model on the prepared test set. To implement this workflow we used the Python-based PyTorch deep learning library, along with standard machine learning and deep learning tools such as scikit-learn, PIL (Python Imaging Library), matplotlib, seaborn, pandas, and numpy. Our work can be divided into four main stages, namely: 1. Data preprocessing 2. Model preparation 3. Model training 4. Model evaluation/testing In Figure 2, we can see the steps involved in data preprocessing, model preparation, training, and evaluation. The data preprocessing stage involves data loading, resizing of data samples, data augmentation, and normalization of the samples in the dataset. In model preparation, we loaded the pre-trained models, modified the fully connected classification layer for binary classification, and then compiled the model for training. The training process consists of the forward pass, the backward pass, weight updates, and validation. The forward pass consists of forward propagation and loss computation; we used the cross-entropy loss function for loss calculation. In the backward pass, backpropagation is performed to calculate the gradients of the loss with respect to the model's parameters, and in the weight-update step the optimizer uses these gradients to update the model's parameters. We used the Stochastic Gradient Descent (SGD) optimizer with an initial learning rate of 0.001 and momentum of 0.9. For optimal model training, we additionally used a learning-rate scheduler (StepLR) in the training process, as sketched below.
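The following is a minimal PyTorch sketch of the preprocessing pipeline and training configuration described above. The data directory layout, batch size, and the choice of DenseNet169 as the example backbone are illustrative assumptions; the same pattern applies to the other seven models.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Preprocessing described above: random resized crop, horizontal flip,
# tensor conversion, and ImageNet normalization.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/train/{normal,TB}/*.png
train_ds = datasets.ImageFolder("data/train", transform=train_tf)
train_loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

# Example backbone: ImageNet-pretrained DenseNet169 with the classifier
# replaced for two classes; all layers are left unfrozen.
model = models.densenet169(pretrained=True)
model.classifier = nn.Linear(model.classifier.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)
```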
For the learning rate scheduler settings we used step_size=7 and gamma=0.1, where step_size represents the number of epochs after which the learning rate is multiplied by the factor gamma. We trained each model for 100 epochs, saving the best model weights during training for evaluation in the next step. During evaluation, we loaded the best model weights saved during training, tested the model on the test dataset, calculated the relevant metrics, and plotted the relevant diagrams, which are discussed in the results section. Figure 1: Proposed workflow diagram ### Model Evaluation We have chosen appropriate evaluation metrics to assess the models' performance. The metrics used for binary classification tasks like tuberculosis detection are: * **Accuracy:** The proportion of correctly classified samples. \[Accuracy\ =\ \frac{True\ positives\ +\ True\ negatives}{Total\ number\ of\ cases}\] * **Precision:** The ability of the model to correctly identify true positive cases out of all predicted positive cases. \[Precision\ =\ \frac{True\ positives}{True\ positives\ +\ False\ positives}\] * **Recall (Sensitivity or True Positive Rate)**: The ability of the model to correctly identify true positive cases out of all actual positive cases. \[Recall\ =\ \frac{True\ positives}{True\ positives\ +\ False\ negatives}\] * **F1 Score**: The harmonic mean of precision and recall, providing a balanced measure between the two. \[F1\ score=\frac{2\times Precision\times Recall}{Precision\ +\ Recall}\] * **Confusion Matrix**: We construct a confusion matrix to visualize the model's performance and gain insights into true positives, false positives, true negatives, and false negatives. * **Receiver Operating Characteristic (ROC) Curve**: We plot the ROC curve and calculate the Area Under the Curve (AUC) to evaluate the model's performance across different probability thresholds. * **Precision-Recall Curve**: We plot the precision-recall curve to examine the trade-off between precision and recall at various probability thresholds. It is crucial to emphasize that medical applications require thorough validation and consultation with domain experts to ensure the model's reliability and safety before deploying it in real-world settings. The chosen evaluation metrics and procedures are aligned with the objectives and requirements of the tuberculosis detection task using chest X-ray images. ## IV Result analysis and discussion The following table highlights the performance of the different convolutional neural network (CNN) architectures evaluated on our binary classification task. Performance metrics such as accuracy, precision, recall, F1 score, and ROC AUC are used to understand the strengths and weaknesses of each model. DenseNet169 performed the best in terms of accuracy (89.375%) and ROC AUC (92.202%), suggesting that it was the most effective model at classifying the given data correctly, as well as at distinguishing between classes. DenseNet201 and MobileNet showed high precision (90.909% and 92.424%, respectively), meaning that they were quite adept at correctly predicting the positive class and minimizing false positives. After DenseNet169 (88.608%), ResNet18 achieved the highest recall (81.013%), indicating that these two models were the best at identifying the true positives in the data. DenseNet169 also had the highest F1 score (89.172%), which suggests it had a good balance between precision and recall and could maintain robust performance in different conditions.
It is worth noting that even though EfficientNet b0 and EfficientNet lite4 had the lowest accuracy scores, they still maintained respectable ROC AUC scores (87.436% and 85.045%, respectively). This might indicate their relatively better performance in balancing true positive and false positive rates across different thresholds. Finally, considering the overall performance on our specific dataset and the importance of precision and recall in medical diagnostics, it can be said that DenseNet169 is an effective model for TB detection on our dataset of X-ray images.

Table 2: Summary of performance metrics of CNN models.

| Model Name | Accuracy (%) | Precision (%) | Recall (%) | F1 score (%) | ROC AUC (%) |
|---|---|---|---|---|---|
| DenseNet121 | 84.375 | 89.706 | 77.215 | 82.902 | 89.717 |
| DenseNet169 | 89.375 | 89.744 | 88.608 | 89.172 | 92.202 |
| DenseNet201 | 84.375 | 90.909 | 75.949 | 82.759 | 90.045 |
| EfficientNet b0 | 79.375 | 84.858 | 70.856 | 77.241 | 87.436 |
| EfficientNet lite4 | 78.125 | 80.556 | 73.418 | 76.821 | 85.045 |
| GoogleNet | 80.625 | 88.710 | 69.620 | 78.014 | 87.654 |
| MobileNet | 85.625 | 92.424 | 77.215 | 84.138 | 91.858 |
| ResNet18 | 85.000 | 87.671 | 81.013 | 84.211 | 89.905 |

Figure 2: Model training and evaluation block diagram

In the figure above we can see the training-validation loss plot along with the confusion matrix and ROC curve for DenseNet169. ## V Conclusion and Future Work The performance differences among the models may be due to factors like architecture depth, number of parameters, receptive field size, and the type and arrangement of layers. Each architecture has its own strengths and weaknesses, making it more or less suitable for different tasks or data distributions. This research has achieved our target of reaching higher accuracy using less complex pre-processing techniques and less computational power. DenseNet169 demonstrated the highest values in four out of five performance metrics: accuracy 89.375%, recall 88.608%, F1 score 89.172%, and ROC area under the curve 92.202%. Regarding precision, the highest value came from MobileNet at 92.424% (versus 89.744% for DenseNet169 and 90.909% for DenseNet201). We suggest using the proposed model together with manual cross-checks by clinicians before any final decision on surgery or a serious medical operation. Our research is a clear demonstration of the use of readily available image datasets (MC and CH) for TB detection, which could be turned into tools for developing and under-developed countries to help automate the health sector. There remains scope for future researchers to improve towards 100% accuracy.
2309.06093
Bulk viscous late acceleration under near equilibrium conditions in f(R, T) gravity with mixed matter
Various studies have shown that the late acceleration of the universe can be caused by the bulk viscosity associated with dark matter. But recently, it was indicated that a cosmological constant is essential for maintaining Near Equilibrium Conditions (NEC) for the bulk viscous matter during the accelerated expansion of the universe. In the present study, we investigate a model of the universe composed of mixed dark matter components, with viscous dark matter (vDM) and inviscid cold dark matter (CDM) as its constituents, in the context of $f(R,T)$ gravity, and show that the model predicts late acceleration while satisfying the NEC throughout the evolution, without a cosmological constant. We have also compared the model predictions with combined Type Ia Supernovae and observational Hubble data sets and thereby determined the estimated values of different cosmological parameters.
Vishnu A Pai, Titus K Mathew
2023-09-12T09:52:13Z
http://arxiv.org/abs/2309.06093v2
Bulk viscous late acceleration under near equilibrium conditions in \(\boldsymbol{f(R,T)}\) gravity with mixed dark matter. ###### Abstract Various studies have shown that the late acceleration of the universe can be caused by the bulk viscosity associated with dark matter. But recently, it was indicated that a cosmological constant is essential for maintaining Near Equilibrium Conditions (NEC) for the bulk viscous matter during the accelerated expansion of the universe. In the present study, we investigate a model of the universe composed of mixed dark matter components, with viscous dark matter (vDM) and inviscid cold dark matter (CDM) as its constituents, in the context of \(f(R,T)\) gravity, and show that the model predicts late acceleration while satisfying the NEC throughout the evolution, without a cosmological constant. We have also compared the model predictions with combined Type Ia Supernovae and observational Hubble data sets and thereby determined the estimated values of different cosmological parameters. **Keywords:**\(f(R,T)\) gravity, Bulk Viscosity, Negative viscous coefficient, Near Equilibrium Condition ## 1 Introduction Explaining the late-accelerated expansion of the universe using the bulk viscous property of the matter sector is an intriguing possibility. Such cosmological models are special in the sense that they are devoid of any exotic dark energy component, and the negative pressure necessary for the cosmic acceleration is generated via bulk viscosity. Theoretically, the inclusion of dissipative effects in the matter sector seems natural, as the ideal fluid concept used in the concordance model can only be an approximation to reality. Hence, by considering dissipative traits of the fluid, one can gain a better understanding of the evolution of the universe. From previous studies in inflationary cosmology [1, 2], it was already established (well before the SNe Ia observations) that fluids with bulk viscosity have the innate ability to predict an accelerated expansion of the universe, even in the absence of a cosmological constant or any exotic cosmic component. This fact motivates authors to investigate the possibility of a viscosity-driven late acceleration of the universe. For incorporating dissipative effects in cosmology, different formalisms exist in the literature, proposed on the basis of relativistic hydrodynamics. The widely investigated class of dissipative models is based on the Eckart and Landau-Lifshitz formalisms [3, 4]. Despite its drawbacks, such as the acausal nature of the resulting solutions and the unstable behavior of the equilibrium states, Eckart's formalism is considered a good first-order approximation for investigating viscous effects in the cosmological context. More general formalisms, such as the Israel-Stewart (IS) theory or its truncated version [5, 6, 7, 8], which include higher-order corrections away from equilibrium, are also used to discuss the dissipative evolution of the universe. Additionally, in recent studies, a new approach for incorporating dissipative effects in cosmic matter has also been proposed [9, 10, 11, 12, 13, 14, 15, 16, 17]. Nevertheless, owing to its simplicity, we follow Eckart theory in the present work to study viscous cosmology, since the higher-order theories pose difficulties in cosmological modeling. All relativistic dissipative theories, from the acausal Eckart theory to the full causal IS theory, are developed under the assumption that the viscous fluid remains in a near-equilibrium state.
Hence, such theories can at most allow only a minute deviation from the local equilibrium pressure of the viscous fluid. However, in [1], Maartens inferred that, for explaining inflation as driven by the bulk viscosity of matter, the near equilibrium condition has to be violated. Hence, the author proposed that, in cases where theories with dissipative phenomena are used to explain accelerated expansion, one is forced to postulate the validity of such theories even in regimes where the fluid is far from equilibrium. Following this assumption, several cosmological models have been proposed to explain the late acceleration of the universe [18, 19, 20, 21, 22], which are reasonably successful in predicting the cosmological evolution. But the postulate regarding the validity of dissipative phenomena in situations where the fluid is far from equilibrium still remains only an assumption with no definite proof at all. Hence, it turns out that the safest way to explain accelerated expansion in the presence of viscous matter is by maintaining the validity of the near equilibrium condition (NEC) throughout the evolution. In technical terms, the NEC demands that the magnitude of the viscous pressure of the fluid (\(\Pi\)) be much smaller than the magnitude of its equilibrium pressure (\(p^{0}\)), i.e., \[\frac{\left|\Pi\right|}{p^{0}}\ll 1. \tag{1}\] This immediately rules out the possibility of explaining cosmic acceleration by associating bulk viscosity with the cold dark matter (CDM) component, since its kinetic pressure is zero and hence it inevitably violates the NEC. In recent studies on late acceleration by Cruz et al. [23, 24, 25], based on Eckart's theory, it was inferred that the NEC can be satisfied for a particular viscous model (\(\zeta=\zeta_{0}\rho\)) in the context of Einstein's gravity, with warm dark matter (WDM) in the presence of a cosmological constant. This implies that the presence of a cosmological constant is deemed inevitable for the validity of the NEC in the context of Einstein gravity. It turns out that the presence of a cosmological constant can decrease the effect of the viscous pressure \(\Pi\) in generating late acceleration, so that \(\left|\Pi/p^{0}\right|\ll 1.\) Since these works demand the presence of a cosmological constant for the validity of the NEC in the context of Einstein's gravity, it is of great significance to investigate dissipative models of the recent acceleration that satisfy the NEC without a cosmological constant, in the context of modified gravity theories. In the present work, we analyze such a possibility in the context of \(f(R,T)\) gravity theory. Modified gravity theories are of much importance for obtaining a suitable description of gravity at the large scales relevant to the study of the universe [26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43]. \(f(R,T)\) gravity [44] is a generalization of Einstein's gravity, where the gravitational Lagrangian is assumed to be an arbitrary function of the Ricci scalar, \(R\), and the trace of the energy momentum tensor of the cosmic component, \(T\). This assumed functional form implies a minimal/non-minimal coupling between the geometry of the spacetime and matter. Such a coupling between geometry and the trace of the energy-momentum tensor can be induced by the imperfect nature of the fluid or through quantum effects.
One of the ways in which the trace of the matter stress-energy tensor could enter the gravitational Lagrangian is through the quantum effect called the trace anomaly, or conformal symmetry breaking [45, 43]. Modified gravity models with geometry-matter coupling are significant since they can also provide a theoretical explanation for the late-time acceleration of the Universe without postulating the existence of dark energy. In addition, such theories are also known to provide an explicit breaking of the equivalence principle, which is constrained using solar system experiments [43, 46, 47, 48]. One effect of this coupling is the appearance of extra terms in the conservation law for the matter component, which have the effect of an orthogonal force on test particles, which in turn renders their motion non-geodesic [44]. In [49], the authors interpret the appearance of such extra terms as due to a particle creation process. However, Azevedo and Avelino [50] strongly argued that such extra terms do not really imply a particle creation process; rather, they are to be taken as an effective contribution to the particle momentum on cosmological time scales. The authors also point out that an active energy exchange occurs between matter and spacetime (without changing the number of particles), due to which the evolution process becomes thermodynamically non-adiabatic. In the present work, we follow this new approach, where the extra terms are considered as non-adiabatic contributions to the particle four-momentum. It is to be noted that almost all previous studies of viscous models in the context of \(f(R,T)\) gravity were done without respecting the NEC [51, 52, 53, 54, 55, 56, 57]. Our primary aim is to construct a viable viscous cosmological model in the context of \(f(R,T)\) gravity that explains the late accelerated expansion of the universe by satisfying the NEC, without the cosmological constant. For this, we consider a model of the universe with mixed dark matter components. The choice of mixed dark matter is due to the fact that the \(f(R,T)\) model with only viscous dark matter can satisfy the NEC, but leads to an ever-accelerating phase and thus fails to predict the observed transition [58]. We also point out that considering a universe with a mixture of dark matter components is not at all new in cosmology [59, 60, 61, 62, 63]. However, different from previous works, here we assume viscous dark matter (vDM - which can be stiff/hot/warm) and inviscid CDM as the constituents of the mixed dark matter component. The vDM component has both kinetic and bulk viscous pressures, while the inviscid CDM component remains pressure-less throughout the evolution. This implies a significant difference in the definition of their energy momentum tensors and hence their traces. Owing to this difference, each component can have a different explicit coupling with geometry. In the present work, we propose a modified Lagrangian by accommodating the traces corresponding to the vDM and CDM, and then formulate the effective field equation. From the field equation, the Friedmann equations and the continuity equations for the two cosmic components are obtained by assuming the FLRW metric for the spacetime. We will then develop the general constraints on the model parameters that are necessary for satisfying (i) the NEC, (ii) the critical energy condition (CEC) and (iii) the second law of thermodynamics (SLT).
The analytical solution for the Hubble parameter is then derived by assuming a phenomenological form of the coefficient of bulk viscosity, \(\zeta=\zeta_{1}H^{-1}\rho+\zeta_{0}H\) [64], where \(\zeta_{1}\) and \(\zeta_{0}\) are constant parameters. We will show that the cosmological predictions of this model depend strongly on the value of the coupling parameter \(\tilde{\lambda}\) between matter and the geometry of spacetime. For a specific choice of \(\tilde{\lambda}\) the model is capable of showing \(\Lambda\)CDM-like behavior. We also investigate the special case with \(\zeta_{0}=0\) because of its interesting dynamical behavior. Finally, we will compare these models with the combined observational Hubble data (OHD) and Type Ia supernovae (SNe Ia) data, to extract the values of the model parameters and test the feasibility of each model. ## 2 Field equations in \(f(R,T)\) gravity with mixed dark matter components Motivated by the conformal anomaly in quantum mechanics, one can generalize \(f(R)\) gravity by assuming a trace-dependent minimal/non-minimal coupling in the gravitational Lagrangian and hence consider the possibility of \(f(R,T)\) gravity [44]. The most general action in \(f(R,T)\) gravity can then be expressed as, \[S=\frac{1}{16\pi}\int f(R,T)\sqrt{-g}\;d^{4}x+\int L_{m}\sqrt{-g}\;d^{4}x, \tag{2}\] where \(L_{m}\) is the matter Lagrangian and \(g=|g_{\mu\nu}|\). Throughout the analysis we have followed the (+,-,-,-) signature for the metric tensor and have set \(G=c=1\). In the present case, we have two dark matter components, CDM and vDM, having two different energy momentum tensors \(T_{\mu\nu}^{(m)}\) and \(T_{\mu\nu}^{(vm)}\). Obviously, one can also expect the traces of these energy momentum tensors (i.e., \(T_{m}\) and \(T_{vm}\)) to differ from each other. Owing to this, their couplings with the geometry of the spacetime can be different, and to incorporate this we modify the above action as, \[S=\frac{1}{16\pi}\int f(R,T_{m},T_{vm})\sqrt{-g}\;d^{4}x+\int L_{m}\sqrt{-g}\;d^{4}x+\int L_{vm}\sqrt{-g}\;d^{4}x. \tag{3}\] Here, \(f(R,T_{m},T_{vm})\) represents some arbitrary function of the Ricci scalar and the traces of both components, and \(L_{m}\ \&\ L_{vm}\) are the Lagrangians of CDM and vDM respectively. These Lagrangians satisfy the relation, \[T_{\mu\nu}^{(i)}=-\frac{2}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\;L_{i}\right)}{\delta g^{\mu\nu}}, \tag{4}\] where \(i\) denotes the cosmic component involved; for instance, \(i=m\) for the CDM component and \(i=vm\) for the vDM. By assuming that the matter Lagrangian densities \(L_{m}\) and \(L_{vm}\) depend only on the metric tensor components [44], we vary the action (3) with respect to the metric \(g^{\mu\nu}\) and obtain the equation, \[\delta S=\frac{1}{16\pi}\int\left[f_{T_{m}}\frac{\delta T_{m}}{\delta g^{\mu\nu}}\delta g^{\mu\nu}+f_{T_{vm}}\frac{\delta T_{vm}}{\delta g^{\mu\nu}}\delta g^{\mu\nu}+16\pi\frac{1}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\;L_{m}\right)}{\delta g^{\mu\nu}}\right.\\ \left.+16\pi\frac{1}{\sqrt{-g}}\frac{\delta\left(\sqrt{-g}\;L_{vm}\right)}{\delta g^{\mu\nu}}+f_{R}\delta R-\frac{g_{\mu\nu}f}{2}\delta g^{\mu\nu}\right]\sqrt{-g}\;d^{4}x. \tag{5}\] Here, we have denoted \(f=f\left(R,T_{m},T_{vm}\right)\), \(f_{T_{i}}=\partial f/\partial T_{i}\) and \(f_{R}=\partial f/\partial R\).
The variation of the Ricci scalar is given by, \[\delta R=R_{\mu\nu}\delta g^{\mu\nu}+g_{\mu\nu}\Box\delta g^{\mu\nu}-\nabla_{\mu}\nabla_{\nu}\delta g^{\mu\nu}. \tag{6}\] The derivative of the trace of the energy-momentum tensors with respect to the metric components can be expressed as, \[\frac{\delta T_{i}}{\delta g^{\mu\nu}}=\frac{\delta\left(g^{\alpha\beta}T_{\alpha\beta}^{(i)}\right)}{\delta g^{\mu\nu}}=T_{\mu\nu}^{(i)}+\Theta_{\mu\nu}^{(i)}, \tag{7}\] where \(\Theta_{\mu\nu}^{(i)}\) is given by, \[\Theta_{\mu\nu}^{(i)}\equiv g^{\alpha\beta}\frac{\delta T_{\alpha\beta}^{(i)}}{\delta g^{\mu\nu}}. \tag{8}\] Once we know the energy momentum tensors and the Lagrangian densities of the assumed components, it is possible to express the \(\Theta_{\mu\nu}^{(i)}\) of that component as, \[\Theta_{\mu\nu}^{(i)}=-2T_{\mu\nu}^{(i)}+g_{\mu\nu}L_{(i)}-2g^{\alpha\beta}\frac{\partial^{2}L_{(i)}}{\partial g^{\mu\nu}\partial g^{\alpha\beta}}, \tag{9}\] in which \(\alpha\) and \(\beta\) are summation indices. By extremizing (5) we get the field equation for \(f(R,T_{m},T_{vm})\) gravity as, \[f_{R}R_{\mu\nu}-\frac{1}{2}fg_{\mu\nu}+\left(g_{\mu\nu}\Box-\nabla_{\mu}\nabla_{\nu}\right)f_{R}=8\pi\sum_{i}T_{\mu\nu}^{(i)}-\sum_{i}f_{T_{i}}\left(T_{\mu\nu}^{(i)}+\Theta_{\mu\nu}^{(i)}\right). \tag{10}\] Here, the sum over \(i\) runs over the components \(m\) and \(vm\). In the conventional \(f(R,T)\) theory, one usually assumes a minimal coupling of the form \(f(R,T)=R+2\mathcal{F}(T)\) [44]. Similarly, we assume a minimal coupling between matter and geometry, \(f=R+2\mathcal{F}\left(T_{m},T_{vm}\right).\) Owing to this choice, \(f_{R}=1\), and the third term on the left-hand side of equation (10) vanishes. The above equation can then be written as, \[R_{\mu\nu}-\frac{1}{2}R\;g_{\mu\nu}=8\pi\sum_{i}T_{\mu\nu}^{(i)}+\mathcal{F}\left(T_{m},T_{vm}\right)g_{\mu\nu}-\left\{2\sum_{i}\mathcal{F}_{T_{i}}\left(T_{m},T_{vm}\right)\left[T_{\mu\nu}^{(i)}+\Theta_{\mu\nu}^{(i)}\right]\right\}. \tag{11}\] We will now consider the cosmic components as non-interacting with each other, and consequently we choose \(\mathcal{F}\left(T_{m},T_{vm}\right)=\lambda T_{vm}+\kappa T_{m}\), where \(\lambda\) and \(\kappa\) are constant coupling parameters. The CDM component, having trace \(T_{m}\), is modeled as a pressure-less perfect fluid with energy momentum tensor, \[T_{\mu\nu}^{(m)}=\rho_{m}u_{\mu}u_{\nu}, \tag{12}\] while the vDM component, corresponding to the trace \(T_{vm}\), has an energy momentum tensor of the form, \[T_{\mu\nu}^{(vm)}=\left(\rho_{vm}+p_{vm}\right)u_{\mu}u_{\nu}+p_{vm}g_{\mu\nu}. \tag{13}\] Here, \(p_{vm}=p_{vm}^{0}+\Pi\) is regarded as the effective pressure of the vDM component, with \(p_{vm}^{0}\) the equilibrium kinetic pressure and \(\Pi\) the bulk viscous pressure. Also, the kinetic pressure term obeys the barotropic equation of state \(p_{vm}^{0}=\omega\rho_{vm}\) with a constant \(\omega\). For determining \(\Theta_{\mu\nu}^{(i)}\), we use (8), (4) and also consider the ansatz \(L_{(i)}=-p_{i}\) [44] for the matter Lagrangian, where \(p_{i}\) is the pressure of the \(i^{th}\) component.
With these results we obtain the effective field equation as, \[R_{\mu\nu}-\frac{1}{2}R\;g_{\mu\nu}=8\pi\left[\bar{T}_{\mu\nu}^{(m)}+\bar{T}_{\mu\nu}^{(vm)}\right], \tag{14}\] where, \[\bar{T}_{\mu\nu}^{(m)}=T_{\mu\nu}^{(m)}+\frac{\kappa}{8\pi}\left[2T_{\mu\nu}^{(m)}+T_{m}g_{\mu\nu}\right] \tag{15}\] and \[\bar{T}_{\mu\nu}^{(vm)}=T_{\mu\nu}^{(vm)}+\frac{\lambda}{4\pi}\left[T_{\mu\nu}^{(vm)}+p_{vm}g_{\mu\nu}+\frac{T_{vm}g_{\mu\nu}}{2}\right] \tag{16}\] are the effective energy momentum tensors of the respective fluid components. To simplify the further investigation, we assume that the explicit coupling of the inviscid matter with geometry is close to zero, i.e., \(\kappa\approx 0\). This does not mean that we are discarding this component; instead, we stipulate that, owing to its lack of pressure, this component may have only a negligible explicit coupling with the geometry. ### Dynamics of the Universe We consider the universe to be spatially flat, homogeneous & isotropic, and therefore adopt the FLRW metric to define the line element, \[ds^{2}=-dt^{2}+a(t)^{2}\left(dx^{2}+dy^{2}+dz^{2}\right). \tag{17}\] Here, \(a(t)\) represents the scale factor of the universe and \(t\) is the cosmic time. The Friedmann equations that describe the dynamics of the universe can be obtained by substituting this metric into the field equation (14), and using equations (15), (16), as, \[3H^{2}=\bar{\rho}_{vm}+\rho_{m}=\rho_{vm}+\tilde{\lambda}\left(3\rho_{vm}-p_{vm}\right)+\rho_{m} \tag{18}\] \[2\dot{H}+3H^{2}=-\bar{p}_{vm}=-\left[p_{vm}+\tilde{\lambda}\left(3p_{vm}-\rho_{vm}\right)\right]. \tag{19}\] Here, \(H=\dot{a}/a\) represents the Hubble parameter of the universe, where the over-dot signifies a derivative with respect to the cosmic time \(t\). Note that we have re-scaled \(\tilde{\lambda}=\lambda c^{4}/8\pi G\) and have chosen \(8\pi G/c^{4}=1\). Also, the effective energy density of the viscous matter turns out to be \(\bar{\rho}_{vm}=\rho_{vm}+\tilde{\lambda}\left(3\rho_{vm}-p_{vm}\right)\), as evident from equation (18), and the corresponding effective pressure, as seen from (19), is \(\bar{p}_{vm}=p_{vm}+\tilde{\lambda}\left(3p_{vm}-\rho_{vm}\right)\). As stated previously, the effective pressure of the vDM component, i.e., \(p_{vm}\), takes the form, \[p_{vm}=p_{vm}^{0}+\Pi=\omega\rho_{vm}+\Pi. \tag{20}\] Here, \(\Pi\) represents the bulk viscous pressure, and according to Eckart's theory \(\Pi=-3\zeta H\), where \(\zeta\) represents the coefficient of bulk viscosity. Accordingly, (19) can now be expressed as, \[2\dot{H}+3H^{2}=-\left[\omega(1+3\tilde{\lambda})-\tilde{\lambda}\right]\rho_{vm}-(1+3\tilde{\lambda})\Pi. \tag{21}\] Note that we can also re-interpret this equation as, \[2\dot{H}+3H^{2}=-\left[\bar{p}_{vm}^{0}+\bar{\Pi}\right], \tag{22}\] where, \[\bar{p}_{vm}^{0}=\left[\omega(1+3\tilde{\lambda})-\tilde{\lambda}\right]\rho_{vm}=\bar{\omega}\rho_{vm} \tag{23}\] is the coupled kinetic pressure with a modified equation of state parameter \(\bar{\omega}\) and, \[\bar{\Pi}=(1+3\tilde{\lambda})\Pi, \tag{24}\] is the coupled bulk viscous pressure. The corresponding continuity equations for the non-interacting cosmic components are given below. For the vDM component it is, \[\dot{\rho}_{vm}+3H\left(\rho_{vm}+p_{vm}\right)=-\frac{\tilde{\lambda}\left(\dot{\rho}_{vm}-\dot{p}_{vm}\right)}{1+2\tilde{\lambda}}, \tag{25}\] or alternatively, \[\dot{\bar{\rho}}_{vm}+3H\left(\bar{\rho}_{vm}+\bar{p}_{vm}\right)=0. \tag{26}\] And for CDM we have, \[\dot{\rho}_{m}+3H\rho_{m}=0. \tag{27}\]
For the CDM component, using (27) and recalling that \(H=\dot{a}/a\) immediately implies the solution, \[\rho_{m}=\rho_{m}^{0}a^{-3} \tag{28}\] where \(\rho_{m}^{0}\) is the present value of the CDM density. Note that all the equations mentioned in this section reduce to the conventional equations of the standard model if one sets \(\lambda=\kappa=0\). We will now proceed to obtain the constraints on the model parameters subject to the NEC, CEC and SLT requirements. ## 3 Constraints on model parameters due to NEC and CEC In this section we develop the general constraints on the model parameters, for a late accelerating universe, subject to the NEC and the CEC (critical energy condition). * The NEC, as mentioned before, demands that the magnitude of the bulk viscous pressure of the fluid is much less than its equilibrium kinetic pressure, i.e., \(|\Pi/p_{vm}^{0}|\ll 1\). * The CEC means that the energy density of vDM should be non-negative (\(\rho_{vm}\geq 0\)). In imposing these constraints we consider both negative and positive possibilities for the viscous pressure. In almost all the literature on Einstein gravity, only the first choice (i.e., \(\Pi<0\)) is used, primarily because only a negative \(\Pi\) can generate late acceleration while at the same time satisfying the SLT. However, in the present context, the second possibility, \(\Pi>0\), which corresponds to a negative viscous coefficient, cannot be ruled out. This is because a positive \(\Pi\) can have a negative minimal coupling to gravity, causing a negative effective coupled viscous pressure (\(\bar{\Pi}\)) and thus still cause the late accelerated expansion. That is, from (24), if \(\tilde{\lambda}<-1/3\), then \(\bar{\Pi}\) can be negative even if \(\Pi>0\). In such a case, the negativity of \(\bar{\Pi}\) will be due to the negative nature of the coupling parameter \(\tilde{\lambda}\). However, to accept the viability of having a positive viscous pressure in this modified gravity, one must also investigate the entropy evolution equation associated with the vDM component and check whether it satisfies the SLT, which we will do in Sec. 4. ### Constraints based on NEC To formulate the constraints, let us first rewrite the NEC given in (1) in a more convenient form as, \[-1\ll\frac{\Pi}{p_{vm}^{0}}\ll 1 \tag{29}\] By combining the Friedmann equations (18) and (19) we obtain the acceleration equation, \[\frac{\ddot{a}}{a}=-\frac{1}{6}\left((3+8\tilde{\lambda})(p_{vm}^{0}+\Pi)+\rho_{vm}+\rho_{m}\right). \tag{30}\] Now, imposing the condition for accelerated expansion, \(\ddot{a}/a>0\), leads to \[(3+8\tilde{\lambda})(p_{vm}^{0}+\Pi)+\rho_{vm}+\rho_{m}<0, \tag{31}\] with \(\rho_{m}>0\) and \(\rho_{vm}>0\) always. Notice that the above inequality depends on the sign of \((3+8\tilde{\lambda})\), and based on that we can have two distinct cases: #### 3.1.1 For \((3+8\tilde{\lambda})<0\) (or \(\tilde{\lambda}<-3/8\)) Dividing both sides of (31) by \(3+8\tilde{\lambda}\) causes a flip in the inequality, since \(\tilde{\lambda}<-3/8\). It can then be re-written as, \[-\Pi<p_{vm}^{0}+\frac{(\rho_{vm}+\rho_{m})}{3+8\tilde{\lambda}}. \tag{32}\] Further, a division by \(p_{vm}^{0}\) on both sides results in, \[-\frac{\Pi}{p_{vm}^{0}}<1+\frac{\left(1+\frac{\rho_{m}}{\rho_{vm}}\right)}{\omega(3+8\tilde{\lambda})}. \tag{33}\] where \(\omega=p_{vm}^{0}/\rho_{vm}>0.\) For \(\Pi<0\), the left-hand side of this inequality is strictly positive. For the NEC to be satisfied, the value of the second term on the right-hand side of the above inequality should be less than 0.
However, since the left-hand side of the inequality is strictly positive, the right-hand side cannot be negative. Hence, the NEC remains satisfied only when the value of \((1+\rho_{m}/\rho_{vm})/(\omega(3+8\tilde{\lambda}))\) lies between 0 and -1. Then, for the case where \(\Pi>0\), we express the above inequality as, \[\frac{\Pi}{p_{vm}^{0}}>-1-\frac{\left(1+\frac{\rho_{m}}{\rho_{vm}}\right)}{\omega(3+8\tilde{\lambda})}. \tag{34}\] This is done to ensure positivity of the left-hand side of the inequality. Notice that, according to (34), the NEC is not necessarily violated during the accelerated expansion if the value of the second term on the right-hand side is greater than -2. For satisfying the NEC throughout the expansion, one also needs to constrain the model parameters by demanding a decelerated expansion of the universe at early times. Interestingly, even though the inequalities (33) and (34) were formulated by demanding an accelerated expansion of the universe, the same inequalities can be used to investigate the validity of the NEC during the prior deceleration phase. For a decelerating universe one requires \(\ddot{a}/a<0\), and in this context (31) flips direction, which will in turn flip the inequalities (33) and (34). This leads to the following two cases. \[\textbf{For }\Pi<0:\ \ \ \ -\frac{\Pi}{p_{vm}^{0}}>1+\frac{\left(1+\frac{\rho_{m}}{\rho_{vm}}\right)}{\omega(3+8\tilde{\lambda})}. \tag{35}\] \[\textbf{For }\Pi>0:\ \ \ \ \frac{\Pi}{p_{vm}^{0}}<-1-\frac{\left(1+\frac{\rho_{m}}{\rho_{vm}}\right)}{\omega(3+8\tilde{\lambda})}. \tag{36}\] Analyzing these expressions, we see that the NEC in the case \(\Pi<0\) is not necessarily violated if the second term on the right-hand side is constrained to a value less than 0. Similarly, the NEC in the case \(\Pi>0\) will be satisfied if the value of the second term on the right-hand side is greater than -2. However, since the left-hand side of (36) is strictly positive, the second term on the right-hand side must have a value less than -1. Hence, for satisfying the NEC during decelerated expansion in the case where \(\Pi>0\), we must constrain the value of \((1+\rho_{m}/\rho_{vm})/(\omega(3+8\tilde{\lambda}))\) to lie between -1 and -2. In short, in order to satisfy the NEC associated with vDM throughout the expansion, we must constrain the value of \((1+\rho_{m}/\rho_{vm})/(\omega(3+8\tilde{\lambda}))\) to lie between 0 and -1 for the case \(\Pi<0\), and between -2 and -1 for the case \(\Pi>0\). #### 3.1.2 For \((3+8\tilde{\lambda})>0\) (or \(\tilde{\lambda}>-3/8\)) Rearranging the inequality (31) and dividing both sides by \((3+8\tilde{\lambda})\), we obtain, \[-\Pi>p_{vm}^{0}+\frac{(\rho_{vm}+\rho_{m})}{3+8\tilde{\lambda}}. \tag{37}\] Since \((3+8\tilde{\lambda})>0\), the inequality does not flip direction. If both sides of this inequality are now divided by \(p_{vm}^{0}\), we obtain, \[-\frac{\Pi}{p_{vm}^{0}}>1+\frac{\left(1+\frac{\rho_{m}}{\rho_{vm}}\right)}{\omega(3+8\tilde{\lambda})}. \tag{38}\] In this case, since both \(\omega\) and \(3+8\tilde{\lambda}\) are greater than zero, the second term on the right-hand side remains positive definite. This makes the overall term on the right-hand side of the above inequality always greater than one, and hence the NEC is violated at all times. That is, for \(\Pi<0\), the ratio \(-\Pi/p_{vm}^{0}\) will always be greater than 1, and for the case \(\Pi>0\), the ratio \(\Pi/p_{vm}^{0}\) will always be less than -1, which is of course unacceptable.
Hence, during the accelerated expansion of the universe, there is no scenario where the NEC is satisfied for the vDM fluid having \(\tilde{\lambda}>-3/8\) and \(\omega>0\). ### Constraints based on CEC Here we obtain the constraints corresponding to the CEC, i.e., \(\rho_{vm}>0\) (or equivalently \(\Omega_{\rho_{vm}}>0\)). To obtain the constraints, we use the Friedmann equation obtained by combining (18) with (20), given as, \[3H^{2}=[1+\tilde{\lambda}(3-\omega)]\rho_{vm}-\tilde{\lambda}\Pi+\rho_{m}. \tag{39}\] On rearranging the above equation we get, \[\Omega_{\rho_{vm}}=\frac{\rho_{vm}}{3H^{2}}=\frac{1+\tilde{\lambda}\Omega_{\Pi}-\Omega_{\rho_{m}}}{1+\tilde{\lambda}(3-\omega)}, \tag{40}\] where \(\Omega_{\rho_{m}}=\rho_{m}/3H^{2}\) and \(\Omega_{\Pi}=\Pi/3H^{2}\). Then, the CEC requirement \(\Omega_{\rho_{vm}}>0\) implies the constraint, \[\frac{1+\tilde{\lambda}\Omega_{\Pi}-\Omega_{\rho_{m}}}{1+\tilde{\lambda}(3-\omega)}>0. \tag{41}\] Analyzing the above inequality, we learn that the CEC can hold in the following two distinct cases. * For \(1+\tilde{\lambda}\Omega_{\Pi}-\Omega_{\rho_{m}}>0\) and \(1+\tilde{\lambda}(3-\omega)>0\), we get the corresponding constraints as, \[\Omega_{\Pi}<\frac{\Omega_{\rho_{m}}-1}{\tilde{\lambda}}\quad\text{and}\quad(3-\omega)<\frac{-1}{\tilde{\lambda}}\] (42) * For \(1+\tilde{\lambda}\Omega_{\Pi}-\Omega_{\rho_{m}}<0\) and \(1+\tilde{\lambda}(3-\omega)<0\), we get the constraints as, \[\Omega_{\Pi}>\frac{\Omega_{\rho_{m}}-1}{\tilde{\lambda}}\quad\text{and}\quad(3-\omega)>\frac{-1}{\tilde{\lambda}}\] (43) It is interesting to note that, for \(\omega\in(0,1]\) and \(\tilde{\lambda}<0\), both of the above cases are applicable, whereas for the same range of \(\omega\) but with \(\tilde{\lambda}>0\), only the second case is valid, because \(3-\omega<-1/\tilde{\lambda}\) cannot hold for any \(\tilde{\lambda}>0\); hence we omit the first case. Since we are concerned with the scenario where the NEC is satisfied, which occurs for \(\tilde{\lambda}<-3/8\), both of these inequalities are applicable. ## 4 Entropy Production and Second Law of Thermodynamics In this section we will consider the entropy generation and the consequent constraints, if any, on the parameters. In the context of (3+1) Einstein's gravity, for the validity of the SLT, the viscous coefficient must be greater than zero (i.e., \(\zeta>0\)) [65]. Here, we will show that such a condition is not mandatory in the present model. The first law of thermodynamics is given by, \[dQ=dU+pdV, \tag{44}\] where \(dQ=\mathcal{T}dS\) is the heat energy that enters or leaves the system, \(dU\) is the total change in the internal energy of the system and \(pdV\) is the amount of work done by (or on) the system. Here, \(\mathcal{T}\) is the temperature and \(dS\) is the change in entropy. For the vDM component, contained in the three-space volume \(V=V_{0}a^{3}\), with total internal energy \(U=\rho_{vm}V\), the first law takes the form, \[\mathcal{T}dS=\left(\rho_{vm}+p_{vm}^{0}\right)dV+Vd\rho_{vm}. \tag{45}\] Using the definition of the particle number density, \(n=N/V\), we can rewrite (45) as, \[\frac{\mathcal{T}dS}{N}=\frac{-\left(\rho_{vm}+p_{vm}^{0}\right)}{n^{2}}dn+\frac{d\rho_{vm}}{n}. \tag{46}\] This equation can be further modified into a relation for the entropy production rate as, \[\mathcal{T}n\dot{s}=\frac{-\left(\rho_{vm}+p_{vm}^{0}\right)}{n}\dot{n}+\dot{\rho}_{vm}. \tag{47}\] Here, \(\dot{s}=(1/N)(dS/dt)\) is the entropy production rate per particle.
We assume that the particle four-current satisfies the condition \(n^{\alpha}_{\ ;\alpha}=0\) [66, 3], because a violation of this leads to a particle creation process [49], which gives a non-equilibrium description of the evolution of the fluid. Since we require the fluid to have a near-equilibrium state, we consider the entropy production in the present model as arising from the combination of two processes: non-adiabaticity in the expansion [50] (arising from the minimal coupling) and the dissipative effect of bulk viscosity. As a result of these, the number density evolves as, \[\dot{n}+3Hn=0. \tag{48}\] Using (25) and (48), we can re-write (47) as, \[\mathcal{T}n\dot{s}=-\left\{\frac{-\tilde{\lambda}\left(\dot{p}_{vm}-\dot{\rho}_{vm}\right)}{1+2\tilde{\lambda}}+3\Pi H\right\}. \tag{49}\] For a fluid to satisfy the SLT, it is required that \(S^{\alpha}_{\ ;\alpha}\geq 0\), where \(S^{\alpha}\) is the entropy flow vector. In Eckart theory, the entropy flow vector is defined as \(S^{\alpha}=sn^{\alpha}\). It is then easy to see that, given (48), we have \(S^{\alpha}_{\ ;\alpha}=n\dot{s}\). Hence, for the SLT to be satisfied, it is required that \(n\dot{s}\geq 0\), and for this, the entire quantity inside the curly brackets on the right-hand side of (49) must be non-positive. This in turn can be satisfied with \(\Pi<0\) or \(\Pi>0\). **For \(\Pi<0\):** the overall term inside the curly brackets can be negative in two different ways, (a) \(\tilde{\lambda}\left(\dot{p}_{vm}-\dot{\rho}_{vm}\right)/(1+2\tilde{\lambda})>0\) and (b) \(\tilde{\lambda}\left(\dot{p}_{vm}-\dot{\rho}_{vm}\right)/(1+2\tilde{\lambda})<0\), provided that its norm satisfies \(\left|\tilde{\lambda}\left(\dot{p}_{vm}-\dot{\rho}_{vm}\right)/(1+2\tilde{\lambda})\right|<\left|3\Pi H\right|\). **For \(\Pi>0\):** the entropy production is positive if \(\left|\tilde{\lambda}\left(\dot{p}_{vm}-\dot{\rho}_{vm}\right)/(1+2\tilde{\lambda})\right|>\left|3\Pi H\right|\), provided \(\tilde{\lambda}\left(\dot{p}_{vm}-\dot{\rho}_{vm}\right)/(1+2\tilde{\lambda})>0\). As we have already discussed (please refer to Sec. 3.1), both \(\Pi<0\) and \(\Pi>0\) are compatible with explaining the late accelerating epoch while satisfying the NEC for vDM, and from the above relations it is clear that both these cases are also compatible with the SLT. Note that, while extracting the values of the model parameters during the data analysis, we will consider these conditions as well. ## 5 Evolution of Hubble parameter In this section we obtain the exact evolution of the Hubble parameter by considering a suitable functional form for the coefficient of bulk viscosity. In the literature, the coefficient of bulk viscosity is most often assumed to be a function of the expansion rate, its time derivative and/or the energy density. But in general, \(\zeta\) can be a function of all these variables, i.e., \(\zeta\sim\zeta(H,\dot{H},\rho_{vm})\). Recently [64], a parameterized form of the bulk viscous coefficient, \(\zeta\sim\zeta_{j}H^{1-2j}\rho_{vm}^{j}\), was proposed, where \(j\) is a suitable number. This general form offers a class of viscous models depending on the value of the constant number \(j\); some popular viscous models, such as \(\zeta\propto H\) and \(\zeta\propto\sqrt{\rho_{vm}}\), arise as special cases of this form. In the present analysis, we consider the ansatz \(\zeta=\zeta_{0}H+\zeta_{1}\rho_{vm}H^{-1}\), where \(\zeta_{0}\) and \(\zeta_{1}\) are constants, the values of which are to be determined by comparing the model with observational data.
Here, the first term corresponds to the \(j=0\) case and the second term corresponds to \(j=1\). In addition, we will also consider a special case of the viscous coefficient obtained from our general ansatz by setting \(\zeta_{0}=0\).

### Case I: with \(\zeta=\zeta_{0}H+\zeta_{1}\rho_{vm}/H\)

The total pressure \(p_{vm}\) can be determined by using the relation for the viscous pressure, \(\Pi=-3\zeta H\), in equation (20). With this, the Friedmann equations (18) and (21) take the form,

\[3H^{2}=\rho_{vm}\left[1+\tilde{\lambda}\left(3-\omega+3\zeta_{1}\right)\right]+3\tilde{\lambda}\zeta_{0}H^{2}+\rho_{m} \tag{50}\]

\[2\dot{H}+3H^{2}=\left[\tilde{\lambda}-\left(1+3\tilde{\lambda}\right)\left(\omega-3\zeta_{1}\right)\right]\rho_{vm}+3\zeta_{0}(1+3\tilde{\lambda})H^{2}. \tag{51}\]

Combining the above two equations and substituting for the vDM density using (40), we obtain a first-order differential equation for the Hubble parameter,

\[\frac{2\dot{H}}{3}+\left\{\frac{(1+2\tilde{\lambda})\left[1+\omega-3\zeta_{1}-(1+4\tilde{\lambda})\zeta_{0}\right]}{1+\tilde{\lambda}\left(3-\omega+3\zeta_{1}\right)}\right\}H^{2}=\left[\frac{(\omega-3\zeta_{1})(1+3\tilde{\lambda})-\tilde{\lambda}}{1+\tilde{\lambda}\left(3-\omega+3\zeta_{1}\right)}\right]\frac{\rho_{m}}{3} \tag{52}\]

Now, substituting for \(\rho_{m}\) using (28) and changing variables from \(t\) to \(a\), we get,

\[\frac{2aH}{3}\frac{dH}{da}+\left\{\frac{(1+2\tilde{\lambda})\left[1+\omega-3\zeta_{1}-(1+4\tilde{\lambda})\zeta_{0}\right]}{1+\tilde{\lambda}\left(3-\omega+3\zeta_{1}\right)}\right\}H^{2}=\left[\frac{(\omega-3\zeta_{1})(1+3\tilde{\lambda})-\tilde{\lambda}}{1+\tilde{\lambda}\left(3-\omega+3\zeta_{1}\right)}\right]\frac{\rho_{m}^{0}a^{-3}}{3} \tag{53}\]

The solution of this equation gives the evolution of the Hubble parameter as,

\[H=H_{0}\sqrt{\bar{\Omega}_{m}^{0}a^{-3}+\left(1-\bar{\Omega}_{m}^{0}\right)a^{-\beta}} \tag{54}\]

where \(\bar{\Omega}_{m}^{0}\) and \(\beta\) are constants given by,

\[\bar{\Omega}_{m}^{0}=\frac{\Omega_{m}^{0}}{1+\eta} \tag{55}\]

\[\beta=\frac{3(2\tilde{\lambda}+1)\left[(\omega+1)-3\zeta_{1}-(1+4\tilde{\lambda})\zeta_{0}\right]}{1+\tilde{\lambda}(3-\omega+3\zeta_{1})}, \tag{56}\]

and where \(\eta\) and \(\Omega_{m}^{0}\) are defined as,

\[\eta=\frac{(1+2\tilde{\lambda})(1+4\tilde{\lambda})\zeta_{0}}{\tilde{\lambda}-\omega-3\tilde{\lambda}\omega+3(1+3\tilde{\lambda})\zeta_{1}} \tag{57}\]

\[\Omega_{m}^{0}=\frac{\rho_{m}^{0}}{3H_{0}^{2}}. \tag{58}\]

From (54), it is clear that the Hubble parameter depends on the scale factor through two terms, \(a^{-3}\) and \(a^{-\beta}\). The first term is associated with the evolution of the CDM component, while the second describes the evolution of vDM. For this model to predict a transition from prior deceleration to a late accelerating era, it is imperative that \(\beta<2\). Note that for \(\beta<0\) the model becomes phantom-like, for \(\beta=0\) it becomes \(\Lambda\)CDM-like, and for \(0<\beta<2\) the model shows quintessence behavior.

### Case Ia: \(\zeta=\zeta_{1}\rho_{vm}/H\)

This special case arises if we assume \(\zeta_{0}=0\) in the previous model.
The Hubble parameter in this case takes the simplified form,

\[H=H_{0}\sqrt{\Omega_{m}^{0}a^{-3}+\left(1-\Omega_{m}^{0}\right)a^{-\beta}} \tag{59}\]

Since \(\zeta_{0}=0\), the parameter \(\eta\) vanishes, so that from (54) we have \(\Omega_{m}^{0}=\bar{\Omega}_{m}^{0}\), and the parameter \(\beta\) becomes,

\[\beta=\frac{3(2\tilde{\lambda}+1)\left[(\omega+1)-3\zeta_{1}\right]}{1+\tilde{\lambda}(3-\omega+3\zeta_{1})}. \tag{60}\]

Even though this model also shows a late accelerated expansion of the universe for \(\beta<2\), its dynamics is notably different from the previous one, as will become evident later from the evolution of the cosmological parameters. Unlike the previous model, the present one does not reduce to \(\Lambda\)CDM if \(\beta=0\) (refer to equation (60)). The case \(\beta=0\) implies two possibilities, i.e., \(\tilde{\lambda}=-1/2\) or \(\zeta_{1}=(1+\omega)/3\). The first case is immediately ruled out, since it causes (49) to diverge. In the second case, by setting \(\zeta_{1}=(1+\omega)/3\), the Hubble parameter given in equation (59) becomes independent of the coupling parameter \(\tilde{\lambda}\); because of this, the value of \(\tilde{\lambda}\) cannot be determined by comparing the model with data. However, the conservation equation (25) does depend on the value of \(\tilde{\lambda}\). This poses a serious issue, as the value of \(\tilde{\lambda}\) is indeterminate in this case. Hence, we omit the \(\Lambda\)CDM-like behavior of this case from further analysis.

### Case Ib: \(\Lambda\)CDM behavior as a particular case

The Hubble parameter for Case I, as given in (54), shows exact \(\Lambda\)CDM behavior when \(\beta=0\). According to (56), this can be realized in two different ways: the first with \(\tilde{\lambda}=-1/2\), and the second with \(\tilde{\lambda}=\epsilon=\left\{\left[(\omega+1)-3\zeta_{1}\right]/\zeta_{0}-1\right\}/4\). For \(\tilde{\lambda}=-1/2\), as explained in Case Ia, the entropy production given by equation (49) diverges and becomes indeterminate; hence, we neglect this scenario from further analysis. However, this is not the case for \(\tilde{\lambda}=\epsilon\). Considering \(\tilde{\lambda}=\epsilon\) in (57) and (55) leads to a new definition for \(\bar{\Omega}_{m}^{0}\),

\[\bar{\Omega}_{m}^{0}=\left[\frac{4\left(1-\omega+3\zeta_{1}\right)}{3-\omega+3\zeta_{1}+\zeta_{0}}-1\right]\Omega_{m}^{0}. \tag{61}\]

Accordingly, the evolution of the Hubble parameter in (54) reduces to a \(\Lambda\)CDM-like form,

\[H=H_{0}\sqrt{\bar{\Omega}_{m}^{0}a^{-3}+\big{(}1-\bar{\Omega}_{m}^{0}\big{)}}, \tag{62}\]

but with a modified dark matter density. This suggests that a vDM component minimally coupled to gravity which satisfies the condition \(\tilde{\lambda}=\epsilon\) is capable of showing cosmological-constant-like behavior in the late phase of the universe.

## 6 Evolution of cosmological observables using best estimated values of model parameters

In the previous section, we obtained the solutions for the Hubble parameter for the different cases. Now we analyze their evolution subject to the NEC, CEC, and SLT by applying the constraints developed in Sec. 3 and Sec. 4. For this purpose, we first extracted the model parameters, i.e., \(\tilde{\lambda},\omega,\zeta_{0},\zeta_{1}\), etc., subject to the conditions given in equations (33) and (34) (corresponding to the NEC), (41) (corresponding to the CEC), and (49) (corresponding to the SLT).
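Before turning to the data, it is worth noting that the closed-form solutions are straightforward to evaluate numerically. A minimal Python sketch of Eqs. (54) and (56) follows; the parameter values are illustrative placeholders, not the best-fit values reported below.

```python
import numpy as np

def beta(lam, omega, zeta0, zeta1):
    """Exponent beta from Eq. (56); Case Ia (Eq. (60)) is recovered with zeta0 = 0."""
    return (3 * (2 * lam + 1) * ((omega + 1) - 3 * zeta1 - (1 + 4 * lam) * zeta0)
            / (1 + lam * (3 - omega + 3 * zeta1)))

def hubble(a, H0, Om_bar, b):
    """H(a) from Eq. (54): H = H0 * sqrt(Om_bar*a^-3 + (1 - Om_bar)*a^-beta)."""
    return H0 * np.sqrt(Om_bar * a**-3 + (1 - Om_bar) * a**-b)

# Illustrative values, roughly in the admissible domain discussed above:
lam, omega, zeta0, zeta1 = -0.45, 0.9, 0.1, -0.2
b = beta(lam, omega, zeta0, zeta1)
a = np.linspace(0.1, 2.0, 5)
print(b, hubble(a, H0=70.0, Om_bar=0.3, b=b))
```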
We use Type Ia supernovae data [67] and observational Hubble data [68] for the computation. To keep the discussion focused, the details of the data analysis are given in Appendix A. The best estimated values of the model parameters thus obtained are provided in Table 1.

### Case I

Fig. 1 shows the evolution of the Hubble parameter for all three cases, while Fig. 2 compares the model prediction of the apparent magnitude of the SNe Ia at various redshifts with the corresponding observed values. From the two figures, we notice that the evolution of \(H\) in Case I and Case Ia is not much different from the \(\Lambda\)CDM-like behavior of Case Ib, so that all of them predict late acceleration. However, both Case I and Case Ia predict a far-future quintessence era for the universe, whereas Case Ib depicts an end de Sitter epoch. Also, since the best estimated value of the viscous coefficient \(\zeta_{1}\) is negative for both Case I and Case Ia (see Table 1), the viscous pressure in both cases is positive. One may now ask the obvious question: "If both the viscous pressure and the kinetic pressure are positive, what produces the negative pressure that is necessary for the recent cosmic expansion?" To answer this, we analyze (21), (22), (23) and (24), from which we find that, even though the bulk viscous pressure (\(\Pi\)) is positive, the coupling of vDM with gravity makes the coefficient \((1+3\tilde{\lambda})\) in equation (24) negative, which results in a negatively coupled bulk viscous pressure \(\bar{\Pi}\) (see Fig. 3). It is this negative pressure that causes the late accelerated expansion of the universe.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Model & Case I & Case Ia & Case Ib \\ Parameters & & & \\ \hline \(H_{0}\) & \(70.15^{+1.51}_{-1.50}\) & \(70.16^{+1.32}_{-1.30}\) & \(70.35^{+1.15}_{-1.17}\) \\ \(\tilde{\lambda}\) & \(-0.498^{+0.003}_{-0.002}\) & \(-0.497^{+0.002}_{-0.002}\) & \(0.88\) \\ \(\alpha\) & \(0.19^{+0.15}_{-0.15}\) & \(0.18^{+0.16}_{-0.15}\) & \(-\) \\ \(\Delta\) & \(0.53^{+0.35}_{-0.30}\) & \(0.49^{+0.34}_{-0.36}\) & \(-\) \\ \(\Omega_{m}^{0}\) & \(0.25^{+0.01}_{-0.01}\) & \(0.26^{+0.01}_{-0.01}\) & \(0.27^{+0.17}_{-0.15}\) \\ \(\zeta_{0}\) & \(0.19\) & \(-\) & \(-0.003^{+0.03}_{-0.03}\) \\ \(\zeta_{1}\) & \(-0.22\) & \(-0.23\) & \(0.50^{+0.32}_{-0.32}\) \\ \(\omega\) & \(0.87^{+0.12}_{-0.13}\) & \(0.87^{+0.11}_{-0.12}\) & \(0.48^{+0.30}_{-0.30}\) \\ M & \(19.35^{+0.02}_{-0.02}\) & \(19.33^{+0.02}_{-0.02}\) & \(19.34^{+0.01}_{-0.01}\) \\ \(\chi_{min}^{2}\) & \(1069.12\) & \(1068.94\) & \(1075.94\) \\ \(\chi_{dof}^{2}\) & \(0.979\) & \(0.978\) & \(0.983\) \\ \hline \end{tabular}
\end{table}
Table 1: Best estimated values of model parameters for each case, obtained by applying the respective model constraints and comparing the models to the OHD+SNe Ia data sets.

Figure 3: Evolution of different pressure terms associated with Case I, under best estimated values of the model parameters. \(p^{0}_{vm}\) and \(\Pi\) are the uncoupled kinetic and bulk viscous pressures, while \(\bar{p}^{0}_{vm}\) and \(\bar{\Pi}\) represent their respective coupled versions. Note that each pressure is scaled by a factor of \(3H_{0}^{2}\).

Figure 2: Graph comparing the measured apparent magnitudes with the values obtained from the theoretical model using the best estimated values of the model parameters.
Figure 1: Graph comparing the exact evolution of the Hubble parameter obtained using the best estimated values of the model parameters from the combined OHD+SNe Ia data sets.

Despite having a negative coupling parameter, the coupled kinetic pressure of the viscous fluid, \(\bar{p}_{vm}^{0}\) as in (23), is still positive, due to the positiveness of the coupled equation of state \(\bar{\omega}\). Also, it is easy to see that for parameter values in the domain \(\tilde{\lambda}\in(-1/2,-3/8)\) and \(\omega\in(0,1)\), the coupled kinetic pressure is always greater than zero and only the coupled viscous pressure can attain a negative value (\(\bar{\Pi}<0\)). Therefore, with a negative coupling constant \(\tilde{\lambda}\), the effective viscous pressure of vDM can generate the adequate negative pressure for causing the late accelerated expansion of the universe. In Fig. 4, we confirm that the effective pressure of the coupled viscous fluid is indeed negative, diverging as \(a\to 0\), and that later, in the far-future stage of evolution (i.e., as \(z\rightarrow-1\)), it asymptotically approaches zero from the negative side.

To check whether the case \(\Pi>0\) satisfies the SLT, we studied the evolution of the entropy production, as given in relation (49), with the scale factor, for the best estimated values of the model parameters. The corresponding graph is presented as Fig. 5. Analyzing it, we learn that the entropy rate \(\dot{s}\) is indeed positive, decreasing asymptotically to zero as \(a\rightarrow\infty\). It is thus evident from this figure that the entropy is increasing even with \(\Pi>0\).

The nature of the evolution of the deceleration parameter (\(q\)) can be studied using the standard relation,

\[q=-\frac{\ddot{a}}{aH^{2}}=-\left(1+\frac{\dot{H}}{H^{2}}\right). \tag{63}\]

Fig. 6 shows its evolution with redshift. Accordingly, the transition to the late accelerated epoch occurred at a redshift around \(z_{T}=0.797\), and the present value of the deceleration parameter is \(q_{0}\approx-0.568\). These values are very close to the estimates of the current concordance model. However, \(q\) does not approach the value \(-1\), as in the standard \(\Lambda\)CDM model, but a value greater than \(-1\). Hence, we conclude that the present case leads to an end quintessence behavior rather than a de Sitter one.

Figure 4: Evolution of effective pressure \(\bar{p}_{vm}\) with scale factor in the different models, corresponding to the best estimated values of the model parameters.

Figure 5: Evolution of (49) plotted against scale factor (\(a\)) for the best estimated values of the model parameters. Here, \(\tau\) has the definition \(\tau=Tn/3H_{0}^{3}\).

In Fig. 7, we show the evolution of the effective equation of state of the coupled vDM component, which can be obtained from \(\bar{p}_{vm}\) and \(\bar{\rho}_{vm}\) as,

\[\bar{\omega}_{eff}=\frac{\bar{p}_{vm}}{\bar{\rho}_{vm}}=\frac{\bar{p}_{vm}^{0}+\bar{\Pi}}{\bar{\rho}_{vm}}. \tag{64}\]

It shows that the effective equation of state of the coupled vDM component evolves from \(\bar{\omega}_{eff}\to 0\) as \(a\to 0\) to a value greater than \(-1\) as \(z\rightarrow-1\). Its present value is found to be around \(\bar{\omega}_{eff}\approx-0.958\), which is very close to \(-1\). However, since the effective equation of state of vDM saturates at a value greater than \(-1\), we can again confirm that the coupled vDM is mimicking a quintessence dark energy component.
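As an illustration of Eq. (63), the following minimal sketch evaluates \(q\) by finite-differencing the closed-form \(H(a)\) of Eq. (54); the parameter values are placeholders, not the fitted ones.

```python
import numpy as np

def hubble(a, H0=70.0, Om=0.27, beta=0.5):
    """H(a) from Eq. (54); Om and beta are illustrative placeholders."""
    return H0 * np.sqrt(Om * a**-3 + (1 - Om) * a**-beta)

def deceleration(a, h=1e-5, **kw):
    """q = -1 - Hdot/H^2 = -1 - a*H'(a)/H(a), using a central finite difference."""
    H = hubble(a, **kw)
    dHda = (hubble(a + h, **kw) - hubble(a - h, **kw)) / (2 * h)
    return -1.0 - a * dHda / H

z = np.array([0.0, 0.5, 1.0])
print(deceleration(1.0 / (1.0 + z)))  # q at a few sample redshifts
```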
At this juncture, it is to be noted that the inherent or barotropic equation of state \(\omega\) of the vDM (see equation (23)), corresponding to its kinetic pressure, has a value around \(\omega\approx 0.87\) (close to a stiff-matter-like fluid). But due to its minimal coupling with geometry (through the parameter \(\tilde{\lambda}\)), it attains a 'coupled equation of state' (23) with a value around \(\bar{\omega}\approx 0.067\). This shows that the vDM fluid, despite having an almost stiff-matter-like equation of state, behaves like a viscous warm-dark-matter-like fluid owing to its negative minimal coupling to gravity.

We obtained the age of the universe predicted by this model by integrating the inverse expansion rate (i.e., \(dt/da=[aH(a)]^{-1}\)) as,

\[t_{0}-t_{b}=\int_{0}^{a_{0}}\frac{da}{aH(a)}. \tag{65}\]

Here, \(t_{0}\) represents the present time (corresponding to \(a=a_{0}=1\)) and \(t_{b}\) is the time at which the big bang (corresponding to \(a=0\)) occurred. Hence, the interval \(t_{0}-t_{b}\) represents the current age of the universe since the big bang. Using the extracted model parameters, the calculated age is approximately 13.91 Gyrs, which is very close to, but slightly greater than, the \(\Lambda\)CDM prediction. Finally, by analyzing Fig. 8 and Fig. 9, we confirm that this model is well behaved under the NEC and CEC conditions.

Figure 6: Evolution of the deceleration parameter with redshift for the best estimated values of the model parameters.

Figure 7: Evolution of the effective equation of state (\(\bar{\omega}_{eff}\)) of the minimally coupled vDM component with redshift in each model, for the best estimated values of the model parameters.

Figure 8: Variation in the NEC with scale factor, graphed using the best estimated values of the model parameters. The green line represents the boundary where \(\Pi/p_{vm}^{0}=1\).

Figure 9: Evolution of the energy density associated with each component against change in scale factor for Case I. Note that each axis is scaled logarithmically.

### Case Ia

The evolution of the Hubble parameter in this case, included in Fig. 1, implies a transition into accelerated expansion at the late stage of the evolution. The behavior of this model is similar to that of Case I and is not very different from that of \(\Lambda\)CDM. This model also predicts the magnitudes of the distant supernovae, as is evident from Fig. 2. However, a distinguishing feature of this case is that the ratio of the bulk viscous pressure to the kinetic pressure is a constant, satisfying the relation \(|\Pi|/p_{vm}^{0}=3|\zeta_{1}|/\omega\). From the best estimated values of these parameters (and from Fig. 8), it is evident that this ratio is less than one, which confirms the validity of the NEC. The constancy of the ratio of \(\Pi\) and \(p_{vm}^{0}\) also implies similar evolutionary behavior for \(\Pi\) and \(p_{vm}^{0}\), as can be seen from Fig. 10. The figure also shows that \(\Pi\), \(p_{vm}^{0}\) and \(\bar{p}_{vm}^{0}\) are always positive, while the coupled viscous pressure, which is responsible for the late accelerated expansion of the universe, remains negative. The effective pressure, which comprises the coupled kinetic and bulk viscous pressures of vDM, approaches zero from the negative side as \(a\rightarrow\infty\), as shown in Fig. 4. Another feature which distinguishes this model from the previous case is the constancy of the effective equation of state, with a value of around \(\bar{\omega}_{eff}=-0.946\).
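For reference, the age integral (65) used for Case I (and for Case Ia below) reduces to a one-line quadrature once \(H(a)\) is known; a minimal sketch with placeholder parameters (not the Table 1 values):

```python
import numpy as np
from scipy.integrate import quad

KM_PER_MPC = 3.0857e19  # km in one megaparsec
GYR_IN_S = 3.156e16     # seconds in one gigayear

def age_gyr(H0=70.0, Om=0.27, beta=0.5):
    """Age from Eq. (65): t0 - tb = integral of da/(a H(a)), with H in km/s/Mpc."""
    H = lambda a: H0 * np.sqrt(Om * a**-3 + (1 - Om) * a**-beta)
    integral, _ = quad(lambda a: 1.0 / (a * H(a)), 0.0, 1.0)
    return integral * KM_PER_MPC / GYR_IN_S  # convert (s Mpc/km) to Gyr

print(age_gyr())
```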
The constancy of \(\bar{\omega}_{eff}\) shows that the minimally coupled vDM component effectively mimics a quintessence dark energy component with a constant equation of state. However, it is to be noted that the barotropic equation of state \(\omega\) of the vDM implies stiff-matter-like behavior (refer to Table 1), which on coupling with spacetime acquires the character of a quintessence fluid with a constant negative equation of state. The evolution of the deceleration parameter (\(q\)) given in Fig. 6 reveals that the effect of the negative equation of state of vDM dominates only at the later stage, at which the universe enters an accelerating epoch. The transition to the late accelerated epoch is found to have occurred at a redshift of \(z_{T}=0.798\), and the present value of \(q\) is around \(q_{0}\approx-0.553\). Following this, we estimated the age of the universe predicted by this model, using (59) and (65), to be around 13.87 Gyrs, which is very close to the age predicted by the \(\Lambda\)CDM model. Finally, we analyzed the evolution of the density of vDM by plotting Fig. 11. It is clear that the density of vDM is always greater than zero, which confirms the validity of the CEC.

Figure 10: Evolution of different pressure terms associated with Case Ia, under best estimated values of the model parameters.

### Case Ib

In a previous section, we showed that the model in this special case is dynamically equivalent to the standard \(\Lambda\)CDM model. Now, let us analyze the behavior of the cosmological parameters using the best estimated values of the model parameters given in Table 1. Compared to the previous cases (Case I & Ia), we found that, when the NEC constraints are imposed on the system while simultaneously requiring the condition \(\tilde{\lambda}=\epsilon\), the density of vDM, i.e., \(\rho_{vm}\), becomes negative at certain epochs of the evolution of the universe. It then turns out that, if one lifts the NEC constraints while evaluating the model parameters, this negativity in the energy density disappears and at the same time the model retains its \(\Lambda\)CDM nature. Hence, we investigate this special case without enforcing the NEC. We point out that our motivation for considering this case is simply to find under what conditions the mixed matter component model shows \(\Lambda\)CDM-like behavior.

Fig. 1 and Fig. 2 show that this model exhibits a good fit to both the OHD and SNe Ia data, and asymptotically evolves towards a de Sitter epoch. The transition into the recent acceleration is found to have occurred at a redshift of \(z_{T}=0.717\) (see Fig. 6), while the present value of the deceleration parameter is \(q_{0}\approx-0.575\). Fig. 7 reveals the evolution of the effective equation of state of the coupled vDM component, which suggests a decay from \(\bar{\omega}_{eff}\to 0\) at \(a\to 0\) to \(\bar{\omega}_{eff}\rightarrow-1\) as \(a\rightarrow\infty\). Here, we notice that the effective equation of state \(\bar{\omega}_{eff}\) approaches zero, as \(a\to 0\), much faster than in Case I. From Fig. 8, we see that, even though the NEC is not satisfied for this model, its value saturates at around \(|\Pi/p_{vm}^{0}|\approx 3.1\) in the far-future evolution. In addition, from Fig. 12 and Fig. 5, we find that the model is well behaved under both the CEC and SLT conditions. In Fig. 4, it is shown that the effective pressure \(\bar{p}_{vm}\) of the coupled fluid remains constant throughout the evolution.
By analyzing (64), we notice that this is due to the complementary behaviors of the effective equation of state and the density of the coupled dark matter fluid. However, it should be noted that, even if the effective pressure of vDM remains constant, the kinetic pressure \(\bar{p}_{vm}^{0}\) and the viscous pressure \(\bar{\Pi}\) of the coupled vDM fluid vary, as shown in Fig. 13. In the asymptotic far future, both the effective pressure (\(\bar{p}_{vm}\)) and the effective density (\(\bar{\rho}_{vm}\)) become constant, and hence the net effective equation of state of the coupled vDM fluid becomes \(-1\). The age predicted by this model is determined using (62) and (65) and is found to be 13.61 Gyrs, which is very close to the value predicted by the standard model.

Figure 11: Evolution of different energy density terms with scale factor for Case Ia. Notice that the two lines are not perfectly horizontal but have a slight inclination. This suggests that the vDM density is not constant but is indeed decreasing over time.

Figure 12: Evolution of the energy density associated with each component against change in scale factor for Case Ib.

Figure 13: Evolution of different pressure terms associated with Case Ib, under best estimated values of the model parameters.

## 7 Results and Discussion

In explaining the recent accelerated expansion of the universe using dissipative effects in the matter sector in the context of Einstein's gravity, it was shown that the presence of a cosmological constant is essential to satisfy the NEC during the evolution of the viscous fluid, and even then only for particular choices of the bulk viscous coefficient [23, 24, 25]. However, in such models, where both dark energy and viscous matter are considered, the late accelerated expansion is driven mainly by the dark energy component, while the contribution of the viscous matter remains significantly small. This means that, in Einstein's gravity, it is not possible to have a late accelerated expansion of the universe driven by bulk viscous matter while maintaining its near-equilibrium state. Hence it is necessary to explore the possibility of generating late acceleration using viscous effects in the matter sector while satisfying the NEC, without the need for dark energy, in a modified gravity context. In the present work, we have explored such a possibility in the context of \(f(R,T)=R+2\lambda T_{vm}\) gravity.

We showed that, by considering mixed dark matter components (a non-interacting mixture of viscous DM and inviscid CDM) in the context of \(R+2\lambda T_{vm}\) gravity and imposing suitable constraints on the model parameters, it is possible to attain a viscous-driven late accelerated expansion of the universe while simultaneously satisfying the NEC for the dissipative matter, even in the absence of a cosmological constant. First, we formulated the general constraints on the model parameters by imposing the NEC, CEC, and SLT requirements on the vDM component. The constraints developed are general, since they were derived without assuming any phenomenological form for the bulk viscous coefficient. An in-depth analysis of the constraints showed that, in this modified gravity regime, the possibility of simultaneously satisfying the NEC, CEC, and SLT throughout the expansion exists only if the coupling parameter has a value in the domain \(\tilde{\lambda}\in(-1/2,-3/8)\). Interestingly, we also noticed that this was possible for both negative and positive viscous pressures (i.e., with both \(\Pi>0\) and \(\Pi<0\)).
One of the intriguing results obtained in this context, contrary to the result in Einstein gravity, was the possibility of having a positive viscous pressure for the vDM component while still having a late accelerated expansion of the universe. This result is a direct consequence of considering a vDM component with a negative coupling to geometry (i.e., with \(\tilde{\lambda}\in(-1/2,-3/8)\)). By analyzing (22), we learned that, for a late accelerated expansion of the universe generated by bulk viscosity, one must have the coupled viscous pressure \(\bar{\Pi}<0\). And from (23), we saw that this can occur in two distinct cases: either with \(\Pi<0\) and \(\tilde{\lambda}>-1/3\), or with \(\Pi>0\) and \(\tilde{\lambda}<-1/3\). Even though both cases can equally explain the late accelerated expansion of the universe, only the latter case, i.e., \(\Pi>0\) with \(\tilde{\lambda}<-1/3\), satisfies the NEC requirement for the vDM component (refer to Sec. 3.1). Then, to confirm the viability of having such a positive viscous pressure in this modified gravity, we investigated the entropy evolution associated with the vDM component and showed that, in the context of this modified gravity, it is possible to satisfy the SLT associated with the vDM component even with \(\Pi>0\).

A complete analysis of the model was then carried out by suitably choosing the viscous coefficient, keeping the NEC, CEC, and SLT requirements satisfied throughout the expansion. As the primary choice, we considered the viscous coefficient \(\zeta=\zeta_{1}\rho_{vm}/H+\zeta_{0}H\), which we investigated as Case I, and then considered \(\zeta=\zeta_{1}\rho_{vm}/H\) as its special case, which we labeled Case Ia. In both of these cases, the model predicted a universe which starts off from an initial big bang singularity, then undergoes a decelerated expansion followed by a late accelerated epoch, ending with a quintessence/de Sitter/phantom era in its far-future evolution. However, we found that a far-future de Sitter epoch for Case Ia is not favorable, as the value of the parameter \(\tilde{\lambda}\) in this scenario is indeterminate from the data analysis. For academic curiosity, we separately analyzed the special case where Case I shows \(\Lambda\)CDM-like behavior, as Case Ib. We then compared each of these models with the OHD+SNe Ia data, applying the model constraints, and obtained the best estimated values of the model parameters.

Comparison of the models with the observational data showed that, in addition to giving a good fit to the data, Case I and Case Ia satisfy all three necessary conditions (NEC, CEC and SLT) throughout the evolution. Both of these models depict a late accelerating universe which approaches a quintessence era in its far-future evolution. From the detailed analysis, we learned that the recent accelerated expansion in these models is driven by the coupled vDM component, and the past deceleration is obtained as a result of the CDM density dominating over the vDM density. We also learned that the viscous pressure (\(\Pi\)) in these cases is positive and hence cannot directly drive the accelerated expansion of the universe. However, owing to the negative minimal coupling with geometry, the effective coupled bulk viscous pressure (\(\bar{\Pi}\)) becomes negative and hence causes the late accelerated expansion.
Affirming the same, we also saw that, in the absence of viscous pressure (\(\Pi\)), it is not possible to achieve accelerated expansion in these models with any \(\tilde{\lambda}\in(-1/2,-3/8)\) for any \(\omega\in(0,1]\). From the analysis of Case Ib, we learned that the model was successful in predicting an end de Sitter phase for the universe; however, the NEC associated with the vDM component had to be violated. Nevertheless, this model obeys the CEC requirement, returns a good fit to the data, and predicts values of the cosmological parameters close to their standard values. Contrary to the previous cases (i.e., Case I and Ia), we found that the viscous pressure in this model is negative, and by studying the entropy evolution of the vDM component, we showed that this model is also in agreement with the SLT. For a quick comparison between the models, we have tabulated the values obtained for the cosmological observables corresponding to each case, together with the standard \(\Lambda\)CDM values, in Table 2.

To sum up, we found that the NEC associated with a vDM component with viscous coefficient \(\zeta=\zeta_{1}\rho_{vm}/H+\zeta_{0}H\) or \(\zeta=\zeta_{1}\rho_{vm}/H\) can be maintained throughout the evolution in \(R+2\lambda T_{vm}\) gravity, even in the absence of a cosmological constant or dark energy. The simultaneous NEC and CEC requirements lead to a scenario in which the bulk viscous pressure of the vDM component becomes positive but, owing to the negative minimal coupling with geometry, the effective pressure of the vDM fluid becomes negative, which then leads to the recent accelerated expansion of the universe. However, we saw that, in this modified gravity, having a positive bulk viscous pressure need not necessarily violate the SLT, hence allowing the possibility of a negative viscous coefficient. Finally, the models studied based on these constraints show quintessence behavior in their far-future evolution and, in addition to being a good fit to the data, the predicted values of the cosmological variables are significantly close to the standard values determined from the current concordance model.

## Appendix A Data Analysis

For estimating the best-fit values of the model parameters, we compare the analytical models with observational data. For this, we have chosen the Type Ia supernovae data (SNe Ia) [67], which contains a total of 1048 data points within a redshift range of \(0.01\leq z\leq 2.26\), and the observational Hubble data (OHD) [68], which contains 51 data points within a redshift range of \(0.07\leq z\leq 2.36\). Comparison of each model with the combined data set is done using a standard \(\chi^{2}\) analysis employing the Markov chain Monte Carlo (MCMC) estimation technique, utilizing the emcee Python package [72] within the lmfit Python library [73]. Since the constraints on the free parameters are expressions in the form of inequalities, they are implemented using the expression-bound techniques available in the lmfit library. For our analysis using the OHD data, we compare the values of the theoretical Hubble parameter \(H_{t}\), computed at different redshifts, with those in the observational Hubble data \(H_{o}\) measured at the same redshifts.
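As an illustration of this workflow, the following minimal sketch evaluates the OHD chi-square for a trial parameter set. The data arrays `z_obs`, `H_obs`, and `sigma` are hypothetical placeholders (the actual analysis uses the full data sets through emcee/lmfit); the formal definitions of the fit statistics follow.

```python
import numpy as np

# Hypothetical observed data points (placeholders; real values come from Ref. [68]):
z_obs = np.array([0.07, 0.40, 1.30])
H_obs = np.array([69.0, 95.0, 168.0])
sigma = np.array([19.6, 17.0, 17.0])

def H_model(z, H0, Om, beta):
    """Theoretical H(z) from Eq. (54), with a = 1/(1+z)."""
    a = 1.0 / (1.0 + z)
    return H0 * np.sqrt(Om * a**-3 + (1 - Om) * a**-beta)

def chi2_ohd(params):
    """Sum of variance-weighted squared residuals between model and data."""
    H0, Om, beta = params
    r = (H_model(z_obs, H0, Om, beta) - H_obs) / sigma
    return np.sum(r**2)

print(chi2_ohd([70.0, 0.27, 0.5]))
```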
The required \(\chi^{2}\) function, which is to be minimized, is then given by,

\[\chi^{2}_{OHD}(a,b,\ldots,H_{0})=\sum_{k=1}^{n}\frac{\left[H_{t}(z_{k},a,b,\ldots,H_{0})-H_{o}(z_{k})\right]^{2}}{\sigma_{k}^{2}} \tag{66}\]

where \(a,b,\ldots,H_{0}\) represent the model parameters whose best estimates are to be found, \(n\) is the total number of data points available for the analysis, and \(\sigma_{k}^{2}\) is the variance in the measured value of the \(k^{th}\) data point.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline Observables & \(\Lambda\)CDM & Case I & Case Ia & Case Ib \\ \hline Age (in Gyrs) & 13.79[69] & 13.91 & 13.87 & 13.61 \\ \(q^{0}\) & \(-0.55\)[70] & \(-0.568\) & \(-0.553\) & \(-0.575\) \\ \(z_{T}\) & 0.683[71] & 0.797 & 0.798 & 0.717 \\ \(\bar{\omega}^{0}_{vm}\) & - & \(-0.958\) & \(-0.946\) & \(-0.991\) \\ \(\omega_{DE}\) & \(-1\) & - & - & - \\ \hline \end{tabular}
\end{table}
Table 2: Comparison between the cosmological observables obtained for each case and the values predicted by the \(\Lambda\)CDM model.

For comparing the models with the Type Ia supernovae data (SNe Ia), we use the theoretical expression for the distance modulus \(\mu_{t}\) of the \(k^{th}\) supernova at redshift \(z_{k}\), given by,

\[\mu_{t}(z_{k},a,b,\ldots,H_{0})=m-M=5\log_{10}\left[\frac{d_{L}(z_{k},a,b,\ldots,H_{0})}{\text{Mpc}}\right]+25. \tag{67}\]

Here, \(m\) and \(M\) are the apparent and absolute magnitudes of the supernovae, and \(d_{L}\) is the luminosity distance defined for a flat universe. Throughout the analysis, we treat \(M\) as a nuisance parameter. The relation for the luminosity distance is given by,

\[d_{L}(z,a,b,\ldots,H_{0})=c(1+z)\int_{0}^{z}\frac{dz^{\prime}}{H(z^{\prime},a,b,\ldots,H_{0})}. \tag{68}\]

Hence, the required \(\chi^{2}\) function is,

\[\chi^{2}_{SNe}(a,b,\ldots,H_{0})=\sum_{k=1}^{n}\frac{\left[\mu_{t}(z_{k},a,b,\ldots,H_{0})-\mu_{o}(z_{k})\right]^{2}}{\sigma_{k}^{2}}. \tag{69}\]

For the combined data analysis using both the OHD and SNe Ia data sets, the \(\chi^{2}\) function to be minimized is given by,

\[\chi^{2}_{total}=\chi^{2}_{OHD}+\chi^{2}_{SNe}. \tag{70}\]

Using these equations, we perform the \(\chi^{2}\) minimization for each model and estimate the best-fit values of all model parameters. The viability of a given model with respect to the observational data is assessed by the chi-square per degree of freedom, defined as,

\[\chi^{2}_{dof}=\frac{\chi^{2}_{min}}{n-n_{p}}. \tag{71}\]

Here, \(n\) is the number of available data points and \(n_{p}\) is the number of model parameters. The model is considered a good fit to the data if \(\chi^{2}_{dof}\approx 1\); it overfits the data if \(\chi^{2}_{dof}\ll 1\) and is a bad fit to the data if \(\chi^{2}_{dof}\gg 1\). For restricting the values of the model parameters in each model, we consider the constraints developed in Sec. 3, with suitable modifications, and we also mention the domain of values chosen for bounding each parameter. From our detailed analysis using different sets of priors, we report only those cases that satisfy the NEC, CEC, and SLT simultaneously, except for Case Ib, where we neglected the NEC and varied the parameters freely.

**Case I:** For the general model given in (54), we considered the parameter bounds,

\[H_{0}\in[60,80]:\tilde{\lambda}\in(-0.5,-3/8]:\Delta\in[0,1]:\Omega^{0}_{m}\in[0,1]:\omega\in[0.5,1]:\alpha\in[0,0.5] \tag{72}\]

Here, we have introduced two dummy parameters for the purpose of implementing the inequality constraints that were developed based on the NEC and CEC requirements.
These are given as,

\[\Delta=\frac{3\zeta_{1}}{\omega}+\frac{\zeta_{0}\left[1+\tilde{\lambda}\left(3-\omega+3\zeta_{1}\right)\right]}{1-\tilde{\lambda}\zeta_{0}-\Omega^{0}_{m}} \tag{73}\]

\[\alpha=3\zeta_{1}+\omega \tag{74}\]

From (72), one may notice that we have chosen \(\omega\in[0.5,1]\) and have not considered \(\omega<0.5\); this is because, in such cases, the NEC and CEC were not simultaneously satisfied throughout the evolution.

**Case Ia:** For estimating the best-fit values of the parameters associated with the model (59), i.e., Case Ia, we assume the following priors for the model parameters,

\[H_{0}\in[60,80]:\tilde{\lambda}\in(-0.5,-3/8]:\Delta\in[0,1]:\Omega^{0}_{m}\in[0,1]:\omega\in[0.5,1]:\alpha\in[0,0.5] \tag{75}\]

In this special case, the dummy variables defined in (73) and (74) become,

\[\Delta=\frac{3\zeta_{1}}{\omega} \tag{76}\]

\[\alpha=3\zeta_{1}+\omega \tag{77}\]

Figure 14: Corner plot of the 2D posterior contours with one-sigma (68%), two-sigma (95%) and three-sigma (99.7%) confidence levels, together with the 1D marginalized posterior distributions of the model parameters, for the combined OHD+SNe Ia data, plotted using [74] for the general case (i.e., Case I).

**Case Ib:** For this special case, we consider only the CEC and SLT to be necessary requirements for the fluid, and postulate that the Eckart theory remains valid in the domain where the fluid is far from equilibrium. This is because, during the data analysis, when this special case was analyzed under the NEC, the energy density of the vDM component became negative in certain epochs of the evolution. Since we consider a violation of the CEC (a negative energy density for the vDM component) to be unacceptable behavior for any model, we had to exclude such a scenario from the analysis. However, it was observed that when the NEC constraints are lifted and only the CEC and SLT are applied, the model shows acceptable behavior. Hence, the priors chosen in this case are,

\[H_{0}\in[60,80]:\zeta_{1}\in[-1,1]:\zeta_{0}\in[-1,1]:\Omega_{m}^{0}\in[0,1]:\omega\in[0,1]:\tilde{\lambda}=\epsilon \tag{78}\]

The best estimated values of the model parameters associated with the general model and its special cases, extracted by following the \(\chi^{2}\) minimization technique, are provided in Table 1. Also, a corner plot with one-, two-, and three-sigma confidence levels for Case I is provided as Fig. 14.

## Acknowledgments

Vishnu A Pai acknowledges Cochin University of Science and Technology, Kochi, for providing financial support.
2309.09388
Effect of polydispersity on the transport and sound absorbing properties of three-dimensional random fibrous structures
A technique is proposed that uses a multi-scale approach to calculate transport properties of compressed felts using only image analysis and numerical calculations. From the image analysis, fiber diameter distribution and fiber orientation are determined. From a known porosity and the latter two characteristics, two representative elementary volumes (REV) are constructed: one based on the volume-weighted average diameter and one on an inverse volume-weighted average diameter. Numerical calculations on the former showed that it correctly estimates viscous and thermal permeabilities, while the latter correctly estimates tortuosity and viscous and thermal characteristic lengths. From these calculations, micro-macro analytical expressions are developed to estimate the transport properties of polydisperse composite felts based solely on open porosity, fiber diameter polydispersity, and fiber orientation. Good agreements are obtained between analytical predictions and measurements of transport properties. The predicted transport properties are also used in the Johnson-Champoux-Allard-Lafarge (JCAL) equivalent fluid model to predict the sound absorption coefficient of the felts. Excellent agreements are obtained with impedance tube measurements.
Q. V. Tran, C. Perrot, R. Panneton, M. T. Hoang, L. Dejaeger, V. Marcel, M. Jouve
2023-09-17T21:57:47Z
http://arxiv.org/abs/2309.09388v4
Effect of polydispersity on the transport and sound absorbing properties of three-dimensional random fibrous structures

###### Abstract

A technique is proposed that uses a multi-scale approach to calculate transport properties of compressed felts using only image analysis and numerical calculations. From the image analysis, fiber diameter distribution and fiber orientation are determined. From a known porosity and the latter two characteristics, two representative elementary volumes (REV) are constructed: one based on the volume-weighted average diameter and one on an inverse volume-weighted average diameter. Numerical calculations on the former showed that it correctly estimates viscous and thermal permeabilities, while the latter correctly estimates tortuosity and viscous and thermal characteristic lengths. From these calculations, micro-macro analytical expressions are developed to estimate the transport properties of polydisperse composite felts based solely on open porosity, fiber diameter polydispersity, and fiber orientation. Good agreements are obtained between analytical predictions and measurements of transport properties. The predicted transport properties are also used in the Johnson-Champoux-Allard-Lafarge (JCAL) equivalent fluid model to predict the sound absorption coefficient of the felts. Excellent agreements are obtained with impedance tube measurements.

keywords: Multiscale model, fibrous media, representative elementary volume, transport properties, sound absorption coefficient, compression effect

## 1 Introduction

Nonwoven fabrics are some of the most widespread man-made porous materials, used in many engineering fields including health and medical care, energy, and soundproofing applications. The main constituents of nonwovens are fibers that are linked together by cohesive bonds induced by the manufacturing process, in the form of fibrous networks with transverse isotropy. Nonwoven fibrous media with a wide diversity of physical and mechanical properties (Dirrenberger et al. [1], Altendorf et al. [2], Bosco et al. [3]) can be manufactured by tailoring the nature of the raw materials and the manufacturing process conditions (e.g., type of geometry, bale opening and weighing of the fibers, fiber web creation, thermal bonding, thickness adjustment and cutting). However, the links between composite nonwoven manufacturing parameters, the resulting fibrous microstructures, and the product performance are still not fully established. For example, the permeability \(k_{0}\) (Darcy [4]) and the viscous characteristic length \(\Lambda\) (Johnson et al. [5]) of felts often follow a nonlinear evolution with their porosity, the microstructural origins of which are still in question. Thus, the construction of the aforementioned links constitutes a subject of intense research. In particular, there is still a need for relevant multiscale and multiphysics models that could (1) account for the complexity of composite felt microstructures and the related transport and sound absorbing properties and (2) be implemented in numerical simulation tools for computer-aided design of composite felt applications or for monitoring of the composite felt manufacturing process itself. For that purpose, numerous theoretical studies have been conducted in the last decades. In most cases, composite felts are modeled as aligned fiber bundles (Berdichevsky and Cai [6], Boutin [7], Thiery and Boutin [8], Piegay et al. [9], Tarnow [10; 11; 12], Umnova et al.
[13], Semeniuk and Goransson [14], Semeniuk et al. [15]). These approaches often assume that the representative elementary volume of a composite felt can be reduced to the most basic geometric information, that is, porosity \(\phi\) and fiber size, so that it is based on a bicomposite cylindrical pattern made of an internal cylindrical fiber and an external fluid shell that ensures fluid connectivity. The proposed analytical models are interesting because they encapsulate the essential parts of the physics and are easily configurable. However, they do not account for the complexity of the geometry and the combined effect that spatial randomness in the pore space has on flow problems. To better understand the effects of visco-thermal micro-mechanisms on the values of the transport coefficients of composite felts, many fiber-scale numerical studies have been conducted (Koponen et al. [16], Martys and Garboczi [17], Tomadakis and Robertson [18; 19], Schladitz et al. [20], Umnova et al. [13], Altendorf and Jeulin [21], Peyrega et al. [22], Luu et al. [23; 24], He et al. [25], Soltani et al. [26], Tucny et al. [27]). For example, Luu et al. [23; 24] performed numerical simulations using networks of straight cylindrical fibers to investigate the effect of porosity, fiber radius, and fiber orientation on the in-plane and through-plane transport properties of fibrous media. Using random porous media built from two-dimensional models, Martys and Garboczi [17] demonstrated the important effect that spatial randomness in the pore space has on flow problems. This analysis showed that, in a random pore structure with a distribution of pore sizes, the viscous fluid flow will tend to go through the largest pore necks, decreasing the importance of the narrowest necks. They also highlighted that the sizes of the dynamically connected pore regions are not exactly the same for the electric and fluid flow cases (Martys and Garboczi [17]). In particular, for a fibrous material made of wood fibers with an open porosity \(\phi=0.64\), Peyrega and Jeulin [28] and Peyrega et al. [22] showed that the volume-weighted average radius \(r_{v}\) is an appropriate fiber radius scale to quantitatively predict its sound absorbing properties at normal incidence. This approach assumed a two-dimensional Boolean model of random cylinders composed of overlapping fibers, where the locations of the centers of the discs were determined according to a random Poisson point process. This analysis was extended to three-dimensional models for glass wool samples obtained with various processing parameters by He et al. [25]. Keeping in mind that, at fixed porosity, fewer fibers are introduced into a given volume when the fiber radius is volume-weighted, these results highlight that the \(r_{v}\) length scale provides an effective way to reconstruct the pore space. This space encompasses the largest pores, forming a continuous path for the flow of viscous fluids in actual fibrous media. These numerical results confirm the trends reported in several complementary experimental and semi-empirical studies (Delany and Bazley [29], Bies and Hansen [30], Miki [31], Garai and Pompoli [32], Manning and Panneton [33], Kerdudou et al. [34], Xue et al. [35], Pelegrinis et al. [36]). Briefly, they highlight (i) the central role of fiber distributions (in size and orientation) and (ii) the need for a proper characterization of the geometry and transport processes in polydisperse fiber structures.
This is particularly true for nonwovens that exhibit a wide distribution of fiber diameters and lengths [22; 25]. On the one hand, noticeable progress has been achieved in characterizing the transport parameters of porous materials thanks to dedicated testing devices (Stinson and Daigle [37], Leclaire et al. [38], Ayrault et al. [39]). These tests are interesting but still remain difficult to carry out, as they require permeable porous media to enable the propagation of ultrasonic waves through the thickness of the material. On the other hand, significant progress has also been achieved in finely characterizing the microstructures of nonwoven fibrous media with imaging techniques such as scanning electron microscope images (Luu et al. [23]) and optical granulomorphometry (He et al. [25]), or X-ray microtomography coupled with advanced image analysis procedures (Lux [40], Peyrega et al. [41], Depriester et al. [42]). For instance, He et al. investigated the effect of fiber distributions (orientation, length, diameter) on several transport parameters of low-density glass wools from optical granulomorphometry (length, diameter) and scanning electron microscope images (orientation) for ten products provided in two different classes of surface densities. Angular orientation and volume averaging of the fiber diameters were used to reconstruct virtual geometries and quantitatively predict the viscous permeabilities \(k_{0}\) of the corresponding samples. However, they did not fully capture the overall transport properties, in particular with respect to the high-frequency parameters (viscous \(\Lambda\) and thermal \(\Lambda^{\prime}\) characteristic lengths).

In light of the above, the objective of this study is to propose a multiscale model for the overall transport and long-wavelength sound-absorbing properties of composite felts, taking into account the appropriate descriptors of the polydisperse microstructure that can be obtained from images. For this purpose, two types of composite nonwovens with several compression ratios were manufactured and thermobonded. Their microstructures were characterized using scanning electron microscope images. We also characterized their transport and sound-absorbing properties. The combination of these data makes it possible to formulate relevant hypotheses for the architecture of fiber networks and their transport processes at the fiber scale. These features were then upscaled using homogenization with multiple scale asymptotic expansions for periodic structures (Sanchez-Palencia [43], Bensoussan et al. [44], Auriault et al. [45]). This method provides a rigorous framework to deduce the effective coefficients of importance and the effective equations that govern the macro-fields of the equivalent continuum media of composite felts. It also provides well-posed boundary value problems to be solved on representative elementary volumes (REVs) to estimate their macroscale properties. These problems are first solved numerically using the finite element method. Then, a second, semi-analytical multiscale model is proposed, approximating the numerical results by curve fitting and yielding unified models which express the effective coefficients of importance as functions of porosity, fiber orientations, and effective fiber radii. Predictions of the numerical and semi-analytical models are compared with experimental data and discussed.

## 2 Materials and experimental methods

### Felts

Two nonwoven materials are investigated (Fig. 1): a "cotton felt" and a "PET felt".
Raw materials entering into the initial composition (Tab. 1), together with the corresponding manufacturing process, are discussed in the following. Note that in the textile industry, the fineness (\(t\)) of the fibers is specified in dtex, a linear density from which the fiber size can be estimated. To calculate the fiber diameter from the fineness \(t\) (dtex) and the density \(\rho_{f}\) of the fiber material, the following formula is used: \(D_{f}=\sqrt{4t/(\pi\rho_{f})}\).

#### 2.1.1 Cotton felt

The fabrication of the cotton felt uses an airlay process, in which aerodynamic web forming, a dry procedure, forms a web out of a wide variety of staple fibers. The fibers leave a rotating drum into a turbulent air flow. Suction onto a perforated moving conveyor belt or a perforated drum leads to the formation of a random three-dimensional web structure (Handbook of nonwovens, Chap. 4 [46]; Gramsch et al. [47]). The input fiber material is a mixture of 75% shoddy fibers and 25% bicomponent fibers by mass. The core of the bicomponent fiber is made of PET and its surface of coPET, in a 1:1 ratio. The bicomponent fibers are homogeneous with circular cross sections, whereas the shoddy fibers, obtained after tearing of textile waste, are not homogeneous. The shoddy is made from a mixture of 55% cotton and 45% PET. In post-processing, the nonwoven material, called felt, is reinforced by thermobonding at a chosen compression ratio. Here, the bicomponent fibers have an adhesive effect.

#### 2.1.2 PET felt

The input fiber material is a mixture of 60% PET fibers and 40% bicomponent fibers by mass. The same bicomponent fibers are used as for the cotton felts. The fibers are homogeneous with circular cross sections and regular lengths. In web forming, the web is formed by a roller card. Fiber tufts and bundles are disentangled to form a parallel layer of fibers. The fibers in the card web have a lengthwise orientation. This card web is then laid in several layers on a take-off belt via a conveyor belt system with an oscillating carriage movement. The take-off belt moves at 90 degrees to the cross-lapper. The fiber web is mechanically bonded by needling with barbed needles, whereby a portion of the horizontal fibers is reoriented into the vertical plane in the form of fiber tufts. This nonwoven material is called needlefelt (Handbook of nonwovens, Chap. 8 [48]; Nonwoven Fabric, Chap. 6 [49]). Finally, thermobonding reinforcement is also applied along with the chosen compression ratio.

### Characterization of the microstructure

The microstructure of the nonwoven fibrous media was first characterized using scanning electron microscope (SEM) images (Fig. 2). The reader is referred to Appendix A for a detailed description of the preparation and cutting of the samples prior to the acquisition of the SEM images. Based on these two-dimensional images, typical fiber diameters were measured manually (Fig. 2a). To determine the in-plane [respectively out-of-plane] orientation distributions of the fibers, we superimposed straight segments on the fibers at the surface of the fibrous materials and extracted the in-plane orientation angle \(\varphi\) (Fig. 2b) [respectively the out-of-plane orientation angle \(\theta\) (Fig. 2c)] for each segment of all identified fibers on orthogonal sections of the materials.
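As a worked example of the dtex-to-diameter conversion given above, the following minimal sketch reproduces the order of magnitude of the measured fiber diameters; the PET density of about 1380 kg/m³ is an assumed handbook value, not a quantity reported here.

```python
import math

def fiber_diameter_um(t_dtex, rho_f):
    """Diameter D_f = sqrt(4 t / (pi rho_f)) in micrometers.

    t_dtex : fineness in dtex (1 dtex = 1e-7 kg/m of linear density)
    rho_f  : fiber material density in kg/m^3
    """
    lin_density = t_dtex * 1e-7          # linear density, kg/m
    area = lin_density / rho_f           # fiber cross-section, m^2
    return math.sqrt(4.0 * area / math.pi) * 1e6

# PET density ~1380 kg/m^3 (assumed handbook value):
for t in (4.4, 6.7, 17.0):
    print(t, "dtex ->", round(fiber_diameter_um(t, 1380.0), 1), "um")
```

With these assumptions, the 4.4 dtex and 17 dtex fibers come out at roughly 20 µm and 40 µm, consistent with the diameter peaks reported in the results section below.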
### Characterization of transport and acoustic properties

The open porosity \(\phi\) and true mass density \(\rho\) were determined using the pressure/mass method (Salissou and Panneton [50]). This method makes it possible to precisely determine the uncertainty in porosity depending on the volume of the samples tested. This is important, since the open porosity will be a fundamental property for the proposed multiscale model to work properly.

Figure 1: Felt samples thermobonded at different thicknesses (sample diameter, \(45mm\)).

Figure 2: Example of SEM images of cotton felt F2 and dimensional measurements of fibers (Fiji software). Measurement of: (a) fiber diameters (blue is cotton fibers, red is bicomponent fiber) in the \(xy\)-plane; (b) azimuthal or horizontal angle (\(\varphi\)) in the \(xy\)-plane, and (c) zenithal or vertical angle measurements (\(\theta\)) in the \(xz\)-plane.

For each felt family (cotton and PET) and each fibrous material in a family (F1, F2, F3, F4, B1, B2), the density and porosity were measured. To ensure sufficient measurement precision, for each fibrous material within a family, measurements were performed in batches of 12 cylindrical specimens with a diameter of 45 mm (repeated three times per batch). The airflow resistivity \(\sigma\) was measured at a flow velocity of 0.5 mm/s following the static airflow method described in the ISO 9053-1:2018 standard. For each fibrous material, three cylindrical samples with a diameter of 45 mm were cut, and all leaks were carefully avoided by adding petroleum jelly to the circumference of the sample. The tortuosity \(\alpha_{\infty}\) was measured using the high-frequency ultrasound transmission technique (Allard et al. [51]). Three samples with a diameter of 100 mm for each fibrous material were measured in air.

The viscous \(\Lambda\) and thermal \(\Lambda^{\prime}\) characteristic lengths could not be validly measured with the two-gas ultrasound transmission technique (air and argon, Kino [52]). It was also impossible for us to obtain valid results with the acoustic method of Panneton and Olny ([53], [54]). Indeed, due to acoustic measurements limited to 4000 Hz and to vibration effects, the stationarity criterion of these methods was not satisfied for the characteristic lengths. The same was true for the static thermal permeability \(k^{\prime}_{0}\). Consequently, the Kozeny-Carman formula approach, as described in Henry et al. [55], was used to estimate the two characteristic lengths \(\Lambda\) and \(\Lambda^{\prime}\). This approach uses the directly measured values for the porosity \(\phi\), resistivity \(\sigma\), and tortuosity \(\alpha_{\infty}\), as detailed in Appendix B. For the same reason, only an estimate of the static thermal permeability \(k^{\prime}_{0}\) could be obtained. It used the following relation between \(\Lambda^{\prime}\) and \(k^{\prime}_{0}\) [54]:

\[k^{\prime}_{0}=M^{\prime}\frac{\phi\Lambda^{\prime 2}}{8}. \tag{1}\]

The coefficient \(M^{\prime}\) is a dimensionless thermal shape factor. It differs from unity when the porous medium does not consist of circular cylindrical pores arranged in a parallel formation. From an educated guess based on the mean value of the results found for fibers in Tab. 2 of [54], it was set to \(M^{\prime}=2.09\). Therefore, an estimate of \(k^{\prime}_{0}\) was obtained from this equation using the measured porosity and the estimated thermal characteristic length.
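A one-line numerical sketch of Eq. (1) (the input values are arbitrary placeholders, not measured data):

```python
def k0_prime(phi, Lambda_prime, M_prime=2.09):
    """Static thermal permeability from Eq. (1): k0' = M' * phi * Lambda'^2 / 8."""
    return M_prime * phi * Lambda_prime**2 / 8.0

# e.g. phi = 0.95 and Lambda' = 100 um (placeholders):
print(k0_prime(0.95, 100e-6))  # result in m^2
```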
Finally, the sound absorption coefficient (hard-backed) of each felt was measured at normal incidence in an acoustic impedance tube of 44.44 mm in diameter. The incident acoustic plane wave traveled along the \(z\)-axis and excited the front (or rear) face of the felt in the \(xy\)-plane (see Fig. 2). The three-microphone method described in the ISO 10534-2:2023 standard was used. The microphone spacing and tube diameter allowed valid measurements in the frequency range 45 to 4300 Hz. Three samples per felt were measured on both faces to capture variations from one specimen to another and to verify how symmetric the felts were through their thickness. The side of the specimen not facing the sound excitation was in contact with a hard reflective backing. To prevent air leakage between the tube wall and the specimens, a thin layer of Teflon was applied around each sample.

## 3 Experimental results and discussion

### Characterization of the microstructure

The SEM images shown in Fig. 2 give typical features of the studied nonwoven fibrous media, fibers, and fiber connections. From these images and the corresponding measurements, several important remarks can be made.

#### 3.1.1 Fiber network

Figure 2 shows that the nonwoven fibrous medium consists of a more or less densely connected fibrous network formed through the heat-bonding process. It shows a generally uniform fiber orientation distribution \(\varphi\) in the \(xy\)-plane (Fig. 3c). The standard deviation of the out-of-plane angle \(\theta\) decreases as the compression ratio increases (Fig. 3d, Tab. 2). For all compression ratios (from F1 to F4 and B1 to B2), the average value of \(\theta\) remains close to \(90^{\circ}\). These features reveal a transversely isotropic fiber orientation (see the corresponding second-order fiber orientation tensor in Advani and Tucker [56]), which could be obtained using a numerical generation process parameterized with a preferred fiber alignment along the \(Oz\) direction. Moreover, the observation of fiber connections tends to show that fibers can intersect, which could be considered in further simulations.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Cotton felt & Thickness & Compression ratio & Density & \multicolumn{2}{c}{Mass composition} \\ \cline{4-6} & (\(mm\)) & & (\(kg/m^{3}\)) & Bicomponent & Shoddy \\ \hline F1 & \(20.3\pm 0.3\) & \(1.0\pm 0.00\) & \(56.9\pm 5.5\) & 25\% & 75\% \\ F2 & \(16.1\pm 0.6\) & \(1.3\pm 0.05\) & \(72.4\pm 10.6\) & 25\% & 75\% \\ F3 & \(11.2\pm 0.2\) & \(1.8\pm 0.04\) & \(103.5\pm 10.8\) & 25\% & 75\% \\ F4 & \(5.9\pm 0.2\) & \(3.4\pm 0.10\) & \(184.8\pm 27.9\) & 25\% & 75\% \\ \hline PET felt & Thickness & Compression ratio & Density & \multicolumn{2}{c}{Mass composition} \\ \cline{4-6} & (\(mm\)) & & (\(kg/m^{3}\)) & Bicomponent & PET & PET \\ & & & & 4.4\(dtex\) & \(6.7dtex\) & \(17dtex\) \\ \hline B1 & \(10.3\pm 0.5\) & \(1.0\pm 0.00\) & \(141.0\pm 8.4\) & 40\% & 30\% & 30\% \\ B2 & \(4.3\pm 0.1\) & \(2.4\pm 0.17\) & \(344.9\pm 15.5\) & 40\% & 30\% & 30\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Information on the cotton felts and PET felts

#### 3.1.2 Fibers

Figure 2 reveals that the fibers exhibited a rather small curvature at the scale of a few hundred micrometers, so that each fiber \(i\) could generally be ascribed a mean tangent unit vector \(\overrightarrow{p_{i}}\) to characterize its orientation (Fig. 3a).
Furthermore, the fibers exhibited a more or less cylindrical shape with possible intersections due to the manufacturing process (Fig. 2c). The fibers had a mean diameter \(D_{m}=13.78~{}\mu\)m for the cotton felts (F1-F4) and \(D_{m}=23.95~{}\mu\)m for the PET felts (B1-B2), see Tab. 2. For each family of felts, these parameters were practically constant regardless of the compression ratio. In addition, the fiber diameter distributions were nearly the same for the cotton felt family (F1-F4), Fig. 3b. Finally, a small peak can be distinguished at \(D_{m}=20.1~{}\mu\)m that corresponds to the bicomponent fibers and a second peak at \(D_{m}=39.5~{}\mu\)m that corresponds to the second population of PET fibers (\(17~{}dtex\)). The first population of PET fibers (\(6.7~{}dtex\)) does not appear clearly because it is embedded in the central peak of PET felts. It should be mentioned that the thermocompression process on PET fibers had the effect of spreading the distributions of fiber diameter populations (Fig. 3b, B1, and B2). This was not expected. This may be because B1 was not heat-bonded, unlike B2, and the fibers were deformed in B2 after heat bonding. Also, additional inaccuracy can be attributed to the manual measurement procedure. However, the two fiber diameter distributions were relatively similar in the end.

### Characterization of the transport properties

The transport properties, including the open porosity \(\phi\), static airflow resistivity \(\sigma\) (or alternatively static viscous permeability \(k_{0}=\eta/\sigma\), where \(\eta\) is the dynamic viscosity of the air), tortuosity \(\alpha_{\infty}\), viscous \(\Lambda\) and thermal \(\Lambda^{\prime}\) characteristic lengths, and static thermal permeability \(k^{\prime}_{0}\), are expected to be predicted using analytical expressions as a function of the morphological parameters. For example, Tarnow [10] proposed an equation to determine the airflow resistivity for 2D cylinders of equal radii distributed in a square or random lattice. Modifications of Tarnow's equations were suggested by Xue et al. [35] for situations in which a fibrous medium comprises more than one fiber component and when the radius of each fiber component varies within a certain range.

\begin{table}
\begin{tabular}{c c c c c} \hline \hline & \multicolumn{2}{c}{Diameter} & \multicolumn{2}{c}{Zenith angle \(\theta^{o}\)} \\ \hline Samples & Number of measurements & \(D_{m}(\mu m)\) & Number of measurements & \(\theta^{o}\) \\ \hline F1 & 2386 & 13.5 \(\pm\) 5.6 & 823 & 91.9 \(\pm\) 36.2 \\ F2 & 2086 & 14.1 \(\pm\) 5.8 & 850 & 94.1 \(\pm\) 28.9 \\ F3 & 2389 & 13.8 \(\pm\) 6.2 & 803 & 85.9 \(\pm\) 18.9 \\ F4 & 2214 & 13.7 \(\pm\) 5.6 & 864 & 87.3 \(\pm\) 8.6 \\ B1 & 2131 & 23.6 \(\pm\) 6.9 & 727 & 87.9 \(\pm\) 18.6 \\ B2 & 1780 & 24.3 \(\pm\) 8.4 & 644 & 87.7 \(\pm\) 5.3 \\ \hline \hline \end{tabular}
\end{table}

Table 2: Statistics related to fiber diameters and angular orientation of fibers as experimentally determined from SEM images

Figure 3: (a) The orientation of a fiber in three-dimensional space in spherical coordinates. The estimated probability density functions of (b) the fiber diameter; (c) the azimuthal angle \(\varphi\); (d) the zenithal angle \(\theta\), as plotted using a non-parametric kernel method.
Furthermore, Tamayol and Bahrami [57] used a scale analysis technique (or semi-empirical approach) to determine the transverse permeability of various fibrous matrices, including square, staggered and hexagonal arrangements of aligned fibers, as well as simple two-directional mats and simple cubic structures. Umnova et al. [13] proposed an analytical method to predict the tortuosity, the characteristic lengths, and the static thermal permeability of a regular array of rigid parallel cylinders parallel or perpendicular to the flow direction. Pompoli and Bonfiglio [58] provided a modification of existing formulations of transport parameters based on numerical simulations for two-dimensional random structures considering fiber diameters with symmetric and asymmetric distributions. Luu et al. [24] proposed a microstructural model for the transport parameters of three-dimensional networks of rectilinear fibers with constant diameter allowing for possible intersections. The equations were derived from rationalized numerical simulations in the form of master curves expressed as functions of porosity \(\phi\), mean fiber radius \(r_{m}\), and an effective parameter \(\Omega_{zz}\) that parameterizes the angular orientation of the fibers. For fibrous materials manufactured by thermo-compression with different thicknesses, Lei et al. [59] assumed that the transport parameters can be separated into two groups, depending (\(\phi\), \(\Lambda^{\prime}\), \(k_{0}^{\prime}\)) or not (\(\sigma\), \(\alpha_{\infty}\), \(\Lambda\)) on the orientation of the fibers. In their approach, porosity depends on the compression rate \(n\) according to the Castagnede et al. [60] formula; \(\Lambda^{\prime}\) and \(k_{0}^{\prime}\) are then determined as analytical functions of the porosity \(\phi\) (Umnova et al. [13]) predicted by the Castagnede et al. [60] formula. The model allowing prediction of \(\sigma\) is an extension of the Tarnow [10] model obtained by averaging over an angular distribution function. The same principle is used to predict \(\alpha_{\infty}\) and \(\Lambda\), where this time the Umnova formula [13] is used before performing the angular averaging. We note, however, that the Lei et al. [59] model requires prior knowledge of the transport property values before compression, which supposes available initial experimental measurements.

To compare the prediction of these models with our experimental data as a function of the compression ratio for cotton felts (F1-F4), we propose a standard dimensionless representation. Here, the average fiber diameter \(D_{m}\) is used to make all the dimensions of the transport properties dimensionless. When fibrous materials are characterized by a wide distribution of fiber diameters (here the cotton-felt family, F1-F4), Fig. 4 shows that the aforementioned models do not provide a relevant prediction for the transport parameters of the nonwoven fibrous materials studied. The model of Lei et al. [59] predicts the correct evolution of the transport parameters with the compression ratio when the experimental data are known at \(n=1\). Hence, Fig. 4 suggests that the transport behavior of the considered polydisperse fibrous media is ruled by representative volume elements different from those often assumed in previous models. In particular, we formulate the hypothesis that these models do not adequately account for the contribution of the polydispersity of fiber diameters and the particular physics induced by these geometries (Fig. 3b).
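The dimensionless representation used for the comparison can be reproduced with a few lines; the normalizations below (length scales by \(D_{m}\), permeabilities by \(D_{m}^{2}\)) are the natural choices implied by the text, not a verbatim transcription of Fig. 4:

```python
import numpy as np

ETA = 1.84e-5  # dynamic viscosity of air [Pa.s]

def dimensionless_transport(D_m, sigma, alpha_inf, Lam, Lam_p, k0_prime):
    """Normalize transport properties by the mean fiber diameter D_m [m]."""
    k0 = ETA / sigma  # static viscous permeability k0 = eta/sigma [m^2]
    return {
        "k0/Dm^2": k0 / D_m**2,
        "k0'/Dm^2": k0_prime / D_m**2,
        "Lambda/Dm": Lam / D_m,
        "Lambda'/Dm": Lam_p / D_m,
        "alpha_inf": alpha_inf,  # already dimensionless
    }

# Example with the measured F2 values of Tab. 5 and D_m from Tab. 2:
print(dimensionless_transport(14.1e-6, 45716.0, 1.035, 35e-6, 59e-6, 8.6e-10))
```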
Figure 4: Comparison between experimental estimates of the transport parameters on cotton felts F1 to F4 and the corresponding predictions with literature models (Lei et al. [59], Xue et al. [35], Luu et al. [23], Umnova et al. [13], Tarnow [10], Pompoli and Bonfiglio [58]). Note that the compression ratio of 1 refers to F1.

## 4 New microstructural model focusing on fiber characteristic sizes

The experimental data collected in the previous section and the comparisons with literature models showed the difficulty of classical analytical microstructural models in predicting the transport properties of nonwoven fibrous media with large fiber diameter polydispersity. Furthermore, these models do not always succeed in predicting transport properties as a function of the compression ratio. Consequently, this section presents the development of two three-dimensional (3D) microstructural models in the porosity range \(0.65\leq\phi\leq 0.99\). These models take into account, in particular, the polydispersity of the fiber diameter and the fiber angular orientation. They are built on several assumptions related both to the fibrous microstructures and to fiber-scale thermoviscous dissipation mechanisms of locally heterogeneous fibrous materials in the long-wavelength regime. The approach assumes that the local characteristic sizes governing the transport phenomena within a polydisperse random fibrous microstructure depend on the time scale range of interest. This leads to the introduction of two specific diameters into the reconstruction procedure of two idealized 3D microstructures, from which an upscaling technique is applied, namely the numerical homogenization method in the low- and high-frequency asymptotic regimes. Finally, additional equations are proposed to rationalize the results into compact analytical estimates for the dimensionless transport parameters of polydisperse fibrous structures.

### Idealized microstructures

The typical Representative Elementary Volume (REV) of the nonwoven fibrous materials studied is seen as a 3D random fibrous network with \(N\) straight cylindrical fibers. A fiber \(i\) in the REV is of diameter \(D_{i}\) and defined by its center location \(M_{i}\) and its orientation vector \(\overrightarrow{p_{i}}\). The fibrous medium studied exhibits a structure with transverse isotropy. Compressing the medium causes anisotropy. Following Schladitz et al. [20], this anisotropy can be described by a density function of the directional distribution \(p_{\beta}(\theta,\varphi)\) (Stoyan et al. [61]). For the materials studied, with isotropy in the \(xy\)-plane, the function is:

\[p_{\beta}(\theta)=\frac{1}{4\pi}\frac{\beta\sin(\theta)}{(1+(\beta^{2}-1)\cos^{2}\theta)^{\frac{3}{2}}}, \tag{2}\]

where \(\beta>0\) is the anisotropy parameter. Furthermore, it is assumed that there is a good scale separation between the size \(L\) of the reconstructed domain (REV size) and the smallest size between the macroscopic size of the nonwoven fibrous test samples (cylindrical samples with a diameter of 45 mm) and the macroscopic size of the acoustic compression wave \(\mathcal{L}=\lambda/2\pi\) (of wavelength \(\lambda\)). Note that the order of magnitude of size \(L\) is given by the ratio \(L/D_{m}\). This magnitude is chosen so that the porosity of the REV is equal to the experimental value within 0.1% of the relative difference.
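For completeness, directions can be sampled from Eq. (2) by inverse-transform sampling; the closed-form inversion below is our own derivation and assumes the usual \(3/2\) exponent of the Schladitz-type density:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_directions(beta: float, n: int) -> np.ndarray:
    """Draw n unit orientation vectors from the density of Eq. (2).
    phi is uniform on [0, 2*pi); integrating the marginal of u = cos(theta)
    gives the CDF F(u) = (1 + beta*u / sqrt(1 + (beta**2 - 1)*u**2)) / 2,
    which inverts to u = t / sqrt(beta**2*(1 - t**2) + t**2), with t
    uniform on [-1, 1]."""
    t = rng.uniform(-1.0, 1.0, n)
    cos_t = t / np.sqrt(beta**2 * (1.0 - t**2) + t**2)
    sin_t = np.sqrt(1.0 - cos_t**2)
    phi = rng.uniform(0.0, 2.0 * np.pi, n)
    return np.column_stack((sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t))

# beta = 1 is isotropic; beta > 1 (compression) pushes fibers into the
# xy-plane, lowering Omega_zz = <cos^2(theta)>, consistent with Tab. 3:
for beta in (1.0, 3.0, 12.0):
    print(beta, np.mean(sample_directions(beta, 100_000)[:, 2] ** 2))
```

With this sampler, \(\beta=3\) yields \(\Omega_{zz}\approx 0.11\) and \(\beta=12\) yields \(\Omega_{zz}\approx 0.015\), close to the values reported in Tab. 3.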
In addition, for the sake of simplicity, fibers are allowed to intersect during construction of a REV, which is consistent with the bonds visible on SEM images due to the thermo-compression process.

### Idealized transport phenomena

In order to accurately upscale the transport and sound absorption phenomena of the fibrous media being studied, the diameters of the fibers are weighted according to their volume at low frequencies and inversely weighted according to their volume at high frequencies. These weighted diameters are given, respectively, by:

\[D_{v}=\frac{1}{\sum_{i=1}^{N_{f}}V_{i}}\sum_{i=1}^{N_{f}}V_{i}D_{i}, \tag{3}\]

and

\[D_{iv}=\frac{1}{\sum_{i=1}^{N_{f}}\frac{1}{V_{i}}}\sum_{i=1}^{N_{f}}\frac{D_{i}}{V_{i}}, \tag{4}\]

where \(V_{i}\) is the volume of fiber \(i\). For the samples studied, these diameters are given in Tab. 3. This apparently strong assumption is supported by the fact that the viscous boundary layer \(\delta_{v}\) scales as \(\sqrt{\eta/(\omega\rho_{0})}\), in which \(\eta\) is the dynamic viscosity of the fluid, \(\rho_{0}\) is its density at rest and \(\omega\) is the angular frequency of the sound wave. Indeed, due to the large viscous boundary layer \(\delta_{v}\) at low frequencies and the local heterogeneities in the fiber network, the flow will tend to pass more through the largest pore necks. On the other hand, at high frequencies, inertial forces associated with fluid density dominate fluid motion, increasing the importance of the narrowest necks. The reader is also referred to Martys and Garboczi [17] for a basic description of these transport phenomena supplemented by computer simulation studies. Consequently, when the considered nonwoven fibrous materials are subjected to a macroscopic long-wavelength plane compressional wave, the elementary transport parameters corresponding to the propagation of the sound wave through the materials are mostly influenced by the largest fibers at low frequencies and the smallest fibers at high frequencies. Note that more small-volume fibers can be introduced into a REV of fixed volume and porosity than large-volume fibers. Therefore, a REV containing small-volume fibers will contain narrower constrictions than a REV filled with large-volume fibers. It should be noted that a volume-weighted average diameter was previously introduced by Peyrega et al. [22] and He et al. [25] to successfully predict the permeability of heterogeneous fibrous materials. Here, we extend this idea to the inverse volume-weighted average diameter. Physically, \(D_{iv}\) is thought to be the counterpart of \(D_{v}\): it creates the pore space that contains the smallest pores forming a continuous pathway through the polydisperse fibrous material in the high-frequency regime. It is easy to see that these arguments can be generalized to thermal effects.

\begin{table}
\begin{tabular}{c c c c c c} \hline \hline Samples & \(CV(\%)\) & \(D_{v}(\mu m)\) & \(D_{iv}(\mu m)\) & \(\beta\) & \(\Omega_{zz}\) \\ \hline F1 & 40.3 & \(19.5\pm 0.3\) & \(8.9\pm 0.4\) & 1.4 & 0.22 \\ F2 & 39.8 & \(18.7\pm 0.2\) & \(9.1\pm 0.3\) & 1.7 & 0.21 \\ F3 & 41.9 & \(19.1\pm 0.2\) & \(9.3\pm 0.2\) & 3 & 0.12 \\ F4 & 38.9 & \(18.5\pm 0.2\) & \(9.5\pm 0.2\) & 6.5 & 0.04 \\ B1 & 26.6 & \(26.7\pm 0.2\) & \(19.8\pm 0.3\) & 3.5 & 0.09 \\ B2 & 33.1 & \(31.2\pm 0.2\) & \(18.6\pm 0.3\) & 12 & 0.01 \\ \hline \hline \end{tabular}
\end{table}

Table 3: Estimated microstructural descriptors of the studied materials. \(\Omega_{zz}\) is the angular orientation parameter [24].
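The two weighted diameters of Eqs. (3)-(4) are straightforward to compute from segmented fiber data; a minimal sketch (cylindrical fibers assumed):

```python
import numpy as np

def weighted_diameters(D, L):
    """Volume-weighted (Eq. 3) and inverse-volume-weighted (Eq. 4) mean
    fiber diameters from per-fiber diameters D [m] and lengths L [m]."""
    D, L = np.asarray(D, float), np.asarray(L, float)
    V = np.pi * (D / 2.0) ** 2 * L           # volume of each cylindrical fiber
    D_v = np.sum(V * D) / np.sum(V)          # biased towards the largest fibers
    D_iv = np.sum(D / V) / np.sum(1.0 / V)   # biased towards the smallest ones
    return D_v, D_iv

# Toy polydisperse set (hypothetical values) illustrating D_v >= D_m >= D_iv:
D = np.array([8e-6, 12e-6, 14e-6, 20e-6, 40e-6])
L = np.full_like(D, 1e-3)
D_v, D_iv = weighted_diameters(D, L)
print(D.mean(), D_v, D_iv)  # D_m falls between D_iv and D_v
```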
### Theoretical upscaling

Under a harmonic excitation at angular frequency \(\omega\), the local fluid velocity is governed by the linearized Navier-Stokes equations. At low frequencies, the viscous drag forces dominate, and the Navier-Stokes equations simplify to the Stokes equations where the fluid is incompressible. At high frequencies, inertial forces dominate, and there is a strong analogy between the inertial flow problem and the electrical conduction problem (Brown (1962), Johnson et al. (1965)). In this case, the Navier-Stokes equations can be replaced by the electric conduction problem. In this high-frequency analogy, the solid phase acts as an insulator and the fluid phase as a conductor (Johnson et al. (1963), Zhou and Sheng (2006)). Therefore, using this analogy together with theoretical developments (Auriault et al. (2005), Levy (1995)), from the homogenization method for periodic structures with multiscale asymptotic expansions (Bensoussan et al. (1996), Sanchez-Palencia (2007)), several interesting results can be mentioned. Among them, it is possible to show that the macroscopic transport properties of interest (\(k_{0}\); \(\alpha_{\infty}\), \(\Lambda\)) derive from generic boundary value problems (Stokes problem; electric conduction problem). Furthermore, an approximate but robust function \(k(\omega)\) can be provided that predicts the frequency dependence of visco-inertial effects using the low- (\(k_{0}\)) and high- (\(\alpha_{\infty}\), \(\Lambda\)) frequency properties as input to the model. Finally, an analog frequency-dependent description \(k^{\prime}(\omega)\) of the thermal exchanges between the frame and the saturating fluid involving two macroscopic transport properties (\(k^{\prime}_{0}\); \(\Lambda^{\prime}\)) can also be introduced (Lafarge et al. (1998)).

### Estimates of the transport properties

#### 4.4.1 Numerical homogenization

Taking advantage of the analogy mentioned above and theoretical developments, the transport properties of the random fibrous microstructures of the model were determined using a finite element method to solve the Stokes, Laplace, and Poisson equations in the pore space. The transport properties of the nonwoven fibrous materials are then calculated by (i) generating for each studied nonwoven fibrous material two REVs, one for each asymptotic regime; (ii) solving the local partial differential equations which govern the phenomena at low and high frequencies, and (iii) computing the resulting transport parameters thanks to spatial averaging of the resulting fields. For step (i), two series of numerical REVs, one with a mean volume-weighted diameter \(D_{v}\) and one with a mean inverse volume-weighted fiber diameter \(D_{iv}\), were generated to mimic the fibrous microstructures of the manufactured nonwovens seen by the sound wave in the low- and high-frequency regimes, respectively. Briefly, for each series, \(N\) straight fibers \(i\) of diameter \(D_{i}\), with orientation vector \(\overrightarrow{p_{i}}\), were generated within many REVs of volume \(L^{3}\). Following Schladitz et al. (2005), Altendorf and Jeulin (2006), Chapelle et al. (2007), a stationary Poisson line process is defined with a one-parametric directional distribution \(p_{\beta}(\theta,\varphi)\). This parameter captures the degree to which the nonwoven is pressed.
Practically, the values of \(L\) were set such that the relative difference between the porosity of the geometric model and the measured porosity of the corresponding nonwoven fibrous material is less than 0.1% for 100 realizations of the geometrical model. Figure 5 presents a convergence study on \(L\), for the materials studied, in terms of the ratio \(L/D_{m}\). One can note that a ratio greater than 20 meets this porosity requirement. Thus, fiber networks were generated in REVs with various porosities \(\phi\), ranging from 0.76 to 0.948, a fiber diameter distribution based on a Gamma law, and a density function of directional distribution \(p_{\beta}(\theta,\varphi)\), see Eq. 2. The Gamma law and the parameter \(\beta\) were determined by fitting the experimental results obtained from the SEM images. An example for material F2 is shown in Fig. 6. The figure shows the best-fit Gamma law and directional distribution. The figure also shows that the generation procedure allowed fibrous networks to be obtained with fiber diameter and orientation distributions close to those measured experimentally. The best-fit \(\beta\) values for each material sample are given in Tab. 3. Figure 7 shows six examples of idealized monodisperse fibrous networks with isotropic (or uncompressed) (\(\beta=1\)), stretched (\(\beta=0\)) and compressed (\(\beta>1\)) structures to show the influence of the parameter \(\beta\). For step (ii), periodic boundary conditions were ascribed to solve the boundary value problems on a REV. For a given fiber in contact with a couple of bounding surfaces, a point of the fiber was randomly determined along its length. The fiber was cut at this point so that one segment of the fiber could be translated to maintain continuity at the boundaries. A visual description of this process is given in C. Figure 8 shows the periodic microstructural models reconstructed for material F2. Figure 8a shows detailed information on the polydispersity of the fiber diameters and the directional distribution that accounts for the compression ratio. Three monodisperse models of the same medium are also presented: one with a mean fiber diameter \(D_{m}\) (Fig. 8b), one with a volume-weighted average diameter \(D_{v}\) (Fig. 8c), and one with an inverse volume-weighted average diameter \(D_{iv}\) (Fig. 8d). Assuming that all diameters follow a Gamma law, the polydispersity is easily quantified by the coefficient of variation \(CV\). This coefficient is defined as the ratio of the standard deviation of the fiber diameters to the mean value \(D_{m}\). For the materials studied, the values of \(CV\) are given in Tab. 3. Figure 8 underlines the inequality \(D_{v}\geq D_{m}\geq D_{iv}\) and the interest in using the two different microstructural descriptors \(D_{v}\) and \(D_{iv}\) to predict the transport properties corresponding to, respectively, low-frequency (\(k_{0}\), \(k^{\prime}_{0}\)) and high-frequency (\(\Lambda\), \(\Lambda^{\prime}\), \(\alpha_{\infty}\)) transport phenomena at known porosity. The numerical results were then rationalized in the form of analytical laws, whose relevance was checked using the microstructure generator and finite element simulations in the next section. The assumptions and main expressions of the semi-analytical model are detailed in the following. 1. The Gamma distribution offers a proper description of the distribution of fiber diameters. One characteristic of this fiber diameter polydispersity is the coefficient of variation \(CV\). 2.
A stationary Poisson line process with a one-parametric directional distribution \(p_{\beta}(\theta,\varphi)\) captures the angular orientation of a transversely isotropic fibrous medium and the degree to which the nonwoven was pressed. 3. The model should capture the geometry of the samples for a wide range of possible porosities (\(0.65\leq\phi\leq 0.99\)) and anisotropic parameters (\(0\leq\beta\leq 20\)). 4. A systematic mapping can be found by simulations in realizations of the geometric model. On the one hand, this mapping allows us to define \(r_{v}\) and \(r_{iv}\) as functions of \(r_{m}\) and \(CV\), which are easily measurable microstructure descriptors. Here \(r\) stands for radius. On the other hand, there is a mapping between the anisotropy parameter \(\beta\) and the orientation tensor governed by \(\Omega_{zz}\) (Tab. 4). 5. The fibers could intersect so that \(\Lambda^{\prime}/r_{iv}\), the dimensionless ratio of two times the pore volume \(V_{p}\) to pore surface area \(S_{p}\) divided by the inverse volume-weighted average radius, can be written as given by Luu et al. [24]:

\[\frac{\Lambda^{\prime}}{r_{iv}}=\frac{\phi}{1-\phi+c}, \tag{5}\]

where \(c\) is a constant accounting for the effects of fiber intersections on this high-frequency property. 6. Archie's law [70] that relates porosity to tortuosity holds. This law is given by:

\[\alpha_{\infty}=(1/\phi)^{\gamma}, \tag{6}\]

where \(\gamma\) is a constant that can vary between porous materials. This relation is defined for a series of materials from the same formation or manufacturing process. The detailed information on the pore structure is contained in the exponent \(\gamma\). Theoretical studies have shown that \(\gamma\) depends on the shape of the structuring element. When the microstructure is modeled as being built up of straight cylinders with mainly different orientations, a variable exponent could be used to handle the details of the pore space taken as a function of the angular orientation (\(\beta\) or \(\Omega_{zz}\)):

\[\alpha_{\infty z}=(\frac{1}{\phi})^{Q(\Omega_{zz})}, \tag{7}\]

where \(Q(\Omega_{zz})\) is a function of the angular orientation (\(\beta\) or \(\Omega_{zz}\)). 7. The relation between the characteristic lengths derived by Johnson et al. [5] holds. This relation can be written as

\[\frac{\Lambda^{\prime}}{\Lambda}=1-\frac{\ln(\alpha_{\infty})}{\ln(\phi)}. \tag{8}\]

This relation holds for the felts studied, in which the porosity decreases by uniform growth of the insulating (solid) phase into the pore space. With Eq. 7, the previous relation becomes

\[\frac{\Lambda^{\prime}}{\Lambda}=1+P(\Omega_{zz}). \tag{9}\]

In principle, the function of angular orientation is the same as the one of Eq. 7, but its fitted values could fluctuate to try to compensate for the oversimplifications of Eqs. 6 and 8. This is why \(Q(\Omega_{zz})\) is replaced by a new function \(P(\Omega_{zz})\).

Figure 5: Evolution of porosity \(\phi\) of the simulated three-dimensional random fibrous microstructures as a function of the size of the cubic box \(L/D_{m}\), and comparison with the characterized value of porosity.

Figure 6: Illustration of a comparison between the distributions of fiber diameters and orientations as determined experimentally and from the corresponding models; also shown are the distributions after reconstruction.

8. Several classical models aim to represent the dependence of permeability on the geometric characteristics of the fiber network. The most classical model is the Kozeny-Carman equation (see Eq. (16) of [71]), given by:
\[\frac{k_{0}}{r_{v}^{2}}=\zeta\frac{\phi^{3}}{(1-\phi)^{2}}, \tag{10}\]

where \(\zeta\) is the Kozeny "constant" which depends on the shape and size of the particles forming the solid skeleton. It can be shown that the through-plane normalized permeability \(k_{0}/r_{v}^{2}\) also depends on the fiber orientations (\(\beta\) or \(\Omega_{zz}\)). Indeed, the ratio \(k_{0}/r_{v}^{2}\) increases significantly for larger fiber alignment in the direction of the macroscopic pressure gradient. It is assumed that a simple expression to estimate the normalized permeability \(k_{0}/r_{v}^{2}\) as a function of \(\phi^{3}/(1-\phi+m)^{2}\) and fiber orientation (\(\Omega_{zz}\)) can take the form

\[\log_{10}\left(\frac{k_{0z}}{r_{v}^{2}}\right)=A\log_{10}\left(\frac{\phi^{3}}{(1-\phi+m)^{2}}\right)+S(\Omega_{zz}), \tag{11}\]

where \(A\) and \(S(\Omega_{zz})\) are parameters to be calibrated by simulation for obtaining a general form.

Figure 7: Various configurations corresponding to the variation of fiber orientation states, with \(\beta\) ranging from 0 to 100.

9. Because diffusion of heat does not provide any preferred direction (spatially uniform heating), the static thermal permeability \(k_{0}^{\prime}\), normalized by the square of the volume-weighted fiber radius \(r_{v}^{2}\), can generally be written as a function independent of fiber orientation. In addition, the relation between \(k_{0}^{\prime}\) and \(\Lambda^{\prime}\) was introduced in Eq. 1. Then, combining Eqs. 1 and 5, the normalized thermal permeability as a function of the open porosity can be expressed as

\[\frac{k_{0}^{\prime}}{r_{v}^{2}}=m_{1}\frac{\phi^{3}}{(1-\phi+m_{2})^{2}}, \tag{12}\]

where \(m_{1}\) and \(m_{2}\) are calibration constants. It should be noted that this relation is normalized by the volume-weighted fiber radius \(r_{v}\), as \(k_{0}^{\prime}\) is a low-frequency parameter. The value of \(m_{1}\) accounts for the shape of the porous network, while \(m_{2}\) may be different from \(c\) as the effects of the fiber intersections may be different at low frequencies.

Equations 5, 7, 9, 11 and 12 form the semi-analytical model (or micro-macro relationships) for transversely isotropic polydisperse nonwoven fibrous media. They depend only on the open porosity \(\phi\), the angular orientation (\(\beta\) or \(\Omega_{zz}\)), and the coefficient of variation \(CV\). The main equations of the semi-analytical model are summarized in Tab. 4, where the constants and polynomials were determined with the numerical results presented in the following section.

Figure 8: Randomly overlapping fiber periodic structures of cotton felt F2; (a) polydisperse fibrous media; (b) monodisperse fibrous media with mean fiber diameter, \(D_{m}\); (c) monodisperse fibrous media with volume-weighted mean diameter, \(D_{v}\); (d) monodisperse fibrous media with inverse volume-weighted mean diameter, \(D_{iv}\).

## 5 Model prediction and discussion

### Numerical results

By taking advantage of two specific weighted fiber diameters, we have proposed that the studied polydisperse fibrous microstructures, subjected to several compression rates and thermo-mechanical bonding, could be modeled by two different REVs (i.e., with volume-weighted \(r_{v}\) and inverse volume-weighted \(r_{iv}\) fiber radii) corresponding to the transport phenomena that can be simulated in the low- and high-frequency regimes.
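Structurally, the semi-analytical model reduces to a handful of closed-form maps from (\(\phi\), \(\Omega_{zz}\)) and the weighted radii to the five transport parameters. The sketch below wires Eqs. (5), (7), (9), (11) and (12) together; all calibration constants and the polynomials \(Q\), \(P\), \(S\) are placeholders standing in for the fitted values of Tab. 4, which are not reproduced here:

```python
import numpy as np

# Placeholder calibration (the real constants/polynomials are those of Tab. 4):
c, A, m, m1, m2 = 0.05, 1.0, 0.05, 0.1, 0.05
Q = lambda Ozz: 0.3 * (1.0 - Ozz)   # hypothetical Q(Omega_zz)
P = lambda Ozz: Ozz                 # hypothetical P(Omega_zz)
S = lambda Ozz: 0.5 * Ozz - 1.0     # hypothetical S(Omega_zz)

def semi_analytical(phi, Ozz, r_v, r_iv):
    """Transport parameters from porosity phi, orientation Omega_zz and the
    two weighted fiber radii r_v, r_iv [m], following Eqs. (5)-(12)."""
    Lam_p = r_iv * phi / (1.0 - phi + c)                        # Eq. (5)
    alpha_inf = (1.0 / phi) ** Q(Ozz)                           # Eq. (7)
    Lam = Lam_p / (1.0 + P(Ozz))                                # Eq. (9)
    k0 = r_v**2 * 10.0 ** (A * np.log10(phi**3 / (1.0 - phi + m) ** 2)
                           + S(Ozz))                            # Eq. (11)
    k0_p = m1 * r_v**2 * phi**3 / (1.0 - phi + m2) ** 2         # Eq. (12)
    return k0, k0_p, Lam, Lam_p, alpha_inf
```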
Therefore, it was possible to extract from three elementary boundary value problems (Stokes, Laplace, Poisson), and from the computation of the corresponding solution fields (Figs. 9 and 10), the expressions of the through-plane static viscous \(k_{0}\) and thermal \(k_{0}^{\prime}\) permeabilities of the nonwoven fibrous medium, as well as their through-plane viscous characteristic length \(\Lambda\) and tortuosity \(\alpha_{\infty}\). For its part, the thermal characteristic length \(\Lambda^{\prime}\) was calculated directly as twice the ratio of pore volume to surface area in each REV mesh. Figure 11 shows the evolution of \(k_{0}/r_{v}^{2}\), \(k_{0}^{\prime}/r_{v}^{2}\), \(\Lambda/r_{iv}\), \(\Lambda^{\prime}/r_{iv}\), and \(\alpha_{\infty}\) with the porosity \(\phi\), for nonwovens with transverse isotropy and with a preferred orientation (Fig. 7). These predictions were obtained with a domain size \(L/D_{m}\) allowing for convergence on porosity by taking five realizations for each porosity. From this figure, several remarks can be drawn:

* The through-plane viscous permeability \(k_{0}/r_{v}^{2}\) increases non-linearly with the porosity and diverges as the porosity \(\phi\) approaches unity (\(\sim\phi^{3}/(1-\phi)^{2}\)). At high porosities, the effect of preferred fiber orientation (induced by the compression or manufacturing process) is strong and cannot be ignored. Lower viscous permeabilities are observed for in-plane fiber orientations than for out-of-plane fiber orientations, in agreement with previous results (Tarnow [10]). In contrast, the static thermal permeability \(k_{0}^{\prime}/r_{v}^{2}\) is independent of fiber orientation at a constant porosity. It is noteworthy that, as shown by the results of Fig. 11a, the formal inequality \(k_{0}^{\prime}\geq k_{0}\) is also clearly apparent (Avellaneda and Torquato [72]).

* Similarly, the viscous \(\Lambda\) and thermal \(\Lambda^{\prime}\) characteristic lengths also increase non-linearly with the porosity (\(\sim\phi/(1-\phi)\)) [Fig. 11c-d], but to a lesser extent than the viscous \(k_{0}\) and thermal \(k_{0}^{\prime}\) permeabilities (\(\sim\phi^{3}/(1-\phi)^{2}\)) [Fig. 11a-b]. We also checked the following inequality, \(1\leq\Lambda^{\prime}/\Lambda\leq 2\), which holds for fibrous media in the dilute limit (\(\phi\to 1\)) [73]; with \(\Lambda^{\prime}/\Lambda=1\) in the limit of in-plane orientation distributions of fibers and \(\Lambda^{\prime}/\Lambda=2\) for fully aligned fibers (Fig. 11e). This observation implies that the ratio \(\Lambda^{\prime}/\Lambda\) increases with the compression rate. The results of \(\Lambda^{\prime}/\Lambda\) were relatively independent of porosity [Eq. 9]. Increasing the fiber alignment significantly increases the viscous characteristic length [Fig. 11c]; the effect is larger for high porosities (\(\Lambda^{\prime}\sim\phi/(1-\phi)\) and \(\Lambda^{\prime}/\Lambda=1+P(\Omega_{zz})\)), which occurs physically because \(\Lambda\) is weighted by the scalar product of the local electric field solution \(\mathbf{E}\cdot\mathbf{E}\) (an in-plane orientation of fibers creates smaller channels for the preferential fluid flow).

* The tortuosity \(\alpha_{\infty}\) decreases with increasing porosity (Archie's law; \(\alpha_{\infty}\to 1\) when \(\phi\to 1\)). However, apart from this limit (\(\phi\to 1\)), the tortuosity \(\alpha_{\infty}\) was shown to increase at constant porosity when the fibers are perpendicular to the potential flow direction.
This situation corresponds to a more tortuous path (Fig. 11f), for which a larger dispersion of the microscopic velocities is obtained (Eq. 9). The fact that R-squared is less than one indicates that, at some porosities, there is a small proportion of the Sum of Squared Errors (SSE) that is not accounted for by the regression. The coefficient of determination (R-squared) of the fit was 0.999 for the thermal characteristic length, 0.977 for the tortuosity, 0.996 for the ratio of the thermal over viscous characteristic lengths, 0.996 for the viscous permeability and 0.988 for the thermal permeability. The proportionate amount of variation in the response variable (dimensionless transport parameter) that is explained by the independent variables (porosity \(\phi\) and orientation of fibers \(\Omega_{zz}\)) was therefore always very close to one. The residual analysis enables a local quantitative appreciation of the adequacy of the fitted model (Fig. 12). The residuals from a fitted model are defined as the differences between the response data (simulations) and the fit to the response data (model) at each predictor value. The largest differences are obtained for the tortuosity \(\alpha_{\infty}\), as \(\phi\to 0.65\) and \(\Omega_{zz}\to 1\). In this situation, the tortuosity values should correspond to the upper bound [Eqs. 6 and 7] of a solid fibrous network with lower porosities. But if simulations are performed in opposite fiber orientations, from \(\Omega_{zz}=0\) for in-plane fibers to \(\Omega_{zz}=1\) for unidirectionally aligned fibers, a large variation of tortuosity values should be observed, which is somewhat contradictory with the initial choice of an Archie's law [Eq. 6]. The presence of these contradictory behaviors (\(\alpha_{\infty}\) increases with decreasing \(\phi\), \(\alpha_{\infty}\) decreases with increasing \(\Omega_{zz}\)) can be used to explain the higher sensitivity of the model to geometrical parameters and the larger proportion of numerical results not entirely captured by the model. Similar arguments can be given to quantify the differences between the finite element simulations and the analytical model for the static thermal permeability \(k_{0}^{\prime}\): as \(\phi\to 1\), the value of \(k_{0}^{\prime}\) diverges (\(\sim\phi^{3}/(1-\phi)^{2}\)) [Eq. 12], which statistically increases the proportion of SSE that is not completely explained by the regression. Our results suggest that a better fit would require an increase in the domain size \(L/D_{m}\), which is important to ensure a lower relative difference between the porosities of the generated microstructures and the target porosity as \(\phi\) approaches one, i.e., if the target is \(\phi\) = 0.99 with err = 0.01%, \(L/D_{m}=55\); if the target is \(\phi\) = 0.99 with err = 0.001%, \(L/D_{m}=140\) (see Figs. 16 and 5). Finally, in this section, we presented a comparison between the analytical results and the numerical finite element solutions. We saw, through a detailed analysis of the residuals, that the comparison between finite element simulations and the analytical model (Figs. 11 and 12, Tab. 4) revealed that the analytical expressions [Eqs. 5, 7, 9, 11, 12] fit well with the trends obtained from the finite element simulations when the same microstructure parameters are used as input. Hence, the analytical estimates can be considered to be accurate enough predictors of the transport properties of nonwoven fibrous materials.
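The quoted R-squared values and residual maps follow from the standard definitions, summarized below (generic formulas; the paper's own fitting pipeline is not reproduced):

```python
import numpy as np

def fit_quality(y_sim: np.ndarray, y_model: np.ndarray):
    """Residuals and coefficient of determination between finite element
    results y_sim and the corresponding semi-analytical predictions y_model."""
    residuals = y_sim - y_model                  # plotted in Fig. 12
    sse = np.sum(residuals ** 2)                 # sum of squared errors
    sst = np.sum((y_sim - np.mean(y_sim)) ** 2)  # total sum of squares
    r_squared = 1.0 - sse / sst                  # close to 1 for a good fit
    return residuals, r_squared
```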
### Comparisons with experimental results

Two different types of comparisons are presented to validate the semi-analytical model in Tab. 4. The first type of comparison, shown in Fig. 13, concerns the transport properties predicted by the model and their experimental measurements or estimates, presented in Section 3.2. The second type of comparison, shown in Fig. 14, concerns the sound absorption coefficient predicted by the model for each felt and its impedance tube measurement obtained from the method presented in Section 2.3. From these comparisons, several important results can be drawn. They are listed below.

* From the comparisons shown in Fig. 13, one can conclude that the proposed semi-analytical model allows accurate quantitative predictions of the measured transport properties \(k_{0}\), \(k_{0}^{\prime}\), \(\Lambda\), \(\Lambda^{\prime}\), and \(\alpha_{\infty}\). The comparison is good for a wide range of open porosities (\(0.760\leq\phi\leq 0.948\)) [Tab. 5] and for different fiber orientation distributions (\(0.01\leq\Omega_{zz}\leq 0.22\)) [Tab. 3]. It is recalled that fiber orientation is related to the compression ratio, which varies in the range (\(1\leq n\leq 3.4\)) for the two families of composite nonwoven fibrous materials (F and B, Tab. 1). These two families have a different fiber diameter polydispersity content (\(CV\sim 40\%\) for F and \(CV\sim 30\%\) for B, Fig. 3b and Tab. 3). Consequently, the overall agreement between the analytical and experimental results supports the validity of the semi-analytical model within, at least, the degrees of fiber diameter polydispersity and orientation studied. Moreover, this proves that the fiber diameter polydispersity and, to a lesser extent, the orientation of fibers play a leading role in the transport properties of these fibrous composites.

Figure 9: Typical meshes of the fluid phase in a periodic REV of fibrous medium F2. The meshes are used to perform finite element simulations on: a) the structure with inverse volume-weighted diameter, with 947,011 tetrahedral elements, and b) the structure with volume-weighted diameter, with 1,042,941 tetrahedral elements.

* Despite a relatively good overall comparison, a few differences are worth discussing. First, for \(k_{0}^{\prime}\), we recall here that it was not possible to have a direct measurement of \(k_{0}^{\prime}\). Its value is estimated from the identification of \(\Lambda^{\prime}\) (Eq. 1), which is in turn estimated from other measured properties thanks to the Kozeny-Carman formula (see B). Consequently, we must look at the trend of its evolution more than its values. The same holds for \(\Lambda^{\prime}\) and \(\Lambda\). Second, the predicted value of \(k_{0}\) for F1 departs from the measurement. As explained previously (Section 5.2), the model diverges for porosity values approaching unity.

* We next explored the sound absorbing behavior at normal incidence in an analytical way, using the predicted transport parameters in a JCAL model (D) that allowed us to generate the sound absorption coefficient that could be compared directly with experiments (Fig. 14). This analysis shows that the sound absorption coefficients at normal incidence that are predicted are comparable to those measured experimentally.
Together with a close match between the transport parameter values in the experiments and in the models, this and the above results confirm the accuracy of the numerical models and indicate that they capture the essential physics of the viscous fluid flow, excess temperature, and potential flow velocity field in a polydisperse nonwoven composite, and the corresponding transport and sound absorbing properties.

Figure 10: Asymptotic fields of velocity and temperature computed on the discretized REVs of Fig. 9 for material F2: (a) scaled velocity field expressed as local permeability (\(k_{0}\)) [\(m^{2}\)] corresponding to Stokes flow in the \(z\) direction with the REV reconstructed by volume-weighted diameter; (b) scaled heat diffusion field expressed as local static thermal permeability (\(k^{\prime}_{0}\)) [\(m^{2}\)] with the REV reconstructed by volume-weighted diameter, and (c) scaled velocity field expressed as tortuosity \(\alpha_{\infty}\) [\(-\)] corresponding to potential flow in the \(z\) direction with the REV reconstructed by inverse volume-weighted diameter.

\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline & Results & \(\phi\) & \(\sigma(N.s.m^{-4})\) & \(\alpha_{\infty}\) & \(\Lambda(\mu m)\) & \(\Lambda^{\prime}(\mu m)\) & \(k^{\prime}_{0}\times 10^{-10}(m^{2})\) \\ \hline \multirow{2}{*}{F1} & Model & \(0.948\pm 0.005\) & \(38358\pm 1612\) & \(1.022\pm 0.002\) & \(48\pm 5\) & \(84\pm 8\) & \(12.1\pm 1.6\) \\ & Exp & \(0.948\pm 0.005\) & \(28684\pm 3664\) & \(1.023\pm 0.003\) & \(46\pm 3\) & \(74\pm 5\) & \(13.6\pm 1.7\) \\ \hline \multirow{2}{*}{F2} & Model & \(0.941\pm 0.009\) & \(47235\pm 3191\) & \(1.026\pm 0.004\) & \(42\pm 7\) & \(74\pm 12\) & \(9.9\pm 2.1\) \\ & Exp & \(0.941\pm 0.009\) & \(45716\pm 2553\) & \(1.035\pm 0.007\) & \(35\pm 1\) & \(59\pm 2\) & \(8.6\pm 0.4\) \\ \hline \multirow{2}{*}{F3} & Model & \(0.914\pm 0.006\) & \(87776\pm 2818\) & \(1.042\pm 0.003\) & \(25\pm 2\) & \(47\pm 4\) & \(5.1\pm 0.5\) \\ & Exp & \(0.914\pm 0.006\) & \(76479\pm 20416\) & \(1.042\pm 0.015\) & \(27\pm 4\) & \(46\pm 6\) & \(5.1\pm 1.3\) \\ \hline \multirow{2}{*}{F4} & Model & \(0.856\pm 0.007\) & \(242696\pm 5784\) & \(1.078\pm 0.005\) & \(15\pm 1\) & \(28\pm 2\) & \(1.6\pm 0.1\) \\ & Exp & \(0.856\pm 0.007\) & \(235845\pm 105324\) & \(1.074\pm 0.008\) & \(17\pm 4\) & \(28\pm 6\) & \(1.7\pm 0.8\) \\ \hline \multirow{2}{*}{B1} & Model & \(0.887\pm 0.001\) & \(65456\pm 2770\) & \(1.057\pm 0.006\) & \(42\pm 4\) & \(79\pm 8\) & \(6.3\pm 0.8\) \\ & Exp & \(0.888\pm 0.001\) & \(52018\pm 4732\) & \(1.089\pm 0.005\) & \(34\pm 2\) & \(57\pm 3\) & \(7.5\pm 0.6\) \\ \hline \multirow{2}{*}{B2} & Model & \(0.764\pm 0.022\) & \(246602\pm 12283\) & \(1.144\pm 0.021\) & \(15\pm 2\) & \(29\pm 4\) & \(1.18\pm 0.1\) \\ & Exp & \(0.760\pm 0.022\) & \(213834\pm 44998\) & \(1.175\pm 0.02\) & \(22\pm 2\) & \(32\pm 4\) & \(1.1\pm 0.2\) \\ \hline \hline \end{tabular}
\end{table}

Table 5: Comparison of semi-analytical (Model) and experimental (Exp) estimates of the transport parameters of cotton and PET felts

Figure 11: Normalized transport parameters as a function of porosity \(\phi\).
The symbols indicate the statistically averaged orientation of fibers as determined by values of \(\beta\) or \(\Omega_{zz}\): \(\Omega_{zz}=0\) (\(\star\)), \(\Omega_{zz}=0.11\) (\(\star\)), \(\Omega_{zz}=0.19\) (\(\star\)), \(\Omega_{zz}=0.30\) (\(\diamond\)), \(\Omega_{zz}=0.39\) (\(\diamond\)), \(\Omega_{zz}=0.49\) (\(\square\)), \(\Omega_{zz}=0.61\) (\(\diamond\)), \(\Omega_{zz}=0.81\) (\(\diamond\)), \(\Omega_{zz}=0.91\) (\(\diamond\)), \(\Omega_{zz}=1\) (\(\diamond\)). The dashed lines are estimates obtained by the semi-analytical model derived from the numerical simulations (Tab. 4).

Figure 12: Map fittings and residual plots of dimensionless transport parameters (semi-analytical model derived from the numerical simulations).

Figure 13: Evolution of the transport parameters \(k_{0}\), \(k_{0}^{\prime}\), \(\Lambda\), \(\Lambda^{\prime}\), \(\alpha_{\infty}\) with the porosity \(\phi\) for three-dimensional random fibrous materials with transversely isotropic structure and a preferred angular orientation \(\Omega_{zz}\) depending on the compression rate \(n\). Comparison between the predictions of the semi-analytical models (Tab. 4) and the data obtained from experiments (symbols). These predictions are obtained using the average microstructure descriptors in Tab. 3 for the cotton felts (\(D_{v}=18.95\pm 0.5\mu m\); \(D_{iv}=9.20\pm 0.26\mu m\); \(\Omega_{zz}=0.15\pm 0.09\); \(CV=40.2\pm 1.2\%\)) and for the PET felts (\(D_{v}=28.95\pm 3.25\mu m\); \(D_{iv}=19.20\pm 0.85\mu m\); \(\Omega_{zz}=0.05\pm 0.05\); \(CV=29.9\pm 4.6\%\)). The thick lines correspond to the deviation of either the cotton felts (orange) or the PET felts (grey).

Figure 14: Comparison between measurements and predictions of the sound absorption coefficient at normal incidence. Sample thickness: F1 - 20.3 _mm_; F2 - 16.1 _mm_; F3 - 11.2 _mm_; F4 - 5.9 _mm_; B1 - 10.3 _mm_; B2 - 4.3 _mm_.

## 6 Conclusions

The objective of this study was to link the macroscale transport and sound absorbing properties of nonwoven fibrous composites with their polydisperse fibrous microstructures and the related visco-thermal dissipation mechanisms. For that purpose, two families of composite nonwovens were manufactured using a thermo-compression process, from either recycled cotton and co-PET fibers or a mix of recycled PET and co-PET fibers with different classes of fineness, and further compacted with several compression rates. SEM images showed that their random fibrous microstructures exhibited the well-known transverse isotropy with a preferential orientation of fibers that depended on the compression rate. In addition, regardless of the family of the composite nonwovens, the fibers originating from a recycling process were characterized by a wide distribution of diameters which could be modeled as a Gamma law, a trend already observed for glass and stone wools. From the fiber scale images of their microstructures, we also saw that the radius of curvature of the fibers was large when compared to the fiber radii, so that the individual fibers could be considered as straight cylinders. The connectivity between two adjacent fibers, due to the thermally bonded co-PET in the thermo-compression process, was also visible, so that it was reasonable to assume that fibers could intersect. From these experimental data obtained at fiber scale, fiber network models were proposed to predict the through-plane transport properties of the considered polydisperse nonwoven composites. Two microscale models were established.
The first one used the volume-weighted fiber diameter and the second the inverse volume-weighted fiber diameter as mean diameters to perform finite element simulations. The results were rationalized in the form of analytical laws that can be easily used for engineering purposes, e.g., to optimize polydisperse fibrous media. The modeling approach emphasised the leading roles of the fiber content, polydispersity and orientation on the macroscale transport and sound absorbing properties of the considered nonwovens. The modeling approach also predicted quantitatively well the transport and sound absorbing properties characterized at the macro-scale. If the porosity and the distributions of fiber diameters and orientations are provided as inputs, we have shown that the predictions of the numerical and analytical models can closely estimate the transport and sound absorbing properties at normal incidence of random and transversely isotropic polydisperse fibrous media, for a large range of porosities and without any adjusted parameter. The identified microstructural descriptor of the low-frequency behavior is in accordance with literature data, i.e., at low polydispersity content, only one fiber diameter is necessary to derive the overall transport parameters characterizing both low- and high-frequency behaviors, thus suggesting the switch from a monodisperse to a polydisperse fiber distribution as a new lever to understand and optimize transport and sound absorbing properties. The developed model should be tested accordingly for fiber diameter distributions characterized by very large coefficients of variation.

## Acknowledgments

This work was part of a project supported by ANRT and Adler Pelzer Group, Acoustic TechCenter R&D, under convention CIFRE No. 2020/0122. The MSME laboratory is part of the LabEx MMCD (Investissements d'Avenir: grant agreement no. ANR-11-LABX-022-01). Partial support for this work was also provided by Universite Paris-Est Sup (mobility grant from the ED SIE). The authors also acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC) [funding with ref. number RGPIN-2018-06113]. We acknowledge Remy Pires-Brazuna (ICMPE UMR 7189 CNRS) for SEM imaging of the fibrous samples.

## Appendix A Protocol of preparation and cutting of samples prior to the acquisition of SEM images

For non-conductive materials like cotton and PET felts, a high-performance metallizer by cathode sputtering was used, coupled with a magnetron source (Cressington sputter coater 208HR), which made it possible to deposit a conductive film of a few nanometers (controlled by a quartz probe, here a Cressington MTM 20) on the surface of the samples. To verify the homogeneity of the microstructure, specifically in terms of fiber diameters, two cubic specimens with dimensions of 10 \(mm\) were taken randomly from different locations of the studied panels (provided with dimensions of 210 mm x 297 mm). On each extracted cubic specimen, SEM images were then acquired to fully scan two horizontal and two vertical planes (situated on opposite faces of the cubic specimens), using a magnification factor of 100 times. For each plane (four planes of interest on each cubic sample), 10 sub-images were randomly extracted to directly measure the morphological parameters of interest in the fibrous network, using the Fiji software [74] with a resolution of 0.56 \(\mu m\) per pixel. These parameters include the diameters of the fibers and their orientation angles in the horizontal and vertical planes, as shown in Fig. 2.
## Appendix B Experimental approach used to estimate the viscous and thermal characteristic lengths

The so-called Kozeny-Carman resistivity formula, introduced by Henry et al. [55] in their Eq. (15), is given by:

\[\sigma_{KC}=\frac{8\alpha_{\infty}\eta}{\phi\Lambda_{est}^{\prime 2}}, \tag{12}\]

where \(\Lambda_{est}^{\prime}\) is a characteristic dimension. Typically, we can assume that the value of \(\Lambda_{est}^{\prime}\) is between \(\Lambda\) and \(\Lambda^{\prime}\), and that \(\sigma_{KC}\) is an estimate of \(\sigma\). From the Kozeny-Carman formula, a value of \(\Lambda_{est}^{\prime}\) could be obtained using experimental measurements of \(\phi\), \(\sigma\), and \(\alpha_{\infty}\). Therefore, \(\Lambda_{est}^{\prime}\) corresponds to the following equation:

\[\Lambda_{est}^{\prime}=\sqrt{\frac{8\alpha_{\infty}\eta}{\phi\sigma}}. \tag{13}\]

For a typical porous material, assuming macroscopic homogeneity, the following inequality \(\Lambda\leq\Lambda_{est}^{\prime}\leq\Lambda^{\prime}\) is expected. As a first approximation, the simulated ratio \(r=\Lambda^{\prime}/\Lambda\) can be used to deduce \(\Lambda_{est}\) from \(\Lambda_{est}^{\prime}\). The following formula is applied:

\[\Lambda_{est}=\frac{\Lambda_{est}^{\prime}}{r}. \tag{14}\]

Here, we used \(r=1.61\), \(1.69\), \(1.70\), \(1.65\), \(1.68\) and \(1.45\), corresponding to the simulated values for F1, F2, F3, F4, B1, and B2, respectively.

## Appendix C Geometrical reconstruction

Based on the results of the microstructure characterization, a random fibrous network is reconstructed as follows:

1. A random point is chosen in a unit cube of known size \(L\) (the unit cell).

2. From this random point \(M_{i}\), a vector \(\overrightarrow{p}\) passing through it is determined, whose zenithal \(\theta\) and azimuthal \(\varphi\) angles are randomly drawn from the measured probability density functions.

3. Based on the knowledge of \(\overrightarrow{p}\), the coordinates of the intersection points \(P_{1}\) and \(P_{2}\) with the unit cube are derived.

4. Next, the segment \(P_{1}P_{2}\) is cut at \(M_{i}\), from which one can obtain continuity of the solid phase on the opposite faces of the unit cube. This is done by translation of the sub-segment \(M_{i}P_{2}\). For instance, Fig. 15(a) illustrates this procedure. Here, \(M_{i}P_{2}\) is translated to ensure continuity of \(P_{1}\) and \(P_{2}\) (by horizontal translation of the unit cube).

5. Knowing the fiber diameter distribution obtained from measurements, a fiber radius is then randomly drawn from the corresponding Gamma fit distribution (Fig. 15(b)).

The algorithm reported in Fig. 16 allows iterative alteration of the fiber number \(N_{f}\) and the domain size \(L_{i}\) until the porosity converges towards the experimentally determined value. By applying the algorithm with 100 iterations for each domain size \(L/D_{m}\), the result displayed in Fig. 5 shows that it is possible to control both the average porosity and the standard deviation of a reconstructed three-dimensional fibrous structure. \(L_{i}\) was chosen to ensure that the ratio \(\epsilon\) of the standard deviation over the mean value of the targeted porosity is less than \(0.1\%\).

Figure 15: Illustration of some important steps by which a representative volume element with periodic boundaries can be constructed.
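As a companion to steps 1-5, the sketch below generates one fiber of such a reconstruction in Python. The angle and radius distributions are simple stand-ins for the measured ones, and only a single clip-and-translate step is shown; a full implementation repeats it until the whole wrapped fiber lies in the cube:

```python
import numpy as np

rng = np.random.default_rng(1)

def one_periodic_fiber(L=1.0, k=6.0, scale=1.2e-6):
    """Steps 1-5 for a single fiber in the cube [0, L]^3."""
    M = rng.uniform(0.0, L, 3)                      # step 1: random point
    theta = np.radians(rng.normal(90.0, 20.0))      # step 2: stand-in pdfs
    phi = rng.uniform(0.0, 2.0 * np.pi)
    p = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi), np.cos(theta)])
    # step 3: chord endpoints P1, P2 of the line M + t*p inside the cube
    with np.errstate(divide="ignore"):
        t = np.stack([(0.0 - M) / p, (L - M) / p])
    t1, t2 = np.max(t.min(axis=0)), np.min(t.max(axis=0))
    P1, P2 = M + t1 * p, M + t2 * p
    # step 4: cut at M and translate the sub-segment M->P2 through the face
    # touched by P2, so the solid phase is continuous across opposite faces
    axis = int(np.argmax(np.maximum(P2 - L, -P2)))
    shift = np.zeros(3)
    shift[axis] = -L if P2[axis] >= L - 1e-12 else L
    radius = rng.gamma(k, scale)                    # step 5: Gamma radius
    return (M, P1), (M + shift, P2 + shift), radius

print(one_periodic_fiber())
```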
## Appendix D Elementary transport processes and acoustical macro-behavior

### Elementary transport processes

In this section, we focus on identifying macroscopic transport properties by addressing local equations with adequate boundary conditions. These equations are classically derived from an asymptotic analysis. Note that the open porosity \(\phi\) and the thermal characteristic length \(\Lambda^{\prime}\) are purely geometric parameters that can be directly calculated from the microstructure and determined by integration:

\[\phi=\frac{\int_{\Omega_{f}}dV}{\int_{\Omega}dV}, \tag{13}\]

\[\Lambda^{\prime}=2\frac{\int_{\Omega_{f}}dV}{\int_{\partial\Omega}dS}, \tag{14}\]

where \(\Omega\) is the volume element (VE), \(\Omega_{f}\) is the fluid volume and \(\partial\Omega\) denotes the fluid-solid interface. The remaining transport parameters are determined numerically by applying spatial averaging to the solution fields corresponding to the problems mentioned below.

1. Viscous permeability. At low frequencies, also known as the static regime, viscous forces are dominant. The low-Reynolds-number flow of an incompressible Newtonian fluid in this regime is governed by the steady-state Stokes equations:

\[\begin{split}\eta\Delta\mathbf{v}-\nabla p&=-\nabla p^{m}\quad\text{in}\quad\Omega_{f},\\ \nabla\cdot\mathbf{v}&=0\quad\text{in}\quad\Omega_{f},\\ \mathbf{v}&=0\quad\text{on}\quad\partial\Omega,\\ \mathbf{v}\text{ and }p&\text{ are }\Omega\text{-periodic};\end{split} \tag{15}\]

where \(\mathbf{v}\), \(p\), and \(\eta\) are the velocity, pressure, and viscosity of the fluid, respectively. The term \(\nabla p^{m}\) is a macroscopic pressure gradient acting as a driving force, and \(\partial\Omega\) is the fluid-solid interface. The macroscopic pressure gradient is specified in the form

\[\nabla p^{m}=|\nabla p^{m}|\mathbf{e}. \tag{16}\]

Since Eq. 15 is linear, it can be shown that

\[\phi\left\langle\mathbf{v}\right\rangle=-\frac{\mathbf{K}}{\eta}\cdot\nabla p^{m}, \tag{17}\]

where \(\mathbf{K}\) is a positive-definite symmetric tensor, and the symbol \(\left\langle\bullet\right\rangle\) indicates a fluid-phase average, that is

\[\left\langle\bullet\right\rangle=\frac{1}{\Omega_{f}}\int_{\Omega_{f}}\bullet\,dV.\]

The static permeability \(k_{0}\) along the direction specified by the unit vector \(\mathbf{e}\) is calculated as

\[k_{0}=\left(\mathbf{K}\cdot\mathbf{e}\right)\cdot\mathbf{e}=-\frac{\eta\phi}{|\nabla p^{m}|}\left\langle\mathbf{v}\right\rangle\cdot\mathbf{e}. \tag{18}\]

Figure 16: Algorithm used to calculate the domain size in order to reconstruct microstructures of the random fibrous materials under study with periodic boundary conditions.

2. Viscous characteristic length and tortuosity. At high frequencies, when \(\omega\) is large enough, inertial forces dominate over viscous forces. The fluid tends to behave as an ideal fluid, having no viscosity. In this case, the inertial flow problem is analogous to the problem of electric conduction of a conducting fluid saturating an insulating porous structure:

\[\begin{split}\mathbf{E}&=\mathbf{e}-\nabla\varphi\quad\text{in}\quad\Omega_{f},\\ \nabla\cdot\mathbf{E}&=0\quad\text{in}\quad\Omega_{f},\\ \mathbf{E}\cdot\mathbf{n}&=0\quad\text{on}\quad\partial\Omega,\\ \varphi&\text{ is }\Omega\text{-periodic},\end{split} \tag{122}\]

where \(\mathbf{e}\) is a global unit electric field, \(\mathbf{E}\) is the electric field solution of the boundary value problem, \(\varphi\) is the scalar electrostatic potential and \(\mathbf{n}\) is the local unit normal vector directed into the pore space.
Then, the components of the high-frequency tortuosity tensor \(\alpha_{\infty ij}\) can be obtained from

\[e_{i}=\alpha_{\infty ij}\left\langle E_{j}\right\rangle. \tag{123}\]

In the case of isotropy, the components of the tensor simplify to the diagonal form \(\alpha_{\infty ij}=\alpha_{\infty}\delta_{ij}\). The tortuosity can also be determined by calculating the mean square value of the local electric field through

\[\alpha_{\infty}=\frac{\left\langle\mathbf{E}\cdot\mathbf{E}\right\rangle_{f}}{\left\langle\mathbf{E}\right\rangle_{f}\cdot\left\langle\mathbf{E}\right\rangle_{f}}. \tag{124}\]

The viscous characteristic length \(\Lambda\) can also be determined (for an isotropic medium) by

\[\Lambda=2\frac{\int_{\Omega_{f}}\mathbf{E}\cdot\mathbf{E}\,dV}{\int_{\partial\Omega}\mathbf{E}\cdot\mathbf{E}\,dS}. \tag{125}\]

3. Thermal permeability. Under the excitation of an external harmonic source, with perfectly absorbing conditions at the fluid-solid interface, the static thermal permeability is obtained from the equation

\[k^{\prime}_{0}=\phi\left\langle u\right\rangle, \tag{126}\]

where the scaled, \(\Omega\)-periodic temperature field \(u\) is the solution to the Poisson equation

\[\Delta u=-1\quad\text{in}\quad\Omega_{f},\qquad u=0\quad\text{on}\quad\partial\Omega. \tag{127}\]

Here \(u\) is presumed to be periodic with a period \(L_{i}\) across the three spatial directions. The parameter \(k^{\prime}_{0}\) is a positive-definite scalar that is solely dependent on the geometry of the medium.

### Acoustical macro-behavior

Significant semi-phenomenological models with visco-thermal dissipation mechanisms were developed by Johnson et al. [63] and Lafarge et al. [68]. In these works, the assumption of a rigid solid skeleton was made \(a\ priori\). Johnson et al. and Lafarge et al. proposed that two general expressions for the frequency dependence of the visco-inertial and thermal exchanges between the frame and the saturating fluid can be established with two sets of parameters (\(\Lambda\), \(k_{0}\), \(\alpha_{\infty}\), \(\phi\)) and (\(\Lambda^{\prime}\), \(k^{\prime}_{0}\), \(\phi\)). The model is consistent with the frequency dependence of the first two leading terms of the exact result for high frequencies, but only one term for low frequencies. Both numerical simulations and experiments have demonstrated that the model by Johnson et al. and Lafarge et al., known as the JCAL model, is very robust (although not exact). In this section, we provide a summary of the JCAL model as a means of predicting the sound absorption of polydisperse fibrous media. For porous materials having a rigid and motionless skeleton, the equivalent dynamic mass density \(\tilde{\rho}_{eq}(\omega)\) and the equivalent dynamic bulk modulus \(\tilde{K}_{eq}(\omega)\) of the material are computed as

\[\tilde{\rho}_{eq}(\omega)=\frac{\alpha_{\infty}\rho_{0}}{\phi}\left[1+\frac{\phi\sigma}{i\omega\alpha_{\infty}\rho_{0}}\sqrt{1+i\frac{4\alpha_{\infty}^{2}\eta\rho_{0}\omega}{\sigma^{2}\Lambda^{2}\phi^{2}}}\right], \tag{128}\]

and

\[\tilde{K}_{eq}(\omega)=\frac{\gamma P_{0}/\phi}{\gamma-(\gamma-1)\left[1-i\frac{\phi\kappa}{k^{\prime}_{0}C_{p}\rho_{0}\omega}\sqrt{1+i\frac{4k^{\prime 2}_{0}C_{p}\rho_{0}\omega}{\kappa\Lambda^{\prime 2}\phi^{2}}}\right]^{-1}}. \tag{129}\]
\tag{26}\] In these equations, \(\sigma=\eta/k_{0}\) is the (through-plane) airflow resistivity, \(\rho_{0}\) is the density of air, \(P_{0}\) the atmospheric pressure, \(\gamma=C_{p}/C_{v}\) the ratio of heat capacities at constant pressure and volume, \(i\) the imaginary unit and \(\omega=2\pi f\) the angular frequency. The wave number \(\tilde{k}_{eq}(\omega)\) and the characteristic impedance \(\tilde{Z}_{eq}(\omega)\) are then given by: \[\tilde{k}_{eq}(\omega)=\omega\sqrt{\tilde{\rho}_{eq}(\omega)/\tilde{K}_{eq}(\omega)}, \tag{27}\] \[\tilde{Z}_{eq}(\omega)=\sqrt{\tilde{\rho}_{eq}(\omega)\tilde{K}_{eq}(\omega)}. \tag{28}\] The normal-incidence surface impedance of a layer of thickness \(L_{s}\) is expressed by \[\tilde{Z}_{s}=-i\tilde{Z}_{eq}\cot{(\tilde{k}_{eq}L_{s})}. \tag{29}\] The sound absorption coefficient at normal incidence follows: \[SAC_{NI}=1-\left|\frac{\tilde{Z}_{s}-Z_{0}}{\tilde{Z}_{s}+Z_{0}}\right|^{2}, \tag{30}\] where \(Z_{0}=\rho_{0}c_{0}\) is the impedance of air and \(c_{0}\) is the sound speed in air. ## Appendix E Characteristic transition frequencies The viscous transition frequency \(f_{v}\) characterises the transition between the high- and low-frequency limits of the dynamic viscous permeability \(k(\omega)\) (Johnson et al. [5]). One can estimate \(f_{v}\) using the following simple formula \[f_{v}=\frac{\phi\sigma}{2\pi\rho_{0}\alpha_{\infty}}, \tag{31}\] where \(\rho_{0}=1.213\,kg/m^{3}\) is the density of air at rest under normal conditions. Here, low frequency means \(f\ll f_{v}\), whereas high frequency corresponds to \(f\gg f_{v}\). As a thermal counterpart of \(f_{v}\), the thermal transition frequency \(f_{t}\) characterises the transition between the high- and low-frequency limits of the dynamic thermal permeability \(k^{\prime}(\omega)\) (Lafarge et al. [68]): \[f_{t}=\frac{\phi\kappa}{2\pi k_{0}^{\prime}C_{p}}, \tag{32}\] where \(\kappa=2.5\times 10^{-2}\,W/(m\cdot K)\) is the heat conductivity of air and \(C_{p}=1.219\times 10^{3}\,J/(m^{3}\cdot K)\) is the isobaric heat capacity of air per unit volume. Low frequency refers to \(f\ll f_{t}\), whereas high frequency must be understood as \(f\gg f_{t}\). From these simple equations [Eqs. (31)-(32)] and the results in Tab. 5, the characteristic transition frequencies of the materials studied are calculated and reported in Tab. 6. These values are useful to show that the high-frequency behavior is barely measurable with a standard impedance tube.
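To make the model above concrete, a direct transcription of Eqs. (25)-(30) into code is sketched below. This is a minimal illustration assuming typical ambient air properties; the numerical constants and the parameter values in the usage line are placeholders, not those of the materials studied here, and the sign conventions follow the equations as written.

```python
import numpy as np

# Assumed ambient air properties (typical values, not taken from Tab. 5).
rho0, eta, P0 = 1.213, 1.84e-5, 101325.0       # kg/m^3, Pa.s, Pa
gamma, kappa, Cp = 1.4, 2.5e-2, 1005.0         # -, W/(m.K), J/(kg.K)
c0 = 343.0                                     # m/s
Z0 = rho0 * c0                                 # impedance of air

def jcal_absorption(f, phi, sigma, alpha_inf, lam, lam_p, k0p, L):
    """Normal-incidence sound absorption of a rigid-frame layer of
    thickness L, by direct transcription of Eqs. (25)-(30)."""
    w = 2 * np.pi * f
    # Dynamic density, Eq. (25)
    rho_eq = (alpha_inf * rho0 / phi) * (
        1 + (phi * sigma / (1j * w * alpha_inf * rho0))
        * np.sqrt(1 + 1j * 4 * alpha_inf**2 * eta * rho0 * w
                  / (sigma**2 * lam**2 * phi**2)))
    # Dynamic bulk modulus, Eq. (26)
    theta = phi * kappa / (k0p * Cp * rho0 * w)
    K_eq = (gamma * P0 / phi) / (
        gamma - (gamma - 1)
        / (1 - 1j * theta * np.sqrt(1 + 1j * 4 * k0p**2 * Cp * rho0 * w
                                    / (kappa * lam_p**2 * phi**2))))
    k_eq = w * np.sqrt(rho_eq / K_eq)            # wave number, Eq. (27)
    Z_eq = np.sqrt(rho_eq * K_eq)                # char. impedance, Eq. (28)
    Zs = -1j * Z_eq / np.tan(k_eq * L)           # surface impedance, Eq. (29)
    return 1 - np.abs((Zs - Z0) / (Zs + Z0))**2  # SAC_NI, Eq. (30)

# Illustrative placeholder parameters for a 2 cm fibrous layer:
f = np.linspace(200.0, 4000.0, 5)
print(jcal_absorption(f, phi=0.95, sigma=2.0e4, alpha_inf=1.05,
                      lam=60e-6, lam_p=120e-6, k0p=3.0e-9, L=0.02))
```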
2309.08096
GelSplitter: Tactile Reconstruction from Near Infrared and Visible Images
The GelSight-like visual tactile (VT) sensor has gained popularity as a high-resolution tactile sensing technology for robots, capable of measuring touch geometry using a single RGB camera. However, the development of multi-modal perception for VT sensors remains a challenge, limited by the mono camera. In this paper, we propose the GelSplitter, a new framework that approaches the multi-modal VT sensor with synchronized multi-modal cameras, resembling a more human-like tactile receptor. Furthermore, we focus on 3D tactile reconstruction and implement a compact sensor structure that maintains a comparable size to state-of-the-art VT sensors, even with the addition of a prism and a near infrared (NIR) camera. We also design a photometric fusion stereo neural network (PFSNN), which estimates surface normals of objects and reconstructs touch geometry from both infrared and visible images. Our results demonstrate that the accuracy of RGB and NIR fusion is higher than that of RGB images alone. Additionally, our GelSplitter framework allows for a flexible configuration of different camera sensor combinations, such as RGB and thermal imaging.
Yuankai Lin, Yulin Zhou, Kaiji Huang, Qi Zhong, Tao Cheng, Hua Yang, Zhouping Yin
2023-09-15T01:26:11Z
http://arxiv.org/abs/2309.08096v1
# GelSplitter: Tactile Reconstruction from Near Infrared and Visible Images ###### Abstract The GelSight-like visual tactile (VT) sensor has gained popularity as a high-resolution tactile sensing technology for robots, capable of measuring touch geometry using a single RGB camera. However, the development of multi-modal perception for VT sensors remains a challenge, limited by the mono camera. In this paper, we propose the GelSplitter, a new framework that approaches the multi-modal VT sensor with synchronized multi-modal cameras, resembling a more human-like tactile receptor. Furthermore, we focus on 3D tactile reconstruction and implement a compact sensor structure that maintains a comparable size to state-of-the-art VT sensors, even with the addition of a prism and a near infrared (NIR) camera. We also design a photometric fusion stereo neural network (PFSNN), which estimates surface normals of objects and reconstructs touch geometry from both infrared and visible images. Our results demonstrate that the accuracy of RGB and NIR fusion is higher than that of RGB images alone. Additionally, our GelSplitter framework allows for a flexible configuration of different camera sensor combinations, such as RGB and thermal imaging. Keywords: Visual tactile, Photometric stereo, Multi-modal fusion. ## 1 Introduction Tactile perception is an essential aspect of robotic interaction with the natural environment [1]. As a direct means for robots to perceive and respond to physical stimuli, it enables them to perform a wide range of tasks and to interact with humans and other objects, such as touch detection [5], force estimation [4], robotic grasping [12] and gait planning [25]. While the human skin can easily translate the geometry of a contacted object into nerve impulses through tactile receptors, robots face challenges in achieving the same level of tactile sensitivity, especially in multi-modal tactile perception. Visual tactile (VT) sensors such as the GelSights [28, 22, 6] are a haptic sensing technology gaining popularity with an emphasis on dense, accurate, and high-resolution measurements, using a single RGB camera to measure touch geometry. However, the RGB-only imaging mode restricts the multi-modal perception of VT sensors. RGB and near infrared (NIR) image fusion is a hot topic in image enhancement, using IR images to enhance RGB image detail and improve measurement accuracy [8, 32]. From this perspective, a multi-modal image fusion VT sensor is proposed to enrich the tactile senses of the robot. In this paper, we present the GelSplitter with RGB-NIR fusion as a solution to the above challenge. This novel design integrates a splitting prism to reconstruct tactile geometry from both NIR and RGB cameras. Our GelSplitter offers a framework for the implementation of multi-modal VT sensors, which can further enhance the tactile capabilities of robots, as shown in Fig. 1. Additionally, our GelSplitter framework allows for a flexible configuration of different camera sensor combinations, such as RGB and thermal imaging. In summary, our contribution lies in three aspects: * We propose the GelSplitter, a new framework to design multi-modal VT sensors. * Based on the framework, we focus on the task of 3D tactile reconstruction and fabricate a compact sensor structure that maintains a comparable size to state-of-the-art VT sensors, even with the addition of a prism and camera.
* A photometric fusion stereo neural network (PFSNN) is implemented to estimate surface normals of objects and reconstruct touch geometry from both infrared and visible images. The common issue of data alignment between the RGB and IR cameras is addressed by incorporating splitting imaging. Our results demonstrate that our method outperforms RGB-only VT sensing. Figure 1: Comparison with the RGB-only method: our GelSplitter offers a framework for the implementation of multi-modal VT sensors, which can further enhance the performance of tactile reconstruction. (a): Our proposed GelSplitter. (b): An RGB image of our sensor. (c): A corresponding NIR image. (d): The normal map of our PFSNN estimated from the RGB and NIR images. (e): The depth map reconstructed from the normal map (d). (f): The normal map of the look-up table (LUT) method [28] estimated from the RGB image. (g): The depth map reconstructed from the normal map (f). The rest of the paper is organized as follows: In Sect. 2, we briefly introduce some related works on VT sensors and multi-modal image fusion. Then, we describe the main design of the GelSplitter and PFSNN in Sect. 3. In order to verify the performance of our proposed method, experimental results are presented in Sect. 4. ## 2 Related Work ### VT Sensor GelSight [28] is a well-established VT sensor that operates like a pliable mirror, transforming physical contact or pressure distribution on its reflective surface into tactile images that can be captured by a single camera. Building upon the framework of the GelSight, various VT sensors [9, 22, 23] have been designed to meet diverse application requirements [18, 16, 17, 10]. Inspired by GelSight, DIGIT [13] improves upon past VT sensors by miniaturizing the form factor to be mountable on multi-fingered hands. Additionally, there are studies that explore the materials and patterns of reflective surfaces to gather other modal information of touch sensing. For example, FingerVision [27] provides multi-modal sensation with a completely transparent skin. HaptiTemp [2] uses thermochromic pigments colored blue, orange, and black, with thresholds of \(31^{\circ}\)C, \(43^{\circ}\)C, and \(50^{\circ}\)C, respectively, on the gel material, to enable high-resolution temperature measurements. Figure 2: Compared with existing VT sensors, our GelSplitter extends the dimensionality of perception while maintaining the same image plane. (a): The imaging mode of a typical GelSight-like [28] VT sensor. (b): The imaging mode of a GelStereo [11] VT sensor. (c): The imaging mode of our GelSplitter maintains the optical centres of the different cameras and aligns the multi-modal data. In addition, our framework allows for a flexible configuration of different camera sensor combinations, such as RGB and thermal imaging. DelTact [31] adopts an improved dense random color pattern to achieve high accuracy of contact deformation tracking. The VT sensors mentioned above are based on a single RGB camera. Inspired by binocular stereo vision, GelStereo [6] uses two RGB cameras to achieve tactile geometry measurements. Further, GelStereo has been developed for six-axis force/torque estimation [30] and geometry measurement [7, 11]. In current research, one type of camera is commonly used, with a small size and a simplified optical path. The RGB camera solution for binocular stereo vision effectively estimates disparity by triangulation.
However, data alignment is a common issue across different kinds of camera images [3], because multi-modal sensors naturally have different extrinsic parameters between modalities, such as lens parameters and relative position. In this paper, two identical imaging windows are fabricated by a splitting prism, which separates the RGB component (band-pass filtered at a \(650nm\) wavelength) from the NIR component (narrow-band filtered at a \(940nm\) wavelength). ### Multi-modal Image Fusion Multi-modal image fusion is a fundamental task for robot perception, healthcare and autonomous driving [26, 21]. However, due to high raw data noise, low information utilisation and unaligned multi-modal sensors, it is challenging to achieve good performance. In these applications, different types of data are captured by different sensors, such as infrared (IR) and RGB images [32], computed tomography (CT) and positron emission tomography (PET) scans [19], and LIDAR point clouds and RGB images [15]. Typically, the fusion of NIR and RGB images enhances image quality and complements the missing information in the RGB image. DenseFuse [14] proposes an encoding network that combines convolutional layers, a fusion layer, and a dense block to extract more useful features from source images. DarkVisionNet [20] extracts clear structure details in a deep multiscale feature space rather than the raw input space. MIRNet [29] adopts a multi-scale residual block to learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Data alignment is another important aspect of multi-modal image fusion. Generally, the targets corresponding to multiple sensors are in different coordinate systems, and the data rates of different sensors are diverse. To effectively utilize heterogeneous information and obtain simultaneous target information, it is essential to map the data onto a unified coordinate system with proper time-space alignment [24, 21]. Following the above literature, PFSNN is designed for the GelSplitter, completing the missing information of RGB images with the NIR image. It generates refined normal vector maps and depth maps. In addition, a splitting prism is embedded to unify the image planes and optical centres of the different cameras and align the multi-modal data. ## 3 Design and Fabrication Our design aims to achieve high-resolution 3D tactile reconstruction while maintaining a compact shape. Fig. 3 shows the components and schematic diagram of the sensor design. In the following, we describe the design principles and lessons learned for each sensor component. ### Prism and Filters To capture the NIR image and the RGB image, a splitting prism, a band-pass filter, a narrow-band filter and a diffuser were prepared. These components ensure that the images are globally illuminated and homogeneous. The splitting prism is a cube with a side length of \(15mm\), as shown in Fig. 4 (a). It has a spectral ratio of 1:1 and a refractive index of 1.5168, creating two identical viewports. Among the six faces of the cube, except for the two facing the cameras and the one facing the gel, the other three are painted black to reduce secondary reflections. The diffuser in the lighting path produces more even global illumination as it distributes light evenly throughout the scene. The octagonal diffuser needs to be optically coupled to the splitting prism to avoid reflection from the air interface, as shown in Fig. 4 (b). "3M Diffuser 3635-70" is used for the sensor. A \(650nm\) band-pass filter and a \(940nm\) narrow-band filter are used to separate the RGB component from the NIR component. The lens and filter are integrated together, as shown in Fig. 4 (c). Figure 3: The design of our GelSplitter, including the exploded view showing inner components, the assembled CAD model, and the sectional view of the sensor. ### Lighting In this paper, five types of colour LEDs are provided for illumination in different directions, including red (Type NCD0603R1, wavelength 615\(\sim\)630\(nm\)), green (Type NCD0603W1, wavelength 515\(\sim\)530\(nm\)), blue (Type NCD0603B1, wavelength 463\(\sim\)475\(nm\)), white (Type NCD0603W1) and infrared (Type XL-1608IRC940, wavelength 940\(nm\)), as shown in Fig. 4 (d). These LEDs are all in the 0603 package size (1.6\(\times\)0.8 \(mm\)). Moreover, in order to capture surface gradient information, the LEDs are arranged in two different ways. Firstly, the red, green, blue and white LEDs are arranged in rows, illuminating the four sides of the gel. This allows for a clear representation of surface gradients from different angles. Secondly, the infrared LEDs are arranged in a circular formation above the gel, surrounding the infrared camera. This allows IR light to shine on the shaded areas, supplementing the missing gradient information. Finally, we designed the FPC and PCB to drive the LEDs, as shown in Fig. 4 (e). The LED brightness is configured with a resistor, and its consistency is ensured with a luminance meter. ### Camera Both the NIR and RGB cameras use the common CMOS sensor OV5640, manufactured by OmniVision Technologies, Inc. The OV5640 supports a wide range of resolution configurations from 320\(\times\)240 to 2592\(\times\)1944, as well as auto exposure control (AEC) and auto white balance (AWB). Following the experimental setting of GelSights [1], our resolution is configured to 640\(\times\)480. AEC and AWB are disabled to obtain the linear response characteristics of the images. To capture clear images, it is essential to adjust the focal length by rotating the lens and to ensure that the depth of field is within the appropriate range. Ideally, the optical centres of the two cameras would be identical; in practice, however, there is a small assembly error that requires fine-grained data alignment. Both RGB and NIR cameras are calibrated and corrected for aberrations using Zhang's calibration method implemented in OpenCV. Random sample consensus (RANSAC) regression is used to align the checkerboard corners of the two cameras' images to achieve data alignment. Figure 4: The inner components and details of our GelSplitter. (a): The splitting prism. (b): The diffuser. (c): The 650\(nm\) band-pass filter and 940\(nm\) narrow-band filter. (d): The lighting. (e): The FPC and PCB to drive the LEDs. (f): The gel with reflective covering. (g): The silk screen printing plate. ### Elastomer Our transparent silicone is coated with diffuse reflective paint. We choose a low-cost food-grade platinum silicone which is mixed in a 1:1 ratio. The silicone is poured into a mould and produced as a soft gel mat with a thickness of 1.5 \(mm\). In our experiment, we found that a gel with a Shore hardness of 10A is both pliable enough to avoid breakage and capable of producing observable deformations. To achieve defoaming, it is necessary to maintain an environmental temperature of approximately 10\({}^{\circ}\)C and apply a vacuum pressure of -0.08 MPa.
### SLA 3D Model We use stereolithography (SLA) 3D printing to create the case. Compared to fused deposition modelling (FDM) 3D printing of PLA, SLA technology has a higher precision, meeting the assembly requirements of the splitting prism. Despite the addition of a prism and an NIR camera, our case still maintains a comparable size to the state-of-the-art VT sensor GelSlim 3.0 [22], as shown in Fig. 4 (g). Figure 5: We propose PFSNN to fuse RGB images and NIR images from the GelSplitter and estimate dense normal maps. PFSNN is composed of a multi-layer perceptron (MLP) and sphere normalization, and is supervised by an L1 loss function. Furthermore, the fast Poisson algorithm [28] can be utilized to solve depth maps based on normal maps. ## 4 Measuring 3D Geometry In this section, we introduce PFSNN, which fuses RGB images and NIR images from the GelSplitter and estimates dense normal maps. In the following, we describe the components and implementation details of PFSNN. ### PFSNN #### 4.1.1 Network Architecture of PFSNN is shown in Table 1. It is composed of a multi-layer perceptron (MLP) and sphere normalization. Compared to the look-up table (LUT) method [28], the MLP network is trainable. Through its non-linear fitting capability, the network can combine and integrate information from both RGB and NIR sources. #### 4.1.2 Sphere Normalization is derived from a physical model of the spherical press distribution and outputs a unit normal vector map, as shown in Fig. 5. It is defined as: \[n=\frac{tanh(x)}{max(\left\|tanh(x)\right\|_{2},\epsilon)}, \tag{1}\] where \(x\) is the output of the MLP, and \(\epsilon\) is a small value (\(10^{-12}\) in this paper) to avoid division by zero. Furthermore, the fast Poisson algorithm [28] can be utilized to solve depth maps based on normal maps. It is defined as: \[d=Fast\_Poisson(n). \tag{2}\] #### 4.1.3 Implementation Details. PFSNN requires only a small amount of data for training. In this paper, only five images of spherical presses were collected, four for training and one for validation. We test the model with screw caps, screws, hair and fingerprints that PFSNN has never seen before, as shown in Fig. 6. All of our experiments are executed on an NVIDIA RTX 3070 laptop GPU. Our method is implemented in the PyTorch 2.0 framework and trained with an ADAM optimizer. The batch size is set to 64. The learning rate is set to 0.01 for 20 epochs. No data augmentation methods are used. \begin{table} \begin{tabular}{c c c c c} \hline Layer & Operator & Kernel Size & Input Channels & Output Channels \\ \hline 1 & Concat & - & 4 (RGB-NIR) + 4 (Background) & 8 \\ 2 & Conv2d & 1 & 8 & 128 \\ 3 & Relu & - & 128 & 128 \\ 4 & Conv2d & 1 & 128 & 64 \\ 5 & Relu & - & 64 & 64 \\ 6 & Conv2d & 1 & 64 & 3 \\ 7 & Relu & - & 3 & 3 \\ 8 & Tanh & - & 3 & 3 \\ 9 & Normalize & - & 3 & 3 \\ 10 & \(x=0.5x+0.5\) & - & 3 & 3 \\ \hline \end{tabular} \end{table} Table 1: Network Architecture of PFSNN.
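For readers who prefer code, a minimal PyTorch sketch of the per-pixel MLP of Table 1 with the sphere normalization of Eq. (1) might look as follows. Two simplifying assumptions are ours: the dummy tensor shapes are illustrative, and we omit the table's layer-7 ReLU, following Eq. (1) directly (a ReLU before the Tanh would clamp negative normal components).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PFSNN(nn.Module):
    """Per-pixel MLP (1x1 convolutions) mapping the 8-channel stack of
    RGB-NIR and its background reference to a unit normal map."""

    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(8, 128, kernel_size=1), nn.ReLU(),   # layers 2-3
            nn.Conv2d(128, 64, kernel_size=1), nn.ReLU(),  # layers 4-5
            nn.Conv2d(64, 3, kernel_size=1),               # layer 6
        )

    def forward(self, rgb_nir, background):
        x = torch.cat([rgb_nir, background], dim=1)        # layer 1: concat
        x = self.mlp(x)
        # Sphere normalization, Eq. (1): n = tanh(x) / max(||tanh(x)||_2, eps)
        n = F.normalize(torch.tanh(x), dim=1, eps=1e-12)
        return 0.5 * n + 0.5                               # layer 10

# Training skeleton with the L1 supervision and Adam settings of the text.
model = PFSNN()
opt = torch.optim.Adam(model.parameters(), lr=0.01)
rgb_nir = torch.rand(4, 4, 480, 640)     # dummy batch: B x (R,G,B,NIR) x H x W
background = torch.rand(4, 4, 480, 640)
target = torch.rand(4, 3, 480, 640)      # ground-truth normal map
opt.zero_grad()
loss = F.l1_loss(model(rgb_nir, background), target)
loss.backward()
opt.step()
```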
In NIR images, the opposite property to RGB images is observed, with higher depth map gradients being darker. This means that our design allows the NIR image to contain information that is complementary to the RGB image and illuminates the shaded parts of the RGB image,as shown in Fig. 6 (a). In addition, our splitting design of imaging allows both the normal vector map and the reconstructed depth map to have a clear texture, as shown in Fig. 6 (b), where the threads of the screw are clearly reconstructed. Hair strands (diameter approx. \(0.05mm\sim 0.1mm\)) and \begin{table} \begin{tabular}{c c c c c} \hline \hline \multicolumn{2}{c}{LUT w/o NIR LUT w. NIR PFSNN w/o NIR PFSNN w. NIR} \\ \hline MAE(\({}^{\circ}\)) & 9.292 & 8.731 & 6.057 & 5.682 \\ \hline \hline \end{tabular} \end{table} Table 2: Experiment Result of PFSNN. Figure 6: Result of 3D tactile reconstruction from RGB-NIR. (a):A screw cap. (b):A screw. (c): A hint of hair. (d): A fingerprint. Even though GelSplitter is used to touch these items for the first time, remarkably clear shapes are still able to be reconstructed through the 3D tactile reconstruction process. fingerprints(diameter approx. \(0.01mm\)\(\sim\)\(0.02mm\)) are used to test the minimum resolution of the GelSplitter, as shown in Fig. 6 (c) (d). In addition, our splitter can be easily switched between RGB and RGB-NIR modes and provides a fair comparison of the results, as shown in the Tab. 2. The LUT method [28] is employed as a baseline to verify the validity of our method. The results show that the addition of NIR reduces the normal error by \(0.561^{\text{\textdegree}}\) and \(0.375^{\text{\textdegree}}\) for LUT and PSFNN respectively. Our PFSNN outperforms the LUT, decreasing the error by over \(40\%\). ## 5 Conclusion In this paper, we proposed a framework named GelSplitter to implement multi-modal VT sensor. Furthermore, we focus on 3D tactile reconstruction and designed a compact sensor structure that maintains a comparable size to state-of-the-art VT sensors, even with the addition of a prism and camera. We also implemented the PFSNN to estimate surface normals of objects and reconstruct touch geometry from both NIR and RGB images. Our experiment results demonstrated the performance of our proposed method. ## Acknowledgments This work was supported by the Guangdong University Engineering Technology Research Center for Precision Components of Intelligent Terminal of Transportation Tools (Project No.2021GCZX002), and Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment.
2309.06905
Parity Measurements using Dispersive Shifts for Surface Codes
Parity measurements are central to quantum error correction (QEC). In current implementations, measurements of stabilizers are performed using a number of Controlled-NOT (CNOT) gates. This implementation suffers from an exponential decrease in fidelity as the number of CNOT gates increases; the stabilizer measurements therefore suffer a severe decrease in fidelity and an increase in gate time. Speeding up and improving the fidelity of this process will improve the error rates of these stabilizer measurements, thus increasing the coherence times of logical qubits. We propose a single-shot method for stabilizer readout based on dispersive shifts. We show a possible setup for this method and simulate a 4-qubit system, showing that this method is an improvement over the previous CNOT circuit in both fidelity and gate time. We find a fidelity of 99.8% and a gate time of 600 ns using our method, and investigate the effects of higher-order Z interactions on the system.
Aneirin Baker
2023-09-13T12:06:46Z
http://arxiv.org/abs/2309.06905v1
# Parity Measurements using Dispersive Shifts for Surface Codes ###### Abstract Parity measurements are central to quantum error correction (QEC). In current implementations, measurements of stabilizers are performed using a number of Controlled-NOT (CNOT) gates. This implementation suffers from an exponential decrease in fidelity as the number of CNOT gates increases; the stabilizer measurements therefore suffer a severe decrease in fidelity and an increase in gate time. Speeding up and improving the fidelity of this process will improve the error rates of these stabilizer measurements, thus increasing the coherence times of logical qubits. We propose a single-shot method for stabilizer readout based on dispersive shifts. We show a possible setup for this method and simulate a 4-qubit system, showing that this method is an improvement over the previous CNOT circuit in both fidelity and gate time. We find a fidelity of 99.8% and a gate time of 600 ns using our method, and investigate the effects of higher-order Z interactions on the system. ## I Introduction As quantum computers progress, the need for quantum error correction (QEC) becomes ever clearer. Having a system whose lifetime and error rates exceed the levels of the individual physical qubits is essential for the future of quantum computing. Current developments within QEC have focused on an implementation called the surface code [1; 2]; however, other implementations have been explored, such as the colour code and the Steane code [3; 4; 5]. At the heart of these implementations are measurements of stabilizer elements; these stabilizers are used to determine whether an error has occurred in the system. Here we focus our attention on the surface code, a planar implementation of Kitaev's toric code [6]. This code has a high threshold for correctable errors of around 1% [7] and is amenable to implementation in 2D architectures, making it a promising contender to reach fault-tolerant quantum computation. Currently, stabilizers are measured via a series of CNOT gates that map their eigenvalues onto the \(Z\) eigenvalue of an ancilla qubit [8]. In current superconducting implementations of surface codes the CNOT gates which make up these stabilizer measurements have fidelities of 98.5% [1]. Considering that stabilizer measurements require between 2 and 4 CNOT gates, a simple calculation brings us to estimated stabilizer fidelities of 97% and 94%, respectively. Other implementations of the CNOT (or equivalent two-qubit gates) in larger qubit devices have fidelities much lower than required for effective stabilizer interactions [9; 10; 11; 2]. For QEC to be successful these error rates must be improved. A potential way, which has been investigated before in the context of the iToffoli gate [12; 13], is to implement a single-shot measurement of this stabilizer. This single-shot measurement has the advantage of not being affected by the exponential scaling of fidelities for small stabilizer measurements, and shows a modest improvement in gate time over its CNOT decomposition. We also find that it is easy to modify this single-shot gate to other geometries of surface code. We utilize dispersive shifts that can be engineered in superconducting circuits (SCCs) through the ZZ couplings between qubits [12; 14; 15; 16]. Here, we use these shifts to enact parity measurements on ensembles of qubits. We aim to find a regime where a subset of all two-body dispersive shifts dominates and the shifts in this subset are of equal size.
The subset we choose determines the operations we wish to perform. For our example system we choose a 4-qubit setup with 3 control qubits and one ancilla qubit. We choose to measure the odd parity of this system to show that the more complex situation of two drives is feasible. To create a parity measurement we must detune the transitions \(\ket{1001}\leftrightarrow\ket{1000}\), \(\ket{0101}\leftrightarrow\ket{0100}\) and \(\ket{0011}\leftrightarrow\ket{0010}\) (where we have adopted the ordering \(\ket{\text{Qubit1},\text{Qubit2},\text{Qubit3},\text{Ancilla}}\)) from all other transitions of the ancilla. We then drive the ancilla at the detuned transition frequency (\(\omega_{d}=\omega_{a}+\chi\), where \(\chi\) is the dispersive shift we have engineered); only these three transitions will then be driven, and all the rest will be unaffected. If we also ensure that all other dispersive shifts are suppressed, we can use the same technique to drive higher-order transitions, for example \(|1110\rangle\leftrightarrow|1111\rangle\) with the drive \(\omega_{d}=\omega_{a}+3\chi\). Choosing the combination of drives correctly, we can then generate a parity measurement on an ensemble of qubits. Figure 1: Plaquette operator which is the basis for the error correcting codes. It comprises 4 CNOT gates with the ancilla as the target qubit. This paper is organised as follows: in Section II we discuss the physical implementation and theoretically describe the model; in Section II.2 we present simulations of this model; in Section III we discuss the implementation, highlighting its advantages over other implementations and discussing some of its drawbacks. ## II Physical implementation and model The readout of a stabilizer in QEC requires the data qubits to be connected to an ancilla - or measurement - qubit. Recent realizations of simple QEC codes implement these gates using static capacitive coupling [1]. This allowed for fast CZ gates to be implemented in 100 ns. In our implementation we propose a lattice of qubits coupled via tunable couplers. This gives us the freedom to turn off interactions and to adjust the interaction strength by tuning the frequency of the couplers. This tunability also gives us access to different interaction regimes where we can perform gates specific to that regime. The lattice in Fig. 2 shows the qubit layout we propose; we note that this is very similar to current implementations [2; 17], and thus we are expanding the capabilities of these types of superconducting circuits (SCCs). We describe the Hamiltonian of a unit cell in terms of creation and annihilation operators and begin with the following model of the circuit in Fig. 4 \[H=\sum_{i=1}^{4}H_{0}(q_{i})+\sum_{j=1}^{5}H_{0}(c_{j})+H_{0}(a)+H_{\rm int,q}+H_{\rm int,c}. \tag{1}\] Where we have defined \[H_{0}(X_{i})\ =\ \omega_{i}X_{i}^{\dagger}X_{i}+\frac{\alpha_{i}}{2}X_{i}^{\dagger}X_{i}^{\dagger}X_{i}X_{i}. \tag{2}\] Here \(X_{i}(X_{i}^{\dagger})\) represents a general annihilation (creation) operator.
\[H_{\rm int,q} = \sum_{i<j}g_{ij}(q_{i}-q_{i}^{\dagger})(q_{j}-q_{j}^{\dagger}), \tag{3}\] \[H_{\rm int,c} = \sum_{i=1}^{4}g_{i,ci}(q_{i}-q_{i}^{\dagger})(c_{i}-c_{i}^{\dagger})+g_{i+1,ci}(q_{i+1}-q_{i+1}^{\dagger})(c_{i}-c_{i}^{\dagger})+g_{i,c5}(q_{i}-q_{i}^{\dagger})(c_{5}-c_{5}^{\dagger})+g_{a,c5}(a-a^{\dagger})(c_{5}-c_{5}^{\dagger}). \tag{4}\] In the above Hamiltonian \(q_{i}\) (\(c_{i}\), \(a\))/\(q_{i}^{\dagger}\) (\(c_{i}^{\dagger}\), \(a^{\dagger}\)) represent the annihilation/creation operators for the qubits (couplers, ancilla), which obey the commutation relations \[[q_{i},q_{j}^{\dagger}]=[c_{i},c_{j}^{\dagger}]=\delta_{ij},\quad[a,a^{\dagger}]=1. \tag{5}\] All other combinations of the operators are defined to be \(0\). In this model \(\omega_{i}/\alpha_{i}\) (\(\omega_{ci}/\alpha_{ci}\), \(\omega_{a}/\alpha_{a}\)) are the qubit (coupler, ancilla) transition frequencies and anharmonicities, respectively. Here \(g_{nm}\) denotes the coupling between qubits \(n\) and \(m\) (the exact form of the coupling can be found in Appendix A.1) and \(g_{i,cj}\) describes the coupling between the i-th qubit and the j-th coupler. Figure 2: a.) Example of an effective lattice that tessellates the qubit frequencies over the surface to ensure that all qubits are sufficiently detuned from one another. We colour code the qubits to show which qubits would have different transition frequencies. In this diagram the Q's represent the data qubits and the A's represent the ancilla qubits. b.) Representation of the circuit we shall be simulating in this work. Squares represent qubits and circles represent couplers. Each qubit is colour coded to represent a different transition frequency. This pattern can be tessellated over the plane so that all patches of qubits can have the correct dispersive shift to execute the parity check. c.) Legend for the circuit depicted above showing the qubits and couplers used here. We use fixed-frequency qubits and frequency-tunable couplers. This model describes a full unit cell of the lattice in Fig. 2. We can create an effective model for it by performing successive Schrieffer-Wolff transformations, eliminating sets of couplers along the way. At each step we ensure that the neglected terms are small enough to justify the approximation - see Appendix A.1 for further details. For the first approximation we eliminate the couplers along the edges of the unit cell with the transformation \[H\rightarrow\tilde{H}=e^{iS_{\rm edge}}He^{-iS_{\rm edge}}, \tag{6}\] with \[S_{\rm edge}=\sum_{i=1}^{4}\bigg{(}\frac{g_{i,c_{i}}}{\Delta_{i,ci}}\big{(}q_{i}^{\dagger}c_{i}-q_{i}c_{i}^{\dagger}\big{)}-\frac{g_{i,c_{i}}}{\Sigma_{i,ci}}\big{(}q_{i}^{\dagger}c_{i}^{\dagger}-q_{i}c_{i}\big{)}+\frac{g_{i+1,ci}}{\Delta_{i+1,ci+1}}\big{(}q_{i+1}^{\dagger}c_{i}-q_{i+1}c_{i}^{\dagger}\big{)}-\frac{g_{i+1,ci}}{\Sigma_{i+1,ci+1}}\big{(}q_{i+1}^{\dagger}c_{i}^{\dagger}-q_{i+1}c_{i}\big{)}\bigg{)}. \tag{7}\] Here we have defined \(\Delta_{i,j}=\omega_{i}-\omega_{j}\) and \(\Sigma_{i,j}=\omega_{i}+\omega_{j}\), and we have used a notation where the indices are defined modulo 4 to reflect the periodic nature of the system.
We then apply a similar transformation \[\tilde{H}\rightarrow\bar{H}=e^{iS_{\rm center}}\tilde{H}e^{-iS_{\rm center}}, \tag{8}\] with the argument now \[S_{\rm center} = \sum_{i=1}^{4}\frac{\tilde{g}_{i,c5}}{\tilde{\Delta}_{i5}}\big{(}q_{i}^{\dagger}c_{5}-q_{i}c_{5}^{\dagger}\big{)}-\frac{\tilde{g}_{i,c5}}{\tilde{\Sigma}_{i5}}\big{(}q_{i}^{\dagger}c_{5}^{\dagger}-q_{i}c_{5}\big{)}+\frac{\tilde{g}_{a,c5}}{\tilde{\Delta}_{a5}}\big{(}a^{\dagger}c_{5}-ac_{5}^{\dagger}\big{)}-\frac{\tilde{g}_{a,c5}}{\tilde{\Sigma}_{a5}}\big{(}a^{\dagger}c_{5}^{\dagger}-ac_{5}\big{)}, \tag{9}\] to eliminate the central coupler. Here we have defined \(\tilde{\Delta}_{i,j}=\tilde{\omega}_{i}-\tilde{\omega}_{j}\) and \(\tilde{\Sigma}_{i,j}=\tilde{\omega}_{i}+\tilde{\omega}_{j}\). This results in the final Hamiltonian \[\bar{H} = \sum_{i=1}^{5}\bar{H}_{0}(c_{i})+\sum_{j=1}^{4}\bar{H}_{0}(q_{j})+\bar{H}_{0}(a)+\bar{H}_{\rm int,q}+O\Big{(}\frac{g^{2}}{\Delta_{i,5}^{2}}\Big{)}, \tag{10}\] \[\bar{H}_{0}(X_{i}) = \bar{\omega}_{i}X_{i}^{\dagger}X_{i}+\frac{\bar{\alpha}_{i}}{2}X_{i}^{\dagger}X_{i}^{\dagger}X_{i}X_{i}, \tag{11}\] \[\bar{H}_{\rm int,q} = \sum_{i<j}\bar{g}_{ij}(q_{i}-q_{i}^{\dagger})(q_{j}-q_{j}^{\dagger}). \tag{12}\] Here \(\bar{\omega}_{n}\), \(\bar{\alpha}_{n}\) and \(\bar{g}_{nm}\) are the shifted frequencies, nonlinearities and couplings - see Appendix A.1 for the explicit expressions. We pause here to note that other implementations of this system are possible. It is conceivable that a multi-qubit coupler could be used to create the same interactions as we have here, and in a simpler manner. We choose this setup due to the flexibility gained from the tunable couplers and hence the ability to correct for the phase errors that could occur in the system. Schrieffer-Wolff commutation errors: As we have performed two successive SW transformations we must examine whether the two operators commute; these are still unitary transformations, and so when we apply them to the Hamiltonian some error will be accumulated if the two transformations do not commute. We see that they indeed do not commute, since they both contain the qubit operators \(q_{i}\) and \(q_{j}^{\dagger}\), which do not commute when \(i=j\). According to the Baker-Campbell-Hausdorff formula the error due to these two operators will be proportional to \(\frac{1}{2!}[S_{\rm edge},S_{\rm center}]\), which is of order \(\frac{g^{2}}{2\Delta^{2}}\). As we have been considering only Hamiltonians to second order in \(g\), we are safely able to ignore this error; it has a strength of \(\leq 1\) MHz. ### Dispersive Shifts and parity measurement With the model of the system now in place we derive the conditional shifts that form the basis of our gate. As done previously, we treat the interaction term as a perturbation to the system and apply perturbation theory with \(V=\bar{H}_{\rm int,q}\) to determine the qubit transition frequencies up to \(n\)-th order in perturbation theory. Using these expressions we can determine the dispersive shifts present in the system. Specifically we are looking for the \(n\)-body dispersive shifts - defined in Appendix B. In previous explorations of dispersive shifts only three-body shifts were considered, as at most three qubits were coupled together. However, for our system, and therefore for extensions of our system, we must consider \(n\)-body shifts.
For example, in the simulations below we need to consider the 4-body dispersive shift in \(n\)-th order perturbation theory, defined by \[\chi^{\rm bare,(n)}_{1234} =(E^{\rm(n)}_{|1111\rangle}-E^{\rm(n)}_{|0000\rangle})-(E^{\rm(n)}_{|1000\rangle}-E^{\rm(n)}_{|0000\rangle})-(E^{\rm(n)}_{|0100\rangle}-E^{\rm(n)}_{|0000\rangle})-(E^{\rm(n)}_{|0010\rangle}-E^{\rm(n)}_{|0000\rangle})-(E^{\rm(n)}_{|0001\rangle}-E^{\rm(n)}_{|0000\rangle})\] \[=E^{\rm(n)}_{|1111\rangle}-E^{\rm(n)}_{|1000\rangle}-E^{\rm(n)}_{|0100\rangle}-E^{\rm(n)}_{|0010\rangle}-E^{\rm(n)}_{|0001\rangle}+3E^{\rm(n)}_{|0000\rangle}. \tag{13}\] We denote the above dispersive shift as the "bare" shift, since it also contains contributions from lower-order dispersive shifts, which can be expressed as \[\chi^{\rm(n)}_{1234}=\chi^{\rm bare,(n)}_{1234}-\sum_{i\neq j\neq k}\chi^{\rm bare,(n)}_{ijk}-\sum_{i\neq j}\chi^{\rm(n)}_{ij}. \tag{14}\] Here \(\chi^{(m)}_{ijk...}\) is the dispersive shift on the qubits \(ijk...\) to \(m\)-th order in perturbation theory and similarly \(E^{(m)}_{|n\rangle}\) is the energy of state \(|n\rangle\) to \(m\)-th order in perturbation theory. Using these expressions we can estimate the effects of these dispersive shifts to find a regime where we can suppress the unwanted interactions. \[\chi^{\rm(2)}_{1234}= \tag{15}\] \[\frac{-4g^{2}_{1,a}}{\alpha_{1}+\alpha_{a}+\Sigma_{1,a}}+\frac{2g^{2}_{1,a}}{\alpha_{1}+\Sigma_{1,a}}-\frac{2g^{2}_{1,a}}{\alpha_{1}+\Delta_{1,a}}+\frac{2g^{2}_{1,a}}{\alpha_{a}+\Sigma_{1,a}}-\frac{2g^{2}_{1,a}}{\alpha_{a}-\Delta_{1,a}}-\frac{g^{2}_{1,a}}{\Sigma_{1,a}}-\frac{g^{2}_{1,a}}{\Delta_{1,a}}+\frac{g^{2}_{1,a}}{\Delta_{1,a}}\] \[-\frac{4g^{2}_{2,a}}{\alpha_{2}+\alpha_{a}+\Sigma_{2,a}}+\frac{2g^{2}_{2,a}}{\alpha_{2}+\Sigma_{2,a}}-\frac{2g^{2}_{2,a}}{\alpha_{2}+\Delta_{2,a}}+\frac{2g^{2}_{2,a}}{\alpha_{a}+\Sigma_{2,a}}-\frac{2g^{2}_{2,a}}{\alpha_{a}-\Delta_{2,a}}-\frac{g^{2}_{2,a}}{\Sigma_{2,a}}-\frac{g^{2}_{2,a}}{\Delta_{2,a}}+\frac{g^{2}_{2,a}}{\Delta_{2,a}}\] \[-\frac{4g^{2}_{3,a}}{\alpha_{3}+\alpha_{a}+\Sigma_{3,a}}+\frac{2g^{2}_{3,a}}{\alpha_{3}+\Sigma_{3,a}}-\frac{2g^{2}_{3,a}}{\alpha_{3}+\Delta_{3,a}}+\frac{2g^{2}_{3,a}}{\alpha_{a}+\Sigma_{3,a}}-\frac{2g^{2}_{3,a}}{\alpha_{a}-\Delta_{3,a}}-\frac{g^{2}_{3,a}}{\Sigma_{3,a}}-\frac{g^{2}_{3,a}}{\Delta_{3,a}}+\frac{g^{2}_{3,a}}{\Delta_{3,a}}\] \[+\text{Counter Rotating Terms},\] \[=\chi^{\rm(2)}_{12}+\chi^{\rm(2)}_{13}+\chi^{\rm(2)}_{14}+\text{Counter Rotating Terms}.\] To second order we find that these higher-order dispersive shifts can be decomposed into second-order terms plus cross terms which are proportional to the counter-rotating terms. These counter-rotating terms will not contribute much to the dispersive shifts, since they are proportional to \(\frac{1}{\omega_{i}+\omega_{j}}\), which, for our parameters, is small. These are the higher-order shifts present in the summation terms of Eq. (14). Using these expressions we find parameter regimes (by sweeping over a large parameter set within the dispersive regime) where all the pairwise dispersive shifts that include the ancilla are equal, any higher-order interactions are highly suppressed and, finally, all other two-body shifts are highly suppressed - see Appendix D for the exact shifts. Once we have found these regimes we can simulate the dynamics of the system, choosing the number of drives and their frequencies according to the parity measurement we wish to execute. ### Simulations We now simulate a system with 3 data qubits, one ancilla qubit and two drives to demonstrate the proposed mechanism.
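Before presenting the full simulation, the core numerical step can be illustrated on the smallest instance: a single data-qubit-ancilla pair modelled as two coupled Duffing oscillators of the form of Eqs. (1)-(3), whose dressed eigenenergies yield the two-body shift \(\chi=E_{11}-E_{10}-E_{01}+E_{00}\). The sketch below is illustrative, assuming QuTiP; the couplers are taken as already eliminated, the `energy` helper is our own, and the parameter values are placeholders rather than those of Table 1.

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor

def duffing(w, alpha, x):
    """H_0 of Eq. (2): w * x'x + (alpha/2) * x'x'xx."""
    return w * x.dag() * x + 0.5 * alpha * x.dag() * x.dag() * x * x

N = 4                                    # levels kept per transmon
q = tensor(destroy(N), qeye(N))          # data qubit
a = tensor(qeye(N), destroy(N))          # ancilla

# Placeholder parameters (GHz, converted to rad/ns).
wq, wa = 2 * np.pi * 5.28, 2 * np.pi * 4.95
aq, aa = 2 * np.pi * (-0.2), 2 * np.pi * (-0.3)
g = 2 * np.pi * 0.02165

H = duffing(wq, aq, q) + duffing(wa, aa, a) \
    + g * (q - q.dag()) * (a - a.dag())  # coupling of the form of Eq. (3)

evals, evecs = H.eigenstates()

def energy(nq, na):
    """Energy of the dressed state with the largest overlap on |nq, na>."""
    target = tensor(basis(N, nq), basis(N, na))
    overlaps = [abs(target.overlap(v)) for v in evecs]
    return evals[int(np.argmax(overlaps))]

# Two-body shift chi = E11 - E10 - E01 + E00, cf. the bare shift of Eq. (13)
chi = energy(1, 1) - energy(1, 0) - energy(0, 1) + energy(0, 0)
print(chi / (2 * np.pi) * 1e3, "MHz")    # a few MHz (negative) here
```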
As the Hamiltonian we derived has different behaviours for different initial states, we apply two drives to the system to measure the parity. These drives will excite the central ancilla qubit if and only if the data qubits are in the parity state we wish to measure. To select a specific parity state we choose the frequencies of the external drives to match the shifts we have induced in the system. To measure the effectiveness of these simulations we define the ideal evolution to be the evolution where the transitions \(|0000\rangle\leftrightarrow|1000\rangle\), \(|0110\rangle\leftrightarrow|1110\rangle\), \(|0101\rangle\leftrightarrow|1101\rangle\) and \(|0011\rangle\leftrightarrow|1011\rangle\) occur, and all other states are unchanged. Using the definition of process fidelity \(F_{p}(U_{1},U_{2})=|\text{Tr}(U_{1}^{\dagger}U_{2})|/d\) (with \(d=16\) the dimension of the Hilbert space) we can quantify how well our scheme realizes a stabilizer measurement and so compare it to other implementations of parity/stabilizer measurements. We choose the parameters shown in Table 1 such that all the higher-order shifts are suppressed and only the shifts we require are active. These parameters result in dispersive shifts of \(\chi=\chi_{12}=\chi_{13}=\chi_{14}=-5\) MHz, while all other dispersive shifts have absolute values \(\leq 0.4\) MHz. Fig. 3 shows the resulting unitary from simulations of Eq. (10) with two drives, where \(\omega_{d1}=E_{|1100\rangle}-E_{|0100\rangle}\) and \(\omega_{d2}=E_{|1111\rangle}-E_{|0111\rangle}\). The overall process fidelity of this operation was calculated to be \(F_{p}=99.8\%\) with a total execution time of \(t_{\text{gate}}=600\) ns. Errors: The reduction in fidelity of \(\approx 0.2\%\) can be attributed to population leakage to other states. We estimate a \(0.1\%\) fidelity reduction due to leakage to states within the computational subspace and the other \(0.1\%\) due to leakage to higher-order states. As the gate time of our operation is reasonably large, we estimate that decoherence may play a significant role in reducing fidelity. Current dephasing and relaxation times are approaching the \(100\,\mu s\) mark [11; 22]. Whilst these coherence times are constantly improving, the current values would reduce the fidelity of our operation to \(\approx 99.2\%\) (using a \(100\,\mu s\) \(T_{1}\) time), which is still much higher than the fidelities currently possible through the CNOT decomposition method. In our simulations and fidelity calculations we have assumed perfect correction of the phases accumulated by the dispersive shifts we are using. In reality this is not the case: these gates have finite fidelities. However, a modest change to our parameters solves this problem rather elegantly. It is feasible to arrange the parameters such that \(\chi t_{\text{gate}}=2\pi\) (for the dispersive shift between the ancilla and the data qubits). For our case this would involve a modest increase in the dispersive shift to \(-7.5\) MHz and an increase in the gate time to \(\approx 830\) ns; however, this would completely negate the issue of accumulated phase for this system. ## III Discussion We have shown that it is possible to execute higher-order parity measurements in a single step. This technique can be extended further by adding extra qubits to the unit cell, connected to their neighbours and to the ancilla via tunable couplers.
Our single-shot measurement strategy eliminates the need for multiple CNOT gates, reducing the effect of multiplying gate errors; this reduction in gate errors comes with the potential for faster parity gates. We also reduce other errors, such as errors incurred during the idle time whilst other qubits wait for the CNOT gates to execute. This technique can be extended to more qubits in two different ways. Firstly, we can ensure that all two-body shifts that are required for the larger parity measurement are dominant and equal, whilst also ensuring that other (unwanted) shifts are sufficiently suppressed. We then pick the correct number of drives for the parity measurement required. In doing this we would have to suppress all Z-shifts at orders between 2 and \(n\) (where \(n\) is the number of qubits in the cluster we are measuring), provided the system is in the dispersive regime. Alternatively, we can concatenate these lower-order parity measurements together to create a higher-order parity measurement. This follows the same mechanism as the current parity decomposition into CNOT gates, where the CNOT gates measure the individual parity of each qubit-ancilla pair. Hence, if we know the parity of two "sub-clusters" we can infer the parity of the entire cluster, making this proposal useful for much larger parity measurements. For the operation to retain the high fidelity we have predicted, we require a balance between the size of the drive and the gate time. This balance is determined by the coherence time of the qubits. We require that the gate be short enough that decoherence does not play a significant role in the reduction of fidelity, but long enough that the drive can remain small enough that we stay in the weak-driving regime. We also require that the parameters be chosen such that higher-order interactions are suppressed, so that the transitions we aim to execute are not significantly detuned from the drives. \begin{table} \begin{tabular}{||c|c||} \hline Parameter & Value [\(GHz\)] \\ \hline \hline \(\left[\omega_{1},\omega_{2},\omega_{3},\omega_{4}\right]\) & \([4.95,5.28,5.4,5.48]\) \\ \hline \(\left[\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\right]\) & \([-0.3,-0.2,-0.2,-0.19]\) \\ \hline \(\left[g_{12},g_{13},g_{14}\right]\) & \([0.02165,0.032,0.0385]\) \\ \hline \(\left[g_{23},g_{24},g_{34}\right]\) & \([0.001,0.001,0.001]\) \\ \hline \(\left[\Omega_{1},\Omega_{2}\right]\) & \([0.00159,0.00159]\) \\ \hline \(\left[\omega_{d1},\omega_{d2}\right]\) & \([4.938,4.929]\) \\ \hline \end{tabular} \end{table} Table 1: Table outlining the parameters used in the simulations in Fig. 3. In this system \(\omega_{1}\) represents the ancilla/measurement qubit and qubits 2-4 represent the data qubits. This system produces a dispersive shift of \(\chi=\chi_{12}=\chi_{13}=\chi_{14}=-5\) MHz. Whilst this is small, it is enough to detune the transition frequencies such that no other excitations occur when we apply the drives \(\omega_{d1}=\bar{\omega}_{a}+\chi\) and \(\omega_{d2}=\bar{\omega}_{a}+3\chi\). Figure 3: We simulate the system in Eq. (10) using 3 data qubits and 1 measurement qubit. Two drives are applied to the system with frequencies \(\omega_{d1}=\bar{\omega}_{a}+\chi\) and \(\omega_{d2}=\bar{\omega}_{a}+3\chi\); these drives execute the parity measurement. There is still some small amount of population left in the odd-parity states. This is due to the non-zero unwanted dispersive shifts. Above we show the absolute values of the unitary produced from this simulation.
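As a cross-check of the fidelity figure quoted above, the process fidelity \(F_{p}(U_{1},U_{2})=|\text{Tr}(U_{1}^{\dagger}U_{2})|/d\) can be evaluated directly once a simulated unitary is in hand. The sketch below is a minimal illustration: it builds the ideal evolution defined in the simulations section (with the ancilla taken as the first tensor factor, flipped exactly for the listed data states) and compares a unitary against it; `U_sim` would come from the actual driven simulation.

```python
import numpy as np
from itertools import product

d = 16  # 3 data qubits + 1 ancilla

# Data states for which the ancilla is flipped in the ideal evolution:
# |0000> <-> |1000>, |0110> <-> |1110>, |0101> <-> |1101>, |0011> <-> |1011>.
FLIP = {(0, 0, 0), (1, 1, 0), (1, 0, 1), (0, 1, 1)}

def ideal_parity_unitary():
    """Permutation matrix of the ideal evolution defined in the text."""
    U = np.zeros((d, d))
    for bits in product((0, 1), repeat=4):
        anc, data = bits[0], bits[1:]
        out = ((1 - anc,) + data) if data in FLIP else bits
        col = int("".join(map(str, bits)), 2)
        row = int("".join(map(str, out)), 2)
        U[row, col] = 1.0
    return U

def process_fidelity(U1, U2):
    """F_p(U1, U2) = |Tr(U1' U2)| / d."""
    return abs(np.trace(U1.conj().T @ U2)) / U1.shape[0]

U_ideal = ideal_parity_unitary()
# U_sim would be extracted from the driven time evolution; here we only
# verify that the ideal operation scores 1 against itself.
print(process_fidelity(U_ideal, U_ideal))  # -> 1.0 by construction
```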
Whilst we have only discussed Z stabilizer measurements, it is possible to turn these into an X stabilizer measurement. We can use the fact that one can exchange the roles of the control and target qubits in a CNOT gate by sandwiching it between Hadamard gates. Using this relation we can turn the X syndrome measurement into a parity measurement with Hadamard gates applied to the data qubits instead of the measurement qubits. This results in the circuit shown in Fig. 5. With this transformation it is easy to see that the method outlined above can be used to perform both X and Z stabilizer measurements. ## IV Conclusion In conclusion, we have analysed the effectiveness of a single-shot stabilizer measurement within superconducting circuits. We have shown the fidelity to be well above current estimates for the CNOT decompositions that are suggested in most implementations of quantum error correction. We discussed the errors associated with this system, highlighting the effects of higher-order Z interactions, which could be corrected for in this system due to the connectivity of the circuit we have proposed. This system uses tunable couplers to couple the qubits together, giving the system a large amount of tunability that can be used to execute different types of stabilizer interactions along with the ones we have proposed. Whilst this system shows the use of higher-order Z interactions for quantum error correction, the explanation of this through perturbation theory may be lacking slightly, as 4th-order perturbation theory has recently been shown to fail for certain values of the coupler frequency. A more thorough analysis of the higher-order interactions is needed to fully understand these interactions and their uses. After the conclusion of this work the authors were made aware of an independent work [23] based on a very similar effect. ###### Acknowledgements. This project has received funding from EPSRC DTP grant EP/R513040/1. I would also like to thank my supervisor M. Hartmann for his advice on this work. Figure 4: A pictorial illustration of the approximations we are making, where green squares are the qubits, purple squares are the couplers (SQUIDs) and the black links are the capacitive couplings between the elements. At each stage the arrows represent the SW transformations outlined above. The aim of these approximations is to obtain an approximate theory of the system with the couplers eliminated and all the qubits connected to the ancilla. Figure 5: Circuit diagrams showing the transformation of an X stabilizer (RHS) into a Z stabilizer (LHS) and vice versa. The transformation inverts the target of a CNOT gate through the use of Hadamard gates. On the left-hand side the effect of the middle four CNOT gates is to produce parity measurements, the same measurement we have been developing here. This circuit represents the Z stabilizer measurement circuit; on the right we have the X stabilizer measurement. Shown here is the ability to transform between the two. This transformation allows us to create both of the possible stabilizers and others in between. One can imagine performing the CNOT transformation applied here to only two of the qubits, thus creating an \(XZZZX\) stabilizer. Figure 6: Example of the concatenation of multiple parity gates to create a five-fold parity measurement gate. We have broken the gate down into two gates: a three-qubit parity gate (3-parity gate) and a two-qubit parity gate (2-parity gate).
This is a natural extension of the CNOT method outlined in Figure 1 where we can consider the CNOT gate as a single parity measurement gate or 1-parity measurement gate. ## Appendix A Model ### SW transformation When performing the SW approximations we eliminate terms which we deem too small to contribute to the overall dynamics. Here we show the full transformations highlighting the approximations and calculating the contributions that these will make and showing that they will be small enough not to contribute. We keep terms only to second order in the coupling as higher terms are considered small enough to not contribute to the dynamics of the system. ### Elimination of edge couplers We begin by eliminating the coupler connecting the qubits along the edges of the unit cell. We use the transformation stated in the main text \[H\rightarrow\tilde{H} = e^{iS_{\mathrm{edge}}}He^{-iS_{\mathrm{edge}}} \tag{30}\] \[e^{iS_{\mathrm{edge}}} H e^{-iS_{\mathrm{edge}}}=H+[S_{\mathrm{edge}},H]\] \[+ \frac{1}{2!}[S_{\mathrm{edge}},[S_{\mathrm{edge}},H]]+...\] with \[S_{\mathrm{edge}}=\sum_{i=1}^{4}\bigg{(}\frac{g_{i,c_{i}}}{\Delta_{i,ci}}(q_{i }^{\dagger}c_{i}-q_{i}c_{i}^{\dagger})-\frac{g_{i,c_{i}}}{\Sigma_{i,ci}}(q_{i }^{\dagger}c_{i}^{\dagger}-q_{i}c_{i})+\frac{g_{i+1,c_{i}}}{\Delta_{i+1,ci+1}}( q_{i+1}^{\dagger}c_{i}-q_{i+1}c_{i}^{\dagger})-\frac{g_{i+1,ci}}{\Sigma_{i+1,ci+1}}(q_{ i+1}^{\dagger}c_{i}^{\dagger}-q_{i+1}c_{i})\bigg{)}. \tag{31}\] Here we have defined \(\Delta_{i,j}=\omega_{i}-\omega_{j}\), \(\Sigma_{i,j}=\omega_{i}+\omega_{j}\). We also have defined \((q_{i},c_{i},a)/(q_{i}^{\dagger},c_{i}^{\dagger},a^{\dagger})\) to be the qubit, coupler and ancilla annihilation/creation operators. In addition, we have defined \(g_{nm}\) to denote the coupling between qubit \(n\) and \(m\) (the exact form can be found in the supplementary material) and \(g_{i,cj}\) describes the coupling between the i-th qubit and the j-th coupler. For later use we shall also define the transition frequencies and anharmonicities for the ith qubit, coupler and the ancilla to be \(\omega_{i},\omega_{ci}\) and \(\omega_{a}\). The full calculation is long and obtuse, so we will only state the final result in this section. The calculation is similar to many other SW transformations to eliminate tunable couplers, which can e.g. be found in [19; 24]. 
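Transformations of this kind can also be sanity-checked numerically before stating the result: applying \(e^{S}He^{-S}\) with the first-order generator should suppress the qubit-coupler coupling from \(O(g)\) to \(O(g^{2}/\Delta)\). The sketch below is a toy check for a single qubit-coupler pair of harmonic modes (anharmonicities omitted), with the sign of the generator fixed here by the decoupling condition \([S,H_{0}]=-V\) (conventions in the text differ by factors of \(i\)); scipy is assumed.

```python
import numpy as np
from scipy.linalg import expm

N = 3                                          # levels per mode (toy truncation)
low = np.diag(np.sqrt(np.arange(1, N)), 1)     # single-mode lowering operator
I = np.eye(N)
q, c = np.kron(low, I), np.kron(I, low)        # qubit and coupler modes

wq, wc, g = 5.0, 7.0, 0.1                      # GHz; illustrative values
D, Sg = wq - wc, wq + wc                       # Delta and Sigma
H0 = wq * q.T @ q + wc * c.T @ c
V = g * (q - q.T) @ (c - c.T)

# First-order anti-Hermitian generator with coefficients g/Delta and g/Sigma,
# signs chosen so that [S, H0] = -V cancels the coupling at first order.
S = -(g / D) * (q.T @ c - q @ c.T) - (g / Sg) * (q.T @ c.T - q @ c)

Ht = expm(S) @ (H0 + V) @ expm(-S)

i10, i01 = 1 * N + 0, 0 * N + 1                # indices of |1,0> and |0,1>
print("before:", abs((H0 + V)[i10, i01]))      # = g
print("after :", abs(Ht[i10, i01]))            # suppressed residual coupling
```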
Keeping terms up to second order in the coupling we obtain the Hamiltonian \[\tilde{H} = \sum_{i=1}^{4}\tilde{H}_{0}(q_{i})+\sum_{j=1}^{5}\tilde{H}_{0}(c_ {j})+\tilde{H}_{0}(a)\] \[+ \tilde{H}_{\mathrm{int,q}}+\tilde{H}_{\mathrm{int,c}},\] with the functions now defined as \[\tilde{H}_{0}(X_{i}) = \tilde{\omega}_{i}X_{i}^{\dagger}X_{i}+\frac{\tilde{\alpha}_{i}}{ 2}X_{i}^{\dagger}X_{i}^{\dagger}X_{i}X_{i}, \tag{32}\] \[\tilde{H}_{\mathrm{int,q}} = \sum_{i<j}\tilde{g}_{ij}(q_{i}-q_{i}^{\dagger})(q_{j}-q_{j}^{ \dagger}),\] (33) \[\tilde{H}_{\mathrm{int,c}} = \tilde{g}_{i,c5}(q_{i}-q_{i}^{\dagger})(c_{5}-c_{5}^{\dagger}),\] (34) \[+ \tilde{g}_{ci,c5}(c_{i}-c_{i}^{\dagger})(c_{5}c_{5}^{\dagger})\] \[+ \tilde{g}_{ci+1,c5}(c_{i+1}-c_{i+1}^{\dagger})(c_{5}-c_{5}^{ \dagger}).\] Where \[\tilde{\omega}_{i} = \omega_{i}+g_{i,ci}^{2}(\frac{1}{\Delta_{i,ci}}+\frac{1}{\Sigma_ {i,ci}})\] \[+ g_{i-1,ci-1}^{2}(\frac{1}{\Delta_{i-1,ci-1}}+\frac{1}{\Sigma_{i-1,ci-1}}),\] \[\tilde{\omega}_{ci} = \omega_{ci}+g_{i,ci}^{2}(\frac{1}{\Delta_{i,ci}}+\frac{1}{\Sigma_ {i,ci}})\] \[+ g_{i+1,ci+1}^{2}(\frac{1}{\Delta_{i+1,ci+1}}+\frac{1}{\Sigma_{i+1,ci+1}}),\] \[\tilde{\omega}_{a} = \omega_{a},\] \[\tilde{g}_{ij} = g_{ij}+g_{i,ci}g_{j,cj}\left(\frac{1}{\Delta_{i,ci}}+\frac{1}{ \Delta_{j,cj}}-\frac{1}{\Sigma_{i,ci}}-\frac{1}{\Sigma_{j,cj}}\right),\] \[\tilde{g}_{ci,c5} = g_{i,ci}g_{i,c5}\left(\frac{1}{\Delta_{i,ci}}+\frac{1}{\Sigma_{i, ci}}\right),\] \[\tilde{\alpha}_{i} \approx \alpha_{i},\quad,\quad\tilde{\alpha}_{ci}\approx\alpha_{ci}, \quad,\quad\tilde{\alpha}_{a}\approx\alpha_{a}.\] Here we have defined the shifted or "first-dressed" system variables \(\tilde{\omega},\tilde{\alpha}\) and \(\tilde{g}\) where these represent the approximate transition frequency, anharmonicity and coupling of the new dressed system. We note that there is some effective coupling between the couplers and the central coupler but this term in proportional to \(g_{i,c}g_{i+1,c5}(\frac{1}{\Delta_{i,ci}}+\frac{1}{\Sigma_{i,ci}})\). Estimating this strength we find that for our parameters this coupling will have a reasonably large interaction strength. However the couplers should never gain any excitations as they are detuned from the qubits and are far detuned from any drive. We also find counter rotating terms \(c_{i}c_{i}+c_{i}^{\dagger}c_{i}^{\dagger}\) that have strength given by \(\frac{g_{ci}^{2}}{\Delta_{i,ci}}\). In a rotating frame these opera tors will rotate at the sum of their transition frequencies thus their dynamics will not affect the system and will average out to no contribution in the course of this interaction. ### Elimination of central coupler Now that we have eliminated the edge couplers we move to eliminate the central coupler. This is again done with a Schrieffer-Wolff transformation but this time with the argument \[S_{\text{center}}=\sum_{i=1}^{4}\frac{\tilde{g}_{i,c_{5}}}{\tilde{\Delta}_{i5}} (q_{i}^{\dagger}c_{5}-q_{i}c_{5}^{\dagger})-\frac{\tilde{g}_{i,c_{5}}}{\tilde{ \Sigma}_{i5}}(q_{i}^{\dagger}c_{5}^{\dagger}-q_{i}c_{5})+\frac{\tilde{g}_{a,c _{5}}}{\tilde{\Delta}_{a5}}(a^{\dagger}c_{5}-ac_{5}^{\dagger})-\frac{\tilde{g} _{a,c_{5}}}{\tilde{\Sigma}_{a5}}(a^{\dagger}c_{5}^{\dagger}-ac_{5}). \tag{10}\] We perform the transformation \[\tilde{H}\rightarrow\tilde{H}=e^{iS_{\text{center}}}\tilde{H}e^{-iS_{\text{ center}}}, \tag{11}\] using the commutations relations in Eq.(5) as stated in the main text and the commutator expansion Eq. 
### Elimination of central coupler

Now that we have eliminated the edge couplers, we move on to eliminate the central coupler. This is again done with a Schrieffer-Wolff transformation, this time with the argument
\[S_{\text{center}}=\sum_{i=1}^{4}\frac{\tilde{g}_{i,c_{5}}}{\tilde{\Delta}_{i,c_{5}}}(q_{i}^{\dagger}c_{5}-q_{i}c_{5}^{\dagger})-\frac{\tilde{g}_{i,c_{5}}}{\tilde{\Sigma}_{i,c_{5}}}(q_{i}^{\dagger}c_{5}^{\dagger}-q_{i}c_{5})+\frac{\tilde{g}_{a,c_{5}}}{\tilde{\Delta}_{a,c_{5}}}(a^{\dagger}c_{5}-ac_{5}^{\dagger})-\frac{\tilde{g}_{a,c_{5}}}{\tilde{\Sigma}_{a,c_{5}}}(a^{\dagger}c_{5}^{\dagger}-ac_{5}). \tag{35}\]
We perform the transformation
\[\tilde{H}\rightarrow\bar{H}=e^{iS_{\text{center}}}\tilde{H}e^{-iS_{\text{center}}}, \tag{36}\]
using the commutation relations in Eq. (5) of the main text and the commutator expansion in Eq. (30); we obtain
\[\begin{split}[S_{\text{center}},\tilde{H}]=&-\sum_{i=1}^{4}\tilde{g}_{i,c_{5}}(q_{i}c_{5}^{\dagger}+q_{i}^{\dagger}c_{5})+\tilde{g}_{i,c_{5}}(q_{i}c_{5}+q_{i}^{\dagger}c_{5}^{\dagger})\\&-\tilde{g}_{a,c_{5}}(ac_{5}^{\dagger}+a^{\dagger}c_{5})+\tilde{g}_{a,c_{5}}(ac_{5}+a^{\dagger}c_{5}^{\dagger})\\&-\sum_{i=1}^{4}\sum_{j<k}\Big{(}\frac{\tilde{g}_{i,c_{5}}\tilde{g}_{jk}}{\tilde{\Delta}_{i,c_{5}}}+\frac{\tilde{g}_{i,c_{5}}\tilde{g}_{jk}}{\tilde{\Sigma}_{i,c_{5}}}\Big{)}\big{(}(q_{k}-q_{k}^{\dagger})(c_{5}-c_{5}^{\dagger})\delta_{ij}+(q_{j}-q_{j}^{\dagger})(c_{5}-c_{5}^{\dagger})\delta_{ik}\big{)}\\&+\sum_{i,j}\tilde{g}_{i,c_{5}}\tilde{g}_{j,c_{5}}\Big{(}\frac{1}{\tilde{\Delta}_{i,c_{5}}}-\frac{1}{\tilde{\Sigma}_{i,c_{5}}}\Big{)}\big{(}(q_{j}-q_{j}^{\dagger})(q_{i}-q_{i}^{\dagger})-(c_{5}-c_{5}^{\dagger})^{2}\big{)}\\&+\sum_{i}\tilde{g}_{i,c_{5}}\tilde{g}_{a,c_{5}}\Big{(}\frac{1}{\tilde{\Delta}_{i,c_{5}}}-\frac{1}{\tilde{\Sigma}_{i,c_{5}}}\Big{)}(a-a^{\dagger})(q_{i}-q_{i}^{\dagger})\\&+\tilde{g}_{a,c_{5}}^{2}\Big{(}\frac{1}{\tilde{\Delta}_{a,c_{5}}}-\frac{1}{\tilde{\Sigma}_{a,c_{5}}}\Big{)}\big{(}(a-a^{\dagger})^{2}-(c_{5}-c_{5}^{\dagger})^{2}\big{)},\end{split} \tag{37}\]
\[\begin{split}[S_{\text{center}},[S_{\text{center}},\tilde{H}]]=&-\sum_{i,j}\tilde{g}_{i,c_{5}}\tilde{g}_{j,c_{5}}\Big{(}\frac{1}{\tilde{\Sigma}_{i,c_{5}}}+\frac{1}{\tilde{\Delta}_{i,c_{5}}}\Big{)}(q_{i}+q_{i}^{\dagger})(q_{j}+q_{j}^{\dagger})\\&-\sum_{i,j}\tilde{g}_{i,c_{5}}\tilde{g}_{j,c_{5}}\Big{(}\frac{1}{\tilde{\Sigma}_{i,c_{5}}}+\frac{1}{\tilde{\Delta}_{i,c_{5}}}\Big{)}\delta_{ij}\big{(}(c_{5}c_{5}^{\dagger}-c_{5}^{\dagger}c_{5})+(c_{5}c_{5}+c_{5}^{\dagger}c_{5}^{\dagger})\big{)}\\&-\sum_{j}\tilde{g}_{a,c_{5}}\tilde{g}_{j,c_{5}}\Big{(}\frac{1}{\tilde{\Sigma}_{a,c_{5}}}+\frac{1}{\tilde{\Delta}_{a,c_{5}}}\Big{)}(a-a^{\dagger})^{2}\\&-\sum_{j}\tilde{g}_{a,c_{5}}\tilde{g}_{j,c_{5}}\Big{(}\frac{1}{\tilde{\Sigma}_{a,c_{5}}}+\frac{1}{\tilde{\Delta}_{a,c_{5}}}\Big{)}(c_{5}+c_{5}^{\dagger})^{2}\\&-\sum_{i}\tilde{g}_{a,c_{5}}\tilde{g}_{i,c_{5}}\Big{(}\frac{1}{\tilde{\Sigma}_{a,c_{5}}}+\frac{1}{\tilde{\Delta}_{a,c_{5}}}\Big{)}\big{(}(q_{i}a^{\dagger}+q_{i}^{\dagger}a)-(q_{i}a-q_{i}^{\dagger}a^{\dagger})\big{)}\\&-\sum_{j}\tilde{g}_{a,c_{5}}\tilde{g}_{j,c_{5}}\Big{(}\frac{1}{\tilde{\Sigma}_{j,c_{5}}}+\frac{1}{\tilde{\Delta}_{j,c_{5}}}\Big{)}\big{(}(q_{j}^{\dagger}a+a^{\dagger}q_{j})+(aq_{j}-a^{\dagger}q_{j}^{\dagger})\big{)}.\end{split} \tag{38}\]
With these calculations we find the final Hamiltonian to be
\[\begin{split}\bar{H}=&\sum_{i=1}^{5}\bar{H}_{0}(c_{i})+\sum_{j=1}^{4}\bar{H}_{0}(q_{j})+\bar{H}_{0}(a)+\bar{H}_{\text{int,q}}+O\Big{(}\frac{g^{2}}{\Delta^{2}}\Big{)},\\&\bar{H}_{0}(X_{i})=\bar{\omega}_{i}X_{i}^{\dagger}X_{i}+\frac{\bar{\alpha}_{i}}{2}X_{i}^{\dagger}X_{i}^{\dagger}X_{i}X_{i},\\&\bar{H}_{\text{int,q}}=\sum_{i<j}\bar{g}_{ij}(q_{i}-q_{i}^{\dagger})(q_{j}-q_{j}^{\dagger}),\end{split} \tag{39}\]
where we define
\[\begin{split}\bar{\omega}_{i}&=\tilde{\omega}_{i}+\tilde{g}_{i,c_{5}}^{2}\Big{(}\frac{1}{\tilde{\Delta}_{i,c_{5}}}+\frac{1}{\tilde{\Sigma}_{i,c_{5}}}\Big{)},\\\bar{\omega}_{c_{5}}&=\tilde{\omega}_{c_{5}}+\sum_{i=1}^{4}\tilde{g}_{i,c_{5}}^{2}\Big{(}\frac{1}{\tilde{\Delta}_{i,c_{5}}}+\frac{1}{\tilde{\Sigma}_{i,c_{5}}}\Big{)},\\\bar{\omega}_{a}&=\tilde{\omega}_{a}+\tilde{g}_{a,c_{5}}^{2}\Big{(}\frac{1}{\tilde{\Delta}_{a,c_{5}}}+\frac{1}{\tilde{\Sigma}_{a,c_{5}}}\Big{)},\\\bar{g}_{ij}&=\tilde{g}_{ij}+\tilde{g}_{i,c_{5}}\tilde{g}_{j,c_{5}}\Big{(}\frac{1}{\tilde{\Delta}_{i,c_{5}}}+\frac{1}{\tilde{\Delta}_{j,c_{5}}}-\frac{1}{\tilde{\Sigma}_{i,c_{5}}}-\frac{1}{\tilde{\Sigma}_{j,c_{5}}}\Big{)},\\\bar{\alpha}_{i}&\approx\tilde{\alpha}_{i},\quad\bar{\alpha}_{a}\approx\tilde{\alpha}_{a}.\end{split} \tag{40}\]
Here \(\tilde{\Delta}_{i,j}=\tilde{\omega}_{i}-\tilde{\omega}_{j}\) and \(\tilde{\Sigma}_{i,j}=\tilde{\omega}_{i}+\tilde{\omega}_{j}\). In addition, the "second-dressed" variables \(\bar{\omega}_{n}\), \(\bar{\alpha}_{n}\) and \(\bar{g}_{nm}\) are the shifted transition frequencies, anharmonicities and couplings of the doubly-dressed system.
## Appendix B Dispersive Shifts

In the main text we described how we derived the dispersive shifts and gave some expressions relating the bare and true dispersive shifts. The expressions for these shifts grow larger as more qubits are added to the system. We have included a link to a repository with a Mathematica notebook in which these expressions can be generated and explored [25].

## Appendix C Parameters

Here we give some example parameters which would allow for the tessellation shown in Figure 2. Since there are many other parameters associated with the lattice, namely the coupler frequencies, which will affect the couplings between the qubits, we only give the qubit frequencies. These frequencies have been shown to give dispersive shifts as described in the main text, with all other (higher-order and unwanted) shifts suppressed to below 0.5 MHz.

## Appendix D Dispersive Shifts

We show all of the dispersive shifts that were calculated in our simulations. These shifts were tuned within reason so that the unwanted shifts were small enough and the data qubit-ancilla shifts were equal and dominant. We give each shift to three decimal places.
2309.12970
PI-RADS v2 Compliant Automated Segmentation of Prostate Zones Using co-training Motivated Multi-task Dual-Path CNN
The detailed images produced by Magnetic Resonance Imaging (MRI) provide life-critical information for the diagnosis and treatment of prostate cancer. To provide standardized acquisition, interpretation and usage of the complex MRI images, the PI-RADS v2 guideline was proposed. An automated segmentation following the guideline facilitates consistent and precise lesion detection, staging and treatment. The guideline recommends a division of the prostate into four zones, PZ (peripheral zone), TZ (transition zone), DPU (distal prostatic urethra) and AFS (anterior fibromuscular stroma). Not every zone shares a boundary with every other zone, nor is every zone present in every slice. Further, the representations captured by a single model might not suffice for all zones. This motivated us to design a dual-branch convolutional neural network (CNN), where each branch captures the representations of the connected zones separately. Further, the representations from different branches act complementary to each other at the second stage of training, where they are fine-tuned through an unsupervised loss. The loss penalises the difference in predictions from the two branches for the same class. We also incorporate multi-task learning in our framework to further improve the segmentation accuracy. The proposed approach improves the segmentation accuracy of the baseline (mean absolute symmetric distance) by 7.56%, 11.00%, 58.43% and 19.67% for PZ, TZ, DPU and AFS zones respectively.
Arnab Das, Suhita Ghosh, Sebastian Stober
2023-09-22T16:10:21Z
http://arxiv.org/abs/2309.12970v1
# PI-RADS v2 compliant automated segmentation of prostate zones using co-training motivated multi-task dual-path CNN

###### Abstract

The detailed images produced by Magnetic Resonance Imaging (MRI) provide life-critical information for the diagnosis and treatment of prostate cancer. To provide standardized acquisition, interpretation and usage of the complex MRI images, the PI-RADS v2 guideline was proposed. An automated segmentation following the guideline facilitates consistent and precise lesion detection, staging and treatment. The guideline recommends a division of the prostate into four zones, PZ (peripheral zone), TZ (transition zone), DPU (distal prostatic urethra) and AFS (anterior fibromuscular stroma). Not every zone shares a boundary with every other zone, nor is every zone present in every slice. Further, the representations captured by a single model might not suffice for all zones, as observed in [1]. This motivated us to design a dual-branch convolutional neural network (CNN), where each branch captures the representations of the connected zones separately. Further, the representations from different branches act complementary to each other at the second stage of training, where they are fine-tuned through an unsupervised loss. The loss penalises the difference in predictions from the two branches for the same class. We also incorporate multi-task learning in our framework to further improve the segmentation accuracy. The proposed approach improves the segmentation accuracy of the baseline (mean absolute symmetric distance) by 7.56%, 11.00%, 58.43% and 19.67% for PZ, TZ, DPU and AFS zones respectively.

Arnab Das†, Suhita Ghosh†, Sebastian Stober

Artificial Intelligence Lab (AILab), Otto-von-Guericke-University, Magdeburg, Germany

Keywords: Prostate Zone Segmentation, Supervised Deep Learning, co-training, U-Net, MRI, PI-RADS v2

Footnote †: These authors contributed equally to this work

## 1 Introduction

Prostate cancer (PCa) is the most commonly diagnosed cancer and one of the leading causes of cancer-induced death in men [2]. Regular prostate-specific antigen (PSA) screenings can curb the PCa mortality rate. However, the screenings do not always provide accurate results and often lead to unnecessary diagnosis and over-treatment [3]. For this reason, the high-resolution images produced by multiparametric MRI (mpMRI) are used for clinical assessment, localisation and therapy planning of PCa [4]. To provide guidelines for a standardised acquisition, interpretation and usage of mpMRI, the Prostate Imaging-Reporting and Data System version 2 (PI-RADS v2) [4] was introduced. The guideline considers segmentation of the prostate into four anatomical zones, as introduced by McNeal [5], shown in Fig. 1. The segmentation of PZ and TZ facilitates the diagnosis and localisation of cancerous cells, as these zones have a higher probability of hosting the clinically significant lesions [6]. The delineation of the AFS and DPU zones helps in the post-diagnostic treatment, dose analysis and focal therapy [1]. Further, the demarcation of DPU facilitates a precise ablation of the lesions while sparing the healthy tissue. However, manual delineation of the prostate zones is a time-consuming and error-prone task. This is due to fuzzy borders, high heterogeneity of pixel intensity within the same zone, and high inter-patient variability, as seen in Fig. 1. Therefore, an automated segmentation of prostate structures is pertinent to provide consistent lesion localisation and reduce the cognitive burden on the clinicians.
Many approaches have been proposed for prostate zone segmentation, but targeting only PZ and TZ. Recently, a deep learning-based method [1] was proposed following the PI-RADS v2 recommendation. The authors proposed a convolutional neural network (CNN) based method using T2-weighted MRI. The method performed in the range of the inter-rater variability for all zones except AFS, as representations captured by the same model might not be suitable for all zones [1]. To this end, the representations for AFS are required to be learnt separately. Further, we can observe in Fig. 1 that a pair of zones are directly connected in most of the slices, such as (TZ and AFS) and (PZ and DPU), and some are never connected (AFS and DPU). Since the connected zones share boundaries, they tend to have similar representations.

Figure 1: Examples of axial slices from prostate T2-weighted MRI, taken from different subjects, illustrating the variability of the zones across patients.

In this work, we propose a dual-branch CNN based method, where each branch captures the representations of the connected zones independently, while the branches act complementary to each other. Further, we perform a two-stage, co-training [7] motivated training. In co-training, two views of the data are used to build an initial pair of models, followed by the initially trained models teaching each other. At the first stage, the branches are trained independently, so that each one captures the representations of the connected zones only. Subsequently, the representations of each branch are fine-tuned through an unsupervised loss. The loss is calculated for each zone as the difference between the predictions of the two branches. We also propose a multi-task loss, which considers the reconstruction of the prostate along with the segmentation of the prostate. This further facilitates the model to improve the overall segmentation accuracy.

## 2 Related Work

The review [8] summarizes the machine learning and conventional methods for the whole prostate and its (PZ and TZ) zonal segmentation. The traditional methods are based on deterministic and probabilistic atlases, and on hybrid methods incorporating intensity and shape prior information. The review [9] provides a detailed overview of the DL methods proposed for prostate zone segmentation. The PZ and TZ segmentation method proposed in [10] comprised three sub-networks. The authors used a feature-pyramid attention network in the middle to capture minute spatial information at multiple scales from the encoded latent images. [11] segmented the PZ and TZ zones with an improved U-Net, using the dense blocks from DenseNet [12]. A two-stage method was proposed in [13], where a probabilistic atlas based approach was applied for PZ and TZ segmentation, followed by the whole prostate segmentation. Only two methods exist which have targeted all four zones. One of them is a supervised DL method [1], where an anisotropic 3D U-Net [14] was trained on axial T2-weighted MRI volumes. They used a combination of isotropic and anisotropic Maxpool layers to cater to the non-isotropic data. The other one [15] is a semi-supervised method which is a fusion of uncertainty-guided self-training and temporal ensembling. The method used the annotated data from [1] and a subset of unlabelled data from the PROSTATEx challenge dataset [16].

## 3 Methodology

In this work, we propose a dual-branch CNN architecture, where the representations of the four zones are learned in two stages of training.
The two-branch concept is based on the hypothesis that it is easier to learn the representations for connected/related zones than to learn all of them together. Therefore, the branches are trained simultaneously and independently of each other at the first stage. This ensures that each branch captures the representations of only the connected zones. At the second stage, the representations of each branch are fine-tuned through an unsupervised loss, which is calculated as the discrepancy between the predictions produced by the two branches for each zone. In this way, a transfer of knowledge occurs between the branches, as in co-training. [17] showed that multi-task learning (MTL) improves performance when a network learns to perform multiple tasks simultaneously given a single input for all the tasks. This motivates us to incorporate a reconstruction loss in the objective to improve the overall segmentation accuracy. Firstly, we discuss the proposed DL architecture, followed by the two-stage training strategy.

Figure 2: a) An overview of the proposed method. b) The U-Net architecture used in the method. The numbers inside the blocks represent the filter counts. c) Dilated convolution block used in the mixed model.

### DL Architecture

The prostate zones are extremely dissimilar with respect to shape, texture, and inter- and intra-patient variability. Therefore, the features learned by a single network's filters may not be suitable for segmenting all four zones simultaneously. However, the connected zones may have similar representations, as they share boundaries. Therefore, we trained a network with two branches, as shown in Fig. 2(a). Branch-I is intended to capture the representations for PZ, DPU and Background, and Branch-II for TZ and AFS. AFS and TZ are considered in the same branch, as AFS is disconnected from all the others except TZ in most of the slices (refer to Fig. 1). Similarly, PZ always contains DPU. Apart from the zones, there is another class, Background, which contains the pixels outside the prostate. It was placed in Branch-I containing PZ, as Background shares its boundary mostly with PZ. The AFS zone is the most difficult zone to segment, even for the domain experts [1]. This is attributed to its extremely indistinct border and widely varying shape and texture across patients. [18] argued that dilated convolution works better for semantic segmentation, due to the increase in the effective receptive field. Therefore, for the branch having AFS (Branch-II), an additional _dilated_ block was added before the first upsampling, as shown in Fig. 2(c). This block contains three parallel dilated convolution layers with three different dilation rates, namely 3, 6 and 12, along with a 1 x 1 x 1 convolution. The feature maps are then concatenated and passed through another 1 x 1 x 1 convolution before being passed to the upsampling layer in the decoder. The other branch could also contain the _dilated_ block, but it would unnecessarily increase the model parameters. Although any architecture can be used for the branches, a 3D U-Net [14] was considered, shown in Fig. 2(b).
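For concreteness, a minimal PyTorch sketch of such a dilated block, following our reading of Fig. 2(c); the channel width and the trailing 1x1x1 fusion layer are assumptions, as the paper only specifies the dilation rates 3, 6 and 12:

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Three parallel 3D dilated convolutions (rates 3, 6, 12) plus a
    1x1x1 path, concatenated and fused by a final 1x1x1 convolution."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=3, padding=r, dilation=r)
            for r in (3, 6, 12)
        ])
        self.pointwise = nn.Conv3d(channels, channels, kernel_size=1)
        self.fuse = nn.Conv3d(4 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [b(x) for b in self.branches] + [self.pointwise(x)]
        return self.fuse(torch.cat(feats, dim=1))

# Example: a feature map from the deepest encoder stage.
x = torch.randn(1, 64, 8, 32, 32)    # (batch, channels, D, H, W)
print(DilatedBlock(64)(x).shape)     # torch.Size([1, 64, 8, 32, 32])
```

With padding equal to the dilation rate, each parallel path preserves the spatial size, so the outputs can be concatenated along the channel dimension.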
### Training Strategy

The training strategy shown in Fig. 2(a) can be divided into two stages, Stage-I and Stage-II.

#### 3.2.1 Stage-I

At this stage, both branches are trained in a supervised manner, simultaneously and independently. To this end, the loss at this stage is computed using the predictions from their relevant branches (PZ, DPU and Background from Branch-I, and TZ and AFS from Branch-II), as shown in Fig. 2(a). Eqn. 1 shows the loss used at this stage, where \(N\) is the total number of voxels, \(p_{z,i}\) is the model's prediction and \(y_{z,i}\) is the ground truth for the \(i\)-th voxel and \(z\)-th zone, \(Z=\{TZ,PZ,AFS,DPU,Background\}\), and \([\cdot]\) is a mask-based indicator function. The mask \(M\) is one when the class prediction \(p_{z,i}\) is produced by its relevant branch.

\[Loss_{dsc}=\sum_{z\in Z}1-\frac{2\sum_{i=1}^{N}[M_{i}=1]p_{z,i}y_{z,i}}{\sum_{i=1}^{N}[M_{i}=1]p_{z,i}^{2}+\sum_{i=1}^{N}[M_{i}=1]y_{z,i}^{2}} \tag{1}\]

The loss function is based on the Dice similarity coefficient (DSC), similar to [1]. We incorporate multi-task learning (MTL) into the method by using an additional reconstruction loss, as shown in Eqn. 2, where \(\hat{\textbf{X}}_{b}\) represents the volume reconstructed by branch \(b\), \(\textbf{X}\) is the actual prostate MRI volume, and \(b\in\{Branch-I,Branch-II\}\). The loss is based on the Structural Similarity Index (SSIM), typically used in reconstruction tasks [19].

\[Loss_{recon}=\sum_{b}1-SSIM(\hat{\textbf{X}}_{b},\textbf{X}) \tag{2}\]

Therefore, the supervised loss (\(Loss_{S}\)) at this stage is a combination of \(Loss_{recon}\) and \(Loss_{dsc}\), as shown in Eqn. 3.

\[Loss_{S}=Loss_{dsc}+Loss_{recon} \tag{3}\]

#### 3.2.2 Stage-II

At this stage, we compute an additional unsupervised loss, as shown in Eqn. 4, where \(p_{z,i}^{{}^{\prime}}\) and \(p_{z,i}^{{}^{\prime\prime}}\) denote the predictions from Branch-I and Branch-II respectively.

\[Loss_{U}=\sum_{z\in Z}1-\frac{2\sum_{i=1}^{N}p_{z,i}^{{}^{\prime}}p_{z,i}^{{}^{\prime\prime}}}{\sum_{i=1}^{N}p_{z,i}^{{}^{\prime}2}+\sum_{i=1}^{N}p_{z,i}^{{}^{\prime\prime}2}} \tag{4}\]

The loss is computed between the predictions of the two branches, which enables the exchange of knowledge, as in co-training. The loss increases with the disagreement of the predictions between the branches. This acts as a regularizer and helps in reducing the bias induced at Stage-I. The total loss for this stage is presented in Eqn. 5, where the supervised loss keeps the model from catastrophic forgetting [20] and the unsupervised loss helps generalisation.

\[Loss_{T}=Loss_{S}+Loss_{U} \tag{5}\]
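A minimal PyTorch sketch of the two Dice-style terms above (Eqns. 1 and 4); the tensor layout and the smoothing constant `eps` are our assumptions:

```python
import torch

def masked_dice_loss(pred, target, mask, eps=1e-6):
    """Eqn. 1: Dice-based loss, restricted to voxels whose class is
    predicted by the relevant branch (mask == 1).
    pred, target, mask: (B, Z, D, H, W), one channel per zone."""
    p, y, m = pred.flatten(2), target.flatten(2), mask.flatten(2)
    num = 2.0 * (m * p * y).sum(-1)
    den = (m * p.pow(2)).sum(-1) + (m * y.pow(2)).sum(-1) + eps
    return (1.0 - num / den).sum(1).mean()

def agreement_loss(pred_a, pred_b, eps=1e-6):
    """Eqn. 4: unsupervised co-training loss; it grows as the two
    branches disagree on the same zone."""
    pa, pb = pred_a.flatten(2), pred_b.flatten(2)
    num = 2.0 * (pa * pb).sum(-1)
    den = pa.pow(2).sum(-1) + pb.pow(2).sum(-1) + eps
    return (1.0 - num / den).sum(1).mean()
```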
## 4 Dataset and Experiment Details

We used the 98 annotated T2-weighted axial MRI volumes provided by [1]. To speed up convergence, the voxel intensities were clipped to the first and 99th percentiles and then normalized to the range \([0,1]\). The train, validation and test split was 58, 20 and 20 volumes respectively. We performed 4-fold cross-validation for all our experiments by re-shuffling the volumes of the train and validation sets. The supervised state of the art [1] for prostate zonal segmentation served as the baseline (\(M_{base}\)). We denote the proposed two-branch mixed model with MTL as \(M_{mix\_reco}\), where the mixed model is the one with different branches, i.e. only one of the branches has dilated blocks. For the ablation study, we trained the following two-branch model variants: with identical branches and without MTL (\(M_{par}\)), with identical branches and with MTL (\(M_{par\_reco}\)), and with different branches and without MTL (\(M_{mix}\)). The models were trained using the ADAM optimizer (learning rate 1e-5). Each experiment was run with early stopping (after 30 epochs of no improvement) on the validation set. The model with the lowest validation loss was selected and used for the evaluation on the test data. To ensure topological correctness, a post-processing step was performed, as done in [1]. It includes two steps: connected components analysis (CCA) and a signed Euclidean distance-based hole-filling operation. The CCA retains only the largest component for each zone, and the latter assigns a label to each label-free voxel produced by the CCA. Since the zones' predictions come from different branches, a normalization step was performed before passing the predictions to the post-processing step.
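A sketch of this post-processing with scipy, assuming boolean zone masks; the exact signed-distance relabelling used in [1] may differ in detail:

```python
import numpy as np
from scipy import ndimage

def largest_component(binary):
    """Keep only the largest connected component of a boolean mask."""
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    return labels == (np.argmax(sizes) + 1)

def postprocess(zone_masks):
    """zone_masks: dict zone -> boolean 3D array. Returns cleaned masks;
    voxels freed by the CCA are reassigned to the nearest surviving zone."""
    cleaned = {z: largest_component(m) for z, m in zone_masks.items()}
    freed = np.logical_or.reduce(
        [m & ~cleaned[z] for z, m in zone_masks.items()])
    # Distance-based reassignment: the nearest zone wins.
    dists = {z: ndimage.distance_transform_edt(~m) for z, m in cleaned.items()}
    zones = list(cleaned)
    nearest = np.argmin(np.stack([dists[z] for z in zones]), axis=0)
    for idx, z in enumerate(zones):
        cleaned[z] |= freed & (nearest == idx)
    return cleaned
```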
## 5 Results and Discussion

We evaluated the models using DSC and the mean absolute symmetric distance (MAD), as done in [1]. Table 1 portrays the performance of all models. Considering the overall performance (over all zones), our proposed dual-branch mixed MTL model \(M_{mix\_reco}\) outperformed \(M_{base}\) with respect to the mean DSC and MAD scores. A statistical test (one-sided paired t-test with significance level 0.05) showed that \(M_{mix\_reco}\) outperformed the baseline for all zones except TZ; the test resulted in p-values of 0.0189 (PZ), 0.0517 (TZ), 0.0001 (DPU) and 0.0011 (AFS). We obtained similar statistical evidence with respect to the MAD score. Interestingly, no variant of our proposed two-branch method performed best for all zones, but all variants of the two-branch method outperformed \(M_{base}\) for all zones, with respect to both metrics. Fig. 3 shows that our proposed model produces segmentation masks closer to the ground truth than \(M_{base}\). For PZ, the two-branch model with MTL (\(M_{par\_reco}\)) achieved the highest mean DSC score of 76.83%, a 2.15% increase over \(M_{base}\), although the mixed variant \(M_{mix\_reco}\) performed close to \(M_{par\_reco}\). Our proposed model \(M_{mix\_reco}\) rectified the over-segmentation of the baseline in many cases, as shown in Fig. 3. For TZ, \(M_{mix\_reco}\) outperformed the other variants, achieving a mean DSC of 87.03%, which is 1.35% higher than \(M_{base}\). Similar to PZ, \(M_{mix\_reco}\) rectified the over-segmentation of the baseline, as shown in Fig. 3. However, \(M_{mix\_reco}\) also over-segmented in many cases, which is attributed to the similar intensity distribution of the nearby tissue. Considering the minority classes DPU and AFS, the baseline's mean MAD scores were improved remarkably, by 58.43% (DPU) and 19.67% (AFS). This indicates that the proposed approach considerably improved the baseline's quality of border delineation for the smaller zones. For DPU, \(M_{mix\_reco}\) performed best. In many cases, both \(M_{mix\_reco}\) and \(M_{base}\) missed DPU, which is a difficult class to detect due to its severe under-representation in the dataset (less than 1% of the voxels). Interestingly, \(M_{par}\) produced the best mean DSC score for AFS (6.34% better than \(M_{base}\)). However, with respect to the mean MAD score, \(M_{mix\_reco}\) performed best. Further, the multi-task model (\(M_{par\_reco}\)) performed worse than \(M_{par}\) for AFS. This indicates that the additional inductive bias introduced by MTL does not always help [17], as in the case of AFS. However, MTL improved the segmentation accuracy for the other zones, PZ, TZ and DPU. Fig. 4 shows that our proposed method produced much better segmentation quality for the different shapes of AFS, which is consistent with the value of the distance-based measure (MAD) shown in Tab. 1. This indicates that the proposed method helps to generalise over the variety of shapes observed for AFS. The code is publicly available on GitHub.

\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{PZ} & \multicolumn{2}{c}{TZ} & \multicolumn{2}{c}{DPU} & \multicolumn{2}{c}{AFS} & \multicolumn{2}{c}{Zones Avg.} \\ & DSC (\%) & MAD & DSC (\%) & MAD & DSC (\%) & MAD & DSC (\%) & MAD & DSC (\%) & MAD \\ \hline \hline \(M_{base}\) & 75.21 \(\pm 0.21\) & 1.19 \(\pm 0.06\) & 85.87 \(\pm 0.42\) & 1.00 \(\pm 0.05\) & 64.40 \(\pm 1.42\) & 3.44 \(\pm 0.46\) & 39.56 \(\pm 1.92\) & 4.17 \(\pm 0.82\) & 66.26 & 2.45 \\ \hline \(M_{par}\) & 76.43 \(\pm 0.59\) & 1.10 \(\pm 0.01\) & 86.57 \(\pm 0.42\) & 0.95 \(\pm 0.04\) & 64.39 \(\pm 3.50\) & 2.59 \(\pm 1.93\) & **42.07** \(\pm 1.46\) & 3.37 \(\pm 0.49\) & 67.36 & 2.00 \\ \hline \(M_{par\_reco}\) & **76.83** \(\pm 0.49\) & **1.07** \(\pm 0.07\) & 86.93 \(\pm 0.26\) & 0.92 \(\pm 0.02\) & 64.43 \(\pm 1.20\) & 3.30 \(\pm 2.42\) & 40.42 \(\pm 1.83\) & 3.60 \(\pm 0.44\) & 67.15 & 2.22 \\ \hline \(M_{mix}\) & 75.89 \(\pm 0.28\) & 1.14 \(\pm 0.03\) & 86.50 \(\pm 0.59\) & 0.93 \(\pm 0.04\) & 64.20 \(\pm 1.95\) & 2.97 \(\pm 2.72\) & 40.18 \(\pm 1.96\) & 3.91 \(\pm 0.29\) & 66.70 & 2.23 \\ \hline \(M_{mix\_reco}\) & 76.55 \(\pm 0.47\) & 1.10 \(\pm 0.06\) & **87.03** \(\pm 0.55\) & **0.89** \(\pm 0.04\) & **65.65** \(\pm 3.09\) & **1.43** \(\pm 2.74\) & 40.94 \(\pm 1.03\) & **3.35** \(\pm 0.34\) & **67.54** & **1.69** \\ \hline \end{tabular} \end{table} Table 1: Quantitative evaluation of all models. The last column, Zones Avg., shows the mean score over all zones. The best results are in bold.

Figure 3: Examples of predictions produced for the zones PZ, TZ, DPU and AFS by \(M_{mix\_reco}\) (yellow) and \(M_{base}\) (red). The ground truth is denoted by the green contour. The mentioned values are the DSC scores for the zone predictions from the respective models.

## 6 Conclusion

In this work, we presented a co-training motivated dual-branch CNN-based method for the simultaneous zonal segmentation of the prostate from axial T2-weighted MRI volumes, as per the globally accepted PI-RADS v2 guidelines. The method is based on the concept that it is easier to learn representations for similar classes than for all classes considered together. We also proposed a loss incorporating multi-task learning, which improved the overall segmentation accuracy significantly compared to the baseline method. However, the mean DSC score for small regions like AFS is still significantly lower than for large regions like TZ and PZ. One reason is that only 0.3% of the voxels in the dataset belong to AFS, which makes it hard for the model to generalise for such a hard zone with varied shape, size, and appearance. Therefore, in order to improve the segmentation accuracy of AFS significantly, more good-quality annotated data is needed. Also, smaller structures tend to obtain lower accuracy for region-based metrics, such as DSC, as mentioned in [1]. This motivates us to explore, as future work, other loss functions specifically for the AFS zone which are not based on DSC. We did not compare our results to the semi-supervised method [15], as it used an additional 235 unlabelled prostate volumes. As future work, we will extend our method to include additional unlabelled data. We also plan to experiment with other perception-aware reconstruction losses used in other imaging modalities [21].

## 7 Compliance with Ethical Standards

This research study was conducted retrospectively using human subject data made available in open access by [1].
Ethical approval was not required, as confirmed by the license attached to the open-access data.
2309.03869
Text-to-feature diffusion for audio-visual few-shot learning
Training deep learning models for video classification from audio-visual data commonly requires immense amounts of labeled training data collected via a costly process. A challenging and underexplored, yet much cheaper, setup is few-shot learning from video data. In particular, the inherently multi-modal nature of video data with sound and visual information has not been leveraged extensively for the few-shot video classification task. Therefore, we introduce a unified audio-visual few-shot video classification benchmark on three datasets, i.e. the VGGSound-FSL, UCF-FSL, ActivityNet-FSL datasets, where we adapt and compare ten methods. In addition, we propose AV-DIFF, a text-to-feature diffusion framework, which first fuses the temporal and audio-visual features via cross-modal attention and then generates multi-modal features for the novel classes. We show that AV-DIFF obtains state-of-the-art performance on our proposed benchmark for audio-visual (generalised) few-shot learning. Our benchmark paves the way for effective audio-visual classification when only limited labeled data is available. Code and data are available at https://github.com/ExplainableML/AVDIFF-GFSL.
Otniel-Bogdan Mercea, Thomas Hummel, A. Sophia Koepke, Zeynep Akata
2023-09-07T17:30:36Z
http://arxiv.org/abs/2309.03869v1
# Text-to-feature diffusion for audio-visual few-shot learning

###### Abstract

Training deep learning models for video classification from audio-visual data commonly requires vast amounts of labelled training data collected via a costly process. A challenging and underexplored, yet much cheaper, setup is few-shot learning from video data. In particular, the inherently multi-modal nature of video data with sound and visual information has not been leveraged extensively for the few-shot video classification task. Therefore, we introduce a unified audio-visual few-shot video classification benchmark on three datasets, i.e. the VGGSound-FSL, UCF-FSL, and ActivityNet-FSL datasets, where we adapt and compare ten methods. In addition, we propose AV-Diff, a text-to-feature diffusion framework, which first fuses the temporal and audio-visual features via cross-modal attention and then generates multi-modal features for the novel classes. We show that AV-Diff obtains state-of-the-art performance on our proposed benchmark for audio-visual (generalised) few-shot learning. Our benchmark paves the way for effective audio-visual classification when only limited labelled data is available. Code and data are available at [https://github.com/ExplainableML/AVDIFF-GFSL](https://github.com/ExplainableML/AVDIFF-GFSL).

Keywords: audio-visual learning, few-shot learning.

## 1 Introduction

The use of audio-visual data can yield impressive results for video classification [56, 62, 85]. The complementary knowledge contained in the two modalities results in a richer learning signal than using unimodal data. However, video classification frameworks commonly rely on significant amounts of costly training data and computational resources. To mitigate the need for large amounts of labelled data, we consider the few-shot learning (FSL) setting, where a model is tasked to recognise new classes with only a few labelled examples. Moreover, the need for vast computational resources can be alleviated by operating on the feature level, using features extracted from pre-trained visual and sound classification networks. In this work, we tackle the task of few-shot action recognition in videos from audio and visual data, which is an understudied problem in computer vision. In the few-shot setting, a model has to learn a transferable audio-visual representation which can be adapted to new classes with few annotated data samples. In particular, we focus on the more practical generalised FSL (GFSL) setting, where the aim is to recognise samples from both the base classes, i.e. classes with many training samples, and the novel classes, which contain only few examples. Additional modalities, such as text and audio, are especially useful for learning transferable and robust representations from few samples. To the best of our knowledge, the FSL setting with audio-visual data has only been considered for speech recognition [88] and for learning an acoustic model of 3D scenes [50]. Moreover, existing video FSL benchmarks are not suitable for the audio-visual setting. In particular, the SomethingV2 and HMDB51 benchmarks proposed in [15] and [87] do not contain audio, and about 50% of the classes in the UCF101 benchmark from [83] have no sound either. The Kinetics split in [90] suffers from an overlap with the classes used to pre-train the feature extractors [83], and [56, 85] show that the audio modality in Kinetics is less class-relevant than the visual modality.
Existing audio-visual zero-shot learning benchmarks [51, 52] cannot directly be used for few-shot learning due to their distinct training and testing protocols. Moreover, the baselines in both settings differ significantly, as state-of-the-art few-shot learning methods usually necessitate knowledge of the novel classes through classification objectives and generative models, a condition that is not possible in zero-shot learning. Thus, we introduce a new benchmark for generalised audio-visual FSL for video classification that is comprised of three audio-visual datasets and ten methods carefully adapted to this challenging, yet practical task. To tackle our new benchmark, we propose AV-Diff, which uses a novel hybrid cross-modal attention for fusing audio-visual information. Different to various attention fusion techniques in the audio-visual domain [51, 52, 56], which use a single attention type or different transformers for each modality, our model makes use of a novel combination of within-modality and cross-modal attention in a multi-modal transformer. This allows the effective fusion of information from both modalities and across the temporal dimension of the inputs. Furthermore, we introduce a novel text-conditioned diffusion model for generating audio-visual features to augment the few samples in the novel classes. In the image and video domain, generative adversarial networks (GANs) have been used to generate uni-modal features for data augmentation in the FSL setting [32, 46, 58, 83, 84]. However, we are not aware of prior works that have used diffusion models for multi-modal (audio-visual) feature generation in FSL. Both cross-modal fusion and text-to-feature diffusion contribute to significant boosts in performance on our proposed benchmark. To summarise, our contributions are: 1) We introduce the audio-visual generalised few-shot learning task for video classification and a benchmark on three audio-visual datasets. We additionally adapt and compare ten methods for this task. 2) We propose a hybrid attention mechanism to fuse multi-modal information and a diffusion model for multi-modal feature generation to augment the training dataset with additional novel-class samples. 3) We obtain state-of-the-art performance across all three datasets, outperforming the adapted multi-modal zero-shot learning and video FSL models.

Figure 1: AV-Diff learns to fuse the audio-visual inputs into multi-modal representations in the audio-visual learning stage (left). In the few-shot learning stage (right), the multi-modal representations from the previous stage are used to concurrently train (double arrow line) a text-conditioned diffusion model on all the classes (middle) and a classifier. The classifier is trained on real features from base classes and on real and synthetic features from novel classes.

## 2 Related work

We discuss prior works in learning from audio-visual data, FSL, and feature generation in low-shot learning.

**Audio-visual learning.** Multi-modal inputs, such as audio and visual data, provide significantly more information than unimodal data, resulting in improved overall performance for video classification and acoustic scene classification [7, 10, 45, 60, 61, 62]. Approaches such as [21, 25] use class-label supervision between modalities without requiring temporal alignment between the input modalities.
Besides audio and video classification, other domains also benefit from multi-modal data, such as lip reading [4, 5], audio synthesis based on visual information [27, 30, 43, 57, 72, 89], and the localisation and separation of sounds in videos [3, 6, 8, 18, 28, 59, 75]. Recently, transformer models have gained popularity in audio-visual learning, e.g. for classification [14], event localization [48], dense video captioning [36], and text-based video retrieval [26, 80]. As shown in these works, transformers can effectively process multi-modal input. Thus, our proposed framework fuses audio-visual information using a transformer-based mechanism.

**FSL** has been explored in the image domain [20, 23, 47, 49, 64, 65, 68, 70, 73, 79, 81, 82, 86] and in the video domain [11, 15, 41, 83, 90]. The popular meta-learning paradigm in FSL [11, 15, 47, 49, 65, 73, 79, 81, 86, 90] has been criticised by recent works [20, 39, 81, 83]. In the video domain, commonly a query and a support set are used, and each query sample is compared to all the support samples [11, 15, 63, 90]. The number of comparisons grows exponentially with the number of ways and shots. These methods become prohibitively expensive for GFSL, where models are evaluated on both the base and the novel classes. Hence, we focus on the non-meta-learning approach in this work. Some non-meta-learning approaches have addressed the more challenging and practical GFSL setting for videos [46, 83] using unimodal visual data. In contrast, we propose to use multi-modal data in our novel (G)FSL benchmark for audio-visual video classification, which provides the possibility to test a model in both scenarios (FSL and GFSL).

**Feature generation.** Due to the progress of generative models, such as GANs [2, 29, 31, 37, 55] and diffusion models [12, 24, 67], different works have tried to adapt these systems to generate features as a data augmentation mechanism. GANs have been used in zero-shot learning (ZSL) and FSL [46, 58, 83, 84] to increase the number and diversity of samples, especially for unseen or novel classes. Diffusion models have also been applied to image generation in the feature space [67, 77], but not in the ZSL or FSL setting. It is known that GANs are hard to optimize [69], while diffusion models appear to be more stable, leading to better results [22]. Therefore, our proposed framework uses a text-conditioned diffusion model to generate features for the novel classes in the FSL setting.

## 3 Audio-visual (G)FSL benchmark

We describe the audio-visual (G)FSL setting, present our proposed benchmark that we construct from audio-visual datasets, and explain the methods that we use to establish baselines for this task.

### Audio-visual (G)FSL setting

We address the tasks of (G)FSL using audio-visual inputs. The aim of FSL is to recognise samples from classes that contain very few training samples, the so-called _novel classes_. In addition, the goal of GFSL is to recognise both _base classes_, which contain a significant amount of samples, and novel classes. Given an audio-visual dataset \(\mathcal{V}\) with \(M\) samples and \(C\) classes, containing base and novel classes, we have \(\mathcal{V}=\{\mathcal{X}_{\boldsymbol{a}[i]},\mathcal{X}_{\boldsymbol{v}[i]},y_{[i]}\}_{i=1}^{M}\), where \(\mathcal{X}_{\boldsymbol{a}[i]}\) represents the audio input, \(\mathcal{X}_{\boldsymbol{v}[i]}\) the video input and \(y_{[i]}\in\mathbb{R}^{C}\) the ground-truth class label. Both the audio and the video inputs contain temporal information.
Two frozen, pre-trained networks are used to extract features from the inputs: VGGish [34] for the audio features \(a_{[i]}=\{a_{1},\dots,a_{t},\dots,a_{F_{a}}\}_{i}\) and C3D [76] for the video features \(v_{[i]}=\{v_{1},\dots,v_{t},\dots,v_{F_{v}}\}_{i}\). We use these specific feature extractors to ensure that there is no leakage to the novel classes from classes seen when training the feature extractors (Sports1M [40] for the visual and YouTube-8M [1] for the audio modality), similar to [52]. A potential leakage is harmful, as it would artificially increase the performance and would not reflect the true performance. All models are evaluated in the FSL and GFSL settings for \(k\) samples in the novel classes (called shots), with \(k\in\{1,5,10,20\}\). During inference, in the FSL setting, the class search space is composed only of the novel class labels and the samples belonging to these classes. In the GFSL setting, the search space contains both the novel and base class labels and their corresponding samples. Meta-learning approaches commonly use the notion of episodes, where each episode only uses \(P\) novel classes randomly sampled from the total number of novel classes in a dataset, usually \(P\in\{1,5\}\) (coined \(P\)-way). However, similar to [83], we suggest using higher values for \(P\) (e.g. all the classes in the dataset), so that the evaluation is closer to the real-world setting, as argued in [32, 83]. In our proposed FSL setting, \(P\) corresponds to the total number of novel classes, \(P=N\), while for GFSL \(P=C\). Our evaluation protocol is in line with [32].

### Dataset splits and training protocol

We provide training and evaluation protocols for audio-visual (G)FSL along with splits for UCF-FSL, ActivityNet-FSL and VGGSound-FSL. These are based on the UCF-101 [71], ActivityNet [33] and VGGSound [19] datasets. Our proposed training and evaluation protocol is similar to [32, 51, 52]. The training protocol is composed of two stages, indicated by the subscripts \({}_{1}\) and \({}_{2}\). In the first stage, a model is trained on the training set \(\textit{Train}_{1}=\mathcal{V}_{B_{1}}\cup\mathcal{V}_{N_{1}}\), where \(\mathcal{V}_{B_{1}}\) consists of dataset samples from base classes, and \(\mathcal{V}_{N_{1}}\) contains \(k\) samples for each of the classes in \(N_{1}\). The trained model is then evaluated on \(\textit{Val}=\textit{Val}_{B}\cup\textit{Val}_{N}\), where \(\textit{Val}\) is the validation dataset which contains the same classes as \(\textit{Train}_{1}\). In the first stage, the hyperparameters of the network are determined, such as the number of training epochs and the learning rate scheduler parameters. In the second stage, the model is retrained on the training set \(\textit{Train}_{2}\), using the hyperparameters determined in the first stage. Here, \(\textit{Train}_{2}=\mathcal{V}_{B_{2}}\cup\mathcal{V}_{N_{2}}\) with \(\mathcal{V}_{B_{2}}=\textit{Train}_{1}\cup\textit{Val}\), and \(\mathcal{V}_{N_{2}}\) contains \(k\) samples for the novel classes in the _Test_ set. The final model is evaluated on \(\textit{Test}=\textit{Test}_{B}\cup\textit{Test}_{N}\) with \(\textit{Train}_{2}\cap\textit{Test}=\emptyset\). With a small number of shots, e.g. \(k=1\), models risk a bias towards the novel samples in \(\textit{Train}_{2}\). To obtain robust evaluation results, the second stage is repeated three times with \(k\) randomly selected, but fixed, samples from \(\mathcal{V}_{N_{2}}\). We provide dataset statistics in Table 1.
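A small sketch of how such \(k\)-shot training sets could be assembled (the dataset layout and field names are our assumptions; the benchmark's actual split files define the authoritative protocol):

```python
import random
from collections import defaultdict

def make_kshot_split(samples, base_classes, novel_classes, k, seed):
    """samples: list of (video_id, label). Returns Train = all base-class
    samples plus k fixed, randomly chosen samples per novel class."""
    rng = random.Random(seed)  # fixed seed -> the k shots stay fixed
    by_class = defaultdict(list)
    for vid, label in samples:
        by_class[label].append(vid)
    train = [(v, c) for c in base_classes for v in by_class[c]]
    for c in novel_classes:
        train += [(v, c) for v in rng.sample(by_class[c], k)]
    return train

# The second stage is repeated three times with different seeds,
# mirroring the three fixed draws described above, e.g.:
# splits = [make_kshot_split(data, B2, N2, k=5, seed=s) for s in (0, 1, 2)]
```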
### Benchmark comparisons

To establish benchmark performances for the audio-visual GFSL task, we adapt ten recent state-of-the-art methods from video FSL with visual information only, from audio-visual representation learning, and from audio-visual ZSL.

\begin{table} \begin{tabular}{c|c c c c|c c c c|c c c c} \hline \hline & \multicolumn{4}{c|}{\# **classes**} & \multicolumn{4}{c|}{\# **videos**_stage 1_} & \multicolumn{4}{c}{\# **videos**_stage 2_} \\ & all & \(\mathcal{V}_{B_{1}}\) & \(\mathcal{V}_{N_{1}}\) & \(\mathcal{V}_{N_{2}}\) & \(\mathcal{V}_{B_{1}}\) & \(\mathcal{V}_{N_{1}}\) & \(\textit{Val}_{B}\) & \(\textit{Val}_{N}\) & \(\mathcal{V}_{B_{2}}\) & \(\mathcal{V}_{N_{2}}\) & \(\textit{Test}_{B}\) & \(\textit{Test}_{N}\) \\ \hline **(1)** & 271 & 138 & 69 & 64 & 70351 & 345 & 7817 & 2757 & 81270 & 320 & 9032 & 2880 \\ **(2)** & 48 & 30 & 12 & 6 & 3174 & 60 & 353 & 1407 & 4994 & 30 & 555 & 815 \\ **(3)** & 198 & 99 & 51 & 48 & 9204 & 255 & 1023 & 4052 & 14534 & 240 & 1615 & 3812 \\ \hline \hline \end{tabular} \end{table} Table 1: Statistics for our VGGSound-FSL **(1)**, UCF-FSL **(2)**, and ActivityNet-FSL **(3)** benchmark datasets, showing the number of classes and videos in our proposed splits in the 5-shot setting. \(\mathcal{V}_{B_{1}}\cup\mathcal{V}_{N_{1}}\) are used for training, \(\textit{Val}_{B}\) and \(\textit{Val}_{N}\) for validation in the first training stage. \(\mathcal{V}_{B_{2}}\cup\mathcal{V}_{N_{2}}\) serves as the training set in the second stage, and evaluation is done on \(\textit{Test}_{B}\) and \(\textit{Test}_{N}\).

We provide results for several few-shot video recognition frameworks that are adapted to the multi-modal audio-visual setting.

**ProtoGan** [46] uses GANs conditioned on the visual prototypes of classes, which are obtained by averaging the features of all videos in a class. We adapt it to audio-visual inputs by concatenating the visual and audio features before passing them into the model.

**SLDG** [13] is a multi-modal video FSL method that uses video frames and optical flow as input. It weighs the frame features according to normal distributions. We replace the optical flow in [13] with audio features.

**TSL** [83] is the current state-of-the-art video FSL method, which uses a GAN to generate synthetic samples for novel classes. It does not fully use temporal information, as the final score is the average of the scores obtained on multiple short segments. We adapt it to the multi-modal setting by concatenating the input features from the audio and visual modalities.

Moreover, we have adapted the following audio-visual representation learning methods to the few-shot task.

**Perceiver** [38], **Hierarchical Perceiver (HiP)** [16], and **Attention Fusion** [25] are versatile video classification methods, and we provide comparisons with them. We use the implementations of the adapted Perceiver and Attention Fusion frameworks provided by [51], and we implement HiP in a similar way.

**MBT** [56] learns audio-visual representations for video recognition. It uses a transformer for each modality, and these transformers can only exchange information using bottleneck attention.

**Zorro** [66], in contrast to MBT, uses two transformers that do not have access to the bottleneck attention. We adapt it by using a classifier on top of the averaged bottleneck attention tokens.

Finally, we have adapted the state-of-the-art methods from the audio-visual zero-shot learning domain, as shown below.
**AVCA** [52] is an audio-visual ZSL method which uses temporally averaged features for the audio and visual modalities. We adapt it by using a classifier on the video output, which is the stronger of the two outputs in [52].

**TCaF** [51] is the state-of-the-art audio-visual ZSL method. It utilizes a transformer architecture with only cross-modal attention, leveraging the temporal information in both modalities. As it does not use a classifier, TCaF outputs embeddings, and we determine the class by computing the distance to the semantic descriptors and selecting the closest one.

## 4 AV-Diff framework

In this section, we provide details of our proposed cross-modal AV-Diff framework, which employs cross-modal fusion (Section 4.1) and a diffusion model to generate audio-visual features (Section 4.2). We then describe the training curriculum in Section 4.3. Figure 2 illustrates AV-Diff's full architecture.

Figure 2: Our AV-Diff model for audio-visual (G)FSL takes audio and visual features extracted from pre-trained audio and video classification models as inputs. During training, the features from both modalities are fused into a classification token, denoted by \(cls\). At the same time, our diffusion model (bottom) generates additional synthetic features for the novel classes (denoted by \(x_{0}\)). Finally, we train our classifier \(CL_{net}\) (right) on fused real features \(c_{o}\) of both novel and base classes and synthetic features of novel classes. \(\otimes\) is the concatenation operator.

### Audio-visual fusion with cross-modal attention

**Audio-visual fusion.** We project the audio features \(a_{[i]}\) and the visual features \(v_{[i]}\) to a shared embedding space. We then use Fourier features [74] as temporal positional embeddings and modality embeddings respectively, and obtain position-aware video tokens \(v_{t}^{E}\) and audio tokens \(a_{t}^{E}\) for timestep \(t\). We prepend a classification token \(cls^{0}\in\mathbb{R}^{d_{dim}}\) to the audio and visual tokens. The output token \(cls\) corresponding to \(cls^{0}\) is the final fused audio-visual representation, which is the input to \(Proj_{net}\). Our audio-visual fusion mechanism contains \(L\) layers, which are based on multi-head attention [78] \(\text{Att}^{l}\), followed by a feed-forward function \(\text{FF}^{l}:\mathbb{R}^{d_{dim}}\rightarrow\mathbb{R}^{d_{dim}}\). The input to the first layer is \(x_{in}^{1}=[cls^{0},a_{1}^{E},\cdots,a_{F_{a}}^{E},v_{1}^{E},\cdots,v_{F_{v}}^{E}]\). The output of a layer is:

\[x_{out}^{l}=\text{FF}^{l}(\text{Att}^{l}(x_{in}^{l})+x_{in}^{l})+\text{Att}^{l}(x_{in}^{l})+x_{in}^{l}. \tag{1}\]

In the following, we describe the first layer of the audio-visual fusion; the other layers work analogously. Our input \(x_{in}^{1}\) is projected to queries, keys and values with linear maps \(s:\mathbb{R}^{d_{dim}}\rightarrow\mathbb{R}^{d_{dim}}\) for \(s\in\{q,k,v\}\). The outputs of the projection are written as zero-padded query, key and value features. For the keys we get:

\[\mathbf{K}_{c} =[k(cls^{0}),0,\cdots,0], \tag{2}\]
\[\mathbf{K}_{a} =[0,\cdots,0,k(a_{1}^{E}),\cdots,k(a_{F_{a}}^{E}),0,\cdots,0], \tag{3}\]
\[\mathbf{K}_{v} =[0,\cdots,0,k(v_{1}^{E}),\cdots,k(v_{F_{v}}^{E})]. \tag{4}\]

The final keys are obtained as \(\mathbf{K}=\mathbf{K}_{c}+\mathbf{K}_{a}+\mathbf{K}_{v}\). The queries and values are obtained in a similar way. We define the full attention as \(\mathbf{A}=\mathbf{A}_{c}+\mathbf{A}_{cross}+\mathbf{A}_{self}\):

\[\mathbf{A}_{c}=\mathbf{Q}_{c}\,\mathbf{K}^{T}+\mathbf{K}\,\mathbf{Q}_{c}^{T},\qquad\mathbf{A}_{cross}=\mathbf{Q}_{a}\,\mathbf{K}_{v}^{T}+\mathbf{Q}_{v}\,\mathbf{K}_{a}^{T},\qquad\mathbf{A}_{self}=\mathbf{Q}_{a}\,\mathbf{K}_{a}^{T}+\mathbf{Q}_{v}\,\mathbf{K}_{v}^{T}. \tag{5}\]

The attention mechanism in AV-Diff is novel in that it exploits a hybrid scheme composed of two types of attention: within-modality self-attention and full attention. The first \(Z\) layers use the self-attention \(\mathbf{A}_{self}+\mathbf{A}_{c}\), while the subsequent \(L-Z\) layers leverage the full attention \(\mathbf{A}\).
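One way to realise this hybrid scheme is via boolean attention masks over the token layout \([cls, a_{1..F_a}, v_{1..F_v}]\); a minimal PyTorch sketch (the mask-based realisation is our interpretation, as the paper defines the attention via zero-padded queries, keys and values):

```python
import torch

def hybrid_attention_mask(n_a, n_v, full: bool) -> torch.Tensor:
    """Boolean mask over tokens [cls, a_1..a_na, v_1..v_nv]:
    True = attention allowed. Layers < Z use within-modality
    self-attention plus cls attention; layers >= Z allow everything."""
    n = 1 + n_a + n_v
    if full:                                       # A = A_c + A_cross + A_self
        return torch.ones(n, n, dtype=torch.bool)
    allowed = torch.zeros(n, n, dtype=torch.bool)
    allowed[0, :] = True                           # A_c: cls attends to all
    allowed[:, 0] = True                           # ... and all attend to cls
    a = slice(1, 1 + n_a)
    v = slice(1 + n_a, n)
    allowed[a, a] = True                           # A_self (audio)
    allowed[v, v] = True                           # A_self (video)
    return allowed

# Usage with scaled dot-product attention: mask out disallowed scores.
# mask = hybrid_attention_mask(Fa, Fv, full=(layer >= Z))
# scores = scores.masked_fill(~mask, float("-inf"))
```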
**Audio-visual classification.** We project \(cls\) to \(\mathbb{R}^{d_{out}}\) using a projection network, \(c_{o}=Proj_{net}(cls)\). Then, we apply a classification layer to \(c_{o}\), \(logits=CL_{net}(c_{o})\). Given the ground-truth labels _gt_, we use a cross-entropy loss, \(L_{ce}=CE(logits,gt)\), to train the full architecture.

### Text-conditioned feature generation

AV-Diff uses a diffusion process to generate audio-visual features, based on Denoising Diffusion Probabilistic Models (DDPM) [35]. In particular, we condition the generation of features for novel classes on a conditioning signal, such as the word embedding (e.g. word2vec [53]) of a class name. The diffusion framework consists of a forward process and a reverse process.

**The forward process** adds noise to the data sample \(x_{0}\) for \(T\) timesteps:

\[q(x_{1:T}|x_{0})=\prod_{t=1}^{T}q(x_{t}|x_{t-1})=\prod_{t=1}^{T}\mathcal{N}\big{(}x_{t};\sqrt{1-\beta_{t}}x_{t-1},\beta_{t}\mathbf{I}\big{)}, \tag{6}\]

where \(\beta_{1},\dots,\beta_{T}\) is the variance schedule. As the **reverse process** \(q(x_{t-1}|x_{t})\) is intractable, we approximate it with a parameterised model \(p_{\theta}\):

\[p_{\theta}(x_{0:T})=p_{\theta}(x_{T})\prod_{t=1}^{T}p_{\theta}(x_{t-1}|x_{t})=p_{\theta}(x_{T})\prod_{t=1}^{T}\mathcal{N}(x_{t-1};\mu_{\theta}(x_{t},t),\Sigma_{\theta}(x_{t},t)). \tag{7}\]

We condition the model on the timestep \(t\) and the class label embedding \(w\),

\[L_{\text{diff},w}=E_{x_{0},t,w,\epsilon}[||\epsilon-\epsilon_{\theta}(\sqrt{\bar{\alpha}_{t}}x_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon,w,t)||^{2}], \tag{8}\]

where \(\epsilon\) is the noise added at each timestep and \(\epsilon_{\theta}\) is a model that predicts this noise. The sample at timestep \(t-1\) is obtained from timestep \(t\) as:

\[p_{\theta}(x_{t-1}|x_{t},w)=\mathcal{N}(x_{t-1};\frac{1}{\sqrt{\alpha_{t}}}(x_{t}-\frac{\beta_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}(x_{t},w,t)),\sigma_{t}^{2}\mathcal{I}). \tag{9}\]

The input to \(\epsilon_{\theta}\) at timestep \(t\) is obtained by concatenating \(x_{t}\), \(w\) and \(t\). We optimize \(L_{\mathrm{diff},w}\) to learn \(p_{\theta}\).
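A compact sketch of such a text-conditioned denoiser and the loss in Eq. (8); the network width and depth are assumptions, as the paper does not specify the architecture of \(\epsilon_{\theta}\):

```python
import torch
import torch.nn as nn

class EpsilonNet(nn.Module):
    """epsilon_theta(x_t, w, t): predicts the noise added to a fused
    audio-visual feature x_t, conditioned on word embedding w and t."""
    def __init__(self, feat_dim=64, w_dim=300, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + w_dim + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, feat_dim))

    def forward(self, x_t, w, t):
        return self.net(torch.cat([x_t, w, t.float().unsqueeze(-1)], dim=-1))

def diffusion_loss(model, x0, w, alpha_bar):
    """Eq. (8): epsilon-prediction objective at a random timestep."""
    t = torch.randint(0, len(alpha_bar), (x0.shape[0],))
    ab = alpha_bar[t].unsqueeze(-1)
    eps = torch.randn_like(x0)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return ((eps - model(x_t, w, t)) ** 2).mean()

betas = torch.linspace(1e-4, 0.02, 1000)        # variance schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)   # cumulative alpha_t product
```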
### Training curriculum and evaluation

Each training stage (explained in Section 3.2) is split into two substages. In the first substage, we train the full architecture (the fusion mechanism, the diffusion model, \(Proj_{net}\) and the classifier \(CL_{net}\)) on the base classes \(\mathcal{V}_{B_{1}}\) (or \(\mathcal{V}_{B_{2}}\) in the second stage) by minimizing \(L_{ce}+L_{\mathrm{diff},w}\). In this first substage, the classifier \(CL_{net}\) is trained only on real features of the base classes in \(\mathcal{V}_{B_{1}}\) (or \(\mathcal{V}_{B_{2}}\) in the second stage). During the second substage, we freeze the fusion mechanism and continue to train the diffusion model, \(Proj_{net}\) and \(CL_{net}\) with the same training objective \(L_{ce}+L_{\mathrm{diff},w}\). Here we consider both the base and novel classes, \(\mathcal{V}_{B_{1}}\) and \(\mathcal{V}_{N_{1}}\) (or \(\mathcal{V}_{B_{2}}\) and \(\mathcal{V}_{N_{2}}\) in the second stage), unlike in the first substage, where we only used base classes. For each batch composed of real samples from novel classes, we generate a corresponding batch of the same size with synthetic samples using our diffusion model. \(CL_{net}\) is then trained on real features from \(\mathcal{V}_{B_{1}}\) (or \(\mathcal{V}_{B_{2}}\) in the second stage) and on real and synthetic features for the classes in \(\mathcal{V}_{N_{1}}\) (or \(\mathcal{V}_{N_{2}}\) in the second stage). Freezing the audio-visual transformer ensures that its fusion mechanism does not overfit to the few samples from the novel classes. The diffusion model is not used for inference; the output of the classifier \(CL_{net}\) for \(c_{o}\) provides the predicted score for each class (including the novel classes), and the class with the highest score is selected as the predicted class.

## 5 Experiments

In this section, we first provide the implementation details for obtaining the presented results (Section 5.1). We then report results for our proposed AV-Diff in our benchmark study (Section 5.2). Finally, we analyse the impact of the different components of AV-Diff (Section 5.3).

### Implementation details

AV-Diff uses features extracted from pre-trained audio and visual classification networks as inputs (details are provided in the suppl. material). AV-Diff is trained using \(d_{dim}=300\) and \(d_{out}=64\). Our fusion network has \(L=5,4,8\) transformer layers, and the layer after which the attention type changes is set to \(Z=3,2,5\), on ActivityNet-FSL, UCF-FSL and VGGSound-FSL respectively. We train all models on a single NVIDIA RTX 2080-Ti GPU. The first substage uses 30 epochs, while the second one uses 20 epochs. We use the Adam optimizer [42] with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and a weight decay of \(1e^{-5}\). We use a learning rate of \(7e^{-5}\) for UCF-FSL and ActivityNet-FSL, and \(6e^{-5}\) for VGGSound-FSL. For ActivityNet-FSL and UCF-FSL, we use a scheduler that reduces the learning rate by a factor of 0.1 when the performance has not improved for 3 epochs. We use a batch size of 32 for ActivityNet-FSL, and 64 for UCF-FSL and VGGSound-FSL. Each epoch consists of 300 batches. As ActivityNet-FSL has very long videos, we randomly trim the number of features during training to 60. During evaluation, we also trim the videos to a maximum length of 300 features, with the trimmed window centred in the middle of the video. To reduce the bias towards base classes, we use calibrated stacking [17], with the calibration value searched over the interval [0, 1] with a step size of 0.1; this value is obtained on the validation dataset.
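Calibrated stacking amounts to subtracting a calibration constant from the base-class scores before taking the argmax; a minimal sketch, where `gamma` stands for the calibration value searched over [0, 1]:

```python
import torch

def calibrated_predict(logits, base_class_idx, gamma):
    """Subtract gamma from base-class scores to counter the bias
    towards base classes, then pick the highest-scoring class."""
    adjusted = logits.clone()
    adjusted[:, base_class_idx] -= gamma
    return adjusted.argmax(dim=-1)

# gamma is chosen from {0.0, 0.1, ..., 1.0} on the validation set,
# e.g. by maximising the harmonic mean of base and novel accuracy.
```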
### Audio-visual GFSL performance

For each of the models featured in our benchmark, we report results for three different numbers of shots, i.e. 1-shot, 5-shot and 10-shot, on all three datasets in Table 2. AV-Diff outperforms all the methods across all shots and datasets for few-shot learning (FSL) and generalised few-shot learning (HM). For 1-shot, AV-Diff achieves a HM/FSL of 20.31%/22.95% vs. a HM of 19.54% for TCaF and a FSL score of 22.44% for TSL on VGGSound-FSL. On 5-shot, our model obtains a HM/FSL of 31.19%/36.56% vs. 29.92% for the Perceiver and a FSL of 35.17% for Zorro. Furthermore, AV-Diff yields slightly better results than the Perceiver in both HM and FSL for 10 shots, with a HM/FSL of 33.99%/41.39% vs. 33.65%/40.73% for the Perceiver. Thus, combining our hybrid attention and the diffusion model is superior to systems that rely solely on powerful attention mechanisms without incorporating generative modelling (Perceiver, TCaF) and to systems that incorporate generative modelling but do not employ powerful attention mechanisms (TSL, ProtoGan). Similar trends are observed on UCF-FSL, while on ActivityNet-FSL the ranking of methods changes dramatically. Methods that perform well on UCF-FSL and VGGSound-FSL but do not fully use the temporal information (e.g. Attention Fusion, ProtoGan and TSL) perform weakly on ActivityNet-FSL, which contains videos with varying lengths, including some very long videos, making the setting more challenging. Our AV-Diff can process temporal information effectively, resulting in robust state-of-the-art results on ActivityNet-FSL.

\begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c|c c|c c|c c|c c} \hline \hline & \multicolumn{6}{c|}{VGGSound-FSL} & \multicolumn{6}{c|}{UCF-FSL} & \multicolumn{6}{c}{ActivityNet-FSL} \\ **Model \(\downarrow\)** & \multicolumn{2}{c}{_1-shot_} & \multicolumn{2}{c}{_5-shot_} & \multicolumn{2}{c|}{_10-shot_} & \multicolumn{2}{c}{_1-shot_} & \multicolumn{2}{c}{_5-shot_} & \multicolumn{2}{c|}{_10-shot_} & \multicolumn{2}{c}{_1-shot_} & \multicolumn{2}{c}{_5-shot_} & \multicolumn{2}{c}{_10-shot_} \\ & HM & FSL & HM & FSL & HM & FSL & HM & FSL & HM & FSL & HM & FSL & HM & FSL & HM & FSL & HM & FSL \\ \hline Att. Fusion [25] & 15.46 & 16.37 & 28.22 & 31.57 & 30.73 & 39.02 & 37.39 & 36.88 & 51.68 & 47.18 & 57.91 & 52.19 & 4.35 & 5.82 & 61.7 & 8.13 & 10.67 & 10.78 \\ Perceiver [38] & 17.97 & 18.51 & 29.92 & 33.58 & 38.36 & 40.73 & 41.42 & 33.73 & 48.60 & 40.47 & 55.33 & 47.48 & 17.34 & 12.53 & 25.75 & 21.50 & 29.88 & 28.46 \\ MBT [56] & 14.70 & 21.96 & 27.26 & 36.35 & 10.22 & 38.93 & 40.25 & 27.97 & 46.55 & 34.53 & 30.04 & 30.73 & 14.26 & 12.63 & 22.28 & 28.26 & 28.63 \\ TCaF [51] & 19.54 & 20.01 & 26.09 & 32.22 & 28.96 & 36.43 & 46.11 & 35.90 & 46.29 & 37.39 & 54.19 & 47.61 & 16.50 & 13.01 & 22.79 & 21.81 & 24.78 & 23.33 \\ ProtoGan [46] & 10.74 & 14.08 & 25.17 & 28.87 & 29.85 & 34.80 & 37.95 & 28.08 & 42.42 & 38.63 & 51.01 & 40.68 & 2.77 & 4.40 & 2.67 & 7.81 & 4.05 & 8.81 \\ SLDG [13] & 16.83 & 17.57 & 20.79 & 25.17 & 24.11 & 29.48 & 39.32 & 28.91 & 36.47 & 28.56 & 34.31 & 29.66 & 13.57 & 10.30 & 22.29 & 11.68 & 27.53 \\ TSL [83] & 18.73 & 24.24 & 19.49 & 29.50 & 20.19 & 31.29 & 41.54 & 31.57 & 51.08 & 42.42 & 40.90 & 55.63 & 9.53 & 10.77 & 10.97 & 12.77 & 10.39 & 12.18 \\ HiP [16] & 19.27 & 18.64 & 26.82 & 30.67 & 29.25 & 35.13 & 21.79 & 34.88 & 36.44 & 42.23 & 50.60 & 43.29 & 18.00 & 13.01 & 18.10 & 16.25 & 19.37 & 17.06 \\ Zorro [66] & 18.88 & 21.79 & 29.56 & 35.17 & 32.96 & 40.66 & 44.35 & 34.52 & 51.86 & 42.59 & 58.80 & 49.06 & 14.56 & 11.94 & 23.14 & 21.94 & 27.35 & 26.33 \\ AVCA [52] & 6.29 & 10.29 & 15.98 & 20.50 & 18.68 & 28.27 & 43.61 & 31.24 & 40.19 & 36.70 & 50.53 & 39.17 & 12.83 & 12.22 & 20.09 & 21.65 & 20.22 & 26.76 \\ \hline AV-Diff & **20.31** & **22.95** & **31.19** & **36.56** & **33.90** & **41.39** & **51.50** & **30.85** & **59.96** & **51.45** & **64.18** & **57.39** & **18.47** & **13.80** & **20.56** & **23.00** & **30.86** & **27.81** \\ \hline \hline \end{tabular} \end{table} Table 2: **Our benchmark study for audio-visual (G)FSL**: 1, 5, 10-shot performance of our AV-Diff and the compared methods on (G)FSL. The harmonic mean (HM) of the mean class accuracies for base and novel classes is reported for GFSL. For the FSL performance, only the test subset of the novel classes is considered. Base, novel, and 20-shot performances are included in the suppl. material.
For the FSL performance, only the test subset of the novel classes is considered. Base, novel, and 20-shot performances are included in the suppl. material. Our AV-Diff can process temporal information effectively, resulting in robust state-of-the-art results on ActivityNet-FSL. Interestingly, VGGSound-FSL contains the most classes among the datasets considered, resulting in a significantly lower N (suppl. material, Tab. 1) than FSL. This also lowers the HM (computed from B, N). On VGGSound-FSL, methods tend to be biased towards novel classes (N \(\geq\) B) due to calibration [17]. In this case, HM \(\leq\) N \(\leq\) FSL. Moreover, some baselines that were also used in audio-visual zero-shot learning [51, 52] (e.g. TCaF) exhibit significant increases in performance even in the 1-shot setting. This is expected, as for 1-shot learning one training example is used from each novel class. This reduces the bias towards base classes, leading to more balanced B and N scores, and thereby better HM and FSL results. ### AV-Diff model ablations Here, we analyse the benefits of the main components of AV-Diff, i.e. our proposed audio-visual fusion mechanism and the diffusion model for feature generation. Furthermore, we analyse the importance of using multiple modalities and the effect of different semantic representations. **Audio-visual fusion mechanism.** Table 3 ablates our cross-modal fusion mechanism for generating rich audio-visual representations. As shown in Section 4.1, AV-Diff uses two types of attention: \(\textbf{A}_{self}\)+\(\textbf{A}_{c}\) for the first few layers and **A** for the later layers. For _Alternate_ AV-Diff, we alternate the two types of attention used in AV-Diff in subsequent layers. We also show our model with \(\textbf{A}_{cross}\)+\(\textbf{A}_{c}\), which is the same attention used by the SOTA audio-visual GZSL framework [51]. On ActivityNet-FSL, AV-Diff obtains a HM/FSL of 26.96%/23.00% vs. 25.58%/22.65% for \(\textbf{A}_{self}\)+\(\textbf{A}_{c}\). The same trend is seen on UCF-FSL. On VGGSound-FSL, we outperform _Alternate_ AV-Diff on HM but are slightly weaker than \(\textbf{A}_{self}\)+\(\textbf{A}_{c}\) in FSL. Overall, our fusion mechanism is the best across both metrics and datasets. **Feature generation model.** In Table 4, we investigate the impact of different generative models to produce audio-visual features for the novel classes. We compare the diffusion model in AV-Diff to a GAN similar to the one used by TSL [83], which optimizes a Wasserstein GAN loss [9].
On ActivityNet-FSL, we observe that AV-Diff outperforms the GAN variant, with a HM/FSL of 26.96%/23.00% vs. 25.10%/21.35% for the GAN. The same can be seen on UCF-FSL and VGGSound-FSL. This shows that our generative diffusion model is better suited for audio-visual GFSL than a GAN. \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model \(\downarrow\)**} & \multicolumn{4}{c|}{VGGSound-FSL} & \multicolumn{4}{c|}{UCF-FSL} & \multicolumn{4}{c}{ActivityNet-FSL} \\ & B & N & HM & FSL & B & N & HM & FSL & B & N & HM & FSL \\ \hline **A** & 28.56 & 31.52 & 29.98 & 36.55 & 78.95 & 42.07 & 54.90 & 43.75 & 23.10 & 22.06 & 22.57 & 22.53 \\ \(\textbf{A}_{cross}\) + \(\textbf{A}_{c}\) & 28.44 & 32.48 & 30.33 & 36.85 & 82.89 & 44.33 & 57.77 & 47.02 & 27.02 & 21.25 & 23.79 & 21.98 \\ \(\textbf{A}_{self}\) + \(\textbf{A}_{c}\) & 26.68 & 33.23 & 29.60 & **37.06** & 50.10 & 44.58 & 47.18 & 45.03 & 31.61 & 21.48 & 25.58 & 22.65 \\ Alternate AV-Diff & 27.40 & 32.60 & 29.78 & 36.82 & 80.25 & 43.01 & 56.00 & 45.81 & 31.15 & 21.57 & 25.49 & 22.59 \\ \hline AV-Diff & 30.88 & 31.50 & **31.19** & 36.56 & 74.11 & 50.35 & **59.96** & **51.45** & 35.84 & 21.61 & **26.96** & **23.00** \\ \hline \hline \end{tabular} \end{table} Table 3: Impact of different audio-visual fusion mechanisms in the 5-shot setting. **Multi-modal input.** We explore the impact of using multi-modal inputs for AV-Diff in Table 5. For unimodal inputs, we adapt AV-Diff to only employ full attention, which is identical to self-attention in this case. On ActivityNet-FSL, using multi-modal inputs provides a significant boost in performance compared to unimodal inputs, with a HM/FSL of 26.96%/23.00% vs. 19.01%/17.84% when using only visual information. The same trend can be observed on UCF-FSL. In contrast, on VGGSound-FSL, using multi-modal inputs gives stronger GFSL but slightly weaker results in FSL than using the audio modality. This might be due to the focus on the audio modality in the data curation process for VGGSound. As a result, significant portions of the visual information can be unrelated to the labelled class. Overall, the use of multi-modal inputs from the audio and visual modalities significantly boosts the (G)FSL performance for AV-Diff. Notably, using both modalities leads to better \(B\) and \(N\) performances across all three datasets. For example, on ActivityNet-FSL, AV-Diff obtains a \(B\) score of 35.84% and an \(N\) score of 21.61% compared to 20.80% and 17.49% when using only the visual modality. On UCF-FSL, AV-Diff achieves a score of 74.11% for \(B\) and 50.35% for \(N\) compared to 67.13% and 39.18% for the visual and audio modalities respectively. Finally, on VGGSound-FSL, AV-Diff achieves a \(B\) score of 30.88% and an \(N\) score of 31.50% compared to 28.30% and 30.56% for unimodal audio inputs. This shows that using multi-modal inputs decreases the bias towards either of the metrics, leading to a more robust and balanced system. **Semantic class representations.** We consider using different semantic class representations in Table 6. In FSL, the most common semantic descriptor is word2vec [53], which is used to condition the audio-visual feature generation in AV-Diff. However, related works (e.g. ProtoGan [46]) use prototypes, which average the visual features of all the training videos in a class to obtain the semantic representation of that class.
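As a small, self-contained sketch (tensor names and shapes are ours for illustration), such per-class prototypes, and the multi-modal concatenation discussed next, can be computed as follows:

```
import torch

def class_prototypes(features_by_class):
    # Average the per-video features within each class.
    return {c: feats.mean(dim=0) for c, feats in features_by_class.items()}

# Toy example: two classes with per-video audio/visual features of dim 4.
audio_feats = {0: torch.randn(5, 4), 1: torch.randn(3, 4)}
visual_feats = {0: torch.randn(5, 4), 1: torch.randn(3, 4)}
audio_protos = class_prototypes(audio_feats)
visual_protos = class_prototypes(visual_feats)
# Multi-modal prototype: concatenation of the two unimodal prototypes.
av_prot = {c: torch.cat([audio_protos[c], visual_protos[c]]) for c in audio_protos}
```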
In the multi-modal setting, we can concatenate the \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model \(\downarrow\)**} & \multicolumn{4}{c|}{ VGGSound-FSL} & \multicolumn{4}{c|}{UCF-FSL} & \multicolumn{4}{c}{ActivityNet-FSL} \\ & B & N & HM & FSL & B & N & HM & FSL & B & N & HM & FSL \\ \hline AV-GAN & 27.80 & 31.75 & 29.64 & 36.53 & 83.79 & 36.20 & 50.56 & 37.33 & 35.12 & 19.53 & 25.10 & 21.35 \\ \hline AV-Diff & 30.88 & 31.50 & **31.19** & **36.56** & 74.11 & 50.35 & **59.96** & **51.45** & 35.84 & 21.61 & **26.96** & **23.00** \\ \hline \hline \end{tabular} \end{table} Table 4: Influence of using different feature generators in the 5-shot setting. \begin{table} \begin{tabular}{l|c c c c|c c c c|c c c c} \hline \hline \multirow{2}{*}{**Model \(\downarrow\)**} & \multicolumn{4}{c|}{ VGGSound-FSL} & \multicolumn{4}{c|}{UCF-FSL} & \multicolumn{4}{c}{ActivityNet-FSL} \\ & B & N & HM & FSL & B & N & HM & FSL & B & N & HM & FSL \\ \hline Audio & 28.30 & 30.56 & 29.39 & **36.64** & 55.31 & 39.18 & 45.87 & 44.44 & 13.74 & 15.23 & 14.45 & 17.58 \\ Visual & 7.83 & 8.92 & 8.35 & 9.51 & 67.13 & 30.70 & 42.14 & 30.98 & 20.80 & 17.49 & 19.01 & 17.84 \\ \hline AV-Diff & 30.88 & 31.50 & **31.19** & 36.56 & 74.11 & 50.35 & **59.96** & **51.45** & 35.84 & 21.61 & **26.96** & **23.00** \\ \hline \hline \end{tabular} \end{table} Table 5: Influence of using multi-modal input in the 5-shot setting. audio and visual prototypes to obtain multi-modal prototypes \(av_{prot}\) which is used as a conditioning signal for our diffusion model. On ActivityNet-FSL, using word2vec embeddings leads to better results than using the audio-visual prototypes \(av_{prot}\), with a HM/FSL of \(26.96\%/23.00\%\) vs. \(25.79\%/22.73\%\) for \(av_{prot}\). The same can be seen on UCF-FSL and VGGSound-FSL, demonstrating that the word2vec embeddings provide a more effective conditioning signal. ## 6 Conclusion In this work, we propose an audio-visual (generalised) few-shot learning benchmark for video classification. Our benchmark includes training and evaluation protocols on three datasets, namely VGGSound-FSL, UCF-FSL and ActivityNet-FSL, and baseline performances for ten state-of-the-art methods adapted from different fields. Moreover, we propose AV-Diff which fuses multi-modal information with a hybrid attention mechanism and uses a text-conditioned diffusion model to generate features for novel classes. AV-Diff outperforms all related methods on the new benchmark. Finally, we provided extensive model ablations to show the benefits of our model's components. We hope that our benchmark will enable significant progress for audio-visual generalised few-shot learning. **Acknowledgements:** This work was supported by BMBF FKZ: 01IS18039A, DFG: SFB 1233 TP 17 - project number 276693517, by the ERC (853489 - DEXIM), and by EXC number 2064/1 - project number 390727645. The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting O.-B. Mercea and T. Hummel.
2309.12964
Geometric engineering of viscous magnetotransport in a two-dimensional electron system
In this study, we present our experimental investigation on the magnetotransport properties of a two-dimensional electron system in GaAs quantum wells utilizing a variety of device geometries, including obstacles with thin barriers and periodic width variations. Our primary focus is to explore the impact of these geometries on the electron viscous flow parameters, enabling precise manipulation of hydrodynamic effects under controlled conditions. Through an analysis of the large negative magnetoresistivity and zero field resistivity, we deduce the scattering times for electron-electron and electron-phonon interactions, as well as the effective channel width. Our findings confirm that the system under investigation serves as a tunable experimental platform for investigating hydrodynamic transport regimes at temperatures above 10 K.
A. D. Levin, G. M. Gusev, A. S. Yaroshevich, Z. D. Kvon, A. K. Bakarov
2023-09-22T16:01:47Z
http://arxiv.org/abs/2309.12964v1
# Geometric engineering of viscous magnetotransport in a two-dimensional electron system ###### Abstract In this study, we present our experimental investigation on the magnetotransport properties of a two-dimensional electron system in GaAs quantum wells utilizing a variety of device geometries, including obstacles with thin barriers and periodic width variations. Our primary focus is to explore the impact of these geometries on the electron viscous flow parameters, enabling precise manipulation of hydrodynamic effects under controlled conditions. Through an analysis of the large negative magnetoresistivity and zero field resistivity, we deduce the scattering times for electron-electron and electron-phonon interactions, as well as the effective channel width. Our findings confirm that the system under investigation serves as a tunable experimental platform for investigating hydrodynamic transport regimes at temperatures above 10 K. ## I Introduction The concept that has significantly enhanced our understanding of electronic transport phenomena is the notion that, when electron-electron scattering is strong enough, an effectively viscous hydrodynamic approach can be employed [1]-[3]. Gurzhi proposed this idea long ago, but only recently has it become possible to conduct systematic investigations using sets of exceptionally clean samples that allow for the observation of a wide range of hydrodynamic effects. These effects include resistance decreasing with temperature (Gurzhi effect) [1; 2; 3; 4], giant negative magnetoresistance [5; 6; 7; 8; 9; 10; 11; 12], negative nonlocal resistance [13; 14; 15; 16], superballistic flow [17; 18] and modifications to the Hall effect [19; 20; 21; 22; 23; 24]. For a comprehensive overview of the field of viscous electronics, refer to papers [26]-[27]. Viscous electron flows are expected to manifest in the resistivity when the mean free path for electron-electron collisions (represented by \(l_{ee}\)) is considerably shorter than the mean free path resulting from impurity and phonon scattering (denoted as \(l\)). Theoretical propositions suggest a direct proportionality between the electrical resistivity of a two-dimensional system and the electron shear viscosity, which can be expressed as \(\eta=\frac{1}{4}v_{F}^{2}\tau_{ee}\), where \(v_{F}\) represents the Fermi velocity, and \(\tau_{ee}\) denotes the scattering time arising from electron-electron interactions, given by \(\tau_{ee}=\frac{l_{ee}}{v_{F}}\). Geometry plays an essential role in hydrodynamic flow. A Poiseuille geometry (\(l_{ee}<W<l\)) allows for the establishment of a parabolic flow profile within the confined space. In this scenario, hydrodynamic electron transport takes place, driven by the electric field, and encounters diffusive scattering at the channel's boundary. The relationship between resistivity and width is predicted to follow an inverse square law, where the resistivity \(\rho\) is inversely proportional to the square of the width (\(\rho\sim W^{-2}\)) [1; 28]. Similarly, the resistivity is also expected to be inversely proportional to the square of the temperature (\(\rho\sim T^{-2}\)) [1; 9; 25]. Importantly, a noticeable decrease in resistance as temperature rises has been observed in devices with an H-shaped geometry [9]. It has been suggested that the Gurzhi effect could be connected to the nonuniformity of the velocity field imposed by the sample shape.
Exploring this phenomenon in devices with varying shapes would be valuable, potentially providing deeper insights into electron hydrodynamics. The boundary conditions of the system can be described in terms of diffusive scattering or by introducing a slip length denoted as \(l_{s}\). In extreme cases, the boundary conditions can be classified as "no-slip" (\(l_{s}\) tends to zero) or "no-stress" (\(l_{s}\) tends to infinity). When the slip length approaches infinity (no-stress condition), it is anticipated that the Gurzhi effect will not be observed [29; 30]. An additional example is when a circular obstacle is present within the channel (Stokes geometry). Even if the slip length exceeds the size of the sample, viscous shear forces can still emerge, leading to the reappearance of the Gurzhi effect [31; 32; 33]. Furthermore, in a Stokes geometry a pre-turbulent regime is predicted at large flow velocities, with periodic shedding of hydrodynamic vortices resulting in the formation of a phenomenon known as the Karman vortex street [34]. Nonlinear effects have been theoretically explored in samples with a Venturi geometry, characterized by a continuous expansion of the channel width [35]. Drawing a parallel to the hydrodynamic Bernoulli effect, it has been suggested that hydrodynamic materials could serve as a novel foundation for constructing nonlinear electronic devices [35; 36]. Recent theoretical work has focused on the modification of slip parameters in a channel by a sequence of slender obstructions [37]. Consequently, it becomes evident that not only the sample geometry itself, but also the geometry of its boundaries can exert an influence on transport properties, thereby enabling the advent of hydrodynamic conditions within narrow channels [38]. Studying the magnetohydrodynamic behavior of electron transport significantly enhances our comprehension of viscous transport, enabling us to extract key parameters like electron-electron scattering rates and slip lengths [5; 7; 8; 9; 11]. In simpler situations, the width of the sample directly enters the equations that describe magnetoresistance induced by viscosity; a more thorough test of this theory, however, requires analysis across a range of sample widths. Additionally, the specific geometric arrangement may impact the magnetoresistance. Our research is positioned to attract theoretical attention and could potentially serve as a foundational basis for future investigations. In the current study, we have conducted experimental investigations on the transport properties of a mesoscopic 2D electron system in GaAs quantum wells with various geometries. Three distinct device configurations were examined (see Figure 1). The first two configurations involve rectangular variations of the sample width, forming cavities that the electron flow enters after traversing long, narrow constrictions (Figures 1a,b); these are denoted C1 and C2. The other configuration (C3) consists of a series of obstacles with asymmetrically positioned barriers, enabling a zigzag-like flow pattern within the sample (Figure 1c). For all configurations we observe a giant negative magnetoresistance at low magnetic field. By analysing this pronounced negative magnetoresistivity and the resistivity in zero magnetic field, we extract the scattering times associated with electron-electron and electron-phonon interactions.
Furthermore, we determine the effective width of the channel utilized in these experiments, which is found to agree with the geometric width over roughly an order of magnitude of variation. ## II Experimental results We used high-quality GaAs quantum wells for our devices. These wells have a width of 14 nm and an electron density of approximately \(7.1\times 10^{11}\,cm^{-2}\) at a temperature of 1.4 K. The mobility of the sample was \(2\times 10^{6}\,cm^{2}/Vs\). To conduct our measurements, we designed a Hall bar specifically for multiterminal experiments. The sample consists of three consecutive segments with different lengths L (100, 20, and 100 \(\mu m\)), each being W\(=20\)\(\mu m\) wide. Additionally, we incorporated eight voltage probes into the setup. Ohmic contacts to the two-dimensional electron system were fabricated by annealing Ti/Ni/Au deposited on the GaAs surface. Ti/Au Schottky gates were fabricated to control electrostatically defined barriers in the 2D electron liquid. To create electrostatic barriers, we apply a gate voltage of \(V_{g}=-0.9V\). Figure 1: (Color online) Hydrodynamic velocity flow. (a) Sketch of the velocity flow profile in a device with periodic rectangular channel width, with a narrow width of W\(=2\)\(\mu m\) (configuration C1). (b) Sketch of the velocity flow profile in a device with periodic rectangular channel width, with a narrow width of W\(=8\)\(\mu m\) (configuration C2). (c) Sketch of the velocity flow profile in the presence of a set of thin barrier obstacles (zigzag barrier structure), configuration C3. The width of the sample is 20 \(\mu m\). For the measurements, we utilized a VTI cryostat and employed a conventional lock-in technique. This technique allowed us to measure the longitudinal resistance. To avoid overheating effects, we applied an alternating current (ac) of 0.1-1 \(\mu A\) through the sample, which is sufficiently low. The current I flows between contacts 1 and 4, and the voltage V was measured between probes 2 and 3, \(R=R_{2,3}^{1,4}=V_{2,3}/I_{1,4}\) (Figure 2). Furthermore, we compared our findings with the transport properties of two-dimensional (2D) electrons in a larger-scale sample. The mean free path of electrons in the macroscopic sample is 25 \(\mu m\) at T=4.2 K, which exceeds the width of the sample. In this paper, our main focus lies on conducting magnetoresistivity measurements and observing the resistivity behavior at zero magnetic field with varying temperature, particularly for different geometries. We begin by conducting measurements on unpatterned samples. Figure 2 illustrates the resistivity (\(\rho=\frac{W}{L}R\)) evolution as a function of magnetic field at different temperatures, together with an image of the device. There is a significant negative magnetoresistivity (\(\rho(B)-\rho(0)<0\)) characterized by a Lorentzian profile, which becomes smaller and broader as the temperature increases. Additionally, the resistivity at zero magnetic field increases with temperature. This observation agrees with previous findings, which were interpreted as distinctive characteristics of hydrodynamic behavior [5; 9; 24; 25], except for the temperature range \(4.2<T<10\) K. In this temperature interval, both ballistic and hydrodynamic properties should be considered equally in describing the system's behavior [11].
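As a quick consistency check of these sample parameters, the mean free path follows from the quoted density and mobility via \(l=v_{F}\tau=(\hbar/e)\sqrt{2\pi n}\,\mu\); the short sketch below (variable names are ours) reproduces the order of magnitude of the quoted 25 \(\mu m\):

```
import math

hbar = 1.055e-34   # J s
e = 1.602e-19      # C
n = 7.1e15         # electron density, m^-2 (= 7.1e11 cm^-2)
mu = 200.0         # mobility, m^2/(V s) (= 2e6 cm^2/Vs)

k_F = math.sqrt(2.0 * math.pi * n)   # Fermi wave vector of a 2D electron gas
l = (hbar / e) * k_F * mu            # l = v_F * tau with tau = m * mu / e
print(f"mean free path: {l * 1e6:.0f} um")  # ~28 um, same order as the quoted 25 um
```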
Small oscillations are observed above \(B>0.05\) T at low temperatures in Figure 2. The Larmor radius is 3.5 \(\mu m\) at \(B=\pm 0.06\) T and 1.6 \(\mu m\) at \(B=\pm 0.12\) T, both of which correspond to the maxima of the oscillations. These values are notably smaller than the width or length of the sample. The observed oscillations can potentially be attributed to a degree of commensurability with the width of the potentiometric probe, as well as to the misalignment of the gold contacts with the sample edges. It is important to note that these oscillations are minimal and do not significantly impact the magnetoresistance, particularly at higher temperatures. Figures 3, 4, and 5 illustrate the pronounced negative magnetoresistivity observed in configurations C1, C2, and C3, respectively. The figures also show images of the devices, demonstrating the configuration of the barriers. It is important to note that both the width and the height of the Lorentzian profile depend strongly on the specific configuration. For instance, the magnetoresistivity for the C1 configuration appears significantly broader compared to the C2 and C3 geometries. Furthermore, the Lorentzian profile for the C2 geometry exhibits the smallest height among the three configurations. As the temperature increases, the peaks become broader while maintaining a Lorentzian profile across all devices. However, for all configurations the peak at zero magnetic field consistently increases with temperature. This observation indicates the absence of the Gurzhi effect and suggests that disorder and phonon scattering contribute more significantly to the resistivity than hydrodynamic effects for these configurations. In order to provide a more comprehensive understanding of this behavior, we conduct a detailed comparison with theoretical models in the next section. ## III Theory and Discussion In current theories, electron transport in mesoscopic samples is typically analyzed using various models such as ballistic, hydrodynamic, or more general frameworks (see [27] for a review). These models are based on a detailed approach that involves solving the Boltzmann kinetic equation while considering boundary conditions for the electron distribution function. In our study, we employ the model proposed in Refs. [5; 20], as it encompasses the essential magnetohydrodynamic properties, including the intricate effects associated with the relaxation of the distribution function's second harmonic by defects and electron-electron scattering. This model presents the conductivity as a combination of two independent contributions: the first is attributed to ballistic effects or static disorder, while the second arises from viscosity [5]. Figure 2: (Color online) Temperature-dependent magnetoresistivity of unpatterned mesoscopic GaAs. The circles are examples illustrating the magnetoresistance calculated from Eq. (1) for different temperatures T(K): 4.3 (blue), 10.9 (red), 15.5 (green), 22.3 (cyan), 29.1 (black). Top: image of the central part of the Hall bar with 6 contacts.
The approach involves utilizing a magnetic-field-dependent viscosity tensor and deriving the resistivity tensor: \[\rho(B)=\frac{m}{e^{2}n}\left(\frac{1}{\tau}+\frac{1}{\tau^{*}}\frac{1}{1+(2\omega_{c}\tau_{2})^{2}}\right), \tag{1}\] where \(1/\tau\) is the scattering rate due to static disorder, \(m\) and \(n\) are the effective mass and the density, \(\omega_{c}=\frac{eB}{mc}\) is the cyclotron frequency, and \(\tau^{*}=\frac{W^{2}}{12\eta}\), where \(\eta=\frac{1}{4}v_{F}^{2}\tau_{2}\) is the viscosity. The shear viscosity relaxation rate is given by \[\frac{1}{\tau_{2}(T)}=\frac{1}{\tau_{2,ee}(T)}+\frac{1}{\tau_{2,imp}}=A_{ee}\frac{(kT)^{2}}{hE_{F}}+\frac{1}{\tau_{2,imp}}, \tag{2}\] where \(A_{ee}\) is a numerical factor which can be different for weakly and strongly interacting Fermi systems [20]. The relaxation rate \(\frac{1}{\tau_{2,imp}}\), which arises from any process responsible for relaxing the second harmonic of the distribution function, including scattering by static defects, contributes to the viscosity. On the other hand, \(\frac{1}{\tau_{2,ee}(T)}\) corresponds to the relaxation of the shear viscosity due to electron-electron scattering [5; 7]. Figure 3: (Color online) Temperature-dependent magnetoresistivity of a mesoscopic GaAs for configuration C1. The circles (thick lines) are examples illustrating the magnetoresistivity calculated from Eq. (1) for different temperatures T(K): 4.2 (black), 11.1 (red), 22.1 (green), 30.4 (blue), 41.4 (cyan), 51 (magenta), 55.4 (yellow). Top: image of the central part of the Hall bar with 6 contacts. Figure 4: (Color online) Temperature-dependent magnetoresistivity of a mesoscopic GaAs for configuration C2. The circles (thick lines) are examples illustrating the magnetoresistivity calculated from Eq. (1) for different temperatures T(K): 4.2 (black), 9.5 (red), 17 (green), 32.5 (yellow), 40.2 (cyan), 46.2 (violet), 59.3 (olive). Top: image of the central part of the Hall bar with 6 contacts. The momentum relaxation rate is given by
Furthermore, unlike hydrodynamic flow ( figure 1), the current density profile across the sample width does not exhibit a parabolic shape. Based on the calculated potential distribution, we determine the resistivity. Subsequently, we perform a fitting of the magnetoresistance curves and the \(\rho(T)\) at zero magnetic field shown in Figures 2-5. The fitting is done using three parameters: \(\tau(T),\tau^{*}(T)\) and \(\tau_{2}(T)\). Excellent agreement with equation 1 is observed over a wide range of magnetic fields and temperatures. Furthermore, we demonstrate that the two parameters,\(\tau^{*}(T)\) and \(\tau_{2}(T)\), are not completely independent but instead maintain a constant ratio with respect to the sample width. Now, let's shift our focus to the data concerning electron-electron interaction, which can be derived from the analysis of magnetoresistance. Figure 7 displays the data for \(1/\tau_{2,ee}\), which is determined through the comparison of the magnetoresistivity curves with equation 1 in an unpatterned sample. This parameter is associated with inelastic electron-electron scattering, as indicated in equation 2. In addition, we include the dependence of \(1/\tau(T)\) extracted from the macroscopic sample mobility for comparison. As mentioned in the introduction, the hydrodynamic regime is expected to be relevant when the electron-electron collision rate is significantly higher than the scattering rate due to impurities and phonons. This specific region is highlighted with a blue shading indicating temperature interval above \(T>10K\). We can see here that \(1/\tau_{2,ee}\) follows \(T^{2}\) behaviour in accordance with equation 2. Parameters \(A_{ee}\) extracted from this comparison are indicated in Table 1. By comparing the temperature dependency of the relaxation rate \(1/\tau_{2,ee}\) with equation (2), we can deduce a temperature-independent characteristic time \(1/\tau_{0,imp}\) (table 1). The hydrodynamic approach is linked to a significant relaxation of the mth harmonic of the distribution function caused by disorder scattering with the rates \(1/\tau_{m,imp}\)[5; 27]. \(T^{2}\) and \(1/\tau\sim T\), exhibiting nearly identical parameters. The parameters \(A_{ee}\) and \(B_{ph}\) represent the rates of electron-electron and electron-phonon scattering, respectively. These parameters align with previously extracted values and correspond to the theoretical models [5]. Upon analyzing these dependencies in figure 8, we hold the perspective that, contrary to an exponential decline for temperatures exceeding 30K in configurations C1 and C2, there seems to be a small bump in these trends ( less significant for C3). We contend that this observation could be linked to challenges in fitting the data rather than a shift in the relaxation mechanism. Our approach involved applying a simplified theory of magnetoresistance for a rectangular sample, whereas our actual configuration is more intricate. It is worth mentioning that the effective time \(\tau^{*}\), obtained from the height of the Lorentzian profile in the magnetoresistivity, is inversely proportional to the relaxation time \(\tau_{2}\) that determines the width of the Lorentzian profile. Consequently, the product of these two quantities is anticipated to be temperature-independent and can be expressed as follows: \[\tau_{2}\tau^{*}=\frac{W^{2}}{3v_{F}^{2}} \tag{4}\] This equation enables us to independently determine the effective channel width. 
Figure 10 shows the product \(\tau_{2,ee}\tau^{*}\) as a function of temperature. Notably, this parameter exhibits minimal temperature dependence over a wide temperature range. Specifically, in the case of the unpatterned sample and configuration C1, a slight increase is observed, while configurations C2 and C3 demonstrate a decrease in this parameter. Indeed, the derived channel width, denoted as \(W^{*}\), closely aligns with the geometrical width over a wide range of variation spanning approximately one order of magnitude. Note that, for the C3 (zigzag-like) configuration, we determine the geometrical width \(W_{eff}\) from the Ohmic current distribution (Figure 6). This agreement strongly supports the conclusion that our magnetoresistivity arises from viscosity and has a hydrodynamic origin. One would expect the magnetoresistivity to be temperature-independent in the ballistic or classical size-effect regime, at least until the mean free path exceeds the width of the sample. Furthermore, it is highly likely that the temperature dependence would differ for different widths when \(l=v_{F}\tau>W\). However, what we observed is a universal temperature dependence characterized by a \(T^{-2}\) broadening of the Lorentzian shape, which is more indicative of electron-electron scattering than of the \(T^{-1}\) dependence associated with momentum relaxation due to phonon scattering. \begin{table} \begin{tabular}{c c c c c c c} Config. & \(1/\tau_{2,imp}\) & \(1/\tau_{0,imp}\) & \(A_{ee}\) & \(B_{ph}\) & \(W_{eff}\) & \(W^{*}\) \\ & (\(10^{11}1/s\)) & (\(10^{10}1/s\)) & & (\(10^{9}1/sK\)) & \(\mu m\) & \(\mu m\) \\ \hline Unpatt. & 0.7 & 1.7 & 0.53 & 0.7 & 20 & 20 \\ C1 & 6.95 & 0.8 & 0.6 & 0.65 & 1.7 & 1.5 \\ C2 & 1.5 & 0.8 & 0.6 & 0.55 & 6 & 10 \\ C3 & 1.0 & 1.0 & 0.73 & 0.65 & 11.5 & 15 \\ \end{tabular} \end{table} Table 1: Fitting parameters of the electron system for different configurations. Parameters are defined in the text. Figure 6: (Color online) Ohmic current flow. (a) Sketch of the electric field profile in a device with periodic rectangular channel width, with a narrow width of W=2 \(\mu m\) (configuration C1). (b) Sketch of the electric field profile in a device with periodic rectangular channel width, with a narrow width of W=8 \(\mu m\) (configuration C2). (c) Sketch of the electric field profile in the presence of a set of thin barrier obstacles (zigzag barrier structure), configuration C3. The width of the sample is 20 \(\mu m\). In the concluding section of the paper, we direct our attention to an important question that was initially investigated by Gurzhi: which parameters of realistic samples can lead to the more striking phenomenon of a decrease in resistivity with increasing temperature [1]. By examining Table 1, it becomes apparent that different configurations and sample geometries allow us to alter the conditions for the hydrodynamic effect, resulting in a substantial variation of the negative magnetoresistance. Interestingly, despite these variations, fundamental parameters associated with electron-electron collisions and scattering by phonons, such as \(A_{ee}\) and \(B_{ph}\), exhibit universality and remain consistent. It is important to note that, despite the significant variations in parameters, we did not observe the Gurzhi effect in our devices. Instead, the resistivity in all devices exhibited an increase with temperature.
We calculate the resistivity in zero magnetic field by using Equation (1) with the universal parameters \(A_{ee}=0.6\) and \(B_{ph}=0.6\times 10^{9}\,\frac{1}{\mathrm{sK}}\), varying the width of the sample and the relaxation time of the second harmonic of the distribution function caused by disorder scattering, \(\tau_{2,imp}\). The results of these calculations are depicted in Figures 11a and 11b. Interestingly, we observe a significant dependence of the Gurzhi effect on a particular parameter. As illustrated in the figures, only when the relaxation rate \(1/\tau_{2,imp}\) is relatively small, specifically below \(0.7\times 10^{11}\,s^{-1}\), do we observe a decrease in resistivity with temperature, particularly in narrower devices. For larger relaxation rates, only an increase in resistivity is expected. This explains why we did not observe the Gurzhi effect in the samples examined in this study: according to the parameters listed in Table 1, the relaxation rate \(1/\tau_{2,imp}\) is relatively large, which prevents the observation of the Gurzhi effect. Moreover, even in the narrower samples the relaxation rate \(1/\tau_{2,imp}\) is significantly increased, which unfortunately suppresses the Gurzhi effect. In our previous studies, a pronounced Gurzhi effect was observed due to a combination of a small \(1/\tau_{2,imp}\) and a narrow width [9; 25]. The relaxation rate \(1/\tau_{0,imp}\) is of lesser importance, but it is desirable to have a value smaller than \(10^{11}\,s^{-1}\) for the observation of the Gurzhi effect. Exploring the microscopic nature of the relaxation rate \(1/\tau_{2,imp}\) would be intriguing, as it could improve the hydrodynamic conditions for manipulating the Gurzhi effect. The potential existence of a hydrodynamic regime in real samples seems to be connected to the significant relaxation of both odd and even harmonics in electron scattering on disorder, with relaxation rates \(1/\tau_{m,imp}\) that are much larger for odd harmonics with \(m\geq 3\) than \(1/\tau_{m,ee}\) for even harmonics. If relaxation occurred only due to electron-electron scattering, a substantial difference in relaxation times between even and odd harmonics could result in the emergence of anomalous non-hydrodynamic transport regimes [1]. Both contributions \(1/\tau_{2,ee}\) and \(1/\tau_{2,imp}\) to the relaxation rate \(1/\tau_{2}\) are proportional to the products of the Landau parameter factor \((1+F_{2})\) and the quasiparticle collision integrals averaged over energy [7]. Figure 7: (Color online) The relaxation rate, represented by black circles, denoted as \(1/\tau_{2,ee}\), is obtained by comparison with experimental data in the unpatterned sample. The impurity scattering rate, represented by red circles, denoted as \(1/\tau\), is derived from the macroscopic sample mobility and is plotted as a function of temperature. The blue shading highlights the temperature range where \(1/\tau_{2,ee}>1/\tau\), indicating the presence of the hydrodynamic regime. At low temperatures, scattering is predominantly governed by static impurities, whereas at higher temperatures, scattering by phonons becomes more significant. Figure 8: (Color online) The relaxation rate \(1/\tau_{2,ee}\) as a function of the temperature obtained for different configurations: (a) C1, (b) C2, (c) C3. Solid lines: theory.
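The parameter scan behind Figures 11a,b can be reproduced schematically. The sketch below is ours (the exact boundary of the effect depends on the chosen parameter values); it reuses the constants of the previous snippet and checks for which widths and disorder rates the zero-field resistivity of Eq. (1) acquires a region with \(d\rho/dT<0\), i.e. a Gurzhi-like regime:

```
import numpy as np

m = 0.067 * 9.109e-31
e, hbar, kB = 1.602e-19, 1.055e-34, 1.381e-23
n, A_ee, B_ph, inv_tau0_imp = 7.1e15, 0.6, 0.6e9, 1.0e10
k_F = np.sqrt(2 * np.pi * n); v_F = hbar * k_F / m
E_F = hbar**2 * k_F**2 / (2 * m)

def rho0(T, W, inv_tau2_imp):
    # Zero-field limit of Eq. (1): rho = (m / e^2 n) (1/tau + 1/tau*)
    tau2 = 1.0 / (A_ee * (kB * T)**2 / (hbar * E_F) + inv_tau2_imp)
    inv_tau_star = 3 * v_F**2 * tau2 / W**2    # 1/tau* = 12 eta / W^2
    return (m / (e**2 * n)) * (inv_tau0_imp + B_ph * T + inv_tau_star)

T = np.linspace(1.0, 40.0, 400)
for W_um in (1, 2, 5, 20):
    for r2 in (0.3e11, 0.7e11, 7.0e11):
        decreasing = np.any(np.diff(rho0(T, W_um * 1e-6, r2)) < 0)
        print(f"W = {W_um:2d} um, 1/tau2_imp = {r2:.1e} 1/s -> Gurzhi-like: {decreasing}")
```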
In summary, although the Gurzhi effect is a delicate phenomenon that necessitates specific parameter combinations in real samples, the hydrodynamically induced negative magnetoresistance is highly robust to these parameters. It can be observed across a variety of geometric configurations, making it an invaluable tool for investigating hydrodynamic effects in various materials, including graphene and Dirac fermions in HgTe samples [12; 19]. ## IV Conclusion In this study, we conducted experimental investigations on the magnetotransport properties of a two-dimensional electron system in GaAs quantum wells using different device geometries. We observed that the resistivity at zero magnetic field consistently increased with temperature, although the temperature dependence, represented by \(\rho(T)\), varied for different configurations. We proposed that the Gurzhi effect, characterized by a decrease in resistivity with increasing temperature, is governed by the relaxation rate of the second harmonic of the distribution function due to disorder. On the other hand, we found that the hydrodynamically induced large negative magnetoresistivity was persistent across all geometries. By analyzing this pronounced negative magnetoresistivity and the resistivity in the absence of a magnetic field, we were able to extract the scattering times associated with electron-electron and electron-phonon interactions. Furthermore, we determined the effective width of the channel used in these experiments, which closely matched the geometric width with only a modest variation within an order of magnitude. Figure 9: (Color online) The momentum relaxation rate, \(1/\tau\), as a function of the temperature obtained for different configurations. Solid lines: theory. Figure 10: (Color online) The product of the relaxation times \(\tau_{2,ee}\tau^{*}\) as a function of the temperature. This product is proportional to \(W^{2}\) and allows the width of the channel \(W^{*}\) to be extracted. The inset shows the correspondence between the effective geometrical width \(W_{eff}\) and \(W^{*}\). Figure 11: (Color online) The relative resistivity in zero magnetic field, as determined by Equation (1) using the parameters \(A_{ee}=0.6\) and \(B_{ph}=0.6\times 10^{9}\,\frac{1}{\mathrm{sK}}\), is plotted as a function of temperature for various sample widths, with \(T_{0}=1\) K. The corresponding relaxation times are indicated on the panels of figs. (a) and (b). ## V Acknowledgments The financial support of this work by FAPESP (Brazil), CNPq (Brazil) and the Ministry of Science and Higher Education of the Russian Federation is acknowledged.
2308.00091
Convolutional Occupancy Models for Dense Packing of Complex, Novel Objects
Dense packing in pick-and-place systems is an important feature in many warehouse and logistics applications. Prior work in this space has largely focused on planning algorithms in simulation, but real-world packing performance is often bottlenecked by the difficulty of perceiving 3D object geometry in highly occluded, partially observed scenes. In this work, we present a fully-convolutional shape completion model, F-CON, which can be easily combined with off-the-shelf planning methods for dense packing in the real world. We also release a simulated dataset, COB-3D-v2, that can be used to train shape completion models for real-world robotics applications, and use it to demonstrate that F-CON outperforms other state-of-the-art shape completion methods. Finally, we equip a real-world pick-and-place system with F-CON, and demonstrate dense packing of complex, unseen objects in cluttered scenes. Across multiple planning methods, F-CON enables substantially better dense packing than other shape completion methods.
Nikhil Mishra, Pieter Abbeel, Xi Chen, Maximilian Sieb
2023-07-31T19:08:16Z
http://arxiv.org/abs/2308.00091v1
# Convolutional Occupancy Models for Dense Packing of Complex, Novel Objects ###### Abstract Dense packing in pick-and-place systems is an important feature in many warehouse and logistics applications. Prior work in this space has largely focused on planning algorithms in simulation, but real-world packing performance is often bottlenecked by the difficulty of perceiving 3D object geometry in highly occluded, partially observed scenes. In this work, we present a fully-convolutional shape completion model, F-CON, which can be easily combined with off-the-shelf planning methods for dense packing in the real world. We also release a simulated dataset, COB-3D-v2, that can be used to train shape completion models for real-world robotics applications, and use it to demonstrate that F-CON outperforms other state-of-the-art shape completion methods. Finally, we equip a real-world pick-and-place system with F-CON, and demonstrate dense packing of complex, unseen objects in cluttered scenes. Across multiple planning methods, F-CON enables substantially better dense packing than other shape completion methods. ## I Introduction Recent years have seen huge commercial interest in robotic pick-and-place systems for applications in warehouse automation and logistics. While current work on these systems has mostly focused on picking, intelligent placing is also critical to many use cases. For example, in order fulfillment, a robot must densely pack objects into shipping boxes that will be sent from a warehouse to a customer. Suboptimal packing performance leads to inefficiencies in the overall operation, as larger boxes or more shipments will be unnecessarily required, increasing both waste and cost. As a result, _dense packing_ - a task where a robot must pack objects into a given container in a way that maximizes the density or number of objects - is a requisite feature in many real-world pick-and-place applications. The majority of work in dense packing has focused on the sequential-decision-making aspect of the problem: in order to achieve optimal packing densities, every object needs to be placed carefully, with regard for how it affects the subsequent objects that will need to be packed into the same container. The result of this line of work has been a series of attempts to cast dense packing as a reinforcement learning (RL) problem [3][4][5]. To make this difficult problem more tractable and easier to evaluate, common practices have been to operate in simulation on simplified state representations, such as by approximating all objects as cuboids, or to assume that complete state information is available, such as the ground-truth geometry of all objects [2][6]. However, there remain perceptual challenges that need to be addressed in order to apply this work to the real world. Consider the requirements placed upon real-world pick-and-place systems: they need to be capable of handling a huge variety of objects, many of which will be unseen by the system prior to the moment they need to be manipulated. In the context of dense packing, these systems need to have a strong understanding of the objects' 3D geometry. The difficulty of generalization to novel objects is exacerbated by the fact that objects are only partially observed in most applications - for example, they are typically presented in cluttered bins where occlusions make it difficult to perceive the entire 3D shape. Even the visible portions can pose a challenge, as many items are made of materials that cannot be easily sensed by depth cameras.
Fig. 1: Our proposed shape completion architecture, F-CON, is trained in simulation and predicts completed point clouds for complex, unseen objects in the real world. Off-the-shelf packing planners can leverage F-CON for precise dense packing in pick-and-place applications. In this work, we present a shape completion model that provides the necessary perceptual understanding for dense packing systems. Shape completion is a well-studied vision task, where the goal is typically to predict the entire 3D shape of an object based on limited information, such as a partial point cloud. Since this problem is inherently partially observed, shape completion models are forced to learn strong priors about 3D object geometry, making them a good fit for pick-and-place applications where the ability to deal with novel objects is essential. Our main contributions are as follows: 1. We release a simulated dataset that can be used to train shape completion models for robotics applications. This dataset, COB-3D-v2, exhibits state-of-the-art visual realism, making it effective for sim-to-real transfer of perceptual tasks. It is publicly available at our project page. 2. We propose a 3D fully-convolutional model architecture for shape completion that performs well in the robotics domain. We show that our model achieves state-of-the-art performance on COB-3D-v2. 3. Through extensive real-world experiments, we show that our model trained on COB-3D-v2 can be combined with simple, off-the-shelf planning methods to enable state-of-the-art dense packing performance on cluttered scenes with complex, novel items. ## II Related Work Dense packing has been mostly studied from a planning perspective: in what order and pose should the items be placed to maximize the packed density? Early work proposed heuristics like Deepest-Bottom-Left-First (DBLF) or Heightmap-Minimization (HM) [1][2]. Besides their simplicity, these heuristics are attractive because they empirically perform well even in situations where the entire item set to be packed is not known in advance. More recent work has attempted to learn policies using reinforcement learning (RL), exploring state/action representations, reward functions, neural network architectures, and RL algorithms that work best for this problem domain [3][4][5]. However, the RL-for-packing work has mostly been limited to simulated evaluations of cuboid objects, limiting its applicability to real-world systems. Methods for 3D bounding-box estimation could help extend this work to arbitrary objects, but the imprecision of the bounding-box representation would likely result in suboptimal packing performance. Our experiments will explore this in Section IV. Packing of complex objects in the real world has only been explored under restricted settings, such as where the ground-truth object geometry is known in advance, where the items are already singulated, or where the items only need to be packed in a 2D planar configuration [6][7]. These simplifications reduce the perception requirements necessary for a packing system, but do not reflect the challenges encountered in real-world applications. In this work, we consider a more realistic setting where the items must be picked from a cluttered bin and packed into a dense 3D arrangement. As we show in Section IV, the strong geometric priors learned by our shape completion model enable existing planning methods to gracefully handle this difficult task.
Additionally, we explicitly evaluate the achieved packing density, which, to the best of our knowledge, has not been explored in prior work. Methods for shape completion can be roughly categorized by the particular 3D representation that they predict. Recent work has focused on implicit functions, which offer the best accuracy and resolution, but have limited ability for generalization and are computationally expensive during inference [16][17]. While such methods are incredibly effective in many graphics applications, these properties make them a poor fit for robotic systems that need to handle unseen objects in low-latency applications. Other representations like voxel grids and unstructured point clouds are more computationally tractable to work with; voxel grids tend to be more amenable to prediction with neural networks, but scale poorly to extremely high resolutions. We will discuss how these trade-offs influence our system in Section III-A. In robotics, shape completion has mostly seen attention in the context of grasping. There have been labor-intensive attempts to collect real-world training data by taking RGB-D captures of objects with known meshes, and then use the resulting shape completion models to plan parallel-jaw grasps on singulated objects [8]. Later work attempted to train shape completion models on existing, generic, simulated datasets, and used the predictions to evaluate both grasp quality and kinematic/collision feasibility during placement [7]. However, the poor real-world performance of their shape completion model necessitated substantial focus on how the planning algorithm could reason about perceptual errors. We evaluate that model as a baseline in Section IV. ## III Dense Packing with Convolutional Occupancy Models ### _Frustum-Convolutional Occupancy Networks_ The shape completion problem is typically posed as follows: given a partial point cloud of an object, the goal is to produce a complete point cloud of the entire object surface, including invisible or occluded portions. This is particularly amenable to robotics, where RGB-D cameras can provide partial point clouds of varying quality. In existing benchmarks, scenes typically contain only a single object, or the partial point clouds are already segmented into objects. However, for a real-world application where objects appear in cluttered scenes, we also require access to an instance segmentation model. Model families such as Mask-RCNN or DETR are relatively mature and have been extensively used in robotic applications [12][13]. Given an RGB-D image and corresponding instance segmentation, our model predicts voxels for each object using a 3D fully-convolutional architecture. We find that inference with voxel representations is still efficient at the resolutions we care about, and the convolution-based network architecture is the best way to impose strong inductive biases when working with structured data like RGB-D images. As illustrated in Figure 2, we first construct a trapezoidal frustum for each object. Each frustum is the projection of its 2D bounding-box into 3D, clipped between a near plane and a far plane. The near plane is chosen to be slightly closer to the camera than the nearest point in the partial point cloud in the object's instance mask, and the far plane is chosen to be conservatively far based on the working volume of the scene. 
We then discretize each frustum into a voxel grid, and associate a feature vector of dimension \(C\) with each voxel, resulting in a feature volume of shape \(C\times D\times H\times W\) for each instance. For all experiments, we chose \(D=96\), \(H=W=64\) for a favorable trade-off between inference speed and performance. The voxels are trapezoidal, but they are aligned with the camera viewpoint (for a given \(h,w\) coordinate, every voxel along the \(D\) dimension is on the same ray entering the camera, and projects to the same pixel in the image plane), and they are spaced linearly along the \(D\) dimension between the near and far plane. For each point in the partial point cloud, we fill the corresponding voxel with the RGB color and a binary indicator for the instance mask (\(C=4\)). The camera-centric and object-centric properties of this scheme encourage robustness to different camera viewpoints, scene composition, and object sizes, which improves the sim-to-real transfer performance. A similar frustum-based scheme was used by Mesh-RCNN; however, they predict voxels jointly with instance masks, and do not condition on a partial point cloud [10]. The latter is desirable in applications where depth is not available during inference, but depth cameras are already ubiquitous in robotics and can substantially improve performance. The initial \(C\times D\times H\times W\) feature volume is passed through a 3D-convolutional UNet to produce an updated feature volume of the same shape. We then use a 3D-conv layer to reduce the \(C\) dimension to 1, and then refine the resulting \(D\times H\times W\) feature map with a 2D UNet (where the \(D\) dimension is treated as the channel dimension). This is an efficient way to increase the expressiveness of the model, since 2D convolutions are cheaper than their 3D counterparts. Each element of the 2D UNet's output (still \(D\times H\times W\)) is the scalar probability that the corresponding voxel is occupied by the object's completed shape. During training, we label each voxel as positive if it is contained inside the object's ground-truth mesh. Each voxel is supervised independently using a class-balanced binary cross-entropy loss. During inference, we extract meshes from the voxel predictions using Marching Cubes [19] and sample points uniformly from the surface to obtain an unstructured point cloud. For more details, see our implementation. We call this model F-CON (Frustum-Convolutional Occupancy Network). In the following sections, we discuss the dataset used to train it, and how we utilize it for dense packing in the real world. ### _Simulated Dataset_ COB-3D is a simulated dataset of common objects in bins, arranged in realistic yet challenging configurations [9]. As illustrated in Figure 3, the dataset contains roughly 7000 scenes of high-quality RGB renderings along with ground-truth camera calibrations, instance masks and point clouds. Each scene contains up to 30 objects, which are thrown into a bin using physics simulation. For robust sim-to-real transfer, the object sizes, camera parameters, and scene lighting are all randomized. In this work, we release a new version of this dataset, COB-3D-v2, with ground-truth meshes and poses of each object instance. This addition allows shape completion models such as F-CON to be trained on COB-3D-v2. For more details about the dataset format, see Appendix A. Example scenes are visualized in Appendix B. Note that neither COB-3D nor COB-3D-v2 have object categories (all objects belong to a single category). This differs from prior work in shape completion, where common practice is to either train category-specific models or condition on the object category. However, we find that the lack of categories is more representative of real-world settings with novel objects, which may belong to arbitrary novel categories, or may be hard to categorize in the first place. Fig. 3: COB-3D contains high quality RGB renderings (left), instance masks (top right), and depth maps (middle right). In this work, we released a new version, COB-3D-v2, that also includes meshes for each instance (bottom right). It can be downloaded from our project page. Fig. 2: F-CON unprojects a segmented instance into a camera-aligned frustum, populates the frustum with the instance's partial point cloud (orange voxels), and then applies a series of 3D convolutions to produce a voxel grid of the completed shape (green). The ground-truth instance shape (purple contour) is used as supervision during training. Here we only visualize a single slice of the frustum (indicated by the red line in the image plane). In practice, F-CON unprojects the entire region-of-interest for each object.
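A rough sketch of the voxel-filling step described above (ours, not the released implementation; it assumes a pinhole camera with intrinsics fx, fy, cx, cy, and points already filtered to the instance mask):

```
import numpy as np

def frustum_voxelize(points, colors, box2d, near, far, fx, fy, cx, cy,
                     D=96, H=64, W=64):
    """Scatter a partial point cloud into a camera-aligned frustum grid.

    points: (N, 3) camera-frame xyz; colors: (N, 3) RGB in [0, 1];
    box2d: (u0, v0, u1, v1) instance bounding box in pixels.
    Returns a (4, D, H, W) volume: RGB plus a binary occupancy indicator.
    """
    u0, v0, u1, v1 = box2d
    vol = np.zeros((4, D, H, W), dtype=np.float32)
    # Pinhole projection: each point lands on the pixel its camera ray hits.
    u = fx * points[:, 0] / points[:, 2] + cx
    v = fy * points[:, 1] / points[:, 2] + cy
    # Voxel indices: linear in depth between near/far, linear across the box.
    d = ((points[:, 2] - near) / (far - near) * D).astype(int)
    h = ((v - v0) / (v1 - v0) * H).astype(int)
    w = ((u - u0) / (u1 - u0) * W).astype(int)
    ok = (0 <= d) & (d < D) & (0 <= h) & (h < H) & (0 <= w) & (w < W)
    vol[:3, d[ok], h[ok], w[ok]] = colors[ok].T
    vol[3, d[ok], h[ok], w[ok]] = 1.0
    return vol
```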
Note that neither COB-3D nor COB-3D-v2 have object categories (all objects belong to a single category). This differs from prior work in shape completion, where common practice is to either train category-specific models or condition on the object category. However, we find that the lack of categories is more representative of real-world settings with novel objects, which may belong to arbitrary novel categories, or may be hard to categorize in the first place. Fig. 3: COB-3D contains high quality RGB renderings (left), instance masks (top right), and depth maps (middle right). In this work, we released a new version, COB-3D-v2, that also includes meshes for each instance (bottom right). It can be downloaded from our project page. Fig. 2: F-CON unprojects a segmented instance into a camera-aligned frustum, populates the frustum with the instance’s partial point cloud (orange voxels), and then applies a series of 3D convolutions to produce a voxel grid of the completed shape (green). The ground-truth instance shape (purple contour) is used as supervision during training. Here we only visualize a single slice of the frustum (indicated by the red line in the image plane). In practice, F-CON unprojects the entire region-of-interest for each object. We trained F-CON for 125 epochs on COB-3D-v2, which took about 4 GPU-days. We used a batch size of 32 scenes and the Adam optimizer with default hyperparameters (\(\alpha=10^{-3},\beta_{1}=0.9,\beta_{2}=0.999,\epsilon=10^{-8}\)). During training, we also randomize the near and far planes for each instance, which improves robustness after sim-to-real transfer. In Section IV-A, we evaluate F-CON against baselines for shape completion on COB-3D-v2. ### _Dense Packing with F-CON_ Given a trained shape completion model, there can be different ways to utilize it in a dense-packing system. To highlight F-CON's perceptual capabilities, we opted for a simple planning pipeline that allows us to use off-the-shelf methods from prior work. As described in Algorithm 1, we determine a grasp pose \(g^{*}\in SE(3)\) and placement pose \(q^{*}\in SE(3)\) from the following inputs: * A height-map \(H[\cdot,\cdot]\) for the target container, which is computed from the captured depth map. The container is discretized into rectangular cells, where \(H[u,v]=z\) if the highest sensed point in cell \((u,v)\) is distance \(z\) from the container bottom. * A set of candidate grasp poses \(G=\{g^{(1)},\dots,g^{(N)}\}_{i=1}^{N},g^{(i)}\in SE(3)\). These can be generated using any method and for any grasping modality (suction, parallel-jaw, etc). * Completed point clouds \(O=\{o^{(i)}\}_{i=1}^{N}\) for each grasped object. Each \(o^{(i)}\in\mathbb{R}^{K\times 3}\) is expressed in the \(g^{(i)}\) frame. * A cost function \(C(g,q)\rightarrow\mathbb{R}\) that evaluates the packing quality of a candidate grasp \(g\) and placement \(q\). Given the returned grasp and placement poses, we use a scripted trajectory planner such that the robot's gripper retracts linearly upwards from the grasp pose, and descends linearly downwards during placement. 
```
Input: Height-map \(H\) for the target container
       Candidate grasps \(G=\{g^{(1)},\dots,g^{(N)}\}\)
       Object point clouds \(O=\{o^{(1)},\dots,o^{(N)}\}\)
       Placement cost function \(C(g,q)\rightarrow\mathbb{R}\)
Initialize \(g^{*}\leftarrow\text{null},q^{*}\leftarrow\text{null},c^{*}\leftarrow\infty\);
for k = 1, ..., M do
    Sample a grasp \(g^{(k)}\) from \(G\);
    Sample a placement cell \((u^{(k)},v^{(k)})\) inside \(H\);
    Sample a placement orientation \(R^{(k)}\);
    Compute the lowest placeable height \(z^{(k)}\) (see Figure 4) for object \(o^{(k)}\), when centered at cell \((u^{(k)},v^{(k)})\) of \(H\) and rotated by \(R^{(k)}\);
    Compute the gripper pose \(q^{(k)}\) corresponding to \((u^{(k)},v^{(k)},z^{(k)},R^{(k)})\);
    Evaluate \(c^{(k)}\gets C(g^{(k)},q^{(k)})\);
    if \(c^{(k)}<c^{*}\) and MotionFeasible\((g^{(k)},q^{(k)})\) then
        \(g^{*}\gets g^{(k)},q^{*}\gets q^{(k)},c^{*}\gets c^{(k)}\);
    end if
end for
Result: Best placement \(q^{*}\), corresponding grasp \(g^{*}\)
```
**Algorithm 1** A simple planner for dense packing

This framework neatly encapsulates existing planning methods like DBLF and HM. Using \(C(g,q)=q_{z}+\epsilon\cdot(q_{x}+q_{y})\), for \(0<\epsilon\ll 1\), yields the DBLF planner. The HM planner estimates the height-map \(H^{\prime}(q)\) that would result from placement \(q\), and then uses \(C(g,q)=\sum_{u,v}\left(H^{\prime}(q)[u,v]-H[u,v]\right)\). For more details about how \(H^{\prime}(q)\) is computed, see [2]. Combining F-CON with a model-based RL method (such as by extending the simulated packing work discussed in Section II) could potentially result in a better cost function than either DBLF or HM, as well as a better sampler than the uniform one that we use. However, to focus on evaluating F-CON, we defer such explorations to future work. Unlike some prior work, we do not consider re-grasping, where already-packed items may be removed in order to achieve a better arrangement. Re-grasping allows the system to mitigate the effects of perceptual mistakes made in the preceding timesteps, which confounds our evaluation of shape completion models.

Fig. 4: A 2D-slice of the lowest-placeable-height computation from Algorithm 1. For a candidate placement pose, every point in the completed shape (yellow) is projected (black/red arrows) onto the target container's height-map (blue). The shortest projection distance (red arrow) determines the height at which the object can be placed for that cell and orientation. In this example, the left candidate results in a lower placement than the right.

## IV Experiments

We conducted extensive real-world experiments to answer the following questions:

1. How effective is F-CON at shape completion, as trained and evaluated on the realistic-but-simulated scenes in COB-3D-v2?
2. To what extent does F-CON address the perceptual difficulties surrounding dense packing, as evaluated on novel objects in cluttered real-world scenes?

### _COB-3D-v2 Evaluation_

To benchmark shape completion on COB-3D-v2, we considered several metrics, following prior work:

* **Chamfer distance**: This is the standard metric for comparing unstructured point clouds in benchmarks such as ShapeNet [14]. Given two point clouds \(X\) and \(Y\), the Chamfer distance (CD) is computed as follows: \[\text{CD}(X,Y)=\frac{1}{|X|}\sum_{x\in X}\min_{y\in Y}\|x-y\|_{2}^{2}+\frac{1}{|Y|}\sum_{y\in Y}\min_{x\in X}\|y-x\|_{2}^{2}\] Often, an L1-variant (CD-L1), where the \(\|\cdot\|_{2}^{2}\) norms are replaced by \(\|\cdot\|_{1}\), is reported alongside the traditional Chamfer-L2 distance (CD-L2).
* \(\mathbf{F1}^{\tau}\): For a given distance threshold \(\tau\), predicted point cloud \(X\) and ground-truth point cloud \(Y\), \(\text{F1}^{\tau}(X,Y)\) is the harmonic mean of the precision at \(\tau\) (fraction of points in \(X\) that are within \(\tau\) of some point in \(Y\)) and the recall at \(\tau\) (fraction of points in \(Y\) that are within \(\tau\) of some point in \(X\)). This metric is usually considered alongside the Chamfer distance in shape completion benchmarks because it is less sensitive to outliers, and is typically reported at varying values of \(\tau\).
* **Box IoU, IoG, F1**: Given a completed point cloud, a 3D bounding-box can be fit around it and compared to the ground-truth bounding-box, using metrics from 3D bounding-box estimation. In contrast with Chamfer distances, and especially F1\({}^{\tau}\), we find that bounding-box metrics are very sensitive to outliers. However, they may be more representative of packing performance, since outliers can cause a packing system to believe that an item cannot fit in a pose where it actually could have. To fit a bounding box around a point cloud, we sample rotations uniformly at random in quaternion space [20], compute the axis-aligned dimensions of the enclosing box in each sampled rotation frame, and finally choose the sampled box with the smallest volume (see Figure 5 for a visualization). Using this scheme, we report IoU, IoG, and F1: IoU is the standard metric for bounding-box estimation, IoG (intersection-over-ground-truth) is a form of recall to complement IoU, and F1 trades off between the two [9]. Note that IoG is not particularly meaningful in isolation, since perfect IoG can be achieved by simply predicting arbitrarily large bounding-boxes.

We considered the following methods as baselines for F-CON. Like prior work in shape completion for grasping, we did not consider implicit functions: they must be queried extremely densely to extract surface geometry, and often require test-time optimization, making them too computationally expensive during inference for use in a real-world system [16][17].

* PCN [11] has been used in prior work for grasp and placement planning [7]. It uses an encoder that embeds a partial cloud into a latent space, and a decoder that constructs the completed point cloud from the latent vector. Both encoder and decoder use PointNets [15] to operate directly on unstructured point clouds, and they are trained end-to-end using the Chamfer-L2 distance as the loss function. To train PCN on COB-3D-v2, we normalize each instance's partial point cloud using its frustum, decode the completed point cloud in the normalized coordinates, and then transform it back to the original space. For a fairer comparison with F-CON, we also improve upon the original architecture by concatenating RGB and instance masks with the partial point cloud.
* PoinTr [18] uses a similar encoder-decoder framework to PCN, but substantially improves the architecture, primarily by using Transformers [21]. It achieves state-of-the-art performance on many shape completion benchmarks even outside of robotics. We train PoinTr using the same normalization scheme, additional inputs, and loss function as PCN.
* The autoregressive bounding-box model (AR-bbox) that accompanied the original COB-3D release has been shown to perform well on 3D bounding-box estimation [9].
Since bounding boxes have been used as a simplified state representation in prior packing work (as discussed in Section II), we also consider this model as a baseline, but only evaluate it on bounding-box metrics.

Both Chamfer distance and F1\({}^{\tau}\) generally depend on both the scaling and density of the point clouds. Following prior work [10], we scale all point clouds such that the longest edge of the ground-truth bounding-box has length 10. The ground-truth point clouds are generated by sampling 16384 points uniformly from the mesh surface. For F-CON, we sample points in the same way, but from the meshes obtained via Marching-Cubes. PCN and PoinTr always output a fixed number of points, so we train them accordingly to predict 16384 points.

Fig. 5: Fitting a bounding box around a point cloud (black dots). We sample several candidate boxes that contain all the points (blue), and then take the one with minimum volume (green). Notice that the single point on the far right substantially influences the dimensions of the fitted box.

In Table I, we see that F-CON outperforms the baseline methods across all shape completion metrics. We find these results particularly compelling given that F-CON is not trained to minimize Chamfer distance, unlike PCN and PoinTr. We also note that F-CON performs well on 3D bounding-box metrics, even though PCN and PoinTr do not. Qualitatively, we observe that PCN and PoinTr are prone to outliers in their predicted point clouds; this is well-reflected in their F1\({}^{\tau}\) and IoG scores.

### _Real-World Dense Packing_

We designed real-world experiments to mimic typical order fulfillment applications, where a robot must pack items from a cluttered bin (or often, multiple bins) into a smaller container, like a cardboard box. As shown in Figure 6, we used an ABB-1200 with a 5-cup suction gripper, with RGB-D cameras mounted above both containers. The item set consists of a variety of household objects of diverse shapes and categories. In total, we have 35 objects, which are unseen by all models (since all models are trained purely in simulation on COB-3D-v2). Recall that the goal of dense packing is to minimize the volume that the packed objects occupy. Thus, we evaluate our system by directly estimating this quantity at the end of each episode, using a scheme inspired by the HM heuristic. To the best of our knowledge, no prior work has evaluated real-world packing quality with a continuous volumetric measure (a common practice is simply to check whether all items were successfully placed in the container). Using the target container's height-map, as defined in Section III-C, we can estimate the total volume occupied by the packed objects via numerical integration over the cells of the height-map. The HM planner minimizes the change in this quantity for each item to be packed; while tractable and easy to implement, this is generally not optimal when considering the entire episode. For each shape completion model from Section IV-A, we use the planner from Algorithm 1 using either DBLF or HM as the cost function \(C(g,q)\), height-map cells of 1 mm \(\times\) 1 mm, and a sample size of \(M=4096\) placements. Since F-CON operates in an object-centric manner, its inference time scales with the number of objects in the scene. For a large scene (16 objects), it takes about 25 milliseconds on an NVIDIA Quadro RTX 6000. The entire planning process (perception, sampling placements, scoring, motion planning) takes about 300 milliseconds.
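For reference, here are minimal sketches of the two cost functions from Section III-C, the lowest-placeable-height computation of Figure 4, and the height-map volume integration used for evaluation. The function names, frames, and default values are our own illustrative choices; implementation details may differ:

```
import numpy as np

EPS = 1e-3  # the small epsilon in the DBLF cost

def dblf_cost(q_xyz):
    # Deepest-bottom-left-first cost: C(g, q) = q_z + eps * (q_x + q_y).
    return float(q_xyz[2] + EPS * (q_xyz[0] + q_xyz[1]))

def hm_cost(H_before, H_after):
    # Height-map cost: total increase of the container height-map,
    # C(g, q) = sum_{u,v} (H'(q)[u, v] - H[u, v]).
    return float(np.sum(H_after - H_before))

def lowest_placeable_height(H, u, v, z):
    # Lowest placeable height (cf. Fig. 4): object point k covers cell
    # (u[k], v[k]) at height z[k] above the object's own lowest point,
    # and must clear the height-map at that cell. u, v: int arrays.
    return float(np.max(H[u, v] - z))

def packed_volume(H, cell_area=1e-6):
    # Packed-volume metric: numerically integrate the height-map over
    # its cells (1 mm x 1 mm cells give cell_area = 1e-6 m^2).
    return float(np.sum(H) * cell_area)
```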
Within each trial, we select a subset ranging from 5 to 15 items uniformly at random from the overall item set, and arrange them chaotically in the source container. Each trial consists of one episode for each (_shape completion_, _planner_) configuration, wherein the system packs the sampled items one-by-one into the target container. The episode ends when either all items are placed, one is placed that causes the container to overflow, or the system cannot find an overflow-free plan. At the end of each episode, we estimate the packed volume using the height-map integration scheme discussed in the previous paragraph. Finally, we measure human performance by quickly packing the items by hand (taking 20 seconds or less), and measuring the packed volume in the same manner. In Table II, we show the performance of each shape completion model paired with different off-the-shelf packing planners, alongside human performance, across 50 trials. We report the mean and standard error for the following metrics:

* **Success Rate**: the fraction of episodes where all items were successfully packed at the end of the episode.
* **Packed Volume**: the volume measured by height-map integration at the end of the episode. We express this value as a fraction of the total container volume. In episodes that are not successes, we record a value of 1.0, which is the worst possible score and corresponds to using the entire container.

For all planners, F-CON substantially outperforms the other shape completion methods, demonstrating its efficacy for real-world dense packing. In Figure 7, we visualize representative episodes for a qualitative understanding. With other methods, the system often cannot find a suitable placement pose or causes the target container to overflow. The former typically results from overestimation of the object's size, which is consistent with the F1\({}^{\tau}\) and IoG results discussed in Section IV-A.

Fig. 6: (a) Our robot picks items from the cluttered bin in the front, and packs them into the cardboard box on the table. (b) The complete item set used in our experiments.

## V Conclusion

We presented F-CON, a voxel-based shape completion model with strong inductive biases, and validated it on the highly-realistic simulated dataset COB-3D-v2. We then conducted extensive experiments in the real world, and showed that the strong geometric priors learned by F-CON can enable dense packing of complex, unseen items in chaotic, cluttered scenes, without any real-world training. Although using F-CON results in substantially better performance than baseline shape completion methods, one shortcoming of our packing system is the simplicity of the planning methods we used. Combining F-CON with RL methods to obtain learned packing policies is an exciting direction for future work, and we hypothesize that this could close the gap with respect to human performance.

### _COB-3D-v2 Dataset Format_

COB-3D-v2 contains 6955 scenes in total (6259 train, 696 val). For each scene, we provide the following:

|-- rgb: The rendered RGB image. Shape (3, H, W), dtype float32. Values scaled to [0, 1].
|-- intrinsic: The camera intrinsics. Shape (3, 3), dtype float32.
|-- depth_map: The rendered depth map corresponding to 'rgb'. Shape (H, W), dtype float32.
|-- normal_map: The rendered normal map corresponding to 'rgb'. Shape (3, H, W), dtype float32.
|-- near_plane: The minimum depth value of the scene's working volume. Scalar, float32.
|-- far_plane: The maximum depth value of the scene's working volume. Scalar, float32.
|-- segm/
    |-- boxes: 2D bounding boxes for each object in the scene. Shape (N_objects, 4), dtype float32. These are pixel coordinates relative to 'rgb'. The box format is '[x_low, y_low, x_high, y_high]'.
    |-- masks: Binary masks for each object in the scene. Shape (N_objects, H, W), dtype bool.
    |-- amodal_masks: Amodal instance masks for each object. Shape (N_objects, H, W), dtype bool.
|-- bbox3d/
    |-- poses: The pose of each object's 3D bounding box, as a 4x4 matrix. This is the transform from the bbox frame to the camera frame. Shape (N_objects, 4, 4), dtype float32.
    |-- dimensions: The dimensions of each 3D bounding box. Shape (N_objects, 3), dtype float32.
    |-- corners: The corner points of each 3D bounding box, in the camera frame. Shape (N_objects, 8, 3), dtype float32.
|-- mesh_ids: The mesh_id of each object. List[str], length N_objects.
|-- obj_poses/
    |-- poses: The pose of each mesh, as a 4x4 matrix. This is the transform from the mesh frame to the camera frame. Note that the mesh frame does not necessarily equal the bbox frame! Shape (N_objects, 4, 4), dtype float32.
    |-- scales: The scale of each mesh. Shape (N_objects, 3), dtype float32.
|-- voxel_grid/
    |-- voxels: The surface of each mesh, extracted into a voxel grid. Shape (N_objects, n_voxels, n_voxels, n_voxels), dtype bool.
    |-- extents: The extents of each object's voxel grid. 'voxels[i]' spans the cuboid '[-extents[i], extents[i]]', in the object frame 'obj_poses/poses[i]'. Shape (N_objects, 3), dtype float32.

### _COB-3D-v2 Examples_

The following pages exhibit some representative scenes from COB-3D-v2, showcasing the visual quality and diversity of the dataset. In each row, the left column is the rendered RGB, and the right column is the rendered depth map.
2309.08082
An Attractive Proposal for Resolving the Hubble Tension: Dynamical Attractors that Unify Early and Late Dark Energy
Early dark energy is a promising potential resolution of the Hubble tension. Unfortunately, many models suffer from the need to fine-tune their initial conditions to ensure that the epoch of early dark energy coincides with matter-radiation equality. We propose a class of attractive early dark energy models where this coincidence arises naturally as a saddle point of a dynamical system that attracts a large volume of phase-space trajectories regardless of the initial conditions. The system approaches a global dark energy attractor at late-times. Our framework therefore unifies early and late dark energy using a single scalar degree of freedom. We analyze a fiducial attractive early dark energy model and find that it is disfavored by cosmological data due to the presence of a long-lived saddle point in the matter era where the scalar plays the role of an additional component of (non-clustering) dark matter. Our investigations provide lessons for future model-building efforts aimed at constructing viable attractive early dark energy models.
Omar F. Ramadan, Tanvi Karwal, Jeremy Sakstein
2023-09-15T00:52:20Z
http://arxiv.org/abs/2309.08082v2
An _Attractive_ Proposal for Resolving the Hubble Tension: Dynamical Attractors that Unify Early and Late Dark Energy ###### Abstract Early dark energy is a promising potential resolution of the Hubble tension. Unfortunately, many models suffer from the need to fine-tune their initial conditions to ensure that the epoch of early dark energy coincides with matter-radiation equality. We propose a class of _attractive early dark energy_ models where this coincidence arises naturally as a saddle point of a dynamical system that attracts a large volume of phase-space trajectories regardless of the initial conditions. The system approaches a global dark energy attractor at late-times. Our framework therefore unifies early and late dark energy using a single scalar degree of freedom. We analyze a fiducial attractive early dark energy model and find that it is disfavored by cosmological data due to the presence of a long-lived saddle point in the matter era where the scalar plays the role of an additional component of (non-clustering) dark matter. Our investigations provide lessons for future model-building efforts aimed at constructing viable attractive early dark energy models. ## I Introduction Discovering the origin of the Hubble tension -- which refers to the statistically significant discrepancy between measurements of the Hubble constant \(H_{0}\) made using late-universe probes and early-universe inferences [1; 2; 3] -- is an urgent goal of cosmology research. The lack of a complete concordance model that is capable of accommodating all of our observations is limiting our ability to interpret data from cosmological missions and astrophysical surveys, and will continue to do so until the physics responsible for the Hubble tension is identified. The strongest tension, now over \(5\sigma\), is between the \(\Lambda\)CDM fit to the Planck cosmic microwave background (CMB) data, which yields \(H_{0}=(67.4\pm 0.5)\)km/s/Mpc [4]; and the measurement by the SH0ES collaboration, who report \(H_{0}=(73.29\pm 0.90)\)km/s/Mpc [5] using a Cepheid and type-Ia supernova distance ladder. While there is some spread in the late-universe measurements of \(H_{0}\), they trend to higher \(H_{0}\) values, with none scattering lower than the Planck CMB estimate [6]. Similarly, inferences based on the early universe, even those independent of any CMB data, are clustered at low \(H_{0}\) values [7; 8; 9; 10], with none scattering higher than the SH0ES late-universe measurement. Concerted efforts to update and interrogate the data over the past few years have failed to find any evidence that the tension is due to a systematic error in the data [11; 12; 13; 14; 15; 16; 17]. It is difficult to imagine a single systematic that would affect the various different objects used to calibrate the distance ladder, as each is governed by different physics. Alternatively, a series of uncorrelated systematics must miraculously conspire to bias \(H_{0}\) upwards by similar amounts in order to explain the tension. Given these considerations, the hypothesis that the disagreement between early- and late-universe measurements of \(H_{0}\) originates from new physics beyond the cosmological standard model has been the subject of an intense research effort [3; 18]. A plethora of theoretical solutions have been proposed [3; 18; 19]. Among them, early dark energy (EDE) [20; 21; 22] is one of the most successful [19]. 
In this scenario, a new component of the Universe becomes active around the time of matter-radiation equality and accounts for a maximal fractional contribution \(f_{\rm ede}\sim 10\%\) of the Universe's energy budget at a critical redshift \(z_{c}\sim 3300\). The presence of this additional component increases the pre-recombination expansion rate, which shrinks the _physical_ size of the sound horizon. To compensate and maintain the precisely measured _angular_ size \(\theta_{*}\) of the sound horizon, the CMB-predicted \(H_{0}\) increases toward the locally-measured value. Post recombination, the EDE must redshift away faster than radiation in order to preserve the excellent \(\Lambda\)CDM fit to late-time cosmic observables [22]. Despite its success, EDE faces several challenges. First, another growing tension in cosmology, the \(\sim 3\sigma\) \(S_{8}\) tension between the amplitude of density fluctuations inferred from the CMB and measured by weak-lensing probes [23], is exacerbated by EDE [24; 25] because it predicts a universe with more dark matter than \(\Lambda\)CDM. Second, the physics of EDE is disconnected from the physics of photons and baryons, presenting a coincidence or _why then?_ problem. Why was EDE important at matter-radiation equality and not some other epoch? The majority of models achieve this coincidence by fine-tuning the model parameters, but these are not protected from radiative corrections by any fundamental symmetries. These models are therefore unnatural effective field theories. Proposals that avoid these fine-tunings include models where the EDE scalar couples to neutrinos [26; 27; 28], which naturally begin the epoch of EDE by injecting energy into the scalar when the neutrinos become non-relativistic, coincidentally around the time of matter-radiation equality; realizing EDE potentials in UV-complete theories such as string theory [29]; coupling EDE to dark matter [30; 31; 32]; and coupling it to a non-Abelian gauge group [33; 34]. Finally, the _ad hoc_ nature of EDE is unappealing. EDE is posited solely to solve the Hubble tension, and does not reach beyond that issue to connect with other cosmological phenomena such as late dark energy (LDE). This final problem has led to attempts to unify EDE and LDE through quintessence models where a scalar field plays the role of both EDE and LDE [35; 36]. In this work, we propose a model that attempts to address the shortcomings described above: _attractive Early Dark Energy_ (@EDE). In this framework, both EDE and LDE are described by a single quintessence scalar field \(\phi\) with a non-linear potential that results in autonomous cosmological equations that form a dynamical system. The potential is chosen such that: (1) EDE arises as a saddle point during the radiation epoch so that solutions naturally flow towards it regardless of the initial conditions; and (2) the late-time global attractor is a dark-energy-dominated universe. This framework overcomes the need for fine-tuning as the fixed points determine the magnitude of the EDE injection, not the initial conditions. In addition, the framework is appealing because unifying EDE and LDE reduces the number of extra degrees of freedom needed to explain the \(H_{0}\) tension and late dark energy. We study an example of @EDE that is motivated by string theory, analyze its background dynamics, and confront it with cosmological data.
Ultimately, we find that this specific model is not preferred by the data due to its percent-level contribution to the energy budget of the post-recombination universe. Our analysis reveals important lessons that lay the foundations for constructing viable @EDE models that we discuss in our conclusions. This paper is organized as follows: In section (II.1), we lay out our framework with a discussion of the dynamical system, its fixed points, and the properties they must possess in order to potentially resolve the \(H_{0}\) tension and to drive LDE. In section (II.2), we discuss the background dynamics of the field. We outline our analysis methodology, describing our choices of parameters, priors and data sets in section (III) and present its results in section (IV). Finally, in section (V), we discuss our results and draw conclusions.

## II Attractive early dark energy

### Framework and Model

Our framework for constructing @EDE models is the dynamical systems formulation of a single uncoupled quintessence scalar \(\phi\) [37; 38; 39]. This formalism has been successful at alleviating the LDE coincidence problem, and provides a convenient and well-studied starting point for constructing dynamical systems that include EDE fixed points. The action consists of the Einstein-Hilbert action, an uncoupled quintessence scalar field \(\phi\), and a decoupled Standard Model (SM) and dark matter sector described by \(S_{\rm SM}\) and \(S_{\rm DM}\) respectively:

\[S=\int d^{4}x\sqrt{-g}\left[\frac{M_{\rm Pl}^{2}R}{2}-\frac{1}{2}\nabla_{\mu}\phi\nabla^{\mu}\phi-V(\phi)\right]+S_{\rm SM}+S_{\rm DM}. \tag{1}\]

We assume a flat Friedmann-Lemaitre-Robertson-Walker (FLRW) universe that is described by the metric

\[ds^{2}=-dt^{2}+a^{2}(t)\delta_{ij}dx^{i}dx^{j}\,, \tag{2}\]

where \(a(t)\) is the dimensionless scale factor normalized to unity today. The evolution of the scalar field is determined by the Klein-Gordon equation

\[\ddot{\phi}+3H\dot{\phi}+\frac{dV}{d\phi}=0\,. \tag{3}\]

We consider a universe containing matter (\(m\)), radiation (\(r\)), and the scalar; and define the following quantities:

\[\Omega_{m}=\frac{\rho_{m}}{3M_{\rm Pl}^{2}H^{2}},\quad\Omega_{r}=\frac{\rho_{r}}{3M_{\rm Pl}^{2}H^{2}}, \tag{4}\]
\[\Omega_{k}=\frac{\dot{\phi}^{2}}{6M_{\rm Pl}^{2}H^{2}},\qquad\Omega_{v}=\frac{V(\phi)}{3M_{\rm Pl}^{2}H^{2}}, \tag{5}\]
\[w_{\phi}=\frac{P_{\phi}}{\rho_{\phi}}=\frac{\Omega_{k}-\Omega_{v}}{\Omega_{k}+\Omega_{v}}, \tag{6}\]
\[\lambda\equiv-M_{\rm Pl}\frac{V_{,\phi}}{V(\phi)}, \tag{7}\]
\[\Gamma\equiv\frac{V(\phi)V_{,\phi\phi}}{V_{,\phi}^{2}}\,. \tag{8}\]

Here, \(\Omega_{m}\), \(\Omega_{r}\), \(\Omega_{k}\), and \(\Omega_{v}\) are the density parameters for matter, radiation, the scalar's kinetic energy, and the scalar's potential energy respectively; \(w_{\phi}\) is the scalar's (time-dependent) equation of state; and \(\lambda(\phi)\) and \(\Gamma(\phi)\) are the _roll_ and _tracker_ parameters respectively, which are helpful for parameterizing the dynamical system. The equations (4)-(8) along with the Friedmann equations and the continuity equations for matter and radiation can be written as a dynamical system with phase space \(\{\Omega_{m},\Omega_{r},\Omega_{k},\Omega_{v},\lambda,\Gamma\}\) [38; 39]. The Friedmann constraint, \(\Omega_{m}+\Omega_{r}+\Omega_{k}+\Omega_{v}=1\), allows us to eliminate one of the \(\Omega\)'s in terms of the others, reducing the dimension of the phase space by one. We choose to eliminate \(\Omega_{m}\).
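As a minimal numerical sketch of definitions (4)-(8) (in reduced-Planck units with \(M_{\rm Pl}=1\); the function and variable names are illustrative):

```
import numpy as np

def scalar_state(phi, phidot, H, V, dV, d2V):
    # Definitions (4)-(8) in reduced-Planck units (M_Pl = 1).
    # V, dV, d2V are callables returning the potential and its first
    # two phi-derivatives.
    Ok = phidot**2 / (6.0 * H**2)            # kinetic density parameter
    Ov = V(phi) / (3.0 * H**2)               # potential density parameter
    w_phi = (Ok - Ov) / (Ok + Ov)            # scalar equation of state, eq. (6)
    lam = -dV(phi) / V(phi)                  # roll parameter, eq. (7)
    Gamma = V(phi) * d2V(phi) / dV(phi)**2   # tracker parameter, eq. (8)
    return Ok, Ov, w_phi, lam, Gamma

# Example: single exponential V = V0 exp(-lam0 phi), for which lam is
# constant and Gamma = 1 (illustrative numbers only).
V0, lam0 = 1e-8, 2.0
print(scalar_state(1.0, 1e-5, 1e-4,
                   lambda p: V0 * np.exp(-lam0 * p),
                   lambda p: -lam0 * V0 * np.exp(-lam0 * p),
                   lambda p: lam0**2 * V0 * np.exp(-lam0 * p)))
```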
The equations of motion in dynamical systems form are then:

\[\Omega_{r}^{\prime}=\Omega_{r}(3\Omega_{m}+4\Omega_{r}+6\Omega_{k}-4), \tag{9}\]
\[\Omega_{k}^{\prime}=\Omega_{k}(3\Omega_{m}+4\Omega_{r}+6\Omega_{k}-6)+\lambda\Omega_{v}\sqrt{6\Omega_{k}}, \tag{10}\]
\[\Omega_{v}^{\prime}=\Omega_{v}\left(3\Omega_{m}+4\Omega_{r}+6\Omega_{k}-\lambda\sqrt{6\Omega_{k}}\right), \tag{11}\]
\[\lambda^{\prime}=-\lambda^{2}(\Gamma(\phi)-1)\sqrt{6\Omega_{k}}, \tag{12}\]

where \(\Omega_{m}=1-\Omega_{r}-\Omega_{k}-\Omega_{v}\). These equations do not form an autonomous system unless \(\lambda^{\prime}=0\) identically, \(\lambda(\phi)\) is invertible so that one can find \(\phi(\lambda)\) and hence write \(\Gamma(\phi)=\Gamma(\phi(\lambda))=\Gamma(\lambda)\) to close the system, or further equations for derivatives of \(\Gamma\) are supplied. The first case implies that \(\lambda\) is constant, which corresponds to an exponential potential \(V(\phi)=V_{0}\exp(-\lambda\phi/M_{\rm Pl})\). The resulting three-dimensional phase space of this system \(\{\Omega_{r},\Omega_{k},\Omega_{v}\}\) has been extensively studied [37, 38], and it is not possible to simultaneously have an LDE attractor with \(w_{\phi}\approx-1\) and an early-universe saddle point that could play the role of EDE. The case where \(\lambda(\phi)\) is invertible so that \(\Gamma=\Gamma(\lambda)\) corresponds to a four-dimensional phase space \(\{\Omega_{r},\Omega_{k},\Omega_{v},\lambda\}\). We will examine the structure of this space shortly and find that this dynamical system does admit the possibility of @EDE models with an EDE saddle and an LDE attractor. Systems where derivatives of \(\Gamma\) (and possibly their derivatives) are required correspond to higher-dimensional phase spaces. We will not explore these systems here, but follow-up investigations along these lines would certainly be interesting. When \(\lambda\) is invertible, the fixed points of the dynamical system correspond to points where equations (9)-(12) are simultaneously equal to zero. We can characterize these independently of the choice of potential by setting equation (12) to zero and finding all roots \(\lambda_{*}\) such that

\[\lambda_{*}^{2}(\Gamma\left(\lambda_{*}\right)-1)=0\,, \tag{13}\]

and substituting the roots \(\lambda=\lambda_{*}\) into (9)-(11) to obtain the fixed points. These are listed in Table 1 along with their linear stability; we refer the reader to [38, 39] for details of how the stability is determined. To make further progress, we must specify a potential, calculate \(\Gamma(\lambda)\), and hence calculate \(\lambda_{*}\) as a function of the potential's parameters. Only specific potentials have invertible \(\lambda(\phi)\). These include the single exponential (\(\Gamma=1\)) potential discussed above, power-law potentials, and the particular functions listed in Table 10 of [39]. Examining Table 1 reveals the properties of the potential required to construct @EDE models. Specifically, we require that:

1. Fixed point B* exists, and corresponds to an EDE saddle point i.e., \(\lambda_{*}\) is such that \(\Omega_{\phi}\sim 10\%\).
2. Fixed point C* exists and is a stable late-time global attractor that will account for LDE i.e., \(\lambda_{*}\) is such that \(w_{\phi}\approx-1\).1

Footnote 1: One could replace fixed point C* by fixed point D, but this corresponds to the scalar behaving as a pure cosmological constant i.e., late dark energy is non-dynamical. This scenario is equivalent to the canonical EDE scenario.
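Once \(\Gamma(\lambda)\) is specified, the system (9)-(12) is straightforward to integrate numerically. A sketch, assuming scipy is available, with \(N=\ln a\) as the time variable and illustrative initial data:

```
import numpy as np
from scipy.integrate import solve_ivp

def rhs(N, y, Gamma):
    # Equations (9)-(12) with y = (Omega_r, Omega_k, Omega_v, lambda)
    # and N = ln(a); Gamma is a callable Gamma(lambda) closing the system.
    Or, Ok, Ov, lam = y
    Om = 1.0 - Or - Ok - Ov                  # Friedmann constraint
    s = 3.0 * Om + 4.0 * Or + 6.0 * Ok       # common factor 3(1 + w_eff)
    root = np.sqrt(6.0 * max(Ok, 0.0))
    return [Or * (s - 4.0),
            Ok * (s - 6.0) + lam * Ov * root,
            Ov * s - lam * Ov * root,
            -lam**2 * (Gamma(lam) - 1.0) * root]

# Example: single exponential (Gamma = 1), starting deep in radiation
# domination with a nearly frozen scalar.
sol = solve_ivp(rhs, [0.0, 20.0], [0.999, 1e-8, 1e-8, 2.5],
                args=(lambda lam: 1.0,), rtol=1e-8)
```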
The above requirements are incompatible with a single value of \(\lambda_{*}\) because having \(w_{\phi}\approx-1\) at point C* implies a small \(\lambda_{*}\) that would render point B* non-existent, so any viable potential must admit multiple distinct roots of equation (13). We surveyed the potentials known to have invertible \(\lambda(\phi)\) and identified two that satisfy the criteria above:

\[V(\phi)=V_{\alpha}e^{-\alpha\frac{\phi}{M_{\rm Pl}}}+V_{\beta}e^{-\beta\frac{\phi}{M_{\rm Pl}}}, \tag{14}\]
\[V(\phi)=V_{0}\left(\eta+e^{-\alpha\frac{\phi}{M_{\rm Pl}}}\right)^{-\beta}. \tag{15}\]

The first is the double exponential (14) that has been thoroughly studied in the context of quintessence LDE [40; 41; 42] and arises naturally in string theory [43]. The second potential (15) is contrived to correspond to a non-dynamical cosmological constant at late times [44; 45]. Wishing to focus on natural and well-motivated models, we confine our study to the double exponential potential. This potential shares some similarities with the _assisted quintessence_ EDE model studied by [46] in which multiple scalar fields, each with a single exponential potential, are investigated as a resolution of the Hubble tension.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \# & \(\mathbf{\Omega_{\phi}}\) & \(\mathbf{\Omega_{X}}\) & **Existence** & **Condition** & **Stability** & \(\mathbf{w_{\phi}}\) \\ \hline \hline O\({}_{\lambda}\) & 0 & 0 & Always & None & Saddle & 0 \\ \hline A\({}^{*}\) & 1 & 0 & & \(\lambda_{*}>-\sqrt{6}\) and \(\Gamma_{*}^{\prime}>0\) & Unstable node & \\ & & & \(\forall\lambda_{*}\) & \(\lambda_{*}<-\sqrt{6}\) & Saddle & 1 \\ & & & & \(\Gamma_{*}^{\prime}<0\) & Saddle & \\ \hline B\({}^{*}\) & \(\dfrac{3(1+w)}{\lambda_{*}^{2}}\) & \(1-\dfrac{3(1+w)}{\lambda_{*}^{2}}\) & \(\lambda_{*}\geq\sqrt{3(1+w)}\) & \(\lambda_{*}\Gamma_{*}^{\prime}>0\) & Stable node & \\ & & & & \(\lambda_{*}\Gamma_{*}^{\prime}<0\) & Saddle & \\ \hline C\({}^{*}\) & 1 & 0 & 1 & \(\sqrt{3(1+w)}\leq\lambda_{*}<\sqrt{6}\) & Saddle & \(-1+\dfrac{\lambda_{*}^{2}}{3}\) \\ & & & & & \(\lambda_{*}\Gamma_{*}<0\) & Saddle & \\ \hline D & 1 & 0 & Always & \(\dfrac{\lambda^{2}\left[\Gamma(\lambda)-1\right]|_{\lambda=0}>0}{\lambda^{2}\left[\Gamma(\lambda)-1\right]|_{\lambda=0}<0}\) & Stable node & \(-1\) \\ \hline \end{tabular} \end{table} Table 1: Fixed points of the general dynamical system when \(\lambda(\phi)\) is invertible. \(\Omega_{X}\) represents the density parameter of the dominant species — either matter or radiation — i.e., \(X=m,\,r\). Each distinct value of \(\lambda_{*}\) that solves equation (13) gives rise to a set of fixed points A*, B*, and C*. \(\Gamma_{*}^{\prime}=\Gamma^{\prime}(\lambda_{*})\). Note that point O\({}_{\lambda}\) exists independently of the potential, and point D has \(\lambda=0\) so corresponds to a cosmological constant. Point D always exists provided \(\lambda=0\) is accessible.

### Background dynamics

The double exponential potential admits two roots of equation (13): \(\lambda_{*}=\alpha\) and \(\lambda_{*}=\beta\). We make the arbitrary choice to impose \(\alpha>\beta\), in which case we can identify the EDE saddle point B* with \(\lambda_{*}=\alpha\) and the LDE attractor C* with \(\lambda_{*}=\beta\). We can gain insight into the cosmology of this potential by considering the limit \(V_{\alpha}\gg V_{\beta}\), motivated by the hierarchy between the energy scales of EDE and LDE.
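The pair of roots quoted above can be checked symbolically: substituting the double exponential (14) into definitions (7) and (8) gives \(\lambda^{2}(\Gamma-1)=(\alpha-\lambda)(\lambda-\beta)\), so equation (13) vanishes exactly at \(\lambda_{*}=\alpha\) and \(\lambda_{*}=\beta\). A short sketch of this check, assuming sympy and working in \(M_{\rm Pl}=1\) units:

```
import sympy as sp

phi, alpha, beta, Va, Vb = sp.symbols('phi alpha beta V_alpha V_beta',
                                      positive=True)
V = Va * sp.exp(-alpha * phi) + Vb * sp.exp(-beta * phi)   # eq. (14), M_Pl = 1
lam = -sp.diff(V, phi) / V                                 # eq. (7)
Gamma = V * sp.diff(V, phi, 2) / sp.diff(V, phi)**2        # eq. (8)

# Should print 0: lambda^2 (Gamma - 1) = (alpha - lambda)(lambda - beta),
# so eq. (13) has exactly the two roots lambda_* = alpha and beta.
print(sp.simplify(lam**2 * (Gamma - 1) - (alpha - lam) * (lam - beta)))
```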
We then expect that this region of parameter space corresponds to EDE driven by the \(\alpha\)-exponential and LDE driven by the \(\beta\)-exponential. In this limit, the potential can be approximated as

\[V(\phi)\approx\begin{cases}V_{\alpha}e^{-\alpha\frac{\phi}{M_{\rm Pl}}},&\text{at early times},\\ V_{\beta}e^{-\beta\frac{\phi}{M_{\rm Pl}}},&\text{at late times},\end{cases} \tag{16}\]

assuming appropriate initial conditions. To a good approximation, this implies that

\[\lambda_{*}\approx\begin{cases}\alpha,&\text{at early times},\\ \beta,&\text{at late times}.\end{cases} \tag{17}\]

This simplifies the analysis because the \(\lambda_{*}=\alpha\) saddle -- which corresponds to EDE -- dictates the cosmology of the early universe, and we shall hence refer to it as the _EDE saddle_; and the \(\lambda_{*}=\beta\) attractor -- which corresponds to LDE -- dictates the late universe cosmology, so we refer to this as the _LDE attractor_. This allows us to analyze the late- and early-time cosmology independently by examining the phase space of each exponential separately. The fixed points for a single exponential are given in Table 2.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \# & \(\mathbf{\Omega_{m}}\) & \(\mathbf{\Omega_{r}}\) & \(\mathbf{\Omega_{\phi}}\) & **Existence** & **Condition** & **Stability** & \(\mathbf{w_{\phi}}\) \\ \hline \hline A & \(0\) & \(0\) & \(1\) & \(\forall\,\lambda_{*}\) & - & Saddle & \(-1\) \\ \hline B & \(0\) & \(0\) & \(1\) & \(\forall\,\lambda_{*}\) & \(\lambda_{*}<\sqrt{6}\) & Unstable node & \(1\) \\ & & & & & \(\lambda_{*}>\sqrt{6}\) & Saddle & \\ \hline C & \(0\) & \(1\) & \(0\) & \(\forall\,\lambda_{*}\) & - & Saddle & \(0\) \\ \hline D & \(1\) & \(0\) & \(0\) & \(\forall\,\lambda_{*}\) & - & Saddle & \(0\) \\ \hline E & \(0\) & \(0\) & \(1\) & \(\lambda_{*}<\sqrt{6}\) & \(0<\lambda_{*}<2\) & Saddle & \(-1+\frac{\lambda_{*}^{2}}{3}\) \\ & & & & & \(2<\lambda_{*}<\sqrt{6}\) & Saddle & \\ \hline F & \(1-\frac{3}{\lambda_{*}^{2}}\) & \(0\) & \(\frac{3}{\lambda_{*}^{2}}\) & \(\lambda_{*}>\sqrt{3}\) & \(\sqrt{3}<\lambda_{*}<\sqrt{24/7}\) & Stable node & \(0\) \\ & & & & & \(\lambda_{*}>\sqrt{24/7}\) & Stable spiral & \\ \hline G & \(0\) & \(1-\frac{4}{\lambda_{*}^{2}}\) & \(\frac{4}{\lambda_{*}^{2}}\) & \(\lambda_{*}>2\) & \(2<\lambda_{*}<\sqrt{64/15}\) & Saddle & \(1/3\) \\ & & & & & \(\lambda_{*}>\sqrt{64/15}\) & Saddle & \\ \hline \end{tabular} \end{table} Table 2: Fixed points and the stability of the phase space of a single exponential quintessence potential. The points relevant to our model are the EDE saddle point G, the matter-domination stable spiral F, and the scalar field-dominated LDE attractor E. Note that points F and G correspond to point B* in Table 1 with \(w\) fixed to the dominant species.

Beginning with early times when \(\lambda_{*}=\alpha\), during the radiation epoch we can identify point G in Table 2 with EDE supplying a fractional energy density2

Footnote 2: Technically, the expression in equation (18) corresponds to point B* in Table 1. Point G in Table 2 is found by substituting \(w=1/3\) corresponding to the radiation era. We keep equation (18) more general for the purposes of a later discussion.

\[\Omega_{\phi}=\frac{3(1+w)}{\alpha^{2}}\simeq f_{\rm ede}\,. \tag{18}\]

Previous EDE analyses suggest that the \(H_{0}\) tension will be resolved if \(f_{\rm ede}\sim 0.1\) [3], implying that \(\alpha\approx\sqrt{40}\). This
At late times, when \(\lambda_{*}=\beta\), the system will flow towards point \(E\) or \(F\) depending on the value of \(\beta\). For \(\beta<\sqrt{3}\), the global attractor is point \(E\), which corresponds to a scalar-dominated universe with equation of state \[w_{\phi}=-1+\frac{\beta^{2}}{3}\,. \tag{19}\] Given that current observations indicate that the equation of state (EoS) of dark energy is \(w\approx-1\) to high accuracy [47], we expect small values of \(\beta\ll 1\) to correspond to LDE. The choices above imply that point F corresponds to a saddle during the matter epoch where the scalar accounts for a fraction of the matter density; we refer to this point as the _dark matter saddle_. We can verify the theoretical predictions above via a qualitative exploration of the parameter space. This exploration is also helpful for informing our subsequent data analysis. Figs. 1, 2, and 3 show the effects of varying \(\alpha\), \(\beta\), and \(V_{\alpha}\). We only show the most impacted quantity for each parameter variation: \(\alpha\) and \(V_{\alpha}\) predominantly effect \(\Omega_{\phi}\) while \(\beta\) sets the equation of state of LDE. Beginning with Fig. 1, one can see that larger values of \(\alpha\) reduce the amplitude of the EDE injection, consistent with the predictions of equation (18). As for the injection redshift \(z_{c}\), varying \(\alpha\) changes the effective mass \(m\) of the field given by \[m(\phi)=\alpha\frac{\sqrt{V_{\alpha}}}{M_{\rm Pl}}e^{-\frac{\alpha\phi}{2M_{ \rm Pl}}}\,, \tag{20}\] and the injection occurs when \(m(\phi(z))=H(z)\). For injections in the radiation era, \(z_{c}\) can be approximated as \[z_{c}\sim\left(\frac{V_{\alpha}\alpha^{2}e^{-\alpha\frac{\phi}{M_{\rm Pl}}}}{ \Omega_{r}H_{0}^{2}M_{\rm Pl}^{2}}\right)^{1/4}-1. \tag{21}\] Hence, as \(\alpha\) increases, the injection is pushed to higher redshifts, as can be seen in Fig. 1. Turning to Fig. 2, which demonstrates the effect of varying \(\beta\), in accordance with our predictions above there is no effect at high redshifts but the equation of state of dark energy is increased for larger values of \(\beta\) as per equation (19). Finally, Fig. 3 reveals that \(V_{\alpha}\) exhibits significant degeneracy with \(\alpha\). A larger \(V_{\alpha}\) increases the field's mass according to equation (20) and directly shifts the injection towards higher redshifts. It also changes the amplitude of the energy injection indirectly because this depends on the background EoS \(w(z)\) as shown in equation (18). Since \(w\) increases with redshift, so does the injection amplitude. In light of the above, it is possible that some amount of tuning of the parameters/initial conditions may be needed to resolve the \(H_{0}\) tension, although less than canonical EDE models.3 Whether or not this is tantamount to a fine-tuning depends upon the nature of the potential and whether these parameter choices are radiatively-stable. In the case of our fiducial model, the double exponential arises in string theory [43], so one expects the free parameters to be fixed in the UV. The question is then one of finding string theory realizations that fix the parameters to values that can simultaneously resolve the \(H_{0}\) tension, and explain LDE. Footnote 3: The tuning in \(\beta\) is the usual tuning associated with quintessence models, and is not a new feature of @EDE. 
To summarize our scenario, we expect that the universe will approach an EDE saddle point in the radiation epoch, transition to a matter saddle with some fraction of dark matter composed of the (non-clustering) scalar, and will ultimately settle into the LDE attractor. This progression is shown in Fig. 4, where we plot the evolution of the fractional energy densities in \(\Lambda\)CDM and @EDE, the equation of state \(w_{\phi}\) of @EDE, and the modification to the expansion rate \(\Delta H/H\). The introduction of @EDE increases \(H(z)\) at early times with a localised injection, then settles into a constant fractional increase, ending with \(H(z)\) smaller than in \(\Lambda\)CDM because the @EDE equation of state \(w_{\phi}>-1\). Note that \(\Lambda\)CDM parameters are not fixed across cosmologies in Fig. 4 because such cosmologies are already shown by Figs. 1-3.

Figure 1: Background evolution of \(\Omega_{\phi}\) as a function of redshift when \(\alpha\) is varied. We fixed \(\beta=0.01\), \(\log_{10}V_{\alpha}=-7.8\). The \(\Lambda\)CDM parameters were fixed to the \(\Lambda\)CDM best fit given in Table 4. Increasing \(\alpha\) reduces the fractional EDE injection and shifts this injection to earlier times.

We now confront our fiducial @EDE model with data. Ultimately, we find that the scenario described
We do not expect our results to change with updated data as the CMB strongly constrains and excludes the fiducial @EDE scenario explored here. 4. **Cepheid distance ladder \(H_{0}\)**: We use the SH0ES direct measurement of Figure 3: The evolution of \(\Omega_{\phi}\) as a function of redshift when varying \(V_{\alpha}\). Here, we fixed \(\alpha=\sqrt{40}\) and \(\beta=0.01\). The \(\Lambda\)CDM parameters were fixed as in Fig. 1. Increasing \(V_{\alpha}\) leads to a larger and earlier injection. Although increasing \(\alpha\) has a similar effect of shifting the injection to higher redshifts, it decreases the amplitude of injection. Figure 2: Evolution of the EoS \(w_{\phi}\) of \(\phi\) as a function of redshift when varying \(\beta\). Same as Fig. 1, we fixed \(\alpha=\sqrt{40}\) and \(\log_{10}V_{\alpha}=-7.8\). The case \(\beta=0\) corresponds to a cosmological constant driving LDE. Note that \(\beta\) only becomes relevant in the LDE era around \(z\sim O(10)\). \((74.03\pm 1.42)\)km/s/Mpc [58] as a best-case test scenario to check whether our scenario resolves the Hubble tension and to mitigate prior volume projection effects in EDE cosmologies [59; 25]. A new, more precise, measurement [5] of \(H_{0}=(73.29\pm 0.90)\) km/s/Mpc was released as we prepared this manuscript, but updating the \(H_{0}\) likelihood we use would have a minimal impact on our results because CMB data dominate our constraints. ### Parameter space To explore @EDE cosmology, we varied the standard \(\Lambda\)CDM parameters: the physical densities of baryons \(\omega_{b}\) and cold dark matter \(\omega_{c}\), the amplitude \(A_{s}\) of the primordial power spectrum as \(\ln 10^{10}A_{s}\) and its tilt \(n_{s}\), the optical depth \(\tau\) due to reionization, and the expansion rate \(H_{0}\) of the Universe today. We additionally varied the @EDE parameters \(\alpha\), \(\beta\), and \(V_{\alpha}\) that control the @EDE scalar potential. We fixed the remaining @EDE parameters as follows. We fixed the initial condition \(\phi_{i}\) by exploiting a symmetry of the potential. Specifically, the action is invariant under \(\phi\rightarrow\phi+\phi_{0}\), \(V_{k}\to V_{k}e^{k\frac{\phi_{0}}{M_{\rm Pl}}}\) with \(k=\alpha\), \(\beta\) and where \(\phi_{0}\) is a constant. This allows us to fix \(\phi_{i}\) without loss of generality. We make the arbitrary choice to fix \(\phi_{i}=-4.583M_{\rm Pl}\). We fixed \(\dot{\phi}_{i}\) using attractor initial conditions. Since the field starts frozen in time, we determined an attractor initial condition for \(\dot{\phi_{i}}\) in CLASS by setting \(\ddot{\phi}=0\) in the equation of motion (3) i.e., we assumed that the field is only slowly-rolling, and approximated the potential as \(V(\phi)\approx V_{\alpha}e^{-\alpha\frac{\phi}{M_{\rm Pl}}}\) since the \(\beta\)-exponential is only relevant at late times. With these approximations, we then have \(\dot{\phi}_{i}=\frac{\alpha V_{i}}{3M_{\rm H}}e^{-\alpha\frac{\phi_{i}}{3M_{ \rm Pl}}}\) as our initial condition. Finally, at late times, the system is determined by the \(\beta\)-exponential and approaches fixed point E in Table 2 with \(\lambda_{*}=\beta\). The scalar acts as LDE with an EoS given by equation (19). To close the universe, we therefore modified CLASS to shoot for \(V_{\beta}\) using \(V_{\beta}=3H_{0}^{2}M_{\rm Pl}^{2}\Omega_{\phi}\) for a given set of other parameters. 
Fixing \(\phi_{i}\), \(\dot{\phi}_{i}\), and \(V_{\beta}\) reduced the dimensionality of the @EDE cosmology parameter space from 12 dimensions to 9 dimensions (the six \(\Lambda\)CDM parameters along with \(\alpha\), \(\beta\), and \(V_{\alpha}\)). We used uninformative priors for the \(\Lambda\)CDM parameters and the priors given in Table 3 for the @EDE parameters.

Figure 4: Evolution of important cosmological parameters in @EDE (solid curves) relative to the \(\Lambda\)CDM best fit (dashed). The @EDE cosmology shown is the best-fit from the _narrow priors_ exploration in Sec. IV.1 that excludes \(\Lambda\)CDM from the prior, with best fits given in Table 4. When \(\Lambda\)CDM is included in the allowed @EDE parameter space, the resultant best fit is indistinguishable from \(\Lambda\)CDM. _Top_: Fractional energy densities \(\Omega_{X}\) in matter, radiation, and the @EDE scalar relative to the total energy density of the Universe. _Middle_: The equation of state \(w_{\phi}\) of @EDE, which begins frozen with \(w_{\phi}=-1\), becomes dynamical around \(z\sim 10^{5}\) acting as EDE, redshifts like matter during the matter era with \(w_{\phi}\simeq 0\), and finally ends with scalar-field domination with \(w_{\phi}\) close to \(-1\) today. _Bottom_: The fractional change in \(H(z)\) induced by @EDE relative to \(\Lambda\)CDM.

\begin{table} \begin{tabular}{|c|c|c|} \hline Parameter & Wide Prior & Narrow Prior \\ \hline \hline \(\alpha\) & \([0,17]\) & \([8.5,15]\) \\ \(\log_{10}V_{\alpha}\) & \([-50,-6]\) & \([-15,-9]\) \\ \(\beta\) & \([0,\sqrt{3}]\) & \([0,\sqrt{3}]\) \\ \hline \(f_{ede}\) & \([0,0.268]\) & \([0.033,0.100]\) \\ \(\log_{10}z_{c}\) & \([3,9.82]\) & \([3.13,8.06]\) \\ \hline \end{tabular} \end{table} Table 3: Priors for @EDE parameters that include \(\Lambda\)CDM (wide priors) and exclude it (narrow priors).

The _wide @EDE prior_ range explores \(0\leq f_{\rm ede}\leq 0.15\) and \(z_{\rm c}\geq 1100\) in terms of the usual EDE parameters: the maximal fractional energy density \(f_{\rm ede}\) in EDE that occurs at redshift \(z_{c}\). Ultimately, we found that the best-fitting @EDE model is nearly identical to \(\Lambda\)CDM, and that our expected scenario described in the previous section is excluded with high significance. To help understand why this is the case, we performed a second analysis with _narrow priors_ centered on the theoretical values derived in section (II.2). This restricted prior range focuses on \(10^{3}\leq z_{c}\leq 10^{8}\) and excludes \(\Lambda\)CDM by forcing an @EDE energy injection of \(0.033\leq f_{\rm ede}\leq 0.1\). These cosmologies, along with \(\Lambda\)CDM, are explored in the following section.

## IV Results

We begin by exploring an @EDE scenario with wide priors that includes \(\Lambda\)CDM as a nested model. In this prior range, \(\Lambda\)CDM is recovered when \(\alpha=0\) and \(V_{\alpha}=0\) such that there is no EDE phase, and \(\beta=0\), which implies a cosmological-constant LDE. This allows for a direct comparison of the goodness-of-fits, constraints on \(H_{0}\), and any preference for @EDE over \(\Lambda\)CDM. This model is labelled _@EDE: wide priors_ in the following. Despite theoretical expectations, we find that @EDE is not preferred over \(\Lambda\)CDM, and therefore does not offer a resolution to the Hubble tension. Data prefer a scenario that closely, within \(1\sigma\), mimics \(\Lambda\)CDM, effectively excluding any early injection of energy density that may resemble EDEs, as shown in Fig. 5.
Effectively, @EDE remains frozen throughout cosmic history, acting like a cosmological constant with \(w_{\phi}=-1\) and with the scalar potential dominated by \(V_{\beta}e^{-\beta\frac{\phi}{M_{\rm Pl}}}\). This cosmology is indistinguishable from \(\Lambda\)CDM in the top and bottom panels of Fig. 4. Its best-fitting parameters are given in Table 4. This strong preference for \(\Lambda\)CDM is not due to prior volume effects of the model as can be seen from Table 4, wherein the best-fits are well within \(1\sigma\) of the means, demonstrating that the peak of the likelihood coincides well with the peak of the samples at the mean. As @EDE posteriors are completely consistent with \(\Lambda\)CDM posteriors, it follows that our fiducial @EDE model does not resolve the Hubble tension, with an insufficient increase of \(\Delta H_{0}=0.15\) km/s/Mpc. Moreover, despite three additional parameters in @EDE, we find a negligible improvement in goodness-of-fit with \(\Delta\chi^{2}=-0.28\). The expectation for three additional parameters would be a minimal improvement of \(\Delta\chi^{2}=3\), which further underscores the data preference for \(\Lambda\)CDM over @EDE. To further understand the shortcomings of our fiducial @EDE model, and to gain insight into building a model that can better fit the data, resolve the Hubble tension, and link early and late dark energy, we next explore forcing an @EDE injection through a prior-restricted @EDE model that excludes \(\Lambda\)CDM, labeled as _@EDE: narrow priors_ in what follows. ### Excluding \(\Lambda\)CDM from the parameter space The narrow priors constrain \(0.033\leq f_{\rm ede}\leq 0.1\) and \(10^{3}<z_{c}\leq 10^{8}\), forcing a minimum @EDE energy injection of \(3.3\%\). Note however that these phenomenological parameters are not explored via uniform distributions. We employ flat priors on the model parameters \(\alpha\) and \(V_{\alpha}\), such that the smallest injection occurs at \(z_{c}\simeq 3.6\times 10^{6}\), while the largest \(f_{\rm ede}\) occurs at \(z_{c}\simeq 5\times 10^{4}\). In this restricted scenario, the data prefer \(\alpha=15\) and \(\log_{10}V_{\alpha}=-15\), both at the edges of their respective narrow-prior ranges as seen from comparing with Table 3, resulting in \(z_{c}\simeq 3.7\times 10^{6}\) and \(f_{\rm ede}\simeq 3.4\%\). This best fit reduces the impact of @EDE on observables via minimizing the energy injection and consequently, pushes @EDE dynamics into the redshift range that data are less sensitive to. The Planck CMB is sensitive to new physics injections up to a maximum redshift of \(z\lesssim 10^{6}\)[20; 33; 60]. For earlier injections, @EDE effectively acts like an additional matter-tracking component, similar to tracking early dark energies [61; 62; 63] and will map onto the same upper-limit constraints as tracker early dark energies. As we force a non-zero @EDE energy injection, \(\Lambda\)CDM parameters are forced to compensate for its impact, inducing the parameter shifts shown in Fig. 5 and Table 4. These shifts can be understood within the context of the CMB as follows. First, at \(z\sim 10^{7}\), the scalar field begins to roll and enters an EDE phase, contributing \(f_{\rm ede}\simeq 3.39\%\) to the energy budget of the Universe. Usually, after this early-universe peak in \(\Omega_{\phi}\), EDEs dilute away and have no further impact on cosmology. 
However, @EDE loses roughly half its fractional energy density, then approaches the matter attractor, diluting like matter and contributing \(1.33\%<\Omega_{\phi}<1.37\%\) to the energy budget of the Universe. This contribution persists until the dark energy attractor takes over and @EDE contributes the LDE that dominates the Universe today. If the \(\Lambda\)CDM parameters are fixed, the addition of @EDE decreases the size of the sound horizon \(r_{s}\), and enhances the early integrated Sachs-Wolfe (ISW) effect, boosting power in the first CMB peak. In addition, the lingering component of @EDE in the late universe provides an additional contribution to the late ISW effect, and reduces the angular diameter distance \(D_{A}\) to the CMB. The decrease in \(D_{A}\) cannot compensate for the decrease in \(r_{s}\), and the precisely measured angular size \(\theta_{*}\) of the sound horizon will decrease. When the \(\Lambda\)CDM parameters are allowed to vary, they react to this additional component and compensate for the effects above through shifts in \(H_{0}\), \(\omega_{b}\) and \(\omega_{c}\) as follows. A substantial increase in \(\omega_{c}\) and slight increase in \(\omega_{b}\) suppresses the TT power and counteracts the enhanced early and late ISW effects. The increase in \(\omega_{c}\) also impacts both \(r_{s}\) and \(D_{A}\), such that \(\theta_{*}\) would increase to larger than the observed value. The increase in \(\omega_{b}\) and slight decrease in \(H_{0}\) then try to offset this shift in \(\theta_{*}\) and the CMB peak locations. The right panel of Fig. 6 shows these competing effects as the \(\Lambda\)CDM parameters adapt to absorb the impact of @EDE. These parameter shifts are ultimately unsuccessful, leading to a significantly worsened fit to CMB data with \(\Delta\chi^{2}=+38.71\) (and a total \(\Delta\chi^{2}=+44.05\)) relative to \(\Lambda\)CDM. The fit to BAO data is also worsened with \(\Delta\chi^{2}=+2.75\). Observations of BAO are in excellent agreement with CMB data interpreted within the \(\Lambda\)CDM paradigm, with agreement between \(r_{s}\), \(D_{A}\) and \(H(z)\) at very different \(z\) [56]. The introduction of @EDE spoils this consistency and increases the BAO \(\chi^{2}\).

Figure 5: Posteriors for various cosmologies when fit to Planck2018+BAO+SNe+SH0ES. We explore @EDE with wide priors that include \(\Lambda\)CDM in purple, @EDE with narrow priors that exclude \(\Lambda\)CDM in red, and \(\Lambda\)CDM in orange. The horizontal and vertical lines mark the best-fit points of the wide prior (densely dashed) and narrow prior (dash-dotted) @EDE cosmologies.

Ultimately, two competing effects keep @EDE from being a viable solution to the Hubble tension. Fitting the height of the first CMB peak requires increasing \(\omega_{c}\) to suppress the additional early ISW effect from the early-universe injection of @EDE [65, 64, 3]. But fitting the location of the same peak requires decreasing \(\omega_{c}\) to maintain a good fit to \(\theta_{*}\) via shifts in \(D_{A}\). As both cannot be accommodated simultaneously, @EDE is not preferred by data and posteriors converge to a \(\Lambda\)CDM-like universe.

## V Discussion and Conclusions

In this paper we have proposed a unified framework for explaining early and late dark energy with a single scalar field -- attractive early dark energy. This framework has the potential to simultaneously resolve the Hubble tension and drive the late-time acceleration of the cosmic expansion without the need to fine-tune the model parameters.
Instead, the coincidence between the onset of EDE and matter-radiation equality is explained by a saddle point of a dynamical system in the radiation epoch that attracts solutions independent of their initial conditions. Similarly, LDE arises naturally as a late-time global attractor of the system. We explored a fiducial @EDE model corresponding to a double exponential potential, which arises in string theory. Unfortunately, this specific model was not preferred by the data because the dynamical system also possesses a saddle point during the matter era where the scalar contributes a fraction of non-clustering dark matter to the Universe's energy budget. This reduces the angular diameter distance \(D_{A}\) to the CMB. In addition, as is the case with all EDE models, the early ISW effect is also enhanced. The enhancement in the early ISW effect must be compensated by an increase in \(\omega_{c}\), but the shift in \(D_{A}\) must be compensated by a decrease in \(\omega_{c}\). Since there is no single value of \(\omega_{c}\) that can compensate both effects simultaneously, the model is a poor fit to the data. It is prudent to discuss the potential for constructing viable @EDE models that can help to restore cosmological concordance.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Parameter & \(\Lambda\)CDM & @EDE: wide priors & @EDE: narrow priors \\ \hline \hline \(\log(10^{10}A_{s})\) & \(3.052(3.045)\pm 0.015\) & \(3.051(3.052)^{+0.013}_{-0.016}\) & \(3.083(3.085)\pm 0.017\) \\ \hline \(n_{s}\) & \(0.9689(0.9687)^{+0.0034}_{-0.0039}\) & \(0.9691(0.9691)\pm 0.0036\) & \(0.9691(0.9700)\pm 0.0036\) \\ \hline \(\omega_{\rm b}\) & \(0.02251(0.02249)\pm 0.00013\) & \(0.02253(0.02251)\pm 0.00013\) & \(0.02262(0.02262)\pm 0.00013\) \\ \hline \(\omega_{c}\) & \(0.11816(0.11840)\pm 0.00088\) & \(0.11808(0.11807)\pm 0.00089\) & \(0.12072(0.12043)\pm 0.00091\) \\ \hline \(\tau_{\rm reio}\) & \(0.0599(0.0593)\pm 0.0076\) & \(0.0597(0.0599)^{+0.0066}_{-0.0080}\) & \(0.0751(0.0769)\pm 0.0088\) \\ \hline \(D_{A}\) & \((12.76)\) & \((12.76)\) & \((12.62)\) \\ \hline \(r_{s*}\) & \((144.85)\) & \((144.92)\) & \((143.12)\) \\ \hline \(100\theta_{*}\) & \((1.042009)\) & \((1.042049)\) & \((1.040470)\) \\ \hline \(H_{0}\) & \(68.21(68.09)\pm 0.39\) & \(68.13(68.24)\pm 0.41\) & \(67.53(67.77)\pm 0.41\) \\ \hline \(\sigma_{8}\) & \(0.8084(0.8084)\pm 0.0061\) & \(0.8069(0.8080)^{+0.0058}_{-0.0066}\) & \(0.7729(0.7758)\pm 0.0069\) \\ \hline \(S_{8}\) & \(0.813(0.815)\pm 0.010\) & \(0.813(0.812)\pm 0.010\) & \(0.793(0.792)\pm 0.010\) \\ \hline \hline \(\alpha\) & - & \(<8.53(9.82)\) & \(>14.8(15)\) \\ \hline \(\beta\) & - & \(<0.213(0)\) & \(<0.0883(1.69\times 10^{-6})\) \\ \hline \(\log_{10}V_{\alpha}\) & - & \(<-30.9(-38.5)\) & \((-15)\) \\ \hline \(f_{\rm ede}\) & - & - & \(0.03480(0.03388)^{+0.00017}_{-0.00092}\) \\ \hline \(\log_{10}z_{c}\) & - & - & \(7.20(6.56)\pm 0.46\) \\ \hline \hline \(\chi^{2}_{CMB}(\Delta)\) & \(1013.99\) & \(1014.64(+0.65)\) & \(1052.70(+38.71)\) \\ \(\chi^{2}_{BAO}(\Delta)\) & \(5.23\) & \(5.24(+0.01)\) & \(7.98(+2.75)\) \\ \(\chi^{2}_{H_{0}}(\Delta)\) & \(15.45\) & \(14.56(-0.89)\) & \(17.45(+2.00)\) \\ \(\chi^{2}_{Pantheon}(\Delta)\) & \(1034.82\) & \(1034.77(-0.05)\) & \(1035.41(+0.59)\) \\ \hline \hline \(\chi^{2}(\Delta)\) & \(2069.49\) & \(2069.21(-0.28)\) & \(2113.54(+44.05)\) \\ \hline \end{tabular} \end{table} Table 4: Marginalized posteriors for \(\Lambda\)CDM and two @EDE cosmologies with different priors, showing mean (best-fit) \(\pm 1\sigma\). The @EDE scenario with wide priors includes \(\Lambda\)CDM as a nested model, while the narrow priors exclude \(\Lambda\)CDM. The wide-priors model does not list \(f_{\rm ede}\) and \(z_{c}\) as there is no EDE phase in that scenario, just LDE. We also show the various \(\chi^{2}\) (\(\Delta\chi^{2}\)) broken down by data set and their differences relative to \(\Lambda\)CDM.
Within the framework analyzed in the present work -- a four-dimensional phase space for the dynamical system -- we identified a second scalar potential that has the qualitative features required for the scalar to function as @EDE. The near-identical background evolution and fixed points of this model imply that it is unlikely to be a better fit to the data. We therefore discuss potential extensions of the framework. One possibility would be to consider potentials where the phase space is higher-dimensional. This would enlarge the number of potential models, and introduce additional free parameters that would decouple the properties of the EDE and dark matter saddles. One could then envision potentials where the amount of scalar dark matter during the matter epoch is negligible. An alternative is to analyze non-minimal scalar theories that are known to form dynamical systems such as coupled quintessence [66], where the scalar is conformally coupled to dark matter. The fixed points of these theories are similar to those found in the uncoupled case, but the equation of state for the scalar at each point depends on the strength of the dark matter coupling. It may be possible to find models where the equation of state at the dark matter saddle is \(w_{\phi}>0\), so that the scalar redshifts away during the matter era, diminishing the late-universe contribution that caused our fiducial model to be a poor fit to the data. A similar effect could be achieved by studying K-essence theories, where the scalar has a non-canonical kinetic term (reference [39] contains a comprehensive study of these models), or disformal quintessence [67; 68], where the scalar is derivatively coupled to dark matter. The derivative interactions induce a non-zero sound speed for the scalar, so that it would cluster on small scales, reducing the late ISW enhancement. As remarked above, the lack of a complete concordance model that is capable of explaining all of our observations is limiting our ability to interpret data from cosmological missions and astrophysical surveys. This will remain the case as the current generation of missions conclude and the next generation begin to see first light. Discovering the origin of the Hubble tension is therefore paramount. If new physics underlies the tension, then any fundamental description of said physics should be consistent with the principles of quantum field theory, i.e., the model should be a natural effective field theory free of fine-tunings and coincidence problems. In addition, it is natural that this new physics be connected with the other phenomena we observe in the universe, e.g., late dark energy. This work has taken steps towards this goal by proposing a framework that ameliorates the fine-tuning and coincidence problems associated with early dark energy, and unifies both early and late dark energy into a single phenomenon driven by one new scalar degree of freedom.
Ultimately, our fiducial attractive early dark energy proposal did not provide a good fit to the data, but our study has provided novel lessons for future @EDE model-building efforts aimed at achieving this goal.

Figure 6: CMB residuals for various cosmologies relative to the \(\Lambda\)CDM best fit as a fraction of cosmic variance. The light gray data points are binned measurements from Planck 2018. The solid dark grey vertical lines mark peak locations in all spectra. _Left:_ The dash-dotted blue curve shows the @EDE best-fit for wide priors that include \(\Lambda\)CDM, while the solid purple curve excludes \(\Lambda\)CDM with narrow priors for @EDE. The dashed orange curve is a \(\Lambda\)CDM cosmology but with parameters set to the best fit found for the @EDE narrow prior scenario. The dotted red curve on the other hand is an @EDE cosmology, with \(\Lambda\)CDM parameters set to the \(\Lambda\)CDM best fit and @EDE parameters at the same narrow-prior best fit. With these two curves, we show how \(\Lambda\)CDM and @EDE parameters trade off, and how they compensate for each other or, rather, fail to do so in an attempt to better fit data. Effectively, the dashed orange and dotted red residual curves sum up to the solid purple. _Right:_ We further break down the dashed orange curve on the left figure (shown here in solid orange) into its component shifts, shifting one \(\Lambda\)CDM parameter at a time away from its \(\Lambda\)CDM best fit. The greatest shifts are due to \(\omega_{c}\) (an overall suppression of the power spectrum and a shift in the peaks to larger angular scales) and \(H_{0}\) (a compensating shift in the peaks to smaller angular scales).

###### Acknowledgements.

We are grateful for discussions with Eric Baxter, Jason Kumar, Vivian Poulin, David Rubin, and Istvan Szapudi. The technical support and advanced computing resources from University of Hawai'i Information Technology Services - Cyberinfrastructure, funded in part by the National Science Foundation CC awards #2201428 and #2232862, are gratefully acknowledged. TK acknowledges support from NASA ATP Grant 80NSSC18K0694 and funds provided by the Center for Particle Cosmology at the University of Pennsylvania.
2309.04727
Optimal transport with constraints: from mirror descent to classical mechanics
Finding optimal trajectories for multiple traffic demands in a congested network is a challenging task. Optimal transport theory is a principled approach that has been used successfully to study various transportation problems. Its usage is limited by the lack of principled and flexible ways to incorporate realistic constraints. We propose a principled physics-based approach to impose constraints flexibly in such optimal transport problems. Constraints are included in mirror descent dynamics using the principle of D'Alembert-Lagrange from classical mechanics. This leads to a sparse, local and linear approximation of the feasible set leading in many cases to closed-form updates.
Abdullahi Adinoyi Ibrahim, Michael Muehlebach, Caterina De Bacco
2023-09-09T09:13:11Z
http://arxiv.org/abs/2309.04727v2
# Optimal transport with constraints: from mirror descent to classical mechanics ###### Abstract Finding optimal trajectories for multiple traffic demands in a congested network is a challenging task. Optimal transport theory is a principled approach that has been used successfully to study various transportation problems. Its usage is limited by the lack of principled and flexible ways to incorporate realistic constraints. We propose a principled physics-based approach to impose constraints flexibly in such optimal transport problems. Constraints are included in mirror descent dynamics using the principle of D'Alembert-Lagrange from classical mechanics. This leads to a sparse, local and linear approximation of the feasible set leading in many cases to closed-form updates. ## I Introduction Optimal transport in networks has important applications in different disciplines, in particular in urban transportation networks [1]. Congestion not only increases travel time for users and decreases productivity, but it also drives air pollution. Reducing congestion and making transportation more efficient are also core objectives for EU policies, as highlighted throughout the EU Transport White Paper and the Strategic Plan 2020-2024 [2; 3]. The design of efficient transportation networks is a complex task that requires a multifaceted solution. One of these facets is the problem of finding optimal routes for passengers. This is a well-studied problem and a variety of approaches have been suggested, such as shortest-path minimization [4; 5] and assignment strategies [6]. Other approaches that are based on adaptation dynamics [7; 8; 9] have also been proposed to model biological distribution networks. However, these approaches fall short of describing realistic scenarios where transport flows are limited by constraints, requiring a more general theory of optimal transport (OT). OT has been used to model and optimize various aspects of transport networks such as network design [7; 9; 10; 11] and traffic flows [12; 13; 14; 15; 16]. These approaches guarantee a principled and computationally efficient way of solving transportation problems on networks. In standard OT methods, beyond a few obvious constraints (e.g. conservation of mass), the amount of flow passing through an edge of the transportation network is unconstrained. As a result, traffic tends to concentrate on path trajectories that may be structurally unfeasible, which severely limits the applicability of OT models in real-world situations, where, for example, roads have a limited capacity of vehicles traveling at the same time. This letter proposes an approach to avoid this crucial flaw of OT models by imposing constraints. Applying this approach significantly impacts the overall network topology induced by the optimal flows, as the resulting path trajectories have different path lengths and traffic distribution than those obtained from unconstrained scenarios. Our approach not only has a solid foundation via the principle of D'Alembert-Lagrange from classical mechanics [17], but also leads to algorithms that are computationally efficient and have a low implementation complexity. The key idea is to consider mirror descent dynamics of an OT problem, where constraints are included on a velocity level. This leads to a sparse, local and linear approximation of the feasible set which, in many cases, allows for a closed-form update rule, even in situations where the feasible set is nonconvex.
_The model._ In analogy with electrical grids or hydraulic networks, we model mass flow on a transportation network using conductivities and flows on network edges. We consider a multi-commodity scenario [13; 18], where mass of different types \(i=1,\ldots,M\) can move along different trajectories. The flow \(F_{e}^{i}\) of mass of type \(i\) along an edge \(e=(u,v)\) can be described by \(F_{e}^{i}=\mu_{e}(p_{u}^{i}-p_{v}^{i})/\ell_{e}\), where \(p_{u}^{i}\) is a pressure potential at node \(u\) for passengers of type \(i\), \(\ell_{e}\) is the length of the edge \(e\) and \(\mu_{e}\) its conductivity. This latter quantity can be seen as proportional to the size of an edge, and is the main variable of interest in determining optimal trajectories. In the absence of constraints, the optimal conductivities are the stationary solutions of the dynamics \(\dot{\mu}=f\), where \[f_{e}=\mu_{e}^{\beta}\frac{\sum_{i}(p_{u}^{i}-p_{v}^{i})^{2}}{\ell_{e}^{2}}- \mu_{e}\equiv\mu_{e}^{\beta-2}|F_{e}|^{2}-\mu_{e}\quad, \tag{1}\] with \(F_{e}=(F_{e}^{1},\ldots,F_{e}^{M})\), and \(|\cdot|\) denotes the Euclidean norm. Intuitively, this equation describes a positive feedback mechanism where conductivities increase for larger fluxes and decrease for negligible ones. It can be shown that the dynamics in Eq. (1) admits a Lyapunov function \(\mathfrak{L}_{\beta}\) which can be interpreted as a combination of the cost to operate the network and that of building the infrastructure [13]. Moreover, we have that \(f=-S\,\nabla\mathfrak{L}_{\beta}\), where \(S\) is a diagonal matrix with diagonal entries \(S_{e}=2\mu_{e}^{\beta}/\ell_{e}\), and Eq. (1) can therefore be seen as a mirror descent for the cost function \(\mathfrak{L}_{\beta}\) [19]. This scaling in \(S\) has the advantage of ensuring good behavior of the resulting numerical methods. One can also reinterpret Eq. (1) as a classical gradient descent by applying a suitable transformation [20]; we do not explore this here. Variants of these dynamics have been proposed to model distributions over networks [8; 9; 14; 21; 22]. The constant \(\beta\in(0,2)\) regulates the desired transportation regime. The setting \(\beta<1\) penalizes traffic congestion by distributing paths over more edges, \(\beta>1\) encourages path consolidation into fewer highways, and \(\beta=1\) is shortest-path-like. In addition to imposing Kirchhoff's law on nodes to ensure mass conservation, solving these dynamics outputs otherwise unconstrained optimal \(\mu_{e}\) and \(F_{e}\). While this may be enough in ideal cases, in more realistic scenarios it is important to further constrain the solution. For instance, structural constraints may limit the maximum amount of flow that an edge can carry, or a budget constraint may be used to limit the infrastructure cost for building the network. Hence, the dynamics \(\dot{\mu}=f\) must be altered to account for these additional constraints. There are many ways in which constraints can be added. A popular approach is to add constraints on a so-called position level, which leads to gradient inclusions in continuous time [23, Ch. 3.4], and projected gradient descent in discrete time. Unfortunately, the scope of projected gradients is limited, due to the fact that projections can only be efficiently evaluated for constraints that have a particular structure (such as a low-dimensional hyperplane, the probability simplex, or a Euclidean norm ball).
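To make the unconstrained baseline concrete before constraints enter, the following is a minimal numerical sketch of the dynamics in Eq. (1), discretized with an explicit Euler step. The toy graph, source pattern and step size are illustrative assumptions, and the snippet is written independently of the open-source implementation referenced at the end of the letter.

```python
import numpy as np

# Toy network: 4 nodes, 4 edges e = (u, v) with lengths ell_e.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
lengths = np.array([1.0, 1.0, 1.5, 1.0])
N, E, M = 4, len(edges), 2           # nodes, edges, commodities

# Signed incidence matrix: B[u, e] = +1 and B[v, e] = -1 for e = (u, v).
B = np.zeros((N, E))
for e, (u, v) in enumerate(edges):
    B[u, e], B[v, e] = 1.0, -1.0

# Mass sources per commodity (columns sum to zero: Kirchhoff's law).
src = np.array([[1.0, 0.0],
                [0.0, 1.0],
                [0.0, 0.0],
                [-1.0, -1.0]])

beta, tau = 1.5, 0.05
mu = np.ones(E)                      # initial conductivities

for _ in range(2000):
    Lap = B @ np.diag(mu / lengths) @ B.T        # weighted graph Laplacian
    p = np.linalg.pinv(Lap) @ src                # pressures p_u^i per commodity
    dp = B.T @ p                                 # p_u^i - p_v^i along each edge
    F = (mu / lengths)[:, None] * dp             # fluxes F_e^i
    f = mu**(beta - 2.0) * (F**2).sum(axis=1) - mu   # right-hand side of Eq. (1)
    mu = np.maximum(mu + tau * f, 1e-12)         # Euler step; keep mu > 0

print("stationary conductivities:", np.round(mu, 3))
```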
When the feasible set is nonconvex and/or fails to have a simple structure, evaluating projections is a computationally daunting task. This motivates our formulation (see also [24]), which includes constraints on a velocity level and yields a sparse, local and linear approximation of the feasible set. As a consequence, the updates for \(\mu\) can often still be evaluated in closed form (or there is an efficient way of computing them numerically) even though the underlying feasible set is nonconvex or fails to have a simple structure. We will highlight explicit examples of such situations in the remainder of this letter. We define \(C:=\{\mu\in\mathbb{R}_{>0}^{E}\mid g(\mu)\geq 0\}\) as the set of feasible conductivities \(\mu=(\mu_{1},\ldots,\mu_{E})\), with \(g\) a constraint function that we assume continuously differentiable, and \(E\) the number of network edges. We focus on those edges where constraints are not satisfied, and denote the set of active constraints for a given \(\mu\) as \(I_{\mu}:=\{i\in\mathbb{Z}\mid g_{i}(\mu)\leq 0\}\). Interpreting \(\mu\) as a "position" variable, a constraint to ensure \(\mu(t)\in C,\forall t\geq 0\), can be equivalently formulated as a constraint on its _velocity_, \(\dot{\mu}(t)\in T_{C}(\mu(t)),\forall t\geq 0\), with \(\mu(0)\in C\), where \(T_{C}(\mu)\) denotes the tangent cone of the feasible set at \(\mu\), see [25]. However, it will be convenient to slightly extend the notion of tangent cone to also account for infeasible initial conditions (this is particularly important for the discretization), which is achieved by imposing \(\dot{\mu}(t)\in V_{\alpha}(\mu(t))\), where \(V_{\alpha}(\mu):=\{v\in\mathbb{R}^{E}\mid\nabla g_{i}(\mu)^{T}v\geq-\alpha g_ {i}(\mu),i\in I_{\mu}\}\), and \(\alpha\geq 0\) is a constant typically referred to as a "restitution" parameter or "slackness". We note that \(V_{\alpha}(\mu)\) generalizes the notion of the tangent cone, since for \(\mu\in C\), \(V_{\alpha}(\mu)=T_{C}(\mu)\) [26]. For \(\mu(t)\not\in C\) the constraint \(\dot{\mu}(t)\in V_{\alpha}(\mu(t))\) is equivalent to \(\mathrm{d}g_{i}(\mu(t))/\mathrm{d}t\geq-\alpha g_{i}(\mu(t))\), \(i\in I_{\mu(t)}\), which ensures that potential constraint violations decay at the rate \(\alpha>0\). The situation is visualized graphically in Fig. 1 (panel A). In order to account for the velocity constraint \(\dot{\mu}\in V_{\alpha}(\mu)\) we augment the dynamics \(\dot{\mu}=f\) with a constraint _reaction_ force \(R\), that is, \[\dot{\mu}=f+R,\quad\text{with}\ -R\in N_{V_{\alpha}(\mu)}(\dot{\mu}), \tag{2}\] where \(N_{V_{\alpha}(\mu)}(\dot{\mu})\) denotes the normal cone of the set \(V_{\alpha}(\mu)\) at \(\dot{\mu}\). Due to the scaling of the gradient with \(S\), the normal cone is defined with respect to the inner product \((a,b)=a^{T}S^{-1}b\), where \(a,b\in\mathbb{R}^{E}\) are arbitrary vectors. This has the important effect of guaranteeing that \(\mathfrak{L}_{\beta}\) (of the unconstrained dynamics) is still a Lyapunov function in the constrained setting and that \(\mathfrak{L}_{\beta}(\mu(t))\) is monotonically decreasing along the trajectories of Eq. (2). A detailed derivation is included in the Supporting Material [27]. The addition of \(R\) ensures that even if \(f\) pushes \(\mu\) away from \(C\), as shown in Fig. 1 (panel B), the force \(R\), which is orthogonal to the set \(V_{\alpha}(\mu)\), annihilates the component of \(f\) that would lead to a constraint violation and ensures that \(\dot{\mu}\in V_{\alpha}(\mu)\).
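As a concrete reading of these definitions, here is a minimal sketch of the active set \(I_{\mu}\) and of the membership test \(v\in V_{\alpha}(\mu)\); the capacity-type constraint and the numbers are illustrative, not taken from the letter.

```python
import numpy as np

def active_set(g_vals):
    """Indices i with g_i(mu) <= 0, i.e. active or violated constraints."""
    return np.where(g_vals <= 0.0)[0]

def in_V_alpha(v, g_vals, grad_g, alpha):
    """Test grad g_i(mu)^T v >= -alpha * g_i(mu) for all i in I_mu."""
    idx = active_set(g_vals)
    return bool(np.all(grad_g[idx] @ v >= -alpha * g_vals[idx]))

# Illustrative capacity constraints g_e(mu) = c_e - mu_e, so grad g = -I.
mu = np.array([0.8, 1.2, 0.5])
c = np.ones(3)
g_vals = c - mu                  # edge 1 violates its capacity (g < 0)
grad_g = -np.eye(3)

# A zero velocity is rejected: the violation on edge 1 must decay.
print(in_V_alpha(np.zeros(3), g_vals, grad_g, alpha=1.0))                 # False
print(in_V_alpha(np.array([0.0, -0.3, 0.0]), g_vals, grad_g, alpha=1.0))  # True
```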
As discussed above, we can therefore conclude that \(\mu(0)\in C\Rightarrow\mu(t)\in C\) for all \(t\geq 0\) and \(\mu(0)\not\in C\Rightarrow\mu(t)\to C\) for \(t\rightarrow\infty\). In addition, we infer from Fig. 1 that the resulting \(\dot{\mu}\) in Eq. (2) is nothing but the projection of \(f\) onto the set \(V_{\alpha}(\mu)\) and as a result, we can rewrite \(\dot{\mu}\) in the following way: \[\dot{\mu}:=\operatorname*{arg\,min}_{v\in V_{\alpha}(\mu)}\frac{1}{2}(v-f,v-f) \quad, \tag{3}\] which can also be equivalently reformulated as the quadratic program \[\dot{\mu}:=\operatorname*{arg\,min}_{v\in V_{\alpha}(\mu)}\frac{1}{2}(v-f)^{T} S^{-1}(v-f)\quad. \tag{4}\]

Figure 1: (A) Visualization of the set \(C\) and the set of feasible velocities \(V_{\alpha}(\mu_{1})\) and \(V_{\alpha}(\mu_{2})\) at points \(\mu_{1}\) and \(\mu_{2}\), respectively. Point \(\mu_{1}\) lies on the boundary of \(C\), while \(\mu_{2}\) is infeasible; \(\alpha\) is a restitution parameter. (B) When the vector field \(f\) is pushing away from \(C\), a force \(-R\in N_{V_{\alpha}(\mu)}(\dot{\mu})\) is added to the dynamics. The force \(R\) annihilates the component of \(f\) that would lead to a constraint violation and ensures \(\dot{\mu}\in V_{\alpha}(\mu)\).

This reformulation is not only useful for numerical computations, but also highlights that the velocity \(\dot{\mu}\) is chosen, at each point in time, to be as close as possible to the unconstrained \(f\). Fig. 1(A) visualizes the set \(C\) and the set of feasible velocities \(V_{\alpha}(\mu_{1})\) and \(V_{\alpha}(\mu_{2})\) at points \(\mu_{1}\) and \(\mu_{2}\), respectively. Point \(\mu_{1}\) lies on the boundary of \(C\), while \(\mu_{2}\) is infeasible. We note that the cone \(V_{\alpha}(\mu_{2})\) includes an offset, which is controlled by the restitution parameter \(\alpha\); this ensures that any \(v\in V_{\alpha}(\mu_{2})\) leads to a decrease in constraint violation. Fig. 1 (B) shows that when the vector field \(f\) is pushing away from \(C\), a force \(-R\in N_{V_{\alpha}(\mu)}(\dot{\mu})\) is added to the dynamics. The force \(R\) annihilates the component of \(f\) that would lead to a constraint violation and ensures \(\dot{\mu}\in V_{\alpha}(\mu)\), where \(\dot{\mu}\) is chosen as close as possible to \(f\). This can also be interpreted as Gauss's principle of least constraint. It is important to note that \(V_{\alpha}(\mu)\) is a polyhedral set that only includes the constraints \(I_{\mu}\), a subset of the original constraints \(g(\mu)\geq 0\). The set \(V_{\alpha}(\mu)\) therefore represents a sparse, local and linear approximation of the feasible set. The solution \(\dot{\mu}\) of Eq. (3) can then be used to update the conductivity with a discrete-time algorithm: \[\mu^{t+1}=\mu^{t}+\tau\dot{\mu}\quad, \tag{5}\] where \(\tau>0\) is the step size. This general formalism can be applied to a variety of scenarios, provided one can compute \(\nabla g\), which determines the set \(V_{\alpha}(\mu)\). We now describe three concrete and relevant examples.

_Capacity constraints._ In cases of structural constraints that strictly limit the amount of mass that can travel along any given edge, one can consider capacities \(c_{e}\geq 0\) on edges and set constraints as \(g_{e}(\mu)=c_{e}-\mu_{e}\). The velocity constraint \(v\in V_{\alpha}(\mu)\) in Eq. (3) reads as \(v_{e}\leq\alpha g_{e}(\mu_{e})\), for \(e\in I_{\mu}\), which is strictly negative, since \(\alpha>0\) (Supporting Material [27]).
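When a single constraint is active, the \(S\)-weighted projection in Eq. (4) follows in closed form from the KKT conditions, which is the mechanism behind the edgewise updates derived next. The function below is a sketch under that single-constraint assumption; the function name and the example vectors are ours, not from the letter.

```python
import numpy as np

def project_single(f, a, b, S_diag):
    """argmin_v (v - f)^T S^{-1} (v - f)  s.t.  a^T v >= b.

    Here a = grad g_i(mu) and b = -alpha * g_i(mu). If the constraint is
    inactive at f, the projection is f itself; otherwise the multiplier
    lambda >= 0 follows from enforcing a^T v = b, with v = f + lambda * S a.
    """
    slack = a @ f - b
    if slack >= 0.0:
        return f.copy()
    lam = -slack / (a @ (S_diag * a))
    return f + lam * (S_diag * a)        # reaction force R = lambda * S a

f = np.array([1.0, 2.0])
a = np.array([-1.0, 0.0])                # e.g. gradient of g = c - mu on edge 0
v = project_single(f, a, b=0.5, S_diag=np.array([2.0, 2.0]))
print(v, a @ v)                          # a^T v lands exactly on the face b
```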
As previously discussed, \(\alpha>0\) is a restitution parameter that dictates the rate at which constraint violations decay. In discrete time, one should choose \(\alpha>0\) such that \(\alpha\,\tau\leq 1\) to guarantee convergence (see [24]). We can then solve Eq. (3) in closed form for the edges violating the constraint, obtaining \(v_{e}=\min\left\{\alpha\left(c_{e}-\mu_{e}\right),f_{e}\right\}\). In summary, for each edge \(e\), we have: \[\dot{\mu}_{e}=\begin{cases}\alpha\left(c_{e}-\mu_{e}\right),&\text{if}\;\;f_ {e}\geq\alpha\left(c_{e}-\mu_{e}\right)\text{ and }\mu_{e}\geq c_{e},\\ f_{e}&\text{otherwise}\quad.\end{cases} \tag{6}\] We illustrate the topologies of the paths resulting from considering the capacity constraint on synthetic data and compare against those obtained in the unconstrained case in Fig. 2. We measure the Gini coefficient \(Gini(T)\) calculated on the traffic on edges, defined as the \(E\)-dimensional vector \(T\) with entries \(T_{e}=\sum_{i}|F_{e}^{i}|/n\), where \(n\) is the number of passengers. The coefficient takes values in \([0,1]\) and determines how traffic is distributed along network edges, with \(Gini(T)=0\) meaning an equally-balanced distribution and \(Gini(T)=1\) indicating highly unbalanced traffic on few edges. The choice of the edge capacity \(c_{e}\) influences this value, with lower \(c_{e}\) imposing a stricter constraint and thus encouraging traffic to distribute more equally over the network edges, i.e. lower Gini, as shown in Fig. 2(A). Conversely, this implies longer routes for passengers, as measured by an increasing average total path length \(\langle l\rangle=\sum_{e,i}\ell_{e}|F_{e}^{i}|/n\) compared to the unconstrained solution, as shown in Fig. 2(B).

Figure 2: Capacity constraint on synthetic networks. (A) Gini coefficient of the traffic distribution on edges. The edge capacity \(c_{e}=c\) is selected as a percentile of the distribution of \(\mu\) over edges obtained in the unconstrained case (Unconstrained). (B) Ratio of the average total path length to that of the unconstrained case. Markers and shadows are averages and standard deviations over 20 network realizations, with 100 randomly selected origins. All passengers have the same central destination (square magenta marker). (C) Example trajectory of one passenger type (in green), whose origin is the green triangle marker. Edge widths are proportional to the amount of passengers traveling through an edge; \(\beta=1.8\).
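A minimal sketch of the capacity-constrained update: the edgewise closed form of Eq. (6) feeds the discrete step of Eq. (5). The arrays below are illustrative; \(f\) would come from evaluating Eq. (1) at the current state.

```python
import numpy as np

def capacity_velocity(f, mu, c, alpha):
    """Edgewise closed form of Eq. (6) for g_e(mu) = c_e - mu_e."""
    bound = alpha * (c - mu)              # prescribed decay on violations
    clipped = (mu >= c) & (f >= bound)    # first branch of Eq. (6)
    return np.where(clipped, bound, f)

mu = np.array([1.2, 0.4, 0.9])            # edge 0 exceeds its capacity
c = np.ones(3)
f = np.array([0.3, -0.1, 0.2])            # illustrative unconstrained field
tau, alpha = 0.05, 1.0                    # chosen so that alpha * tau <= 1
mu = mu + tau * capacity_velocity(f, mu, c, alpha)   # discrete step, Eq. (5)
print(mu)                                 # edge 0 is steered back towards c
```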
_Budget constraint._ As a second example, we consider a global constraint that involves all the edges at once, a budget constraint \(g_{b}(\mu)=b\,-\,\sum_{e}\mu_{e}\). This is relevant when a network manager has a fixed, limited amount of resources \(b>0\) to invest. We note that, while the Lyapunov function \(\mathfrak{L}_{\beta}\) contains a similar budget term (the cost to build the infrastructure), this cost is not regarded as a constraint in standard approaches [8; 13] but as part of the energy consumption, and the budget \(b\) is not a Lagrange multiplier but a measurable constant. Furthermore, unlike the previous case where including a positivity constraint \(\mu_{e}\geq 0\) is optional (but it can in principle be imposed as well, see Supporting Material [27]), here we need to include it explicitly. In the standard OT formalism positivity is ensured, provided \(\mu_{e}\) is initialized as a positive quantity. Adding constraints may not preserve positivity during the updates; this is the case for the budget constraint, as we observed empirically. Positivity is enforced by adding \(g_{p}(\mu)=\mu\geq 0\), i.e. \(\mu_{e}\geq 0\,\forall e\). In this budget-constrained setting, the conductivities violate the constraint whenever \(\sum_{e}\mu_{e}>b\). We derive a closed-form solution as: \(\dot{\mu}_{e}=f_{e}-S_{e}\lambda_{b}\), if \(f_{e}-S_{e}\lambda_{b}\geq-\alpha\,\mu_{e}\), and \(\dot{\mu}_{e}=-\alpha\,\mu_{e}\) otherwise, where \(\lambda_{b}\in\mathbb{R}\) is a Lagrange multiplier for the budget constraint and can be determined numerically using fixed-point iteration; see Supporting Material [27].

_Combining linear and non-linear constraints._ All the previous examples considered linear constraints, where it is simple to derive analytical solutions. In general, constraints can be more complicated and thus require numerical methods to solve the constrained quadratic optimization in Eq. (3). In this scenario, we consider a non-linear budget constraint of the form \(g_{\delta}(\mu)=b-\sum_{e}\mu_{e}^{\delta}\geq 0\), where \(\delta>0\) is a nonlinearity parameter. Setting \(\delta=1\) gives a linear budget constraint like the one discussed earlier. A non-linear example is a volume-preserving constraint where \(\delta=1/2\); this is relevant for biological processes such as leaf venation and vascular systems [28; 9]. This non-linear budget induces the velocity constraint \(\sum_{e}\delta\mu_{e}^{\delta-1}v_{e}\leq\alpha\,g_{\delta}(\mu)\). In addition, we also consider a capacity constraint as in the first scenario studied above. Overall, three functions are required: i) \(g_{\delta}(\mu)\) to impose the non-linear budget constraint; ii) \(g_{e}(\mu)\) to impose edge capacities; and iii) \(g_{p}(\mu)\) to ensure positivity. Also in this non-linear constraint example, we can derive a closed-form solution as \[\dot{\mu}_{e}=\begin{cases}\alpha\,(c_{e}-\mu_{e})&\text{ if }f_{e}-S_{e} \lambda_{\delta}\,h_{e}\geq\alpha\,(c_{e}-\mu_{e}),\,\mu_{e}\geq c_{e}\\ \\ -\alpha\,\mu_{e}&\text{ if }f_{e}-S_{e}\lambda_{\delta}\,h_{e}\leq-\alpha\,\mu_{e},\,\mu_{e}\leq 0\\ \\ f_{e}-S_{e}\lambda_{\delta}\,h_{e}&\text{ otherwise }\quad,\end{cases} \tag{7}\] where \(h_{e}=\delta\,\mu_{e}^{\delta-1}\) and \(\lambda_{\delta}>0\). The value of \(\lambda_{\delta}\) can be determined numerically using fixed-point iteration (Supporting Material [27]). In this analytical solution, the value \(\alpha\,(c_{e}-\mu_{e})\) ensures there is no violation of the edge capacity, \(-\alpha\,\mu_{e}\) imposes the positivity constraint, and \(f_{e}-S_{e}\lambda_{\delta}\,h_{e}\) captures the budget violation. Overall, this scenario ensures that the velocity \(\dot{\mu}_{e}\) has an upper bound of \(\alpha\,(c_{e}-\mu_{e})\) and a lower bound of \(-\alpha\,\mu_{e}\). The choice of \(\delta\) impacts the topological properties of the resulting network, e.g., the total path length. In the numerical experiments, we set the nonlinearity parameter as \(\delta\in(0,1)\).

_General scenarios: quadratic programming._ The three examples illustrate cases where one can derive analytical or semi-analytical updates. Our method is valid more generally, for any choice of the constraint function \(g(\mu)\), provided its gradient can be derived. In fact, one can always cast the quadratic optimization for the velocity \(\dot{\mu}\) into a quadratic program and use optimized numerical solvers to extract a solution.

_Grenoble network._ We examine the topology of various constrained solutions on the road network of the city of Grenoble [29], see Fig. 3(A).
This has 640 nodes and 740 edges. We set the central bus station as the destination node and select the remaining 639 nodes as origins. Routes generated in the non-linear constraint scenario balance traffic better than in the unconstrained case and result in longer routes, see Fig. 3(B-C). Adding a budget constraint for \(\beta>1\) results in more distributed traffic (lower Gini) without much increase in the total path length, compared to the unconstrained case.

_Discussion._ Distributing flows in a transportation network is a challenging task. Approaches based on optimal transport theory are promising, but they are limited by the lack of a mechanism to incorporate realistic constraints. Our work shows how to impose arbitrary constraints on optimal transport problems in a principled and flexible way. The constraints are lifted from a position level to a velocity level and are included in the corresponding mirror descent dynamics. This results in a scalable algorithm that has a low implementation complexity and solves constrained optimal transport problems in a computationally efficient manner. Due to the fact that the algorithm relies on a sparse local approximation of the feasible set at each iteration, closed-form updates can often be derived, even if the underlying feasible set is nonconvex or nonlinear. Moreover, in the absence of closed-form solutions, one can resort to efficient numerical methods to solve at most a quadratic program. Our physics-based approach is a change of paradigm with regard to how optimal transport problems are modelled and solved numerically. This calls for a generalization of transportation problems to wider scenarios, e.g. in networks with multiple transport modes [15], with real-time traffic demands [30] or with noise-induced resonances [31]. To facilitate the usage of our model, we provide an open source implementation within the repository [32].

Acknowledgments: The authors thank the International Max Planck Research School for Intelligent Systems (IMPRS-IS) for supporting AAI. MM thanks the German Research Foundation and the Branco Weiss Fellowship, administered by ETH Zurich, for the support.
2310.06053
New Retarded Nonlinear Integral Inequalities of the Gronwall-Bellman-Pachpatte Type and Their Applications
The goal of the present article is to offer a number of new retarded nonlinear inequalities of Gronwall, Bellman and Pachpatte kind for a class of integral and integro-differential equations. These inequalities generalize and provide new formulations of some well-known results in the mathematical framework of integral and differential inequalities that have been derived currently as well as in earlier times. These results can be utilized to investigate diverse aspects, both qualitative and quantitative, of a class of aforementioned equations. We propose a few applications to ensure effectiveness of these inequalities.
Nagesh Kale
2023-09-22T07:41:01Z
http://arxiv.org/abs/2310.06053v1
New retarded nonlinear integral inequalities of the Gronwall-Bellman-Pachpatte type and their applications ###### Abstract. The goal of the present article is to offer a number of new retarded nonlinear inequalities of Gronwall, Bellman and Pachpatte kind for a class of integral and integro-differential equations. These inequalities generalize and provide new formulations of some well-known results in the mathematical framework of integral and differential inequalities that have been derived currently as well as in earlier times. These results can be utilized to investigate diverse aspects, both qualitative and quantitative, of a class of aforementioned equations. We propose a few applications to ensure effectiveness of these inequalities. 2020 Mathematics Subject Classification: 39B72, 26D10, 34A34 ## 1. **Introduction** In the realm of contemporary advances in several disciplines of mathematics, the study of integral equations, differential equations, and integro-differential equations plays an essential role due to its widespread recognition as a leading instrument of applied research. Evidently, the study of a number of qualitative and quantitative properties of these classes of equations has relied heavily on inequality techniques. The extensive literature illuminating these tools and their evolution can be found in [5, 6, 8] and the references specified therein. In 1919, while researching the dependence of systems of differential equations on parameters, Gronwall devised the widely recognized integral inequality [12], which states that **Theorem 1.1** (Gronwall [12]).: _If_ \[0\leq x(\delta)\leq\int\limits_{\mathfrak{c}}^{\delta}\Big{(}\mathfrak{h}_{1} x(\tilde{\delta})+\mathfrak{h}_{2}\Big{)}d\tilde{\delta},\text{ for }\delta\in[\mathfrak{c},\mathfrak{c}+\mathfrak{h}],\mathfrak{h}_{1}, \mathfrak{h}_{2}\in\mathbb{R}_{+},\] _for a continuous function \(x(\delta)\) on \([\mathfrak{c},\mathfrak{c}+\mathfrak{h}]\), then_ \[0\leq x(\delta)\leq\mathfrak{h}_{2}\mathfrak{h}\mathfrak{e}^{\mathfrak{h}_{1} \mathfrak{h}},\text{ for }\delta\in[\mathfrak{c},\mathfrak{c}+\mathfrak{h}].\] Subsequently, Bellman (1943) proposed an intriguing extension of Gronwall's inequality, which reads as **Theorem 1.2** (Bellman [5]).: _If_ \[0\leq x(\delta)\leq\mathfrak{h}+\int_{\mathfrak{h}_{1}}^{\delta}\mathfrak{w}( \tilde{\delta})x(\tilde{\delta})d\tilde{\delta},\text{ for }\delta\in\mathcal{J}=[\mathfrak{h}_{1}, \mathfrak{h}_{2}]\] _for a continuous function \(x(\delta)\) and \(\mathfrak{h}\in\mathbb{R}_{+},\) then_ \[0\leq x(\delta)\leq\mathfrak{h}\exp\left(\int_{\mathfrak{h}_{1}}^{\delta} \mathfrak{w}(\tilde{\delta})d\tilde{\delta}\right),\text{ for }\delta\in \mathcal{J}=[\mathfrak{h}_{1},\mathfrak{h}_{2}].\] Furthermore, Pachpatte produced a broader variant of the Gronwall-Bellman inequality in 1973, which asserts that **Theorem 1.3** (Pachpatte [13]).: _If \(x,\mathfrak{w},\tilde{\mathfrak{w}}\) are nonnegative continuous functions defined on \(\mathbb{R}_{+}\) such that_ \[x(\delta)\leq\mathfrak{h}+\int_{0}^{\delta}\mathfrak{w}(\tilde{\delta})x( \tilde{\delta})d\tilde{\delta}+\int_{0}^{\delta}\mathfrak{w}(\tilde{\delta}) \left(\int_{0}^{\tilde{\delta}}\tilde{\mathfrak{w}}(\sigma)x(\sigma)d\sigma\right)d\tilde {\delta},\text{ for }\delta\in\mathbb{R}_{+},\] _where \(\mathfrak{h}\in\mathbb{R}_{+}\), then_ \[x(\delta)\leq\mathfrak{h}\left[1+\int_{0}^{\delta}\mathfrak{w}(\tilde{\delta})\exp\left(\int_{0}^{\tilde{\delta}}[\mathfrak{w}(\sigma)+\tilde{\mathfrak{w}}(\sigma)]d\sigma\right)d\tilde{\delta}\right],\text{ for }\delta\in\mathbb{R}_{+}.\]
In the past few decades, a number of generalizations and extensions of these types of inequalities and their extended discrete analogues have been published [1, 3, 4, 6, 8, 10]. The retarded integral inequalities, which have their roots in the aforementioned integral inequalities, were recently devised by A. Shakoor, Wang, Abdeldaim, Yakout, and El-Deeb [2, 9, 11, 14, 15, 16]. The most recent generalized improvements of a few previous retarded integral inequalities were reported by A. Shakoor et al. [2]. Here, we mention one of the inequalities reported by A. Shakoor et al., which states that **Theorem 1.4** (Shakoor [2]).: _If_ \[x^{\prime}(\delta)\leq l(\delta)+\int_{0}^{\alpha(\delta)}g_{1}(\tilde{\delta })x(\tilde{\delta})d\tilde{\delta}+\int_{0}^{\alpha(\delta)}g_{2}(\tilde{ \delta})\left(x^{\lambda_{1}}(\tilde{\delta})+\int_{0}^{\tilde{\delta}}g_{3}( \mu)x^{\lambda_{2}}(\mu)d\mu\right)^{\frac{1}{\lambda_{1}}}d\tilde{\delta} \quad\forall\delta\in\mathbb{R}_{+},\] _for \(\lambda_{1}>\lambda_{2}\geq 0\), nonnegative \(x,x^{\prime},g_{1},g_{2},g_{3}\in\mathrm{Cf}_{\mathbb{R}_{+}}\) and nondecreasing \(l,\alpha\in\mathrm{Cdf}_{\mathbb{R}_{+}}\) wherein \(x(0)=0,l(\delta)\geq 1,\alpha(\delta)\leq\delta\) on \(\mathbb{R}_{+}\), then_ \[x(\delta)\leq \Bigg{[}\frac{(\lambda_{1}-\lambda_{2})}{\lambda_{1}}\int_{0}^{ \alpha(\delta)}g_{3}(\tilde{\delta})\,\exp\Bigg{(}(\lambda_{1}-\lambda_{2})\int_{\tilde{\delta}}^{\alpha(\delta)}\Bigg{(}\alpha^{-1}( \sigma)\Bigg{(}l^{\prime}\left(\alpha^{-1}(\sigma)\right)+g_{1}(\sigma)+g_{2}(\sigma)\Bigg{)}\!+\!\frac{1}{\alpha^{-1}( \sigma)}\Bigg{)}d\sigma\Bigg{)}d\tilde{\delta}\Bigg{]}^{\frac{1}{\lambda_{1}- \lambda_{2}}}\quad\forall\delta\in\mathbb{R}_{+}.\] This work proposes generalized inequalities, expanding on Shakoor's inequalities [2]. Before moving on, we will go through some of the symbols and notations that will be used in the discussion that follows: \(\mathrm{Cf}_{\mathbb{R}_{+}}\) denotes continuous functions on \(\mathbb{R}_{+}\), \(\mathrm{Cdf}_{\mathbb{R}_{+}}\) denotes continuously differentiable functions on \(\mathbb{R}_{+}\), and \(\mathbb{R}_{+}=[0,\infty)\). The subsequent portion of the article is separated into the following sections: The first section presents some novel retarded nonlinear integral and integro-differential inequalities that generalize the existing inequalities in the literature. In the second section, we provide a few examples to show the effectiveness of our inequalities in determining and analyzing boundedness and global behavior of the solutions of nonlinear retarded integral equations of Volterra kind. In the last section, we state some crucial conclusions of this study. ## 2. **Main Results** Before proceeding to our main result, we initiate our section with fundamental lemmas, which will come in handy later on.
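As a quick numerical sanity check (ours, not part of the article) of how such bounds work, consider Theorem 1.1: the extremal case \(x^{\prime}=\mathfrak{h}_{1}x+\mathfrak{h}_{2}\), \(x(\mathfrak{c})=0\), saturates the integral hypothesis, and its explicit solution indeed stays below \(\mathfrak{h}_{2}\mathfrak{h}e^{\mathfrak{h}_{1}\mathfrak{h}}\); the constants below are arbitrary.

```python
import numpy as np

# Illustrative constants for Theorem 1.1 on [c, c + h].
c, h, h1, h2 = 0.0, 1.0, 0.5, 2.0
delta = np.linspace(c, c + h, 10001)

# Exact solution of x' = h1 * x + h2 with x(c) = 0 (the extremal case).
x = (h2 / h1) * (np.exp(h1 * (delta - c)) - 1.0)

bound = h2 * h * np.exp(h1 * h)
# max x = (h2/h1)(e^{h1 h} - 1) <= h2 h e^{h1 h}, since e^t - 1 <= t e^t.
print(x.max(), "<=", bound, ":", bool(x.max() <= bound))
```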
**Lemma 2.1**.: _If \(\omega_{1},\omega_{2}\geq 0\) and \(\gamma\geq 1\), then_ \[(\omega_{1}+\omega_{2})^{\gamma}\leq 2^{\gamma-1}(\omega_{1}^{\gamma}+\omega_{2} ^{\gamma}).\] **Lemma 2.2**.: _(Zhao [7]) For any \(\omega\geq 0,\ \gamma_{1}\geq\gamma_{2}\geq 0,\gamma_{1}\neq 0\),_ \[\omega^{\frac{\gamma_{2}}{\gamma_{1}}}\leq\frac{\gamma_{2}}{\gamma_{1}} \kappa^{\frac{\gamma_{2}-\gamma_{1}}{\gamma_{1}}}\omega+\frac{\gamma_{1}- \gamma_{2}}{\gamma_{1}}\kappa^{\frac{\gamma_{2}}{\gamma_{1}}},\ \kappa>0.\] We begin with a new generalized version of the nonlinear retarded integro-differential inequality developed by A. Shakoor et al. [2] mentioned in Theorem 1.4. **Theorem 2.3**.: _If \(\mathfrak{u},\mathfrak{u}^{\prime},\Psi_{1},\Psi_{2},\Psi_{3}\in\mathrm{Cf}_{ \mathbb{R}_{+}}\) and \(a,f\in\mathrm{Cdf}_{\mathbb{R}_{+}}\) are nondecreasing in nature wherein \(a(\delta)\geq 1,f(\delta)\leq\delta\ (\delta\in\mathbb{R}_{+}),\mathfrak{u}(0)=0\) are such that_ \[(\mathfrak{u}^{\prime}(\delta))^{\gamma_{1}}\leq a(\delta)+\int\limits_{0}^{f( \delta)}\Psi_{1}(\theta)\mathfrak{u}(\theta)d\theta+\int\limits_{0}^{f(\delta )}\Psi_{2}(\theta)\left(\mathfrak{u}^{\gamma_{2}}(\theta)+\int\limits_{0}^{ \theta}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1}{ \gamma_{2}}}d\theta \tag{2.1}\] _for \(\delta,\gamma_{1},\gamma_{2},\gamma_{3}\in\mathbb{R}_{+}\), with \(\gamma_{1}\geq 1,\gamma_{2}\geq 2,\gamma_{3}\geq 1,\gamma_{2}\neq\gamma_{3}\), then_ \[\mathfrak{u}(\delta)\leq\delta\ \zeta_{2}+2^{\frac{1-\gamma_{2}}{\gamma_{2}}}\left\{ \frac{\gamma_{2}-\gamma_{3}}{\gamma_{2}}\int_{0}^{f(\delta)}\Psi_{3}(\xi)2^{ \frac{\gamma_{3}-\gamma_{2}}{\gamma_{2}}}\exp\!\left((\gamma_{2}-\gamma_{3}) \int_{\xi}^{f(\delta)}\!\left(2^{\frac{\gamma_{2}-1}{\gamma_{2}}}f^{-1}( \theta)\ \zeta_{1}a^{\prime}(f^{-1}(\theta))+2^{\gamma_{2}-1}(f^{-1}(\theta))^{\gamma_{2}-1}\ \zeta_{2}^{ \gamma_{2}}+\frac{1}{\gamma_{2}}\Psi_{3}(\theta)2^{\gamma_{3}-1}(f^{-1}( \theta))^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}+f^{-1}(\theta)\ \zeta_{1}\Psi_{1}(\theta)+2^{\frac{\gamma_{2}-1}{\gamma_{2}}}(f^{-1}(\theta) )^{2}\ \zeta_{1}\zeta_{2}\Psi_{1}(\theta)+2^{\frac{\gamma_{2}-1}{\gamma_{2}}}f^{-1}( \theta)\ \zeta_{1}\Psi_{2}(\theta)+\frac{1}{f^{-1}(\theta)}\right)d\theta\right)d\xi \right\}^{\frac{1}{\gamma_{2}-\gamma_{3}}}, \tag{2.2}\] _where \(\zeta_{1}=\frac{1}{\gamma_{1}}\kappa^{\frac{1-\gamma_{1}}{\gamma_{1}}},\zeta _{2}=\frac{\gamma_{1}-1}{\gamma_{1}}\kappa^{\frac{1}{\gamma_{1}}}\ (\kappa>0)\)._ Proof.: If \(\mathfrak{v}(\delta)\) denotes the right-hand side of inequality (2.1) then \(\mathfrak{v}(0)=a(0)\) and from (2.1), it is apparent that \[\mathfrak{u}^{\prime}(\delta)\leq\mathfrak{v}^{\frac{1}{\gamma_{1}}}(\delta) \leq\zeta_{1}\mathfrak{v}(\delta)+\zeta_{2},\ \text{where}\ \zeta_{1}=\frac{1}{\gamma_{1}}\kappa^{\frac{1-\gamma_{1}}{\gamma_{1}}},\zeta _{2}=\frac{\gamma_{1}-1}{\gamma_{1}}\kappa^{\frac{1}{\gamma_{1}}},\ \text{for any}\ \kappa>0. \tag{2.3}\] Further, the nondecreasing nature of \(\mathfrak{v}(\delta)\geq 0\) asserts that \[\mathfrak{u}(\delta)\leq\delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2}.
\tag{2.4}\] Using (2.4) and lemma 2.1, we have \[\mathfrak{v}^{\prime}(\delta)=a^{\prime}(\delta)+f^{\prime}( \delta)\Psi_{1}(f(\delta))\mathfrak{u}(f(\delta))+f^{\prime}(\delta)\Psi_{2}(f (\delta))\left(\mathfrak{u}^{\gamma_{2}}(f(\delta))+\int\limits_{0}^{f(\delta )}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1}{\gamma_{2}}}\] \[\leq a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))( \delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2})+f^{\prime}(\delta)\Psi_{2}(f(\delta))\Bigg{(}2^{\gamma_{2}-1} \Big{(}\delta^{\gamma_{2}}\ \zeta_{1}^{\gamma_{2}}\mathfrak{v}^{\gamma_{2}}(\delta)+\delta^{\gamma_{2}}\ \zeta_{2}^{\gamma_{2}}\Big{)}\] \[+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)(\xi^{\gamma_{3}}\ \zeta_{1}^{ \gamma_{3}}\mathfrak{v}^{\gamma_{3}}(\xi)+\xi^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}})d\xi\Biggr{)}^{\frac{1}{\gamma_{2}}}. \tag{2.5}\] Set up \(\mathfrak{w}^{\gamma_{2}}(\delta)\) as \[\mathfrak{w}^{\gamma_{2}}(\delta)=2^{\gamma_{2}-1}\Bigl{(}\delta^{\gamma_{2}} \ \zeta_{1}^{\gamma_{2}}\mathfrak{v}^{\gamma_{2}}(\delta)+\delta^{\gamma_{2}}\ \zeta_{2}^{\gamma_{2}}\Bigr{)}+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)2^{ \gamma_{3}-1}\Bigl{(}\xi^{\gamma_{3}}\ \zeta_{1}^{\gamma_{3}}\mathfrak{v}^{\gamma_{3}}(\xi)+\xi^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}\Bigr{)}d\xi. \tag{2.6}\] Thus, \(\mathfrak{u}(\delta)\leq\delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2}\leq 2^{\frac{1- \gamma_{2}}{\gamma_{2}}}\mathfrak{w}(\delta)+\delta\ \zeta_{2}\) and \(\mathfrak{w}(0)=0\). On differentiating (2.6), we see that \[\gamma_{2}\mathfrak{w}^{\gamma_{2}-1}(\delta)\mathfrak{w}^{ \prime}(\delta) =2^{\gamma_{2}-1}\Bigl{(}\delta^{\gamma_{2}}\ \zeta_{1}^{\gamma_{2}}\gamma_{2}\mathfrak{v}^{\gamma_{2}-1}(\delta) \mathfrak{v}^{\prime}(\delta)+\gamma_{2}\delta^{\gamma_{2}-1}\ \zeta_{1}^{\gamma_{2}}\mathfrak{v}^{\gamma_{2}}(\delta)+\gamma_{2}\delta^{ \gamma_{2}-1}\ \zeta_{2}^{\gamma_{2}}\Bigr{)}\] \[\leq 2^{\frac{\gamma_{2}-1}{\gamma_{2}}}\delta\ \zeta_{1}\gamma_{2} \mathfrak{w}^{\gamma_{2}-1}(\delta)\mathfrak{v}^{\prime}(\delta)+\gamma_{2} \delta^{-1}\ \mathfrak{w}^{\gamma_{2}}(\delta)+2^{\gamma_{2}-1}\gamma_{2}\delta^{ \gamma_{2}-1}\ \zeta_{2}^{\gamma_{2}}\] \[\quad+f^{\prime}(\delta)\Psi_{3}(f(\delta))2^{\frac{\gamma_{2}- \gamma_{2}}{\gamma_{2}}}\mathfrak{w}^{\gamma_{3}}(\delta)+f^{\prime}(\delta) \Psi_{3}(f(\delta))2^{\gamma_{3}-1}\delta^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}\] \[\leq 2^{\frac{\gamma_{2}-1}{\gamma_{2}}}\delta\ \zeta_{1}\gamma_{2} \mathfrak{w}^{\gamma_{2}-1}(\delta)\Biggl{\{}a^{\prime}(\delta)+f^{\prime}( \delta)\Psi_{1}(f(\delta))(\delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2})\] \[\quad+f^{\prime}(\delta)\Psi_{2}(f(\delta))\mathfrak{w}(\delta) \Biggr{\}}\] \[\quad+\gamma_{2}\delta^{-1}\ \mathfrak{w}^{\gamma_{2}}(\delta)+2^{ \gamma_{2}-1}\gamma_{2}\delta^{\gamma_{2}-1}\ \zeta_{2}^{\gamma_{2}}+f^{\prime}(\delta)\Psi_{3}(f(\delta))2^{\frac{\gamma_{ 3}-\gamma_{2}}{\gamma_{2}}}\mathfrak{w}^{\gamma_{3}}(\delta)\] \[\quad+f^{\prime}(\delta)\Psi_{3}(f(\delta))2^{\gamma_{3}-1} \delta^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}. 
\tag{2.7}\] Further dividing inequality (2.7) by \(\gamma_{2}\mathfrak{w}^{\gamma_{2}-1}(\delta)\) with \(1\geq\mathfrak{w}^{-1}(\delta)\geq\mathfrak{w}^{1-\gamma_{2}}(\delta)\) implies that \[\mathfrak{w}^{\prime}(\delta) \leq 2^{\frac{\gamma_{2}-1}{\gamma_{2}}}\delta\ \zeta_{1}\Biggl{\{}a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))( \delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2})+f^{\prime}(\delta)\Psi_{2}(f(\delta))\mathfrak{w}(\delta) \Biggr{\}}\] \[\quad+\delta^{-1}\ \mathfrak{w}(\delta)+2^{\gamma_{2}-1}\delta^{ \gamma_{2}-1}\ \zeta_{2}^{\gamma_{2}}\mathfrak{w}^{1-\gamma_{2}}(\delta)+\frac{1}{\gamma_{2}} f^{\prime}(\delta)\Psi_{3}(f(\delta))2^{\frac{\gamma_{3}-\gamma_{2}}{\gamma_{2}}} \mathfrak{w}^{\gamma_{3}-\gamma_{2}+1}(\delta)\] \[\quad+\frac{1}{\gamma_{2}}f^{\prime}(\delta)\Psi_{3}(f(\delta))2^ {\gamma_{3}-1}\delta^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}\mathfrak{w}^{1-\gamma_{2}}(\delta)\] \[=\Biggl{(}2^{\frac{\gamma_{2}-1}{\gamma_{2}}}\delta\ \zeta_{1}a^{\prime}(\delta)+2^{\frac{\gamma_{2}-1}{\gamma_{2}}}\delta^{2}\ \zeta_{1}\zeta_{2}f^{\prime}(\delta)\Psi_{1}(f(\delta))+\frac{1}{\gamma_{2}}f^{ \prime}(\delta)\Psi_{3}(f(\delta))2^{\gamma_{3}-1}\delta^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}\] \[\quad+2^{\gamma_{2}-1}\delta^{\gamma_{2}-1}\ \zeta_{2}^{\gamma_{2}} \Biggr{)}+\biggl{(}\delta\ \zeta_{1}f^{\prime}(\delta)\Psi_{1}(f(\delta))+2^{\frac{\gamma_{2}-1}{\gamma_{2}} }\delta\ \zeta_{1}f^{\prime}(\delta)\Psi_{2}(f(\delta))+\delta^{-1}\Biggr{)} \mathfrak{w}(\delta)\] \[\quad+\frac{1}{\gamma_{2}}f^{\prime}(\delta)\Psi_{3}(f(\delta))2^ {\frac{\gamma_{3}-\gamma_{2}}{\gamma_{2}}}\mathfrak{w}^{\gamma_{3}-\gamma_{2}+1}( \delta). \tag{2.8}\] Suppose \(\mathfrak{z}(\delta)=\mathfrak{w}^{\gamma_{2}-\gamma_{3}}(\delta)\), thereby, \(\mathfrak{z}(0)=0\) and \(\mathfrak{w}^{\prime}(\delta)=\frac{1}{\gamma_{2}-\gamma_{3}}\mathfrak{z}^{ \prime}(\delta)\mathfrak{w}^{\gamma_{3}-\gamma_{2}+1}(\delta)\). By inserting it in the inequality (2.8) and dividing the entire resulting inequality by \(\mathfrak{w}^{\gamma_{3}-\gamma_{2}+1}(\delta)\), we get \[\mathfrak{z}^{\prime}(\delta) \leq(\gamma_{2}-\gamma_{3})\Bigg{(}2^{\frac{\gamma_{2}-1}{\gamma_ {2}}}\delta\ \zeta_{1}a^{\prime}(\delta)+2^{\gamma_{2}-1}\delta^{\gamma_{2}-1}\ \zeta_{2}^{\gamma_{2}}+\frac{1}{\gamma_{2}}f^{\prime}(\delta)\Psi_{3}(f( \delta))2^{\gamma_{3}-1}\delta^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}\] \[\quad+\delta\ \zeta_{1}f^{\prime}(\delta)\Psi_{1}(f(\delta))+2^{ \frac{\gamma_{2}-1}{\gamma_{2}}}\delta^{2}\ \zeta_{1}\zeta_{2}f^{\prime}(\delta)\Psi_{1}(f(\delta))+2^{\frac{\gamma_{2}-1} {\gamma_{2}}}\delta\ \zeta_{1}f^{\prime}(\delta)\Psi_{2}(f(\delta))+\delta^{-1}\Bigg{)} \mathfrak{z}(\delta)\] \[\quad+\frac{\gamma_{2}-\gamma_{3}}{\gamma_{2}}f^{\prime}(\delta) \Psi_{3}(f(\delta))2^{\frac{\gamma_{3}-\gamma_{2}}{\gamma_{2}}}. 
\tag{2.9}\] Integrating inequality (2.9), we obtain \[\mathfrak{z}(\delta)\leq\frac{\gamma_{2}-\gamma_{3}}{\gamma_{2}}\int_{0}^{f(\delta) }\Psi_{3}(\xi)2^{\frac{\gamma_{3}-\gamma_{2}}{\gamma_{2}}}\exp\Bigg{(}(\gamma _{2}-\gamma_{3})\int_{\xi}^{f(\delta)}\Bigg{(}2^{\frac{\gamma_{2}-1}{\gamma_{2 }}}f^{-1}(\theta)\ \zeta_{1}a^{\prime}(f^{-1}(\theta))+2^{\gamma_{2}-1}(f^{-1}(\theta))^{\gamma_{2}-1}\ \zeta_{2}^{\gamma_{2}}+\frac{1}{\gamma_{2}}\Psi_{3}(\theta)2^{\gamma_{3}-1}(f^ {-1}(\theta))^{\gamma_{3}}\ \zeta_{2}^{\gamma_{3}}+f^{-1}(\theta)\ \zeta_{1}\Psi_{1}(\theta)+2^{\frac{\gamma_{2}-1}{\gamma_{2}}}(f^{-1}(\theta))^{2}\ \zeta_{1}\zeta_{2}\Psi_{1}(\theta)+2^{\frac{ \gamma_{2}-1}{\gamma_{2}}}f^{-1}(\theta)\ \zeta_{1}\Psi_{2}(\theta)+\frac{1}{f^{-1}(\theta)}\Bigg{)}d\theta\Bigg{)}d\xi. \tag{2.10}\] Combining this with \(\mathfrak{z}(\delta)=\mathfrak{w}^{\gamma_{2}-\gamma_{3}}(\delta)\) and \(\mathfrak{u}(\delta)\leq 2^{\frac{1-\gamma_{2}}{\gamma_{2}}}\mathfrak{w}( \delta)+\delta\ \zeta_{2}\), we achieve the bound as stated in (2.2). **Theorem 2.4**.: _If \(\mathfrak{u},\mathfrak{u}^{\prime},\Psi_{1},\Psi_{2},\Psi_{3}\in\mathrm{Cf}_{ \mathbb{R}_{+}}\) and \(a,f\in\mathrm{Cdf}_{\mathbb{R}_{+}}\) are nondecreasing in nature wherein \(a(\delta)\geq 1,f(\delta)\leq\delta\ (\delta\in\mathbb{R}_{+}),\mathfrak{u}(0)=0\) are such that_ \[(\mathfrak{u}^{\prime}(\delta))^{\gamma_{1}}\leq a(\delta)+\int \limits_{0}^{f(\delta)}\Psi_{1}(\theta)\mathfrak{u}(\theta)d\theta+\int \limits_{0}^{f(\delta)}\Psi_{2}(\theta)\left((\mathfrak{u}^{\prime}(\theta))^{ \gamma_{2}}+\int\limits_{0}^{\theta}\Psi_{3}(\xi)\mathfrak{u}(\xi)d\xi \right)^{\frac{1}{\gamma_{3}}}d\theta \tag{2.11}\] _for \(\delta,\gamma_{1},\gamma_{2},\gamma_{3}\in\mathbb{R}_{+},\) with \(\gamma_{1}\geq\gamma_{2}\geq 1,\gamma_{3}\geq 1\), then_ \[\mathfrak{u}(\delta)\leq\delta\ \zeta_{2}+\frac{\zeta_{1}}{\zeta_{3}}\ \delta\Bigg{(}(\zeta_{3}\ a(0)+\zeta_{4})\exp\Bigg{(}\int_{0}^{f(\delta)}(f^{-1} \theta)\ \zeta_{1}\ \Psi_{1}(\theta)+\zeta_{3}\ \zeta_{5}\ \Psi_{2}(\theta)+\frac{\zeta_{1}}{\zeta_{3}}\ (f^{-1} \theta)\Psi_{3}(\theta)d\theta\Bigg{)}+\int_{0}^{f(\delta)}\big{(}\zeta_{3}a^{\prime}(f^{-1}(\xi) )+\zeta_{2}\zeta_{3}\Psi_{1}(\xi)(f^{-1}(\xi))+\zeta_{6}\ \Psi_{2}(\xi)+(f^{-1}\xi)\ \zeta_{2}\ \Psi_{3}(\xi)\big{)}\times\exp\Bigg{(}\int_{\xi}^{f(\delta)}(f^{-1} \theta)\ \zeta_{1}\ \Psi_{1}(\theta)+\zeta_{3}\ \zeta_{5}\ \Psi_{2}(\theta)+\frac{\zeta_{1}}{\zeta_{3}}\ (f^{-1} \theta)\Psi_{3}(\theta)d\theta\Bigg{)}\,d\xi\Bigg{)}, \tag{2.12}\] _where \(\zeta_{1}=\frac{1}{\gamma_{1}}\kappa^{\frac{1-\gamma_{1}}{\gamma_{1}}},\zeta_ {2}=\frac{\gamma_{1}-1}{\gamma_{1}}\kappa^{\frac{1}{\gamma_{1}}},\zeta_{3}=\frac {\gamma_{2}}{\gamma_{1}}\kappa^{\frac{\gamma_{2}-\gamma_{1}}{\gamma_{1}}},\zeta_{4}= \frac{\gamma_{1}-\gamma_{2}}{\gamma_{1}}\kappa^{\frac{\gamma_{2}}{\gamma_{1}}}, \zeta_{5}=\frac{1}{\gamma_{3}}\kappa^{\frac{1-\gamma_{3}}{\gamma_{3}}},\zeta_{6}= \frac{\gamma_{3}-1}{\gamma_{1}}\kappa^{\frac{1}{\gamma_{3}}}\) (\(\kappa>0\))._ Proof.: If the right-hand side of inequality (2.11) is substituted as \(\mathfrak{v}(\delta)\) then \(\mathfrak{v}(0)=a(0)\) and thus from Lemma 2.2 \[\mathfrak{u}^{\prime}(\delta)\leq\mathfrak{v}^{\frac{1}{\gamma_{1}}}(\delta) \leq\zeta_{1}\mathfrak{v}(\delta)+\zeta_{2}. \tag{2.13}\] However, the nondecreasing nature of \(\mathfrak{v}(\delta)\geq 0\) gives \[\mathfrak{u}(\delta)\leq\delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2}. \tag{2.14}\]
Using (2.14), we have \[\mathfrak{v}^{\prime}(\delta) =a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))\mathfrak{ u}(f(\delta))+f^{\prime}(\delta)\Psi_{2}(f(\delta))\left((\mathfrak{u}^{\prime}(f( \delta)))^{\gamma_{2}}+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)\mathfrak{u}( \xi)d\xi\right)^{\frac{1}{\gamma_{3}}}\] \[\leq a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))( \delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2})+f^{\prime}(\delta)\Psi_{2}(f(\delta))\times\left(\mathfrak{v}^{\frac{\gamma_{2}}{\gamma_{1}}}(\delta) +\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)(\xi\ \zeta_{1}\mathfrak{v}(\xi)+\xi\ \zeta_{2})d\xi\right)^{\frac{1}{\gamma_{3}}}\] \[\leq a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta)) \delta\ \zeta_{1}\mathfrak{v}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))\delta\ \zeta_{2}+f^{\prime}(\delta)\Psi_{2}(f(\delta))\mathfrak{z}^{\frac{1}{\gamma_{3}}}(\delta), \tag{2.15}\] where \(\zeta_{3}=\frac{\gamma_{2}}{\gamma_{1}}\kappa^{\frac{\gamma_{2}-\gamma_{1}}{ \gamma_{1}}},\zeta_{4}=\frac{\gamma_{1}-\gamma_{2}}{\gamma_{1}}\kappa^{\frac{ \gamma_{2}}{\gamma_{1}}}\) (\(\kappa>0\)) and \[\mathfrak{z}(\delta)=\zeta_{3}\mathfrak{v}(\delta)+\zeta_{4}+\int\limits_{0}^{ f(\delta)}\Psi_{3}(\xi)(\xi\ \zeta_{1}\mathfrak{v}(\xi)+\xi\ \zeta_{2})d\xi. \tag{2.16}\] Because \(\mathfrak{u}(\delta)\leq\delta\ \zeta_{1}\mathfrak{v}(\delta)+\delta\ \zeta_{2}\leq\delta\ \frac{\zeta_{1}}{\zeta_{3}}\mathfrak{z}(\delta)+\delta\ \zeta_{2}\) according to (2.16) and also \(\mathfrak{z}^{\frac{1}{\gamma_{3}}}(\delta)\leq\zeta_{5}\ \mathfrak{z}(\delta)+\zeta_{6}\), thus \[\mathfrak{z}^{\prime}(\delta) =\zeta_{3}\mathfrak{v}^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{ 3}(f(\delta))(f(\delta)\ \zeta_{1}\mathfrak{v}(f(\delta))+f(\delta)\ \zeta_{2})\] \[\leq\zeta_{3}\Big{(}a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))\delta\ \zeta_{1}\mathfrak{v}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))\delta\ \zeta_{2}+f^{\prime}(\delta)\Psi_{2}(f(\delta))\mathfrak{z}^{\frac{1}{\gamma_{3}}}(\delta)\Big{)}\qquad+f^{\prime}(\delta)\Psi_{3}(f(\delta))\delta\ \zeta_{1} \mathfrak{v}(\delta)+f^{\prime}(\delta)\Psi_{3}(f(\delta))\delta\ \zeta_{2}\] \[=\Big{(}\zeta_{3}a^{\prime}(\delta)+\zeta_{2}\zeta_{3}f^{\prime}( \delta)\Psi_{1}(f(\delta))\delta+\zeta_{6}\ f^{\prime}(\delta)\Psi_{2}(f(\delta ))+\delta\ \zeta_{2}\ f^{\prime}(\delta)\Psi_{3}(f(\delta))\Big{)}\qquad+\Big{(}\delta\ \zeta_{1}\ f^{\prime}(\delta)\Psi_{1}(f( \delta))+\zeta_{3}\zeta_{5}\ f^{\prime}(\delta)\Psi_{2}(f(\delta))+\frac{\zeta _{1}}{\zeta_{3}}\ \delta\ f^{\prime}(\delta)\Psi_{3}(f(\delta))\Big{)}\mathfrak{z}(\delta). \tag{2.17}\] where \(\zeta_{5}=\frac{1}{\gamma_{3}}\kappa^{\frac{1-\gamma_{3}}{\gamma_{3}}},\zeta_ {6}=\frac{\gamma_{3}-1}{\gamma_{1}}\kappa^{\frac{1}{\gamma_{3}}}\), for any \(\kappa>0\). Integrating inequality (2.17) from \(0\) to \(\delta\), consequently, \[\mathfrak{z}(\delta) \leq(\zeta_{3}\ a(0)+\zeta_{4})\exp\left(\int_{0}^{f(\delta)}(f^{-1} \theta)\ \zeta_{1}\ \Psi_{1}(\theta)+\zeta_{3}\ \zeta_{5}\ \Psi_{2}(\theta)+\frac{\zeta_{1}}{\zeta_{3}}\ (f^{-1} \theta)\Psi_{3}(\theta)d\theta\right)\qquad+\int_{0}^{f(\delta)}\big{(}\zeta_{3}a^{\prime}(f^{-1} \xi)+\zeta_{2}\zeta_{3}\Psi_{1}(\xi)(f^{-1}\xi)+\zeta_{6}\ \Psi_{2}(\xi)+(f^{-1}\xi)\ \zeta_{2}\ \Psi_{3}(\xi)\big{)}\qquad\times\exp\left(\int_{\xi}^{f(\delta)}(f^{-1}\theta)\ \zeta_{1}\ \Psi_{1}(\theta)+\zeta_{3}\ \zeta_{5}\ \Psi_{2}(\theta)+\frac{\zeta_{1}}{\zeta_{3}}\ (f^{-1} \theta)\Psi_{3}(\theta)d\theta\right)d\xi. \tag{2.18}\]
Combining the bound obtained on \(\mathfrak{z}(\delta)\) in (2.18) with (2.14) and using \(\mathfrak{v}(\delta)\leq\frac{1}{\zeta_{3}}\ \mathfrak{z}(\delta)\), we arrive at the bound in (2.12). **Remark 2.1**.: The integro-differential inequality of A. Shakoor et al. [2] can be produced by allowing \(\gamma_{1}=\gamma_{2}=1\) and \(\gamma_{3}=p\). **Theorem 2.5**.: _If \(\mathfrak{u},\mathfrak{u}^{\prime},\Psi_{1},\Psi_{2},\Psi_{3}\in\mathrm{Cf}_{ \mathbb{R}_{+}}\) and \(a,f\in\mathrm{Cdf}_{\mathbb{R}_{+}}\) are nondecreasing in nature wherein \(a(\delta)\geq 1,f(\delta)\leq\delta\)\((\delta\in\mathbb{R}_{+})\) are such that_ \[\mathfrak{u}^{\gamma_{1}}(\delta)\leq\left(a(\delta)+\int\limits_{0}^{f( \delta)}\Psi_{1}(\theta)\mathfrak{u}(\theta)d\theta+\int\limits_{0}^{f(\delta )}\Psi_{2}(\theta)\left(\mathfrak{u}^{\gamma_{2}}(\theta)+\int\limits_{0}^{ \theta}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1}{\gamma_ {2}}}d\theta\right)^{\gamma_{4}} \tag{2.19}\] _for \(\delta,\gamma_{1},\gamma_{2},\gamma_{3},\gamma_{4}\in\mathbb{R}_{+}\) with \(\gamma_{1}\geq\gamma_{4}>0,\gamma_{2}>\gamma_{3}\geq 0\), then_ \[\mathfrak{u}(\delta)\leq\left\{(\zeta_{7}+\zeta_{8}a(0))^{\gamma_{2}-\gamma_{3}} \exp\left(\int_{0}^{f(\delta)}\zeta_{8}[\gamma_{2}-\gamma_{3}]\Big{(}a^{ \prime}(f^{-1}\theta)+\Psi_{1}(\theta)+\Psi_{2}(\theta)\Big{)}d\theta\right)+\int_{0}^{f(\delta)}\frac{\gamma_{2}-\gamma_{3}}{\gamma_{2 }}\Psi_{3}(\xi)\exp\left(\int_{\xi}^{f(\delta)}\zeta_{8}[\gamma_{2}-\gamma_{3}] \Big{(}a^{\prime}(f^{-1}\theta)+\Psi_{1}(\theta)+\Psi_{2}(\theta)\Big{)}d \theta\right)d\xi\right\}^{\frac{1}{\gamma_{2}-\gamma_{3}}}, \tag{2.20}\] _where \(\zeta_{7}=\frac{\gamma_{1}-\gamma_{4}}{\gamma_{1}}\kappa^{\frac{\gamma_{4}}{ \gamma_{1}}}\) and \(\zeta_{8}=\frac{\gamma_{4}}{\gamma_{1}}\kappa^{\frac{\gamma_{4}-\gamma_{1}}{ \gamma_{1}}}\)\((\kappa>0)\)._ Proof.: The inequality (2.19) can be rephrased in the form \[\mathfrak{u}(\delta) \leq\left(a(\delta)+\int\limits_{0}^{f(\delta)}\Psi_{1}(\theta) \mathfrak{u}(\theta)d\theta+\int\limits_{0}^{f(\delta)}\Psi_{2}(\theta)\left( \mathfrak{u}^{\gamma_{2}}(\theta)+\int\limits_{0}^{\theta}\Psi_{3}(\xi) \mathfrak{u}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1}{\gamma_{2}}}d\theta \right)^{\frac{\gamma_{4}}{\gamma_{1}}}\leq\zeta_{7}+\zeta_{8}a(\delta)+\int\limits_{0}^{f(\delta)}\zeta _{8}\Psi_{1}(\theta)\mathfrak{u}(\theta)d\theta+\int\limits_{0}^{f(\delta)} \zeta_{8}\Psi_{2}(\theta)\left(\mathfrak{u}^{\gamma_{2}}(\theta)+\int\limits _{0}^{\theta}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1} {\gamma_{2}}}d\theta, \tag{2.21}\] where \(\zeta_{7}=\frac{\gamma_{1}-\gamma_{4}}{\gamma_{1}}\kappa^{\frac{\gamma_{4}}{ \gamma_{1}}}\) and \(\zeta_{8}=\frac{\gamma_{4}}{\gamma_{1}}\kappa^{\frac{\gamma_{4}-\gamma_{1}}{ \gamma_{1}}}\)\((\kappa>0)\). If \(\mathfrak{v}(\delta)\) denotes the right-hand side of inequality (2.21), then \(\mathfrak{u}(\delta)\leq\mathfrak{v}(\delta)\) with \(\mathfrak{v}(0)=\zeta_{7}+\zeta_{8}a(0)\), and thus \(\mathfrak{u}(f(\delta))\leq\mathfrak{v}(f(\delta))\leq\mathfrak{v}(\delta)\) due to the nondecreasing nature of \(\mathfrak{v}(\delta)\).
Further, \[\mathfrak{v}^{\prime}(\delta) =\zeta_{8}a^{\prime}(\delta)+\zeta_{8}f^{\prime}(\delta)\Psi_{1}(f(\delta))\mathfrak{u}(f(\delta))+\zeta_{8}f^{\prime}(\delta)\Psi_{2}(f(\delta))\left(\mathfrak{u}^{\gamma_{2}}(f(\delta))+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1}{\gamma_{2}}}\] \[\leq\zeta_{8}a^{\prime}(\delta)+\zeta_{8}f^{\prime}(\delta)\Psi_{1}(f(\delta))\mathfrak{v}(\delta)+\zeta_{8}f^{\prime}(\delta)\Psi_{2}(f(\delta))\left(\mathfrak{v}^{\gamma_{2}}(\delta)+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)\mathfrak{v}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1}{\gamma_{2}}}\] \[\leq\zeta_{8}a^{\prime}(\delta)+\zeta_{8}f^{\prime}(\delta)\Psi_{1}(f(\delta))\mathfrak{v}(\delta)+\zeta_{8}f^{\prime}(\delta)\Psi_{2}(f(\delta))\mathfrak{w}(\delta), \tag{2.22}\] where \[\mathfrak{w}(\delta)=\left(\mathfrak{v}^{\gamma_{2}}(\delta)+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)\mathfrak{v}^{\gamma_{3}}(\xi)d\xi\right)^{\frac{1}{\gamma_{2}}}\ \ \text{i.e.}\ \mathfrak{w}^{\gamma_{2}}(\delta)=\mathfrak{v}^{\gamma_{2}}(\delta)+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)\mathfrak{v}^{\gamma_{3}}(\xi)d\xi. \tag{2.23}\] The equation (2.23) provides that \(\mathfrak{w}(0)=\mathfrak{v}(0)=\zeta_{7}+\zeta_{8}a(0),\mathfrak{v}(\delta)\leq\mathfrak{w}(\delta)\), and therein \[\gamma_{2}\mathfrak{w}^{\gamma_{2}-1}(\delta)\mathfrak{w}^{\prime}(\delta)=\gamma_{2}\mathfrak{v}^{\gamma_{2}-1}(\delta)\mathfrak{v}^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{3}(f(\delta))\mathfrak{v}^{\gamma_{3}}(f(\delta)).\] Setting \(\mathfrak{z}(\delta)=\mathfrak{w}^{\gamma_{2}-\gamma_{3}}(\delta)\) and combining the last identity with (2.22), \(\mathfrak{v}(\delta)\leq\mathfrak{w}(\delta)\) and \(a(\delta)\geq 1\), we obtain \[\mathfrak{z}^{\prime}(\delta)\leq\zeta_{8}[\gamma_{2}-\gamma_{3}]\Big{(}a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))+f^{\prime}(\delta)\Psi_{2}(f(\delta))\Big{)}\mathfrak{z}(\delta)+\frac{\gamma_{2}-\gamma_{3}}{\gamma_{2}}f^{\prime}(\delta)\Psi_{3}(f(\delta)). \tag{2.28}\] Integrating inequality (2.28) from \(0\) to \(\delta\), we find that \[\mathfrak{z}(\delta)\leq(\zeta_{7}+\zeta_{8}a(0))^{\gamma_{2}-\gamma_{3}}\exp\left(\int_{0}^{f(\delta)}\zeta_{8}[\gamma_{2}-\gamma_{3}]\Big{(}a^{\prime}(f^{-1}\sigma)+\Psi_{1}(\sigma)+\Psi_{2}(\sigma)\Big{)}d\sigma\right)\\ +\int_{0}^{f(\delta)}\frac{\gamma_{2}-\gamma_{3}}{\gamma_{2}}\Psi_{3}(\lambda)\exp\left(\int_{\lambda}^{f(\delta)}\zeta_{8}[\gamma_{2}-\gamma_{3}]\Big{(}a^{\prime}(f^{-1}\sigma)+\Psi_{1}(\sigma)+\Psi_{2}(\sigma)\Big{)}d\sigma\right)d\lambda. \tag{2.29}\] Thus from (2.29), \(\mathfrak{w}(\delta)\geq\mathfrak{v}(\delta)\geq\mathfrak{u}(\delta)\), and using the definition of \(\mathfrak{z}(\delta)\), we achieve the bound as stated in (2.20).

**Remark 2.2**.: Through alteration of the initial assumptions of Theorem 2.5, we come up with the following widely recognized inequalities. 1. If we set \(\gamma_{1}=1=\gamma_{4}\), we retrieve the most recent nonlinear retarded integral inequality developed by A Shakoor et al. (Theorem 2.1 [2]). 2. The renowned inequality of Gronwall and Bellman [5] can be acquired if we consider \(a(\delta)=c\) for some \(c\in\mathbb{R}_{+},\Psi_{2}(\delta)=0,\gamma_{1}=1=\gamma_{4}\) and \(f(\delta)=\delta\). 3. If we set up the assumptions as \(\gamma_{1}=1=\gamma_{4},a(\delta)=c\in\mathbb{R}_{+},\Psi_{1}(\delta)=0\), and \(f(\delta)=\delta\), then the inequality proved above reduces to Theorem 2.3 [6].
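For reference, all of the constants \(\zeta_{1},\ldots,\zeta_{10}\) used in these theorems arise from one elementary linearization step (cf. Lemma 2.2): assuming \(0<\eta\leq 1\), \(\kappa>0\) and \(x\geq 0\), the weighted arithmetic-geometric mean inequality gives \[x^{\eta}\leq\eta\,\kappa^{\eta-1}\,x+(1-\eta)\,\kappa^{\eta}.\] Taking \(\eta=\frac{\gamma_{4}}{\gamma_{1}}\) yields \(x^{\frac{\gamma_{4}}{\gamma_{1}}}\leq\zeta_{8}\,x+\zeta_{7}\), which is the step used in (2.21). In particular, for \(\gamma_{1}=\gamma_{4}\) one gets \(\zeta_{7}=0\) and \(\zeta_{8}=1\), which is why item (1) of Remark 2.2 recovers the inequality of A Shakoor et al. without residual constants.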
**Theorem 2.6**.: _Consider \(\mathfrak{u},\Psi_{1},\Psi_{2},\Psi_{3},\Psi_{4},\Psi_{5},\Psi_{6}\in\mathrm{Cf}_{\mathbb{R}_{+}}\), and let \(a,f\in\mathrm{Cdf}_{\mathbb{R}_{+}}\) be nondecreasing with \(a(\delta)\geq 1\) and \(f(\delta)\leq\delta\) \((\delta\in\mathbb{R}_{+})\) such that_ \[\mathfrak{u}^{\gamma_{1}}(\delta)\leq a(\delta)+\int\limits_{0}^{f(\delta)}(\Psi_{1}(\theta)\mathfrak{u}(\theta)+\Psi_{2}(\theta))d\theta+\int\limits_{0}^{f(\delta)}\Biggl{\{}\Psi_{3}(\theta)\Biggl{(}\mathfrak{u}^{\gamma_{1}}(\theta)+\int\limits_{0}^{\theta}(\Psi_{4}(\xi)\mathfrak{u}^{\gamma_{2}}(\xi)+\Psi_{5}(\xi))d\xi\Biggr{)}^{\frac{1}{\gamma_{1}}}+\Psi_{6}(\theta)\Biggr{\}}d\theta \tag{2.30}\] _for \(\delta,\gamma_{1},\gamma_{2}\in\mathbb{R}_{+}\) with \(\gamma_{1}\geq\gamma_{2}\geq 1\), then_ \[\mathfrak{u}(\delta)\leq\Biggl{\{}a(0)\exp\Biggl{(}\int_{0}^{f(\delta)}\zeta_{1}\ \Psi_{1}(\theta)+\zeta_{1}\ \Psi_{3}(\theta)+\zeta_{3}\ \Psi_{4}(\theta)d\theta\Biggr{)}+\int_{0}^{f(\delta)}\Big{(}a^{\prime}(f^{-1}\xi)+\zeta_{2}\ \Psi_{1}(\xi)+\Psi_{2}(\xi)+\zeta_{2}\ \Psi_{3}(\xi)+\Psi_{6}(\xi)+\zeta_{4}\Psi_{4}(\xi)+\Psi_{5}(\xi)\Big{)}\times\exp\Biggl{(}\int_{\xi}^{f(\delta)}\zeta_{1}\ \Psi_{1}(\theta)+\zeta_{1}\ \Psi_{3}(\theta)+\zeta_{3}\ \Psi_{4}(\theta)d\theta\Biggr{)}d\xi\Biggr{\}}^{\frac{1}{\gamma_{1}}}, \tag{2.31}\] _where \(\zeta_{1},\zeta_{2},\zeta_{3},\zeta_{4}\) are as in Theorem 2.4._

Proof.: If the right-hand side of inequality (2.30) is denoted by \(\mathfrak{v}(\delta)\), then \(\mathfrak{u}^{\gamma_{1}}(\delta)\leq\mathfrak{v}(\delta),\ \mathfrak{v}(0)=a(0)\), and so from Lemma 2.2, \[\mathfrak{v}^{\prime}(\delta)=a^{\prime}(\delta)+f^{\prime}(\delta)(\Psi_{1}(f(\delta))\mathfrak{u}(f(\delta))+\Psi_{2}(f(\delta)))+f^{\prime}(\delta)\Biggl{\{}\Psi_{3}(f(\delta))\] \[\qquad\qquad\qquad\Biggl{(}\mathfrak{u}^{\gamma_{1}}(f(\delta))+\int\limits_{0}^{f(\delta)}(\Psi_{4}(\xi)\mathfrak{u}^{\gamma_{2}}(\xi)+\Psi_{5}(\xi))d\xi\Biggr{)}^{\frac{1}{\gamma_{1}}}+\Psi_{6}(f(\delta))\Biggr{\}}\] \[\leq a^{\prime}(\delta)+f^{\prime}(\delta)(\Psi_{1}(f(\delta))\mathfrak{v}^{\frac{1}{\gamma_{1}}}(\delta)+\Psi_{2}(f(\delta)))+f^{\prime}(\delta)\Biggl{\{}\Psi_{3}(f(\delta))\] \[\qquad\qquad\qquad\Biggl{(}\mathfrak{v}(\delta)+\int\limits_{0}^{f(\delta)}(\Psi_{4}(\xi)\mathfrak{v}^{\frac{\gamma_{2}}{\gamma_{1}}}(\xi)+\Psi_{5}(\xi))d\xi\Biggr{)}^{\frac{1}{\gamma_{1}}}+\Psi_{6}(f(\delta))\Biggr{\}}\] \[\leq a^{\prime}(\delta)+f^{\prime}(\delta)(\Psi_{1}(f(\delta))(\zeta_{1}\ \mathfrak{v}(\delta)+\zeta_{2})+\Psi_{2}(f(\delta)))+f^{\prime}(\delta)\Bigl{(}\Psi_{3}(f(\delta))\] \[\qquad\qquad\qquad\times(\zeta_{1}\ \mathfrak{w}(\delta)+\zeta_{2})+\Psi_{6}(f(\delta))\Bigr{)}, \tag{2.32}\] where \[\mathfrak{w}(\delta)=\mathfrak{v}(\delta)+\int\limits_{0}^{f(\delta)}(\Psi_{4}(\xi)\mathfrak{v}^{\frac{\gamma_{2}}{\gamma_{1}}}(\xi)+\Psi_{5}(\xi))d\xi,\quad\mathfrak{w}(0)=a(0),\text{ and }\quad\mathfrak{v}(\delta)\leq\mathfrak{w}(\delta).
\tag{2.33}\] On differentiating \(\mathfrak{w}(\delta)\) and using (2.33), we find that \[\mathfrak{w}^{\prime}(\delta) =\mathfrak{v}^{\prime}(\delta)+f^{\prime}(\delta)(\Psi_{4}(f(\delta))\mathfrak{v}^{\frac{\gamma_{2}}{\gamma_{1}}}(f(\delta))+\Psi_{5}(f(\delta)))\] \[\leq a^{\prime}(\delta)+f^{\prime}(\delta)(\Psi_{1}(f(\delta))(\zeta_{1}\ \mathfrak{v}(\delta)+\zeta_{2})+\Psi_{2}(f(\delta)))+f^{\prime}(\delta)\Big{(}\Psi_{3}(f(\delta))(\zeta_{1}\ \mathfrak{w}(\delta)+\zeta_{2})\] \[\qquad+\Psi_{6}(f(\delta))\Big{)}+f^{\prime}(\delta)(\Psi_{4}(f(\delta))(\zeta_{3}\mathfrak{v}(\delta)+\zeta_{4})+\Psi_{5}(f(\delta)))\] \[=\Big{(}\zeta_{1}\ f^{\prime}(\delta)\Psi_{1}(f(\delta))+\zeta_{1}\ f^{\prime}(\delta)\Psi_{3}(f(\delta))+\zeta_{3}\ f^{\prime}(\delta)\Psi_{4}(f(\delta))\Big{)}\mathfrak{w}(\delta)\] \[\qquad\qquad+\Big{(}a^{\prime}(\delta)+\zeta_{2}\ f^{\prime}(\delta)\Psi_{1}(f(\delta))+f^{\prime}(\delta)\Psi_{2}(f(\delta))+\zeta_{2}\ f^{\prime}(\delta)\Psi_{3}(f(\delta))\] \[\qquad\qquad+f^{\prime}(\delta)\Psi_{6}(f(\delta))+\zeta_{4}\ f^{\prime}(\delta)\Psi_{4}(f(\delta))+f^{\prime}(\delta)\Psi_{5}(f(\delta))\Big{)}. \tag{2.34}\] Integrating inequality (2.34) from \(0\) to \(\delta\), we achieve that \[\mathfrak{w}(\delta) \leq a(0)\exp\Bigg{(}\int\limits_{0}^{f(\delta)}\zeta_{1}\ \Psi_{1}(\theta)+\zeta_{1}\ \Psi_{3}(\theta)+\zeta_{3}\ \Psi_{4}(\theta)d\theta\Bigg{)}+\int\limits_{0}^{f(\delta)}\Big{(}a^{\prime}(f^{-1}\xi)+\zeta_{2}\ \Psi_{1}(\xi)+\Psi_{2}(\xi)\] \[\quad+\zeta_{2}\ \Psi_{3}(\xi)+\Psi_{6}(\xi)+\zeta_{4}\Psi_{4}(\xi)+\Psi_{5}(\xi)\Big{)}\times\exp\Bigg{(}\int\limits_{\xi}^{f(\delta)}\zeta_{1}\ \Psi_{1}(\theta)+\zeta_{1}\ \Psi_{3}(\theta)+\zeta_{3}\ \Psi_{4}(\theta)d\theta\Bigg{)}d\xi. \tag{2.35}\] Using \(\mathfrak{u}^{\gamma_{1}}(\delta)\leq\mathfrak{v}(\delta)\), (2.33) and (2.35), we find the estimate as stated in (2.31).

**Remark 2.3**.: We note that this result reduces to certain recent and well-known integral inequalities under an appropriate set of assumptions, as below: 1. If we insert \(\Psi_{2}(\delta)=\Psi_{5}(\delta)=\Psi_{6}(\delta)=0\), then this inequality is reduced to Theorem 2.4 [2]. 2. One can achieve the well-known inequality due to Gronwall and Bellman [5] from Theorem 2.6 if it is assumed that \(a(\delta)=c\) for some \(c\in\mathbb{R}_{+},\Psi_{2}(\delta)=\Psi_{3}(\delta)=0,\gamma_{1}=1\) and \(f(\delta)=\delta\). 3. When we set \(a(\delta)=c\ (c\in\mathbb{R}_{+}),\Psi_{1}(\delta)=0,\Psi_{2}(\delta)=0,\Psi_{5}(\delta)=0,\Psi_{6}(\delta)=0\), and \(f(\delta)=\delta\), the inequality established and proven in Theorem 2.6 changes into the inequality shown in Theorem 2.3 [6]. 4. Substituting \(\Psi_{1}(\delta)=0,\Psi_{2}(\delta)=0,\Psi_{5}(\delta)=0,\Psi_{6}(\delta)=0,a(\delta)=c,f(\delta)=\delta,\gamma_{1}=\gamma_{2}=1\) in the previous inequality yields the same form as the inequality defined in Pachpatte's Theorem 1.3.
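Each of the final estimates above is produced by the same comparison step. For completeness, the integration that takes (2.34) to (2.35) (and, likewise, (2.17) to (2.18) and (2.28) to (2.29)) is the standard linear Gronwall argument: if \(\mathfrak{w}^{\prime}(\delta)\leq A(\delta)\mathfrak{w}(\delta)+B(\delta)\) with \(A,B\) nonnegative and continuous, then \[\mathfrak{w}(\delta)\leq\mathfrak{w}(0)\exp\left(\int_{0}^{\delta}A(s)ds\right)+\int_{0}^{\delta}B(s)\exp\left(\int_{s}^{\delta}A(\sigma)d\sigma\right)ds,\] after which the change of variables \(\theta=f(s)\) rewrites the integrals over \([0,f(\delta)]\), producing the \(f^{-1}\) terms in the bounds.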
**Theorem 2.7**.: _Consider \(\mathfrak{u},\Psi_{1},\Psi_{2},\Psi_{3}\in\mathrm{Cf}_{\mathbb{R}_{+}}\), and let \(a,f,\Phi\in\mathrm{Cdf}_{\mathbb{R}_{+}}\) be nondecreasing with \(a(\delta)\geq 1,\Phi(\delta)\geq 1,f(\delta)\leq\delta\) \((\delta\in\mathbb{R}_{+})\) such that_ \[\mathfrak{u}^{\gamma_{1}}(\delta)\leq\Phi(\delta)\Bigg{[}a(\delta)+\int\limits_{0}^{f(\delta)}\Psi_{1}(\theta)\mathfrak{u}(\theta)d\theta+\int\limits_{0}^{f(\delta)}\Psi_{2}(\theta)\Bigg{(}\mathfrak{u}^{\gamma_{1}}(\theta)+\int\limits_{0}^{\theta}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{2}}(\xi)d\xi\Bigg{)}^{\frac{1}{\gamma_{2}}}d\theta\Bigg{]} \tag{2.36}\] _for \(\delta,\gamma_{1},\gamma_{2}\in\mathbb{R}_{+}\) such that \(\gamma_{1}\geq\gamma_{2}\geq 1\), then_ \[\mathfrak{u}(\delta) \leq\Bigg{\{}\Phi(0)a(0)\exp\Biggl{(}\int_{0}^{f(\delta)}\Big{[}\Phi^{\prime}(f^{-1}(\theta))\Phi^{-1}(f^{-1}(\theta))+\zeta_{1}\ \Phi(f^{-1}(\theta))\Psi_{1}(\theta)+\zeta_{9}\ \Phi(f^{-1}(\theta))\Psi_{2}(\theta)\] \[\quad+\zeta_{3}\ \Psi_{3}(\theta)\Big{]}d\theta\Biggr{)}+\int_{0}^{f(\delta)}\Big{(}\Phi(f^{-1}(\xi))a^{\prime}(f^{-1}(\xi))+\zeta_{2}\ \Phi(f^{-1}(\xi))\Psi_{1}(\xi)+\zeta_{10}\ \Phi(f^{-1}(\xi))\Psi_{2}(\xi)\] \[\quad+\zeta_{4}\ \Psi_{3}(\xi)\Big{)}\times\Biggl{(}\exp\Biggl{(}\int_{\xi}^{f(\delta)}\Big{[}\Phi^{\prime}(f^{-1}(\theta))\Phi^{-1}(f^{-1}(\theta))+\zeta_{1}\ \Phi(f^{-1}(\theta))\Psi_{1}(\theta)\] \[\quad+\zeta_{9}\ \Phi(f^{-1}(\theta))\Psi_{2}(\theta)+\zeta_{3}\ \Psi_{3}(\theta)\Big{]}d\theta\Biggr{)}\Biggr{)}d\xi\Biggr{\}}^{\frac{1}{\gamma_{1}}}, \tag{2.37}\] _where \(\zeta_{1},\zeta_{2},\zeta_{3},\zeta_{4}\) are as in Theorem 2.4 and \(\zeta_{9}=\frac{1}{\gamma_{2}}\kappa^{\frac{1-\gamma_{2}}{\gamma_{2}}},\zeta_{10}=\frac{\gamma_{2}-1}{\gamma_{2}}\kappa^{\frac{1}{\gamma_{2}}}(\kappa>0)\)._

Proof.: We begin by denoting the right-hand side of (2.36) by \(\mathfrak{v}(\delta)\). It follows that \(\mathfrak{u}^{\gamma_{1}}(\delta)\leq\mathfrak{v}(\delta)\) and \(\mathfrak{v}(0)=\Phi(0)a(0)\). Differentiating \(\mathfrak{v}(\delta)\) and applying Lemmas 2.1 and 2.2 leads to \[\mathfrak{v}^{\prime}(\delta) =\Phi^{\prime}(\delta)\Biggl{[}a(\delta)+\int\limits_{0}^{f(\delta)}\Psi_{1}(\theta)\mathfrak{u}(\theta)d\theta+\int\limits_{0}^{f(\delta)}\Psi_{2}(\theta)\Biggl{(}\mathfrak{u}^{\gamma_{1}}(\theta)+\int\limits_{0}^{\theta}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{2}}(\xi)d\xi\Biggr{)}^{\frac{1}{\gamma_{2}}}d\theta\Biggr{]}\] \[\quad+\Phi(\delta)\Biggl{[}a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))\mathfrak{u}(f(\delta))+f^{\prime}(\delta)\Psi_{2}(f(\delta))\Biggl{(}\mathfrak{u}^{\gamma_{1}}(f(\delta))+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)\mathfrak{u}^{\gamma_{2}}(\xi)d\xi\Biggr{)}^{\frac{1}{\gamma_{2}}}\Biggr{]}\] \[\leq\Phi^{\prime}(\delta)\Phi^{-1}(\delta)\mathfrak{v}(\delta)+\Phi(\delta)\Biggl{(}a^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{1}(f(\delta))\mathfrak{v}^{\frac{1}{\gamma_{1}}}(\delta)+f^{\prime}(\delta)\Psi_{2}(f(\delta))\mathfrak{w}^{\frac{1}{\gamma_{2}}}(\delta)\Biggr{)},\] \[\qquad\qquad\text{where }\mathfrak{w}(\delta)=\mathfrak{v}(\delta)+\int\limits_{0}^{f(\delta)}\Psi_{3}(\xi)\mathfrak{v}^{\frac{\gamma_{2}}{\gamma_{1}}}(\xi)d\xi\] \[\leq\Phi^{\prime}(\delta)\Phi^{-1}(\delta)\mathfrak{v}(\delta)+\Phi(\delta)a^{\prime}(\delta)+\Phi(\delta)f^{\prime}(\delta)\Psi_{1}(f(\delta))(\zeta_{1}\ \mathfrak{v}(\delta)+\zeta_{2})\] \[\qquad+\Phi(\delta)f^{\prime}(\delta)\Psi_{2}(f(\delta))(\zeta_{9}\ \mathfrak{w}(\delta)+\zeta_{10}).
\tag{2.38}\] Since \(\mathfrak{v}(\delta)\leq\mathfrak{w}(\delta)\) and \(\mathfrak{w}(0)=\Phi(0)a(0)\), from (2.38), we have \[\mathfrak{w}^{\prime}(\delta) =\mathfrak{v}^{\prime}(\delta)+f^{\prime}(\delta)\Psi_{3}(f(\delta))\mathfrak{v}^{\frac{\gamma_{2}}{\gamma_{1}}}(f(\delta))\] \[\leq\Phi^{\prime}(\delta)\Phi^{-1}(\delta)\mathfrak{w}(\delta)+\Phi(\delta)a^{\prime}(\delta)+\Phi(\delta)f^{\prime}(\delta)\Psi_{1}(f(\delta))(\zeta_{1}\ \mathfrak{w}(\delta)+\zeta_{2})\] \[\qquad+\Phi(\delta)f^{\prime}(\delta)\Psi_{2}(f(\delta))(\zeta_{9}\ \mathfrak{w}(\delta)+\zeta_{10})+f^{\prime}(\delta)\Psi_{3}(f(\delta))(\zeta_{3}\ \mathfrak{w}(\delta)+\zeta_{4})\] \[=\Bigl{(}\Phi^{\prime}(\delta)\Phi^{-1}(\delta)+\zeta_{1}\ \Phi(\delta)f^{\prime}(\delta)\Psi_{1}(f(\delta))+\zeta_{9}\ \Phi(\delta)f^{\prime}(\delta)\Psi_{2}(f(\delta))+\zeta_{3}\ f^{\prime}(\delta)\Psi_{3}(f(\delta))\Bigr{)}\mathfrak{w}(\delta)\] \[+\Bigl{(}\Phi(\delta)a^{\prime}(\delta)+\zeta_{2}\ \Phi(\delta)f^{\prime}(\delta)\Psi_{1}(f(\delta))+\zeta_{10}\ \Phi(\delta)f^{\prime}(\delta)\Psi_{2}(f(\delta))+\zeta_{4}\ f^{\prime}(\delta)\Psi_{3}(f(\delta))\Bigr{)}. \tag{2.39}\] Integrating inequality (2.39) from \(0\) to \(\delta\), we find that \[\mathfrak{w}(\delta) \leq\Phi(0)a(0)\exp\Biggl{(}\int_{0}^{f(\delta)}\Big{[}\Phi^{\prime}(f^{-1}(\theta))\Phi^{-1}(f^{-1}(\theta))+\zeta_{1}\ \Phi(f^{-1}(\theta))\Psi_{1}(\theta)+\zeta_{9}\ \Phi(f^{-1}(\theta))\Psi_{2}(\theta)\] \[\quad+\zeta_{3}\ \Psi_{3}(\theta)\Big{]}d\theta\Biggr{)}+\int_{0}^{f(\delta)}\Big{(}\Phi(f^{-1}(\xi))a^{\prime}(f^{-1}(\xi))+\zeta_{2}\ \Phi(f^{-1}(\xi))\Psi_{1}(\xi)+\zeta_{10}\ \Phi(f^{-1}(\xi))\Psi_{2}(\xi)\] \[\quad+\zeta_{4}\ \Psi_{3}(\xi)\Big{)}\times\Bigg{(}\exp\Biggl{(}\int_{\xi}^{f(\delta)}\Big{[}\Phi^{\prime}(f^{-1}(\theta))\Phi^{-1}(f^{-1}(\theta))+\zeta_{1}\ \Phi(f^{-1}(\theta))\Psi_{1}(\theta)\] \[\quad+\zeta_{9}\ \Phi(f^{-1}(\theta))\Psi_{2}(\theta)+\zeta_{3}\ \Psi_{3}(\theta)\Big{]}d\theta\Biggr{)}\Biggr{)}d\xi. \tag{2.40}\] Thus, using \(\mathfrak{u}^{\gamma_{1}}(\delta)\leq\mathfrak{v}(\delta)\leq\mathfrak{w}(\delta)\), we arrive at the bound as stated in (2.37).

**Remark 2.4**.: Under a suitable set of assumptions, as listed below, we remark that this result simplifies to a few current and well-known integral inequalities. 1. If we set \(\Phi(\delta)=1,\delta\in\mathbb{R}_{+}\), then the above inequality takes the form of the inequality due to A Shakoor et al. [2]. 2. Theorem 2.7 simplifies to the Gronwall-Bellman inequality [5] under the assumptions that \(\Phi(\delta)=1\ (\delta\in\mathbb{R}_{+}),a(\delta)=c\ (c\in\mathbb{R}_{+}),\Psi_{2}(\delta)=0,f(\delta)=\delta\), and \(\gamma_{1}=1\). 3. In particular, Theorem 2.7 reduces to Theorem 2.3 [6] when we choose \(\Phi(\delta)=1\), for \(\delta\in\mathbb{R}_{+}\), \(a(\delta)=c\ (c\in\mathbb{R}_{+}),\Psi_{1}(\delta)=0\), and \(f(\delta)=\delta\). 4. If we specify the following functions and parameters: \(\Phi(\delta)=1\), \(a(\delta)=c\), \(\Psi_{1}(\delta)=0\), \(f(\delta)=\delta\), and set \(\gamma_{1}\) and \(\gamma_{2}\) both to \(1\), then the inequality proven in Theorem 2.7 simplifies to Pachpatte's inequality as noted in Theorem 1.3.

## 3. **Applications**

**Example 3.1**.: Consider the following nonlinear retarded integral inequality: \[\mathfrak{u}^{5}(\delta)\leq\left(\delta+\int\limits_{0}^{\sqrt{\delta}}2\mathfrak{u}(\theta)d\theta+\int\limits_{0}^{\sqrt{\delta}}3\left(\mathfrak{u}^{4}(\theta)+\int\limits_{0}^{\theta}\xi\mathfrak{u}^{3}(\xi)d\xi\right)^{\frac{1}{4}}d\theta\right)^{3}.
\tag{3.1}\] We can observe that the unknown function \(\mathfrak{u}(\delta)\) in (3.1) is as stated in Theorem 2.5 with \(a(\delta)=\delta\) and \(f(\delta)=\sqrt{\delta}\); thus, utilizing Theorem 2.5, \[\mathfrak{u}(\delta)\leq(\zeta_{7}+\zeta_{8}\,a(0))\exp\left(\int_{0}^{\sqrt{\delta}}6\zeta_{8}d\theta\right)+\int_{0}^{\sqrt{\delta}}\frac{\xi}{4}\exp\left(\int_{\xi}^{\sqrt{\delta}}6\zeta_{8}\ d\theta\right)d\xi, \tag{3.2}\] where \(\zeta_{7}=\frac{2}{5}\kappa^{\frac{3}{5}}\) and \(\zeta_{8}=\frac{3}{5}\kappa^{\frac{-2}{5}}\), for any \(\kappa>0\). If we let \(\kappa=1\), then, since \(a(0)=0\), \[\mathfrak{u}(\delta) \leq\Big{(}\frac{2}{5}+\frac{3}{5}\cdot 0\Big{)}\exp\left(\int_{0}^{\sqrt{\delta}}\frac{18}{5}d\theta\right)+\int_{0}^{\sqrt{\delta}}\frac{\xi}{4}\exp\left(\int_{\xi}^{\sqrt{\delta}}\frac{18}{5}\ d\theta\right)d\xi\] \[=\frac{2}{5}\exp\left(\frac{18\sqrt{\delta}}{5}\right)+\int_{0}^{\sqrt{\delta}}\frac{\xi}{4}\exp\left(\frac{18(\sqrt{\delta}-\xi)}{5}\right)d\xi\] \[=\frac{2}{5}\exp\left(\frac{18\sqrt{\delta}}{5}\right)+\frac{5\left(-18\sqrt{\delta}+5e^{\frac{18\sqrt{\delta}}{5}}-5\right)}{1296}. \tag{3.3}\] We notice that blow-up does not occur at any point \(\delta\in\mathbb{R}_{+}\), indicating that the solution of (3.1) is globally defined.

**Example 3.2**.: Consider the following nonlinear retarded integral inequality: \[\mathfrak{u}^{3}(\delta)\leq 1+2\delta+\int\limits_{0}^{\delta^{\frac{1}{3}}}(2\mathfrak{u}(\theta)+\theta)d\theta+\int\limits_{0}^{\delta^{\frac{1}{3}}}\Biggl{\{}5\Biggl{(}\mathfrak{u}^{3}(\theta)+\int\limits_{0}^{\theta}(7\mathfrak{u}^{2}(\xi)+\xi)d\xi\Biggr{)}^{\frac{1}{3}}+\theta\Biggr{\}}d\theta. \tag{3.4}\] We notice that the function \(\mathfrak{u}(\delta)\) in (3.4) satisfies the hypotheses of Theorem 2.6. The value of \(\mathfrak{u}(\delta)\) is thus explicitly estimated by applying Theorem 2.6 to (3.4), and can be represented as follows: \[\mathfrak{u}(\delta)\leq\Biggl{\{}\exp\Biggl{(}\int_{0}^{\delta^{\frac{1}{3}}}(2\ \zeta_{1}+5\ \zeta_{1}+7\ \zeta_{3})d\theta\Biggr{)}+\int_{0}^{\delta^{\frac{1}{3}}}\left(2+2\ \zeta_{2}+3\xi+5\ \zeta_{2}+7\ \zeta_{4}\right)\] \[\times\exp\Biggl{(}\int_{\xi}^{\delta^{\frac{1}{3}}}(2\ \zeta_{1}+5\ \zeta_{1}+7\ \zeta_{3})d\theta\Biggr{)}d\xi\Biggr{\}}^{\frac{1}{3}}, \tag{3.5}\] where \(\zeta_{1}=\frac{1}{3}\kappa^{\frac{-2}{3}},\zeta_{2}=\frac{2}{3}\kappa^{\frac{1}{3}},\zeta_{3}=\frac{2}{3}\kappa^{\frac{-1}{3}},\zeta_{4}=\frac{1}{3}\kappa^{\frac{2}{3}}\), for any \(\kappa>0\). If we set \(\kappa=1\), then we find that \[\mathfrak{u}(\delta)\leq\Biggl{\{}\exp\Biggl{(}\int_{0}^{\delta^{\frac{1}{3}}}7\ d\theta\Biggr{)}+\int_{0}^{\delta^{\frac{1}{3}}}\left(9+3\xi\right)\times\exp\Biggl{(}\int_{\xi}^{\delta^{\frac{1}{3}}}7\ d\theta\Biggr{)}d\xi\Biggr{\}}^{\frac{1}{3}}\] \[=\Biggl{\{}\exp\Bigl{(}7\sqrt[3]{\delta}\Bigr{)}+\int_{0}^{\delta^{\frac{1}{3}}}\left(9+3\xi\right)\times\exp\Bigl{(}7(\sqrt[3]{\delta}-\xi)\Bigr{)}d\xi\Biggr{\}}^{\frac{1}{3}}\] \[=\Biggl{\{}\exp\Bigl{(}7\sqrt[3]{\delta}\Bigr{)}+\frac{3}{49}\left(-7\sqrt[3]{\delta}+22e^{7\sqrt[3]{\delta}}-22\right)\Biggr{\}}^{\frac{1}{3}}. \tag{3.6}\] The explicit bound on \(\mathfrak{u}(\delta)\) can be plotted to analyze blow-up. (Figure: blow-up analysis of the solution \(\mathfrak{u}\).) The plot indicates that the solution does not blow up for any \(\delta\in\mathbb{R}_{+}\); hence the solution of equation (3.4) is globally defined.

## 4.
**Conclusions** Some novel nonlinear integral and integro-differential inequalities of Gronwall-Bellman-Pachpatte type are investigated in this work. We demonstrate how a variety of well-known inequalities, from both the classical literature and the most recent research, can be recovered through a careful choice of parameters. The manuscript then uses the introduced integral inequalities to investigate the existence, uniqueness, stability, boundedness, and asymptotic behavior of solutions to more complicated nonlinear differential and integral equations. The generalized versions of these useful integral inequalities can serve as tools for tackling further important integral and integro-differential problems.
2309.14534
Teach AI How to Code: Using Large Language Models as Teachable Agents for Programming Education
This work investigates large language models (LLMs) as teachable agents for learning by teaching (LBT). LBT with teachable agents helps learners identify knowledge gaps and discover new knowledge. However, teachable agents require expensive programming of subject-specific knowledge. While LLMs as teachable agents can reduce the cost, LLMs' expansive knowledge as tutees discourages learners from teaching. We propose a prompting pipeline that restrains LLMs' knowledge and makes them initiate "why" and "how" questions for effective knowledge-building. We combined these techniques into TeachYou, an LBT environment for algorithm learning, and AlgoBo, an LLM-based tutee chatbot that can simulate misconceptions and unawareness prescribed in its knowledge state. Our technical evaluation confirmed that our prompting pipeline can effectively configure AlgoBo's problem-solving performance. Through a between-subject study with 40 algorithm novices, we also observed that AlgoBo's questions led to knowledge-dense conversations (effect size=0.71). Lastly, we discuss design implications, cost-efficiency, and personalization of LLM-based teachable agents.
Hyoungwook Jin, Seonghee Lee, Hyungyu Shin, Juho Kim
2023-09-25T21:20:04Z
http://arxiv.org/abs/2309.14534v3
# "Teach AI How to Code": Using Large Language Models as Teachable Agents for Programming Education ###### Abstract This work investigates large language models (LLMs) as teachable agents for learning by teaching (LBT). LBT with teachable agents helps learners identify their knowledge gaps and discover new knowledge. However, teachable agents require expensive programming of subject-specific knowledge. While LLMs as teachable agents can reduce the cost, LLMs' over-competence as tutes discourages learners from teaching. We propose a prompting pipeline that restrains LLMs' competence and makes them initiate "why" and "how" questions for effective knowledge-building. We combined these techniques into TeachYou, an LBT environment for algorithm learning, and AlgoBo, an LLM-based tutete chatbot that can simulate misconceptions and unawareness prescribed in its knowledge state. Our technical evaluation confirmed that our prompting pipeline can effectively configure AlgoBo's problem-solving performance. Through a between-subject study with 40 algorithm novices, we also observed that AlgoBo's questions led to knowledge-dense conversations (effect size=0.73). Lastly, we discuss design implications, cost-efficiency, and personalization of LLM-based teachable agents. ## 1. Introduction Interactive learning activities involve learners actively collaborating with peers or engaging with computer systems to deepen their comprehension of a specific topic (Zhu et al., 2018; Li et al., 2020). Compared to passive learning activities (e.g., reading entire text passages without doing anything else), interactive learning activities (e.g., pair programming, peer teaching) can elicit the deepest level of understanding by encouraging learners to elaborate their explanations and construct new knowledge on top of each other through conversations (Zhu et al., 2018; Li et al., 2020; Li et al., 2020; Li et al., 2020; Li et al., 2020). One form of interactive learning is Learning by Teaching (LBT), where learners tutor a peer learner and exchange questions to reorganize their knowledge and identify knowledge gaps. LBT with teachable AI agents (i.e., virtual tutes) can offer many advantages over LBT with humans. Teachable agents can bring scalability to LBT with their around-the-clock availability and motivate learners' participation in LBT by reducing psychological barriers, such as the fear of making mistakes while teaching and the pressure of responding in real-time (Li et al., 2020; Li et al., 2020). However, despite these benefits, disseminating teachable agents to diverse subjects is challenging in practice due to the effort-intensive authoring of the agents' knowledge model (Li et al., 2020). Conventional authoring methods require extensive mapping of agents' knowledge states and high programming skills, precluding teachers and education researchers from tweaking teachable agents for their needs and context. In this paper, rather than constructing teachable agents from the ground up, we propose a top-down methodology in which we use versatile Large Language Models (LLMs) to simulate tutes. Recent advances in LLMs show their remarkable capabilities in making contextual dialogues (Zhu et al., 2018; Li et al., 2020), role mimicry (Li et al., 2020; Li et al., 2020), and learning from demonstrations (Li et al., 2020; Li et al., 2020). We explore using LLMs to lower the cost and barriers of building teachable agents for LBT. 
Through a formative study that asked 15 novices to conduct LBT with ChatGPT set in a tutee role, we found that there are needs for 1) confining the knowledge level of LLM agents, 2) agent-initiated "why" and "how" questions, and 3) in-conversation feedback on learners' teaching method. Our dialogue analysis revealed that role-playing could prompt learners to self-explain their knowledge but was limited to knowledge-telling, achieving only the rudimentary benefits of doing interactive LBT. Participants struggled to build new knowledge because the teachable agent excelled in writing code even without being taught and did not ask questions that could prompt elaboration and knowledge-building. The participants also commented about the lack of metacognitive guidance and reflection for effective LBT. To address these issues, we built a teachable agent, "AlgoBo", that can exhibit prescribed misconceptions and knowledge levels, and "TeachYou", an LBT environment for introductory algorithm learning (Fig. 1). In TeachYou, learners solve programming problems on algorithms (e.g., binary search) and reflect on them by teaching AlgoBo. Learners should teach AlgoBo in detail and correctly, as our Reflect-Respond prompting pipeline instructs AlgoBo to fix its misconceptions and write code based on what it is taught. We also added Mode-shifting, in which AlgoBo periodically shifts to a questioner mode and asks questions to prompt learners' elaboration and sense-making. Lastly, TeachYou has a Teaching Helper that provides metacognitive feedback and suggestions to learners on their teaching method in real-time through dialogue analysis. We conducted a technical evaluation of our Reflect-Respond prompting pipeline to measure if it simulates a tutee with a prescribed knowledge level on different algorithm topics. We found that the pipeline can effectively configure, persist, and adapt AlgoBo's knowledge level within a conversation. We also conducted a between-subjects user study with 40 algorithm novices, where the participants studied binary search with either TeachYou or a baseline system without Mode-shifting and Teaching Helper. Our analysis of LBT dialogues and survey results showed that Mode-shifting improved the density of knowledge-building messages in the conversations significantly (\(p=0.03\)) with an effect size (Cohen's d) of 0.73. Teaching Helper also helped participants reflect on their teaching methods and sequence their questions strategically, but we could not observe significant improvement in participants' metacognition. The paper is structured as follows. After a discussion of related work, we describe our formative study settings and preliminary findings. We then reorganize the findings into three design goals and introduce our system and pipeline for achieving the goals. With that, we present our technical and user-study evaluation results. Lastly, based on our results and observations, we discuss the design considerations for teachable agents, the benefits of using LLMs, promising directions for personalizing teachable agents, and interaction guidelines for better LBT with teachable agents. This paper makes the following contributions:

* AlgoBo, an LLM-based teachable agent that uses the Reflect-Respond prompting pipeline to simulate prescribed learning behaviors and Mode-shifting to scaffold knowledge-building of learners through "why" and "how" questions.
* TeachYou, a web-based algorithm learning system that supports LBT with AlgoBo and provides metacognitive feedback on teaching based on real-time conversation analysis.
* A technical evaluation of the Reflect-Respond prompting pipeline and empirical user study results with 40 participants showing that TeachYou improved knowledge-building in LBT.

## 2. Related Work

We outline past studies on stimulating effective LBT among humans and using teachable agents. Previous research connects to our work in improving the quality and scalability of LBT using virtual agents.

### Learning by Teaching

Learning by Teaching (LBT) is a teaching method in which learners not only articulate and restructure their existing knowledge but also engage in reflective knowledge-building, wherein they extend beyond the provided materials and what they know already to craft deeper explanations, analogies, and inferential connections (King et al., 2020). Previous research investigated supports for eliciting knowledge-building responses from learners. King et al. found that training learners to ask reviewing, proving, and thinking questions in sequence to peers during LBT can promote higher-order thinking and learning (King et al., 2020). Roscoe and Chi's analysis of LBT dialogues showed the importance of the tutee's role in knowledge-building; the deep questions from the tutee encouraged tutors to make self-reflective responses and create inferences between new and prior knowledge (Shahriar and Matsuda, 2020). Shahriar and Matsuda also confirmed that tutees' follow-up questions drew the knowledge-building of tutors with low prior knowledge in particular (Matsuda et al., 2020). Matsuda et al. found that LBT with metacognitive guidance for planning and conducting teaching is as effective as being tutored by experts regardless of learners' prior competency (Matsuda et al., 2020). Our primary goal is to build an interactive system that draws knowledge-building from learners in LBT. To do so, we adapt the above-mentioned interventions in human tutor-tutee interactions to the conversational interactions between virtual agents and learners.

### Teachable Agents for LBT

A core component of LBT is the presence of a peer learner. However, as human learners cannot always be present, past research introduced teachable agents--virtual agents that can learn declarative and procedural knowledge from learners' explanations and demonstrations, taking the role of peer learners in LBT (King et al., 2020). Teachable agents showed promising results in improving students' performance, self-explanation, and acceptance of constructive feedback (Kang et al., 2020; Kang et al., 2020; Kang et al., 2020; Kang et al., 2020). LBT with early teachable agents was non-conversational; agents revealed their knowledge states as concept maps, and learners taught the agents by directly editing their knowledge states (Kang et al., 2020; Kang et al., 2020). Recent teachable agents are encapsulated and can simulate more authentic learning behaviors; agents can learn from the tutors' demonstrations (Kang et al., 2020), mimic the behaviors of learners (e.g., making arithmetic mistakes) (Kang et al., 2020; Roscoe et al., 2020), improve with correct instructions (Kang et al., 2020), and ask questions (Kang et al., 2020). However, implementing these natural and highly interactive teachable agents requires significant manual effort and programming skill to specify the background and target knowledge of agents.
For example, implementing an agent in SimStudent required more than a thousand lines of Java code for simple algebra equation solving (Kang et al., 2020); the cost may increase exponentially for more complicated topics (e.g., algorithm learning, advanced equation solving). While the development cost and skill barrier have limited teachable agents to few learning activities in the past, recent research showed LLMs can simulate virtual students and coaches through natural language prompting (Kang et al., 2020; Roscoe et al., 2020; Kang et al., 2020). Nevertheless, challenges remain in making these virtual agents suitable for LBT and eliciting knowledge-building, such as making LLMs unaware of specific knowledge to simulate authentic learners and asking didactic questions. In this paper, we investigate using LLMs for building teachable agents with low manual effort and programming barriers to support educators and researchers, and explore the necessary components to make them effective specifically for LBT.

## 3. Formative Study

We ran a formative study to explore the difficulties of using an LLM as a teachable agent. We recruited 15 Python novices and asked them to teach the binary search algorithm to an LLM chatbot. We surveyed their learning experience and analyzed the quality of their dialogues with the chatbot by annotating the types of messages from tutors and teachable agents.

### Participants and Procedure

We recruited 15 participants on campus who could read and write short (about 15 lines) Python programs containing if and while statements and who were not familiar with binary search and LBT. Eleven were from non-CS engineering departments. The study consisted of three stages. In the first stage, the participants went through learning materials on binary search (taken from Khan Academy1) and solved two Parsons problems, a coding exercise on reordering code fragments (Khan, 2017). Footnote 1: [https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/binary-search](https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/binary-search) In the second stage, the participants received an introduction to the concepts of LBT, its expected learning benefits, and its procedures. Then, they were given a brief overview of the LBT activity they would be performing next. In the final stage, learners tutored the chatbot on how to write code for the two binary search problems from the prior stage. After the LBT activity, the participants completed an exit survey composed of questions on three themes: the perception of the chatbot as a learner, the self-perceived learning effects, and the familiarity with teaching a chatbot. The participants interacted with a baseline LLM chatbot, AlgoBo, performing the role of a teachable agent. We used GPT-4 (Zhu et al., 2017) as a backbone for AlgoBo and provided a system prompt (see Appendix A.1) that set a persona of a student and added predefined learning challenges it was running into, to provide a more convincing teachable agent (Zhu et al., 2017; Zhu et al., 2017). Since we use the name "AlgoBo" again in our main system and evaluation, we use "AlgoBo-Basic" throughout this section to distinguish the two teachable agents we developed.
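As a rough illustration of this setup, the sketch below shows how a persona-style system prompt can be wired to a chat-completion API. The persona text and helper names are illustrative stand-ins, not the actual prompt from Appendix A.1.

```python
# A minimal sketch of a persona-based tutee chatbot. The persona text below is
# an illustrative stand-in for the real prompt (Appendix A.1), and the code
# assumes OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a first-year CS student learning binary search. "
    "You struggle with updating the search boundaries and with loop termination. "
    "Ask your tutor for help and improve only when you are explicitly taught."
)

def tutee_reply(history: list[dict]) -> str:
    """Return the tutee's next message given the chat history so far."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + history,
    )
    return response.choices[0].message.content

# Example turn: the learner opens the tutoring session.
print(tutee_reply([{"role": "user", "content": "Hi! Do you know what binary search does?"}]))
```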
### Dialogue Analysis

In addition to the comments from the exit survey, we also looked into the quality and conversational patterns of the dialogues between participants and AlgoBo-Basic by classifying messages into knowledge-telling and knowledge-building types. Since previous taxonomies that categorize LBT dialogues (Khan, 2017; Zhu et al., 2017; Zhu et al., 2017) were not contextualized enough for programming tutoring, we decided to adapt the taxonomies and create a new taxonomy (Table 1) specific to LBT in programming. We created our initial set of message types based on the prior taxonomies and categorizations of programming QA (Bauer et al., 2016; Zhu et al., 2017). Three authors took three iterations to annotate dialogues, resolve conflicts, and refine the taxonomy (Khan, 2017; Zhu et al., 2017). The authors finalized the taxonomy in the 2nd iteration (20 dialogues, 293 messages) and categorized the rest of the messages independently. The inter-rater reliability of the categorization was high; the three authors achieved Krippendorff's alpha of 0.731 for the data in the last iteration (11 dialogues, 253 messages). Our taxonomy has three main categories: instructions, prompting, and statements (see Table 1). **Instruction** messages have content that asks the opponent (usually the tutee) to do specific actions, such as fixing code and attempting problem-solving after concept understanding. Instruction messages are mostly related to the proceeding of steps in teaching. **Prompting** messages have intentions for eliciting specific actions from the opponent. These include asking a tutee about a specific concept of interest, giving thought-provoking questions to encourage knowledge-building, and asking a tutor for help. **Statement** messages are utterances explaining one's knowledge and opinions. Statement messages are important in measuring the quality of dialogue because these messages include knowledge-telling and knowledge-building responses (check the types with asterisks).

\begin{table} \begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline **Category** & **Sub Category** & **Explanation** & **Example** \\ \hline Instruction & Fixing\({}^{*}\) & [Instruct to] correct specific knowledge or part of code. & **Tutee: Here is my code: \(<\)code\(>\) Tutor: Call the input() function twice so that N and K are separately taken as input.** \\ \cline{2-4} & Commanding & [”] do simple actions irrelevant to learning (e.g., simply combining code for a submission). & **Tutee: I have written the binary search function. Tutor: Now, write the entire Python code.** \\ \cline{2-4} & Encouraging & [”] retry a previous action with emotional encouragement. & **Tutor: You are in the right direction. Keep writing more code.** \\ \hline Prompting & Challenge-finding & [Prompt the opponent to] explain his struggles to find the parts to help. & **Tutor: In which part are you facing difficulties?** \\ \cline{2-4} & Hinting\({}^{*}\) & [”] think about alternative/specific approaches. & **Tutee: I could not complete this part of the code. Tutor: Well, have you considered the case when the number is equal to K?** \\ \cline{2-4} & Checking & [”] show or self-explain his understanding of specific knowledge.
& **Tutor: Do you know what binary search is?** \\ \cline{2-4} & Thought-provoking\({}^{**}\) & [”] elaborate previous explanations or think beyond the content of the given learning materials. & **Tutor: What will happen if we switch the min / max updating code?** \\ \cline{2-4} & Asking for help & [”] analyze the speaker's problem or give hints. & **Tutee: Could you help me with solving the problem, please?** \\ \hline Statement & Comprehension\({}^{*}\) & [State one's knowledge or opinion by] paraphrasing / copying / explaining the learning material or the opponent's response. & **Tutor: First, let's define the function called binarysearch. In the while loop,...** \\ \cline{2-4} & Elaboration\({}^{**}\) & [”] providing extended clarification or relevant examples beyond the given materials. & **Tutee: Can you think of a real-life example where we can use binary search?** \\ \cline{2-4} & Sense-making\({}^{**}\) & [”] realizing own errors / misconceptions or making new inferences / connections to prior knowledge. & **Tutor: Can you take a closer look at the else statement in your code?** \\ \cline{2-4} & Accepting / Rejecting & [”] agreeing or disagreeing with the opponent's response. & **Tutor: You should update line 24 to...** \\ \cline{2-4} & Feedback & [”] responding to the opponent's action or thought. & **Tutor: Yes, that is exactly right.** \\ \hline Miscellaneous & & Greetings/goodbyes, social expressions. & **Tutor: Do you have any questions? Tutee: No, thank you so much for your guidance so far!** \\ \hline \end{tabular} \end{table} Table 1. Our taxonomy to classify the types of messages in LBT conversations with a teachable agent. The bold texts in the example column are examples of the respective message types. The types with \({}^{*}\) are knowledge-telling responses. The types with \({}^{**}\) fall into knowledge-building responses.

### Findings from Participants' Comments and Dialogue Analysis

We found that an LLM chatbot can serve as a teachable agent for rudimentary LBT. Participants were positive about teaching an LLM chatbot and felt it helped them reorganize and recall their knowledge. However, our dialogue analysis and in-depth survey responses revealed that the LLM chatbot fell short of adequately supporting learners' knowledge-building process. **AlgoBo-Basic was perceived as an overly competent learner due to its extensive prior knowledge and self-correcting behavior.** Participants highly appreciated AlgoBo-Basic's ability to "talk like a real person and ask specific questions" (P14) for simulating a learner. However, two-thirds of participants commented that they experienced awkwardness due to AlgoBo-Basic's competence. AlgoBo-Basic initially started a conversation by asking for help. However, after a few chats, AlgoBo-Basic provided competent responses too quickly, which did not reflect a novice learner's learning process. P5 remarked, "I explained it very simply, but he understood it very well... He is so much smarter than me. He seems to fill by himself the knowledge even I am not sure about." AlgoBo-Basic's adeptness in code writing and explanation also limited conversational patterns and confused learners about their roles. AlgoBo-Basic made twice as many knowledge statements (i.e., Statement-Comprehension) as participants did, taking away the chance for learners to self-explain and teach (see the Statement-Comprehension row in Table 2). P7 stated, "AlgoBo-Basic was like a teaching assistant testing a student's ability, rather than a student struggling with binary search problems."
Participants responded that they would have liked to see more student-like interactions from AlgoBo-Basic, such as "asking more proactive questions" (P1) and "making mistakes and requesting tutors for an elaborated explanation" (P5). **Dialogues between tutors and AlgoBo-Basic were limited to only knowledge-telling.** Participants valued retelling of their knowledge--"Writing down knowledge was very helpful in organizing knowledge. If you want to teach someone, you should create steps in your head, and this process helped a lot" (P1). However, their learning was limited to knowledge-telling; out of 546 messages, we could observe 244 knowledge-telling messages but only 15 knowledge-building utterances (Table 2). Despite helping reorganize knowledge, self-explanations did not lead to building new knowledge beyond what they previously knew--"I didn't discover anything new because I explained what I had already learned" (P4). Furthermore, tutors' self-explanations were often undeveloped because AlgoBo-Basic did not ask questions about participants' vague explanations and performed well regardless. For example, P15 answered AlgoBo-Basic's question on why the input array needs to be sorted: "Sorted arrays reduce the number of calculations and maximize the effectiveness of binary search." Despite the lack of detailed reasoning (e.g., "how" and "why"), AlgoBo-Basic accepted the explanation and moved on to the next question. **Participants carried out antipatterns of LBT and sought feedback.** Participants remarked that tutoring through natural language communication was intuitive and familiar because it resembled tutoring humans, and they could apply the same teaching methods to AlgoBo-Basic. However, some participants wanted to see better methods for them to teach AlgoBo-Basic (P9) and a method to review their learning process (P15). P15 said, "I was able to see that my teaching skills worked, but the reflection [on my tutoring session] left a lot to be desired due to the lack of feedback on my teaching method." While analyzing participants' dialogues, we found common conversational antipatterns that may restrain the benefits of LBT. The first pattern was **Commanding**, in which participants repetitively gave AlgoBo-Basic specific instructions for writing and correcting code (Appendix B (A)). This pattern lacks explanations of "why" and "how", which can prompt learners to go beyond recalling facts (i.e., knowledge-telling). The second pattern was **Spoon-feeding**, in which participants gave away knowledge without questions to check or prompt a tutee's understanding (Appendix B (B)). Rather than giving passive explanations, learners can actively construct new knowledge by making thinking questions for their tutees, taking advantage of having interactive agents. The last pattern was **Under-teaching**, in which AlgoBo-Basic progressed in problem-solving but the conversation was limited to knowledge-telling responses (Appendix B (C)).

## 4. Design Goals

The findings from our formative study showed that LLMs could serve as a rudimentary teachable agent for LBT. However, we also confirmed the need to improve LLM chatbots' imitation of help-seeking tutees, promote the knowledge-building of learners, and support learners' metacognition in teaching. Based on the insights, we set three design goals.

### D1. Design teachable agents that can simulate misconceptions and gradual learning curves
We found that the pre-trained knowledge and self-correcting behavior of LLMs made AlgoBo feel less like a tutee and prevented tutors from learning by identifying tutees' errors and enlightening them with elaborate explanations.

## 5. System

We present TeachYou, an LBT system featuring AlgoBo, an LLM-based teachable agent. AlgoBo gets help from learners to solve introductory algorithm problems while asking thought-provoking questions that encourage the learners to expand their knowledge beyond their current level. Through the system, we propose 1) a new LLM prompting pipeline for simulating tutees with specific levels of knowledge and misconceptions and 2) a learning environment for learners to effectively conduct LBT. Programming and algorithm learners can use TeachYou to review what they learned and explore further knowledge through an engaging and interactive LBT activity. We designed an interface (Fig. 2) to help learners conduct the activity. Throughout the LBT activity, learners should achieve three sequential objectives in teaching AlgoBo (Fig. 2 A). The objectives correspond to the three levels in Bloom's taxonomy (Understand-Apply-Analyze); learners first check if AlgoBo correctly understands the concept of interest; then, learners help AlgoBo apply the concept to solve a problem; lastly, learners and AlgoBo discuss real-life use cases and comparisons to other related topics. Learners can refer to the profile of AlgoBo to set their attitude and expectations (Fig. 2 B). We set the persona of AlgoBo as a 2nd-year high school student, as opposed to a 1st-year CS student in the formative study, to match the slow learning behavior and to encourage learners' patience in teaching. Learners use a typical chatting interface to teach AlgoBo (Fig. 2 D) and have access to teaching support (Fig. 2 C, E, F, G). While tutoring, learners receive "why" questions and thought-provoking questions from AlgoBo, helping them self-explain the rationale behind their instructions and expand their knowledge (Fig. 2 H).

Figure 2. To the left, learners can see the 3 learning objectives they need to reach (A), AlgoBo's profile (B), and the questions they need to help AlgoBo solve (C). To the right, they can see the code they submitted (E), a code playground (F), and the code that AlgoBo writes (G). When AlgoBo wrote code, participants could click on "run test cases" to run AlgoBo's code. In the middle (D), learners use a typical chatting interface to teach AlgoBo while receiving questions (H) and guidance from Teaching Helper (I).
### Reflect-Respond prompting pipeline to simulate knowledge learning From our observations and user comments in the formative study, we considered three properties crucial for LLM-based teachable agents to simulate knowledge learning--reconfigurability, persistence, and adaptability. **Reconfigurability** refers to how precisely we can set an agent's performance in question-answering and problem-solving. Reconfigurable agents allow us to build tutes with specific misconceptions and help design tutoring scenarios. **Persistence** examines how the knowledge level of interest is sustained cohesively throughout the agent interaction. Persistent agents do not self-correct their misconceptions unless being taught explicitly and can encourage tutors to explain concepts in detail. **Adaptability** measures how well the agent updates its knowledge as it acquires new information from tutors in conversations. Adaptability allows a teachable agent to show progress and remember what tutors have taught. Figure 3. The overview of the Reflect-Respond prompting pipeline for simulating knowledge learning of AlgoBo and examples for each component. From the recent conversation, AlgoBo _extracts_ new knowledge of the while loop condition and _update_ its knowledge state (colored in green). Then, AlgoBo _retrieves_ knowledge relevant to while loops and _composes_ a response that fills its knowledge gap. To achieve these properties, we introduce a prompting pipeline that leverages a knowledge state and two information flow mechanisms--Reflection and Response (Fig. 3). A knowledge state is a store representing the knowledge AlgoBo currently holds. It is comparable to a schema, a cognitive unit of knowledge for problem-solving (Krishnan et al., 2017). AlgoBo's responses are constrained by its knowledge state, and we update the knowledge state consistently throughout a conversation. Knowledge states link to Reconfigurability; if we leave them empty, agents will show zero-knowledge behavior; if we add incorrect or correct information, agents will show misconceptions or prescribed knowledge levels, respectively. Reflection is a flow dedicated to the update of knowledge states. In the Reflection flow, we use an LLM to extract new information from the latest conversations (i.e., the last three messages) and then update knowledge states by adding or correcting information. After Reflection, the Response flow occurs; we first use the LLM to retrieve information relevant to the conversational context from the current knowledge state and then compose a response by only combining the retrieved knowledge. If a knowledge state does not have relevant information and nothing is retrieved, AlgoBo responds: "I'm not sure how to do that. Could you explain it to me?" Reflection and Response connect to the Persistence and Adaptability of agents as the flows control the retrieval and update of knowledge states in reaction to external stimuli. We implemented the knowledge state as a JSON object with two attributes: facts and code_implementation. **Facts** store partial information about the target knowledge, and **Code_implementation** contains code snippets (see Fig. 3 Knowledge State). 
The four operations in the pipeline are implemented with GPT-4 as a base LLM, and we adopted well-known prompt engineering techniques, such as few-shot prompts (Krishnan et al., 2017; Krishnan et al., 2018), persona setting (Krishnan et al., 2018; Krishnan et al., 2018), and code prompts (Krishnan et al., 2018; Krishnan et al., 2018) (see Appendix A.2). We note that our implementation is one possible instance of our proposed pipeline, and it can improve further with better LLMs and algorithms for the operations. For example, we can represent knowledge states with more complex tree structures (Krishnan et al., 2018; Krishnan et al., 2018), and the update operation may use the Least Recently Used algorithm (Krishnan et al., 2018) to simulate a fixed-size knowledge capacity. We chose GPT-4 for operating our pipeline because it can effectively process the contextual information in conversations compared to other approaches.

### AlgoBo's Mode-shifting to develop constructive LBT dialogues

Figure 4. AlgoBo shifts its mode every three messages. When AlgoBo is in the questioner mode, it keeps asking follow-up questions until receiving a satisfactory response (constructive loop).

Beyond telling knowledge to AlgoBo, we aim to push learners to answer thought-provoking questions and build new knowledge. From the formative study, we observed that entrusting LLMs entirely with making conversations did not spontaneously result in desirable knowledge-building patterns (e.g., question-answering on "why" and "how"). To control conversation flows while giving learners the freedom to steer them, we introduce Mode-shifting, in which AlgoBo periodically shifts between two modes: in the help-receiver mode, AlgoBo passively learns from tutors and prompts their self-explanations; in the questioner mode, AlgoBo asks thought-provoking questions to stimulate the knowledge-building of learners. We use Mode-shifting to make conversation flows dynamic and engaging. In every third message, AlgoBo shifts to the questioner mode and asks a thinking question. The thinking question differs by the phase of the activity (Table 9 A). While learners teach AlgoBo about concepts and code implementation (i.e., the first and second objectives), AlgoBo asks "why" questions in response to learners' instructions and explanations. During the discussion phase (i.e., the third objective), AlgoBo brings up related algorithms or real-life examples and asks "how" questions to prompt learners to explain and connect to what they have learned. After the thinking questions, the conversation goes through a constructive loop, in which learners receive follow-up questions from AlgoBo until they answer the question in depth with a valid example. When AlgoBo assesses learners' responses as satisfactory, AlgoBo summarizes them and shifts back to the receiver mode. The period of Mode-shifting (every three messages) is heuristic; from our pilot studies, we found that such frequency was optimal for prompting elaboration while not distracting tutors too much. To incorporate Mode-shifting into LBT dialogues, we implemented four components (Fig. 4). **Thinking Question Generator** is a module that uses GPT-4 to produce thought-provoking questions related to the current conversation. For managing the constructive loop, we followed the protocol of the constructive tutee inquiry in Shahriar et al.'s work [64] and adapted it to LLMs. We used the formative study dialogues with response quality annotations to train the **Response Quality Classifier**. The classifier assesses every learner response in the loop and determines AlgoBo's follow-up question as pre-defined in the **Constructive Tutee Inquiry** protocol [64]. Lastly, the **Paraphrasing Module** adjusts the fixed question to the conversational context. All the prompts used for Mode-shifting are available in Appendix A.3.
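A minimal sketch of how these four components could be orchestrated is shown below. The helper functions are imported from a hypothetical module, and the follow-up templates and quality labels are illustrative stand-ins for the protocol's actual categories.

```python
# Sketch of the Mode-shifting controller (illustrative, not the actual system).
from algobo_components import (  # hypothetical module bundling the four components
    generate_thinking_question,   # Thinking Question Generator (GPT-4)
    classify_response_quality,    # fine-tuned Response Quality Classifier
    paraphrase,                   # Paraphrasing Module
    tutee_reply,                  # help-receiver behavior (Reflect-Respond pipeline)
)

# Illustrative follow-up templates keyed by quality label, standing in for the
# pre-defined questions of the Constructive Tutee Inquiry protocol.
FOLLOW_UPS = {
    "too_shallow": "Hmm, why does that work? Could you explain in more detail?",
    "no_example": "Could you give me a concrete example of that?",
}

def next_algobo_message(history: list[str], phase: str) -> str:
    """Every third message, shift to the questioner mode and ask a 'why'/'how'
    question; otherwise act as a help-receiving tutee."""
    if len(history) % 3 != 0:
        return tutee_reply(history)
    question = generate_thinking_question(history, phase)
    return paraphrase(question, history)

def constructive_loop_step(history: list[str], learner_answer: str) -> str:
    """Keep asking follow-ups until the learner's answer is satisfactory, then
    summarize it and shift back to the receiver mode."""
    label = classify_response_quality(learner_answer)
    if label == "satisfactory":
        return paraphrase(f"So, in short: {learner_answer}", history)  # summary
    return paraphrase(FOLLOW_UPS.get(label, "Could you tell me more?"), history)
```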
### Teaching Helper for Metacognitive Guidance

Fig. 5. The four Teaching Helper messages and corresponding suggestions that appear depending on the conversational patterns.

Throughout our formative study, we found conversational antipatterns that hindered effective LBT. To prevent this, TeachYou provides metacognitive feedback throughout the conversation to help learners reflect on the overall teaching session and offer overarching guidance on steering the discussion. TeachYou presents the feedback through "Teaching Helper," a red or green text box that appears below the messages (see Fig. 2). Teaching Helper provides information on the current problems with the teaching method and elaborates on what learners could do to improve their conversation. TeachYou provides four Teaching Helper messages, depending on detected conversational patterns (Fig. 5). For the **Commanding** and **Spoon Feeding** patterns, in which learners should correct their teaching styles, TeachYou shows feedback messages in red boxes. To ensure learners read the feedback, we interrupt the conversation with AlgoBo until learners explicitly decide how to act. The send button in the chatting interface is blocked until learners pick an option among the possible teaching methods to address the issue. We chose to give learners multiple suggestions and let them choose their teaching method, instead of giving specific guidance to follow, because the active selection of teaching methods may improve learners' recognition and autonomy in tutoring (Toh et al., 2018). For the **Under Teaching** pattern and default cases where no antipattern is found, TeachYou shows messages in a green box. The messages either encourage learners to go beyond the current learning topic or give general tips for good answering and questioning (Beng et al., 2018; Li et al., 2019; Li et al., 2019). Teaching Helper messages and learners' selections remain in conversations for revisiting. To avoid frequent interruptions and distractions from Teaching Helper, we restrict the presentation of the feedback to every six messages. Teaching Helper is powered by a message-type classifier for detecting conversational patterns. We used the dialogue dataset from the formative study to fine-tune the GPT-3 davinci model. For training, we used 438 messages, and the classifier achieved an accuracy of 71.3% for the remaining 108 messages in a validation test.
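The sketch below illustrates how such pattern-based feedback could be gated. The detection thresholds and feedback texts are illustrative stand-ins for the classifier outputs and the messages in Fig. 5.

```python
# Illustrative sketch of the Teaching Helper gating logic.
from teaching_helper import classify_message_type  # hypothetical wrapper around the fine-tuned GPT-3 classifier

FEEDBACK = {  # (box color, message); texts are illustrative stand-ins for Fig. 5
    "commanding": ("red", "Try explaining WHY a fix works instead of only telling AlgoBo what to change."),
    "spoon_feeding": ("red", "Ask a question that checks AlgoBo's understanding before giving the answer away."),
    "under_teaching": ("green", "AlgoBo is making progress; push beyond the material with a what-if question."),
    "default": ("green", "Good pace! Keep exchanging why/how questions and answers."),
}

def teaching_helper(messages: list[str]) -> tuple[str, str] | None:
    """Every six messages, detect an antipattern in the recent dialogue and
    return (box_color, feedback). Red feedback also blocks the send button
    until the learner picks one of the suggested teaching methods."""
    if len(messages) % 6 != 0:
        return None
    types = [classify_message_type(m) for m in messages[-6:]]
    if types.count("Instruction-Commanding") >= 3:            # illustrative threshold
        return FEEDBACK["commanding"]
    if "Statement-Comprehension" in types and "Prompting-Checking" not in types:
        return FEEDBACK["spoon_feeding"]
    if all(t.startswith("Statement") for t in types):          # tutor only tells knowledge
        return FEEDBACK["under_teaching"]
    return FEEDBACK["default"]
```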
### Teaching Helper for Metacognitive Guidance

Figure 5. The four Teaching Helper messages and the corresponding suggestions that appear depending on the conversational patterns.

Throughout our formative study, we found conversational antipatterns that hindered effective LBT. To prevent them, TeachYou provides metacognitive feedback throughout the conversation to help learners reflect on the overall teaching session and offers overarching guidance on steering the discussion. TeachYou presents the feedback through "Teaching Helper," a red or green text box that appears below the messages (see Fig. 2). Teaching Helper describes the current problems with the teaching method and elaborates on what learners could do to improve their conversation.

TeachYou provides four Teaching Helper messages, depending on the detected conversational patterns (Fig. 5). For the **Commanding** and **Spoon Feeding** patterns, in which learners should correct their teaching styles, TeachYou shows feedback messages in red boxes. To ensure learners read the feedback, we interrupt the conversation with AlgoBo until learners explicitly decide how to act: the send button in the chatting interface is blocked until learners pick an option among the possible teaching methods to address the issue. We chose to give learners multiple suggestions and let them choose their teaching method, instead of giving specific guidance to follow, because the active selection of teaching methods may improve learners' recognition and autonomy in tutoring (Toh et al., 2018). For the **Under Teaching** pattern and default cases where no antipattern is found, TeachYou shows messages in a green box. The messages either encourage learners to go beyond the current learning topic or give general tips for good answering and questioning (Beng et al., 2018; Li et al., 2019; Li et al., 2019). Teaching Helper messages and learners' selections remain in the conversation for revisiting. To avoid frequent interruptions and distractions from Teaching Helper, we restrict the presentation of the feedback to every six messages.

Teaching Helper is powered by a message-type classifier for detecting conversational patterns. We used the dialogue dataset from the formative study to fine-tune the GPT-3 davinci model. For training, we used 438 messages, and the classifier achieved an accuracy of 71.3% on the remaining 108 messages in a validation test.

## 6. Evaluation

We evaluated the efficacy of TeachYou for eliciting knowledge-building experiences in LBT. This overarching goal broke down into three main research questions:

1. How well does the Reflect-Respond pipeline simulate misconceptions and knowledge development?
2. How does Mode-shifting in a conversation help elicit knowledge-building in LBT conversations?
3. How does Teaching Helper improve learners' metacognition about tutoring?

The evaluation was divided into two parts. The first part was a technical evaluation that assessed whether the Reflect-Respond prompting could induce a teachable agent to produce responses that were reconfigurable, persistent, and adaptive throughout the course of a conversation (RQ1). In the second part, we ran a user study to examine the effects of Mode-shifting (RQ2) and Teaching Helper (RQ3) on learning experiences.

### Technical Evaluation of the Reflect-Respond Pipeline

As defined in Section 5.1, we evaluated the responses generated by our prompting pipeline along three axes--reconfigurability, persistence, and adaptability (RQ1).

#### 6.1.1. Evaluating AlgoBo's Knowledge Level

We evaluated AlgoBo's knowledge state configuration by observing its performance on multiple-choice questions (MCQs) under varying knowledge states and conversational interactions. Although our target learning setting does not involve MCQs, we chose MCQs to follow prior research on assessing LLMs' performance. To verify that AlgoBo was answering questions based on its knowledge state only and not picking random choices, we also prompted AlgoBo to explain why it chose the answers (Fig. 6).

#### 6.1.2. Procedure and Setup

We measured AlgoBo's MCQ performance on three different algorithmic topics. For each topic, we created a set of nine multiple-choice questions (Appendix C.1). Within each set, three MCQ questions were created for each of Bloom's taxonomy categories: Understanding, Implementation (Applying), and Analysis [7, 31]. Understanding questions asked about factual concepts, Implementation questions were about filling in the blanks in code, and Analysis questions were about time complexity and comparisons to other relevant algorithms. AlgoBo was evaluated with four different knowledge states (Appendix C.2) and conversational inputs (Appendix C.3, C.4, C.5).

For reconfigurability (i.e., the change in knowledge level with different knowledge states), we prepared four seed knowledge states (Appendix C.2). _State 1_ was empty to simulate zero knowledge. _State 2_ had an explanation of a topic algorithm in **facts** only, to observe whether AlgoBo knows the given information only. _State 3_ had the same explanation plus a piece of incorrect code in **code_implementation**, to check whether AlgoBo shows the prescribed misconception. _State 4_ had the correct explanation and code, to see whether AlgoBo becomes competent with more input knowledge. We prompted AlgoBo to solve the MCQs with the different seed knowledge states and compared the scores between the states. To prevent AlgoBo from storing knowledge learned from the MCQ questions in its knowledge state, we turned off the Reflection flow.

For assessing persistence (i.e., the invariance of knowledge level under no stimuli), we ran random conversations on seed knowledge state 2. In the random conversations, AlgoBo was taught irrelevant information, such as arithmetic, translation, and classification, thus leading AlgoBo to acquire random information in its knowledge state [65]. We turned on the Reflection flow so that AlgoBo could update its initial knowledge state. We then prompted AlgoBo to solve the same MCQs again and compared the difference between the first and second scores.

For adaptability (i.e., the acceptance of new knowledge), we considered two cases--Correct and Incorrect tutoring. The performance gap between Correct and Incorrect tutoring is crucial for checking an agent's suitability for LBT because a teachable agent should not excel when learners give incorrect or incomplete instruction.

Figure 6. The process of measuring adaptability for correct tutoring with an Implementation problem and _State 2_ as a seed knowledge state. The evaluations were performed in Korean to ensure compatibility with the main study conditions.
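For illustration, the four seed states described in the reconfigurability setup above could look like the following for binary search. These are paraphrased stand-ins, not the verbatim study materials (the exact states are given in Appendix C.2); in particular, assuming State 3's incorrect code is the flipped-comparison snippet quoted later in Table 6.

```python
# Illustrative seed knowledge states for the binary search topic (see Appendix C.2).
state_1 = {"facts": [], "code_implementation": []}  # zero knowledge

state_2 = {  # facts only: conceptual knowledge, no code
    "facts": ["Binary search repeats the process of dividing the input list in half."],
    "code_implementation": [],
}

state_3 = {  # facts plus incorrect code: a prescribed misconception
    "facts": state_2["facts"],
    "code_implementation": [
        # comparison operators flipped, so the search walks the wrong way
        "if arr[mid] > x: low = mid + 1 elif arr[mid] < x: high = mid - 1"
    ],
}

state_4 = {  # facts plus correct code: a competent tutee
    "facts": state_2["facts"],
    "code_implementation": [
        "if arr[mid] < x: low = mid + 1 elif arr[mid] > x: high = mid - 1"
    ],
}
```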
Tutoring conversations taught three pieces of information that mapped to Understanding, Implementation, and Analysis of the concepts. Correct tutoring gave AlgoBo correct factual information, whereas Incorrect tutoring provided false information. We ran Correct and Incorrect tutoring separately on AlgoBo configured with _State 2_ and compared the differences between the MCQ scores at the start and after each type of tutoring. We used the GPT-4-0613 model with temperature 0 throughout the evaluation. For a more comprehensive description of the four knowledge states and the materials used in the evaluation, please refer to Appendix C.

### Technical Evaluation Result

We report the results of the technical evaluation on reconfigurability, persistence, and adaptability. We observed small variations in the MCQ scores even for identical inputs, knowledge states, and LLM model, perhaps due to randomness inherent in the model and the running hardware². We therefore repeated the entire measurement five times for each input configuration and took a majority vote to report the MCQ scores. The variance in scores was mild; on average, AlgoBo produced a different response once in five repetitions. For a detailed report of the variance, refer to Appendix C.6.

Footnote 2: [https://community.openai.com/t/a-question-on-determinism/8185/2](https://community.openai.com/t/a-question-on-determinism/8185/2)
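One way to implement the five-repetition majority vote described above is per question, as sketched below. Whether the vote was taken per question or over total scores is an implementation detail the paper leaves open; this sketch assumes the per-question variant.

```python
from collections import Counter

def majority_vote_score(runs: list[list[str]], answer_key: list[str]) -> int:
    """Take the per-question majority answer over repeated runs, then score once.

    runs[r][q] is AlgoBo's answer to question q in repetition r.
    """
    score = 0
    for q in range(len(answer_key)):
        majority = Counter(run[q] for run in runs).most_common(1)[0][0]
        score += int(majority == answer_key[q])
    return score

# Five repetitions of a three-question MCQ set (answers as choice letters).
runs = [["a", "c", "b"], ["a", "c", "b"], ["a", "d", "b"], ["a", "c", "b"], ["a", "c", "b"]]
print(majority_vote_score(runs, answer_key=["a", "c", "d"]))  # -> 2
```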
**[RQ1] The Response flow can effectively reconfigure AlgoBo's knowledge level.** As expected, AlgoBo got all MCQ questions wrong when its knowledge state was empty (see _State 1_ in Table 3). When the knowledge state contained fact information only (_State 2_), AlgoBo could solve some conceptual (Understanding and Analysis) questions but none of the Implementation questions. This shows that separating the knowledge state by knowledge type (facts and code_implementation) helps configure knowledge precisely by type. When the knowledge state contained code information, AlgoBo started to solve Implementation questions and achieved higher scores when given correct code (_State 4_) than incorrect code (_State 3_). AlgoBo followed what was written in its knowledge state (_State 3_) exactly and produced wrong code and answers.

**[RQ1] Reflect-Respond makes AlgoBo produce responses persistent to knowledge states.** The random conversation had a mild effect on the MCQ scores (compare the "At the start" and "After random conversation" columns in Table 4). While the random conversation changed the scores of conceptual questions, the scores of Implementation questions stayed the same. We did an in-depth analysis of the inputs and outputs of the operations and found that, in the second MCQ solving, AlgoBo retrieved algorithm-related knowledge that it had missed in the first. Considering our LLM prompt for Retrieve (Appendix A.2, Retrieve), we surmise that populating knowledge states with more information might increase the relative importance of relevant knowledge in retrieval and help AlgoBo solve questions correctly. In other words, the scores after the random conversation are closer to what AlgoBo should have received initially. To see how far the population of random information increases the knowledge level, we ran another random conversation and checked the MCQ scores (see Table 5, Scenario 1). The second random conversation contained the same type of statements on arithmetic, translation, and classification. We did not observe any significant increase in the scores, confirming that the persistence of knowledge levels is robust regardless of the length of random conversations.

**[RQ1] Reflect-Respond allows AlgoBo to adapt knowledge states from conversations.** Correct tutoring significantly improved the MCQ scores (compare the "At the start" and "After Correct tutoring" columns in Table 4) across Understanding, Implementation, and Analysis. Incorrect tutoring also improved the MCQ scores (compare the "At the start" and "After Incorrect tutoring" columns in Table 4), but not as much as Correct tutoring did. Incorrect tutoring showed that correct insights can be drawn despite incorrect information. For example, the incorrect code "if arr[mid] > x: low = mid + 1 elif arr[mid] < x: high = mid - 1" given in Incorrect tutoring stimulated AlgoBo to infer that "Binary search returns a value indicating not found if the target is not in the list" and solve one of the Implementation questions. To investigate whether AlgoBo prefers correct information to incorrect information and whether incoming knowledge tends to overwrite pre-existing knowledge, we ran two scenarios in which AlgoBo received Correct and Incorrect tutoring in sequence (see Table 5, Scenarios 3 and 4). The results show that AlgoBo tends to keep correct information and remove incorrect information (see the last knowledge state in Table 6). We surmise that AlgoBo dropped conflicting information to keep its knowledge state short, as instructed in the Update prompt (Appendix A.2). We also speculate that LLMs prefer to follow widespread (often factual) knowledge over incorrect information because of the way they are trained [48].

### User Study

We ran a user study to evaluate the usefulness of Mode-shifting and Teaching Helper in improving the learning experience. We designed a between-subjects study to check the usefulness of our system components. More specifically, we checked whether Mode-shifting increases the density of knowledge-building in conversations (RQ2) and whether Teaching Helper improves the metacognition of participants (RQ3) while not overwhelming their cognitive load.

#### 6.3.1. Participants

We recruited 40 participants through advertisements on campus community websites (age = \(24\pm 4.0\); 25 males and 15 females). Participants were required to understand short (about 20 lines) Python programs containing basic syntax such as if and while statements, and we excluded those who had participated in the formative study. To cap participants' prior knowledge, we filtered out applicants who appeared to have mastered binary search already. We collected applicants' confidence in understanding binary search and teaching it to others on a 7-point Likert scale, the last time they coded binary search, and their paid experience teaching programming.
Table 3. Number of correct MCQ questions for different knowledge states. _State 1_ is an empty knowledge state; _State 2_ has facts only; _State 3_ has facts with wrong code; _State 4_ has facts and correct code. "U", "I", and "A" stand for Understanding, Implementation, and Analysis question types. The number in each cell ranges from zero to three, as there were three MCQs per question type.

| Topic | State 1 (U/I/A) | State 2 (U/I/A) | State 3 (U/I/A) | State 4 (U/I/A) |
|---|---|---|---|---|
| Binary search | 0/0/0 | 2/0/0 | 3/8/0 | 3/8/1 |
| Merge sort | 0/0/0 | 1/0/1 | 3/0/2 | 3/1/1 |
| Breadth-first search | 0/0/0 | 0/0/1 | 2/2/2 | 2/3/1 |

Table 4. AlgoBo's MCQ scores after each conversational input. "U", "I", and "A" stand for Understanding, Implementation, and Analysis question types. Note that _State 2_ was used as the seed knowledge state for all topics.

| Topic | At the start (U/I/A) | After random conversation (U/I/A) | After Incorrect tutoring (U/I/A) | After Correct tutoring (U/I/A) |
|---|---|---|---|---|
| Binary search | 2/0/1 | 1/0/1 | 2/2/1 | 3/2/3 |
| Merge sort | 1/0/2 | 2/0/2 | 3/1/2 | 3/3/3 |
| Breadth-first search | 1/0/1 | 1/0/1 | 1/0/2 | 2/3/3 |

We also asked applicants to solve six Understanding and Implementation MCQs about binary search (Appendix C.1). We filtered out the applicants who met three or more of the following criteria: 1) scored five or more on the MCQs, 2) rated six or more for confidence, 3) had implemented binary search within the last six months, and 4) had been paid for teaching. We randomly assigned 20 participants to each condition--_Baseline_ and _TeachYou_--and we did not observe any significant differences between conditions in the self-rated understanding of binary search (_Baseline_ = \(4.40\pm 1.35\), _TeachYou_ = \(4.25\pm 1.65\), one-tailed t-test, \(p=0.76\)) or in the time to solve the exercise problem during our study (_Baseline_ = \(116\pm 60\) sec, _TeachYou_ = \(124\pm 62\) sec, one-tailed t-test, \(p=0.66\)).

#### 6.3.2. Procedure and Materials

The user study was run online; after submitting informed consent, the participants received an online link to our system and completed the study at their convenience. Participants spent \(60\pm 25\) minutes on average to complete the study and were paid 25,000 KRW (approximately 18.5 USD). All the instructions and materials used in the study were translated into Korean to avoid any language barrier and unnecessary cognitive overhead.

The study procedure was organized into three parts (see Table 7). In the first part, participants learned about binary search and how to implement it in Python. Participants read the lecture materials on binary search taken from Khan Academy³ (Step 1) and solved an exercise problem in the form of a Parsons problem [16] (Step 2). After the exercise, participants wrote about their strategies in teaching (if any) and their prior experience in using AI chatbots, such as ChatGPT and Bing search (Step 3).

Footnote 3: [https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/binary-search](https://www.khanacademy.org/computing/computer-science/algorithms/binary-search/a/binary-search)
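For readers unfamiliar with the format, a Parsons problem presents the lines of a working program in scrambled order for the learner to rearrange. The snippet below is a hypothetical example of what such a binary search exercise might look like once correctly assembled; it is not the study's actual problem.

```python
# A correctly assembled binary search; in the Parsons-problem form,
# these lines would be presented shuffled for the learner to reorder.
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1  # target not in the list

print(binary_search([1, 3, 5, 7, 9], 7))  # -> 3
```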
In the second part, participants conducted LBT with AlgoBo. We provided explanations about LBT, the profile information of AlgoBo, and the participants' objectives for the LBT activity (Step 4). We stated in the objectives that participants should not only help AlgoBo solve the exercise problems but also construct new knowledge for themselves, encouraging the participants to pursue knowledge-building.

Table 5. MCQ evaluation for different conversational scenarios. The numbers refer to the number of questions AlgoBo got right (U/I/A) after each tutoring/random interaction; "–" marks values not recoverable from the record.

| Scenario 1 | At the start | Random conversation | Random conversation |
|---|---|---|---|
| Binary search | 2/0/1 | 1/0/1 | 1/1/– |
| Merge sort | 1/0/2 | 2/0/2 | 2/0/2 |
| Breadth-first search | 1/0/1 | 0/0/1 | 1/0/1 |

| Scenario 2 | At the start | Random conversation | Correct tutoring |
|---|---|---|---|
| Binary search | 2/0/1 | 1/1/1 | 2/2/2 |
| Merge sort | 1/0/2 | 2/0/2 | 2/2/– |
| Breadth-first search | 1/0/1 | 0/0/1 | 2/2/– |

| Scenario 3 | At the start | Incorrect tutoring | Correct tutoring |
|---|---|---|---|
| Binary search | 2/0/1 | 3/2/0 | 2/2/– |
| Merge sort | 1/0/2 | 2/1/2 | 2/2/2 |
| Breadth-first search | 1/0/1 | 1/0/1 | 2/2/1 |

| Scenario 4 | At the start | Correct tutoring | Correct tutoring |
|---|---|---|---|
| Binary search | 2/0/1 | 3/3/3 | 3/3/– |
| Merge sort | 1/0/2 | 3/3/3 | 3/3/– |
| Breadth-first search | 1/0/1 | 2/3/2 | 2/2/– |
Then, participants taught different versions of AlgoBo and TeachYou according to their conditions (Step 5) with the interface shown in Fig. 2. AlgoBo was configured by our prompting pipeline, and the seed knowledge state was identical across the conditions.

Table 6. Evolution of AlgoBo's knowledge state during Incorrect and Correct tutoring.

| Stage | Tutor messages | Knowledge state afterwards |
|---|---|---|
| At the start | – | {"facts": ["Binary search repeats the process of dividing the input list in half."], "code_implementation": []} |
| Incorrect tutoring | "Tutor: Binary search uses a hashing function to retrieve values directly by index." / "Tutor: In the worst case, the time complexity of binary search is O(N²)." / "Tutor: if arr[mid] > x: low = mid + 1 elif arr[mid] < x: high = mid - 1" | {"facts": ["Binary search repeats the process of dividing the input list in half.", "Binary search uses a hashing function to retrieve values directly by index.", "In the worst case, the time complexity of binary search is O(N²)"], "code_implementation": ["if arr[mid] > x: low = mid + 1 elif arr[mid] < x: high = mid - 1"]} |
| Correct tutoring | "Tutor: Binary search is efficient when the data structure is ordered and any index in the data structure can be accessed in constant time." / "Tutor: When finding a target by binary search in the input array list, the range is reduced by half as shown below. if list[middle] == target: return middle elif list[middle] < target: min = middle + 1 else: max = middle - 1" / "Tutor: The time complexity of binary search is O(log N) because the search range is reduced by half." | {"facts": ["Binary search is efficient when any index in the ordered data structure can be accessed in constant time and repeats the process of dividing the input list in half."], "code_implementation": []} |
The facts field of the seed knowledge was empty to simulate a lack of understanding, and the code_implementation field had a basic code structure that lacked the entire range-update logic of binary search. We did not go for a zero-knowledge AlgoBo, to keep the entire teaching session within 40 minutes and spare enough time for discussion. All the participants were given three goals to achieve in series; we asked them to 1) check whether AlgoBo understands binary search first, then 2) help AlgoBo solve the exercise problems, and 3) discuss binary search with AlgoBo in depth. Participants could finish the LBT activity once AlgoBo's code passed all test cases and then skip to the next step. Participants were also allowed to search the Internet when they were stuck or needed information.

In the third part, the participants completed three questionnaires about their cognitive load, metacognition, and satisfaction, and wrote free-form comments (Steps 6, 7, and 8). We adopted the questionnaire from Morrison et al.'s study [47] to measure cognitive load and used the questions from King et al.'s study [29] for assessing metacognition and satisfaction.

#### 6.3.3. Measures

We summarize our metrics in the user study and their measurement timing along with the steps in Table 7.

Table 7. The outline of the user study and the time allotted to each step on average.

| Step (min.) | Activity |
|---|---|
| 1 (10) | Learning binary search |
| 2 (5) | Exercise problem |
| 3 (5) | Pre-task survey |
| 4 (3) | Explanation about AlgoBo and LBT |
| 5 (20) | _Baseline_: teaching AlgoBo with the knowledge configuration only. _TeachYou_: teaching AlgoBo with the knowledge configuration and Mode-shifting, while receiving metacognitive feedback from Teaching Helper |
| 6 (5) | Cognitive load measurement |
| 7 (5) | Metacognition measurement |
| 8 (5) | Post-task survey |

_Knowledge-building density in LBT dialogues._ Past research assessed the quality of dialogues by measuring the density of expressed and interchanged knowledge-building messages in conversations [60, 64]. To look into how Mode-shifting helps knowledge-building in conversations (RQ2), we classified the message types (Table 1) and examined the ratio of knowledge-building messages in a dialogue. We collected 1210 messages in 40 dialogues. Two authors took three iterations for annotation and conflict resolution; in the last iteration (400 messages), the authors achieved high inter-rater reliability (Krippendorff's alpha = 0.743). We compared the density of knowledge-building messages (see Table 1) in a dialogue between conditions. We also summed the messages from participants and AlgoBo, because they co-built new knowledge by exchanging ideas and adding ideas on top of each other, as illustrated in Table 9. Lastly, we analyzed the problem-solving phase and the discussion phase separately, since they had different objective settings (Fig. 2 A): the problem-solving phase refers to the part of conversations dedicated to the first two objectives, in which participants had a clear goal of helping AlgoBo write code that passes all the test cases; the discussion phase refers to the remaining part of conversations, in which participants were asked to expand their knowledge freely without completion requirements.
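As a concrete reading of this metric, the density can be computed as the share of knowledge-building messages among all messages in a dialogue, summing participant and AlgoBo messages. A minimal sketch follows, assuming messages have already been annotated with message types; the label names here are placeholders, since Table 1 is not reproduced in this section.

```python
# Hypothetical knowledge-building label names; the real taxonomy is in Table 1.
KNOWLEDGE_BUILDING = {"Prompting-Thought-provoking", "Co-constructing"}

def kb_density(labels: list[str]) -> float:
    """Percentage of knowledge-building messages in one annotated dialogue."""
    if not labels:
        return 0.0
    kb = sum(label in KNOWLEDGE_BUILDING for label in labels)
    return 100.0 * kb / len(labels)

print(kb_density(["Telling", "Prompting-Thought-provoking",
                  "Telling", "Co-constructing"]))  # -> 50.0
```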
_Self-rated cognitive load on tutoring._ As we introduced new functionalities (Teaching Helper, Mode-shifting), it was imperative to evaluate how much these enhancements increased the cognitive load for learners. We adopted and adjusted Morrison et al.'s questionnaire designed to measure cognitive load in CS learning [47]. The questionnaire measures three types of cognitive load--intrinsic load (i.e., the inherent complexity of a learning activity), extraneous load (i.e., the hindrance caused by instructional design), and germane load (i.e., the meaningful load used for learning). Participants rated the questions right after the LBT activity in Step 6.

_Self-perceived metacognition on tutoring._ We also aimed to improve learners' metacognition of their LBT experience by giving feedback and guidance through Teaching Helper. To confirm the efficacy of Teaching Helper on metacognition (RQ3), we asked participants eight questions on Understanding, Supportive communication, Explaining, and Self-monitoring, based on King et al.'s research [29] (Table 10), in Step 7.

_Satisfaction with LBT._ Apart from the learning benefits, we measured how satisfactory the learning experience with virtual agents was. We asked participants to rate four statements about their perceived usefulness, comfort, and preference for future reuse of TeachYou and AlgoBo in Step 8.

_Post-task survey._ We revisited the three themes explored in the formative study--learners' perception of AlgoBo as a peer learner, learner-perceived usefulness of TeachYou in identifying knowledge gaps, and familiarity with teaching a virtual agent. As in the formative study, we asked participants to rate two questions from each theme (Table 11) and write detailed reasons for their ratings in Step 8. Additionally, we prepared condition-specific questions: for the _Baseline_ condition, we asked participants further about their perception of AlgoBo; for the _TeachYou_ condition, we collected free-form comments on Mode-shifting and Teaching Helper.

Table 11. Six themed questions given in the post-task survey (1: Not the case at all, 7: Completely the case). Statistical significances are marked with *.

| Theme | Question | Baseline | TeachYou | p-value | Cohen's d |
|---|---|---|---|---|---|
| Perception of AlgoBo as a learner | I perceived AlgoBo as a student struggling to solve binary search problems. | 3.15 ± 1.31 | 4.60 ± 1.79 | 0.01* | 0.93 |
| | AlgoBo solved the binary problems due to my help. | 5.25 ± 1.59 | 4.90 ± 1.59 | 0.49 | 0.22 |
| Usefulness for learning | Conversation with AlgoBo helped me reorganize my knowledge about binary search. | 5.20 ± 1.51 | 5.40 ± 1.10 | 0.63 | 0.15 |
| | Conversation with AlgoBo helped me discover new knowledge that I did not know. | 3.25 ± 1.71 | 4.95 ± 1.70 | <0.01* | 1.00 |
| Familiarity with teaching | Learning by teaching AlgoBo was familiar and intuitive. | 4.70 ± 1.66 | 4.75 ± 1.45 | 0.92 | 0.03 |
| | I taught AlgoBo effectively. | 4.65 ± 1.63 | 4.00 ± 1.56 | 0.21 | 0.41 |

### User Study Result

In this section, we summarize our findings from the user study. We report statistical significance, participants' comments, and system usage logs to support our findings. Participants are labeled with B[1-20] for the _Baseline_ condition or T[1-20] for the _TeachYou_ condition.

**[RQ2] Mode-shifting enriched knowledge-building in the problem-solving phase.** We found a statistically significant improvement (_Baseline_ = \(2.5\pm 6.4\), _TeachYou_ = \(6.7\pm 5.0\), one-tailed t-test, \(p=0.03\), Cohen's d = 0.73) in the knowledge-building density of the dialogues during the problem-solving phase in _TeachYou_ (Table 8). The _TeachYou_ condition also had a significantly greater density of Prompting-Thought-provoking messages (_Baseline_ = \(1.8\pm 5.7\), _TeachYou_ = \(5.7\pm 4.5\), one-tailed t-test, \(p=0.02\)), suggesting that tutors and AlgoBo prompted each other's knowledge-building more often when Mode-shifting and Teaching Helper were present (see the dialogue example in Table 9). Participants also found _TeachYou_ more useful for learning new knowledge (_Baseline_ = \(3.25\pm 1.71\), _TeachYou_ = \(4.95\pm 1.70\), one-tailed t-test, \(p<0.01\), Cohen's d = 1.00) (Table 11).
_TeachYou_ participants remarked that the questions from AlgoBo were useful for reviewing code from a different perspective (T6) and for thinking about edge cases where the input list is not sorted (T10). Participants also explored beyond binary search, reasoning deeply about why and how binary search is faster than linear search (T4 and T9), comparing its efficiency with other relevant searching algorithms (T2 and T13), and thinking about real-life applications (T17). T15 commented that "[Mode-shifting] was the most important component in the system. [Questions] helped me guide what to teach and helped self-explain things I had not thought of." On the contrary, _Baseline_ participants found LBT with AlgoBo "useful for solidifying their prior knowledge but unsupportive for learning new knowledge due to lack of questions" (B4 and B15).

Regarding Teaching Helper, T2 remarked on the difficulty in applying the suggestions to his conversation--"Teaching Helper was a useful guide, but it was difficult to relate my explanation to what AlgoBo knew." Teaching Helper was not helpful for the participants who taught well in particular. T13 received positive feedback only (i.e., the green boxes in Fig. 5) and felt "suggestions [from Teaching Helper] were repetitive and irrelevant to the current conversation." Nevertheless, the comments from the survey suggest that Teaching Helper functioned as a reminder for participants to think metacognitively about their entire teaching patterns through reflection (T3), ask deep questions (T7), and foster independent thinking (T14). Additionally, Teaching Helper restrained participants from treating AlgoBo merely as a machine: "I sometimes found myself conversing in the usual [imperative] way with ChatGPT. However, when a notification appears, it brings me back to the realization that I am in a teaching context, prompting me to contemplate how best to instruct so that AlgoBo can learn effectively and align with the direction I aim for." (T17).

**Mode-shifting and Teaching Helper did not exert additional cognitive load.** We did not observe any significant difference in any type of cognitive load between the conditions. Considering that _TeachYou_ participants exchanged significantly more messages (_Baseline_ = \(17\pm 7.7\), _TeachYou_ = \(43\pm 18.5\), one-tailed t-test, \(p<0.01\), Cohen's d = 1.87), the result may imply that the periodic questions and feedback not only exerted minimal load but also helped participants keep their load manageable throughout long conversations.
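The comparisons reported throughout this section follow the same recipe: a one-tailed two-sample t-test plus Cohen's d for effect size. A minimal sketch of that computation follows, assuming SciPy is available; the paper does not state whether equal variances were assumed, so the sketch uses Welch's variant.

```python
import numpy as np
from scipy import stats

def compare(baseline: np.ndarray, treatment: np.ndarray):
    # One-tailed Welch's t-test: is the treatment mean greater than the baseline mean?
    _, p = stats.ttest_ind(treatment, baseline, equal_var=False, alternative="greater")
    # Cohen's d with the pooled standard deviation.
    n1, n2 = len(baseline), len(treatment)
    pooled = np.sqrt(((n1 - 1) * baseline.var(ddof=1) +
                      (n2 - 1) * treatment.var(ddof=1)) / (n1 + n2 - 2))
    d = (treatment.mean() - baseline.mean()) / pooled
    return p, d

rng = np.random.default_rng(0)
p, d = compare(rng.normal(2.5, 6.4, 20), rng.normal(6.7, 5.0, 20))
print(f"p = {p:.3f}, Cohen's d = {d:.2f}")
```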
## 7. Discussion

We discuss design suggestions, benefits, and future research directions of LLM-based teachable agents.

### Design Considerations for Mode-shifting in LBT

Our results showed that Mode-shifting not only led to more knowledge-dense conversations but also improved participants' perceptions of AlgoBo as a convincing tutee (Table 11). Mode-shifting also tended to foster longer discussion phases (_Baseline_ = \(5.6\pm 3.7\) messages, _TeachYou_ = \(9.4\pm 8.4\) messages, one-tailed t-test, \(p=0.07\), Cohen's d = 0.59). Considering that completing the discussion phase was up to the participants, this difference may imply that Mode-shifting made LBT conversations more engaging and lingering. Although there was a significant increase in knowledge-building in the _TeachYou_ condition, the ratings on the metacognition questions did not show significant differences (Table 10).

Table 10. Participants' ratings on the questions regarding their metacognition (1: Not the case at all, 7: Completely the case).

| Question | Baseline | TeachYou | p-value | Cohen's d |
|---|---|---|---|---|
| I understood today's lesson well. | 6.30 ± 0.86 | 6.25 ± 0.64 | 0.84 | 0.07 |
| I listened to AlgoBo well. | 6.00 ± 1.34 | 5.55 ± 1.57 | 0.34 | 0.31 |
| I gave feedback to AlgoBo well. | 5.45 ± 1.28 | 5.25 ± 1.25 | 0.62 | 0.16 |
| I explained well by telling why and how. | 5.10 ± 1.62 | 5.30 ± 1.03 | 0.64 | 0.15 |
| I connected new materials to what AlgoBo already knew. | 4.50 ± 1.54 | 4.05 ± 1.43 | 0.34 | 0.30 |
| I stayed with questioning well, rather than telling answers to AlgoBo. | 5.15 ± 1.87 | 4.90 ± 1.37 | 0.63 | 0.15 |
| I asked probing questions when AlgoBo's answer was not complete. | 5.00 ± 1.81 | 4.60 ± 1.31 | 0.43 | 0.25 |
| I sequenced my questions by asking review questions first and then asking thinking questions. | 4.50 ± 1.54 | 5.20 ± 1.15 | 0.11 | 0.52 |

As a possible reason, we found some cases where Mode-shifting interrupted participants' teaching flows and methods, especially in situations where AlgoBo asked other questions without answering tutors' Socratic questions (T8 and T20). T20 mentioned, "There were many times when AlgoBo asked random questions while writing code [...], which was not intuitive for me in teaching." Although participants could recognize the issues with their teaching methods through Teaching Helper, AlgoBo's pre-programmed interaction in Mode-shifting did not reflect teaching contexts and hindered participants from practicing better teaching strategies. This suggests the need for context-aware Mode-shifting, in which the system finds adequate timing for thought-provoking questions without interrupting the participant-intended teaching flow.

There are many aspects to consider when designing Mode-shifting techniques for LBT. While knowledge-building is the primary goal, improvements in learners' metacognition and satisfaction can elicit intrinsic learning benefits. However, our results suggest that the two values are in a trade-off relationship. To facilitate knowledge-building, teachable agents should intervene in conversations and ask thought-provoking questions; on the contrary, to support the active exploration of teaching methods and metacognition, learners should be given the control to lead conversation flows. LBT systems with teachable agents should balance system-controlled and learner-driven conversation flows to support both learning values, or give learners and instructors control to steer the balance depending on their learning goals and contexts.

### Using LLMs for Building Teachable Agents

Our primary aim was to investigate whether prompt-engineering LLMs can offer cost-effective authoring and simulation of teachable agents.
Past research looked into using interactive authoring methods (Rajaj et al., 2018) and learnersourcing (Krishnan et al., 2019; Krizhevsky et al., 2019) to offload experts' manual efforts in building the knowledge models of teachable agents and intelligent tutoring systems. Nevertheless, these methods required hundreds of lines of code to adapt the systems to specific subjects. LLMs can provide easy adaptation and a low authoring barrier. Our technical evaluation across different topics (Table 3 and Table 4) showed that the Reflect-Respond prompting pipeline is applicable to general algorithm topics even with a few few-shot examples. We wrote 19 few-shot examples (290 lines in length) for the Reflect-Respond pipeline and another 16 examples (210 lines) for Mode-shifting; with this, we could achieve the desired level of reconfigurability, persistence, and adaptability for all three topics. All the examples and instructions in the LLM prompts were written in natural language, making our method compelling especially for instructors and education researchers with limited programming expertise.
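As a rough illustration of why the authoring barrier is low, assembling an instruction and natural-language few-shot examples into an operation prompt takes only a few lines; everything an author edits is plain text. This is a generic sketch, not our prompt template, and the instruction and example strings below are hypothetical.

```python
def build_prompt(instruction: str, examples: list[tuple[str, str]], query: str) -> str:
    """Assemble an instruction, few-shot examples, and the query into one prompt string."""
    shots = "\n\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {query}\nOutput:"

prompt = build_prompt(
    "Extract new facts the tutor taught, as a JSON list.",
    [("Binary search halves the range each step.",
      '["Binary search repeats the process of dividing the input list in half."]')],
    "Binary search needs the input list to be sorted.",
)
print(prompt)
```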
Recent research on AI suggests editing LLMs' pre-trained knowledge by changing hidden states or transformer layers within the model (Han et al., 2017; Zhang et al., 2018; Zhang et al., 2019). While these model-centric approaches can provide alternative ways to build LLM-based teachable agents with specified knowledge levels, our prompting pipeline has strengths in scalability, cost-effectiveness, and explainability. First, our approach offers a scalable and cost-effective method for running different versions of teachable agents: while model-centric methods require retraining LLMs for different knowledge configurations, our prompting pipeline can share a single LLM and simulate various versions of teachable agents with only knowledge-state JSON files. Second, our pipeline represents the knowledge states of teachable agents in more explainable and manipulable forms, giving learners more transparent ways to analyze the tutee's knowledge state (Zhu et al., 2019; Zhang et al., 2019).

### Learner-driven Customization of Teachable Agents

In our user study, we provided all participants with AlgoBo under the same knowledge configuration, regardless of their prior knowledge and teaching preferences. This one-size-fits-all setting might explain the high variance in some of our results (Table 8). Peer matching is one of the crucial factors in peer learning and LBT: the learning gain and engagement of tutees and tutors increase only when their reciprocal expertise matches (Han et al., 2017; Li et al., 2019). Although conventional teachable agents can simulate learners of specific abilities and personas, they are limited in flexibility and variety due to high authoring costs and programming barriers. However, LLMs now allow the configuration of agents with natural language (Zhu et al., 2019; Zhang et al., 2019), opening new doors for learners to adjust teachable agents to their educational needs.

We suggest two aspects of customization. First, learners can directly manipulate the seed knowledge state, adjust competency levels, and even introduce specific misconceptions. For example, a learner who already understands binary search may want to skip basic explanations and spend more time on discussion; the learner can simply input their knowledge into AlgoBo, allowing future conversations to start at a more advanced level. Customizable knowledge levels can also make LBT more engaging for learners, as they can choose their study mates and avoid the frustration of a large expertise gap. Second, learners can customize AlgoBo's parametrized learning behaviors, such as Mode-shifting. Although we can alleviate learners' fatigue and distraction from Mode-shifting by making AlgoBo context-aware and asking questions at the right time instead of following the current rule-based scheme, giving learners direct control over the question-asking frequency can also help them manage their load and self-regulate their learning environment. All these configurations are possible through natural-language inputs from the user. Future research can look into how customization and personalization of teachable agents can increase the benefits of LBT even further.

### Setting the Right Expectation of Teachable Agents

Teachable agents have often taken the visual form of a human student (Han et al., 2017; Zhang et al., 2018; Zhang et al., 2019; Zhang et al., 2019). Likewise, we gave AlgoBo a student-like persona to help learners set initial expectations of their tutee. Due to the given persona and unfamiliarity with LBT with virtual agents, many participants projected the expectations of a human learner onto AlgoBo (Zhu et al., 2019).
However, the high expectations aggravated the awkwardness of AlgoBo's responses compared to those of human learners. AlgoBo asked repetitive questions and could not transfer natural-language explanations to code (T7). AlgoBo asked questions (i.e., because it was in the questioner mode) even when tutors asked for AlgoBo's opinions and thoughts, making the question-answering flow unnatural (T20). These clumsy behaviors confused participants in applying effective teaching methods and decreased their satisfaction and engagement. While using better LLMs and a more refined pipeline can alleviate the problem, we argue that reducing learners' expectation gap in understanding the capabilities of teachable agents and their reasoning process is also fundamental in the context of LBT with AI (Beng et al., 2019; Chen et al., 2020).

Through the perspective of the gulf of execution and evaluation (Song et al., 2020), we suggest some interaction-centric design implications that can close learners' expectation gap in LBT. For the gulf of execution, learners should be better informed about whom and how they teach. For example, learners may receive more detailed explanations of AlgoBo's operating principles. This can increase learners' tolerance of AlgoBo's awkward responses and help form an appropriate first impression of agents (Song et al., 2020). The learning system can also clearly inform learners of their expected roles in the different phases of Mode-shifting. For instance, when AlgoBo is in the questioner mode, the system can clarify that tutors should focus on providing answers. This will help learners follow the pedagogical conversation flows (e.g., Mode-shifting) and improve the learning impact. For the gulf of evaluation, the system can present AlgoBo's learning progress explicitly. Learning systems can show AlgoBo's current knowledge state more directly and allow learners to self-assess the effectiveness of their teaching methods. Future research can explore these modifications to make conversations with teachable agents more satisfactory and closer to learners' expectations.

## 8. Limitation and Future Work

First, the scope of our evaluation is limited to algorithm learning and procedural knowledge in programming. Although our results showed that the Reflect-Respond pipeline generalizes across algorithm topics, we need to confirm whether the framework generalizes to other subjects (e.g., math and physics), as we optimized our prompts for programming learning and trained our message classifiers on binary search dialogues. Moreover, since procedural knowledge and declarative knowledge differ in cognitive processing and effective learning interventions (Moh et al., 2019; Chen et al., 2020), TeachYou may not scaffold declarative knowledge learning effectively. As prior research has looked into declarative knowledge learning (Song et al., 2020; Chen et al., 2020), future studies can investigate more extensive topics outside algorithm learning.

Second, our user study was confined to indirect measures of learning gain. Dialogue quality is one of the primary metrics in LBT adopted in past research (Moh et al., 2019; Chen et al., 2020), and we did a comprehensive analysis of knowledge-building through dialogue analysis and surveys. Nevertheless, we could make our findings more concrete by measuring participants' learning gain directly through a pre-post test comparison.
Although we did not administer a pre-post test, on the assumption that one-time LBT would not elicit a significant difference in scores between our conditions, future research can design studies that compare learning gains between conditions and confirm the connection between dialogue quality and learning gain (Song et al., 2020).

Lastly, future research can deploy TeachYou to real classrooms of greater size and monitor the longitudinal dynamics among learners' perception, learning gain, and metacognition. Although we observed statistical significance in some of our measurements, there were high variances among participants, perhaps due to different levels of prior knowledge, teaching styles, and conversational patterns. These properties are hard to control in nature; a user study on larger populations can sharpen the statistics and make our findings more concrete. In addition to the population size, longitudinal studies may reveal significant changes in learners' metacognition and teaching patterns, as there is more room for learners to understand the nature of AlgoBo and improve their methods over time. We plan to deploy our system in the classes offered at our institution, in which students learn different algorithm topics throughout a semester. The classroom deployment will require a configuration interface where instructors can set up class materials and edit AlgoBo's knowledge state and the prompts in the Reflect-Respond pipeline for their needs. We also need to reduce AlgoBo's response time (currently about 30 seconds) for practical use, as many participants pointed out. After the small-scale controlled deployment, we envision deploying TeachYou as an online platform to help instructors in different fields adopt LBT in their classes. LLM-powered LBT will enable the dissemination of interactive learning at scale.

## 9. Conclusion

This work presents TeachYou, a system for supporting LBT with an LLM-based teachable agent, AlgoBo, where learners learn by teaching AlgoBo how to code. To facilitate effective LBT with AlgoBo, we introduced (1) the Reflect-Respond prompting pipeline for simulating AlgoBo's knowledge learning, (2) Mode-shifting for eliciting knowledge-building in conversations through AlgoBo's elaboration questions, and (3) Teaching Helper for providing metacognitive feedback to learners about their teaching styles. Our technical evaluation showed that the Reflect-Respond prompting pipeline can effectively configure, persist, and adapt AlgoBo's knowledge level. Our user study with 40 algorithm novices confirmed that Mode-shifting improved the density of knowledge-building messages in LBT dialogues. We envision that our approach can help researchers and instructors create LLM-based teachable agents with little manual effort and few barriers, and support learners in excelling at their learning through engaging learning experiences.
2310.00422
Bound State in the Continuum Supported Asymmetric Dome-shaped Dielectric Metasurface: Crossing and Avoided Crossing of Transmission with Applications
This work examines a symmetry-protected bound state in the continuum (BIC) supported unique all-dielectric dome-shaped metasurface (MS). The simple unit MS is made up of four dome-shaped nanobars of silicon in the top layer and silica glass as the substrate. Two strong transmission dips, denoted as an electric dipole (ED) and a magnetic dipole (MD) quasi-BIC with a high Q-factor that surpasses the value of 10^4, are observed after the symmetry between the nanobars is broken by a small angle from their initial position. A crossover between the ED and MD resonances is noticed when the periodicity of the MS is changed in the y direction. In addition, transmission spectra show the avoided-crossing phenomenon of dipole quasi-BIC resonances when the superstrate's refractive index (RI) is reduced from its initial value. Other widely used dielectric materials are additionally employed in the dome-shaped nanobars to evaluate their performance in terms of sharpness and near-zero transmission. Moreover, other forms of symmetry breakdown, such as diagonal width and length asymmetry, have been investigated for their impact on the Q-factor. Finally, we have demonstrated two significant applications, including refractometric glucose sensing and third harmonic generation (THG). Our dome-shaped MS is roughly 300 times more efficient at generating third harmonics than a flat rectangular silicon film MS. Our proposed BIC- and q-BIC-facilitated MS may provide a method for enhancing the functionality of biological sensors, multimodal lasing, optical switches, and nonlinear optics.
Ohidul Islam, M. Hussayeen Khan Anik, Shakhawat Hossain Shakib, Nahid Hasan Niloy, Hriteshwar Talukder, Shovasis Kumar Biswas
2023-09-30T16:01:33Z
http://arxiv.org/abs/2310.00422v2
Bound State in the Continuum Supported Asymmetric Dome-shaped Dielectric Metasurface: Crossing and Avoided Crossing of Transmission with Applications ###### Abstract This work examines a symmetry-protected bound state in the continuum (BIC) supported unique all-dielectric dome-shaped metasurface (MS). The simple unit MS is made up of four dome-shaped silicon nanobars in the top layer and silica glass as the substrate. Two strong transmission dips, denoted as an electric dipole (ED) and a magnetic dipole (MD) quasi-BIC with high Q-factors that surpass the value of \(10^{4}\), are observed after the symmetry between the nanobars is broken by a small angle from their initial position. A crossover between the ED and MD resonances is noticed when the periodicity of the MS is changed in the y direction. In addition, the transmission spectra show the avoided-crossing phenomenon of the dipole quasi-BIC resonances when the superstrate's refractive index (RI) is reduced from its initial value. Other widely used dielectric materials are additionally employed in the dome-shaped nanobars to evaluate their performance in terms of sharpness and near-zero transmission. Moreover, other forms of symmetry breakdown, such as diagonal width and length asymmetry, have been investigated for their impact on the Q-factor. Finally, we have demonstrated two significant applications, including refractometric glucose sensing and third harmonic generation (THG). Our dome-shaped MS is roughly 300 times more efficient at generating third harmonics than a flat rectangular silicon film MS. Our proposed BIC- and q-BIC-facilitated MS may provide a route to enhancing the functionality of biological sensors, multimodal lasing, optical switches, and nonlinear optics. ## 1 Introduction A bound state in the continuum (BIC) is a non-spreading resonant state of an open system that, owing to the destructive interference of multiple radiating modes, cannot couple to the radiating pathways propagating beyond the system [1, 2, 3]. It is a confined state coexisting with a continuous range of radiating waves that may transport energy; it possesses an extremely high quality factor and can exist only for particular parameter values or in perfectly flawless, infinitely extended structures [4, 5, 6]. In practice, a quasi-BIC may be realized, in which the Q-factor becomes large yet finite near the BIC condition, in order to use BICs in nanophotonics [7]. Optical BIC modes are categorized as symmetry-protected BICs (SP-BICs), generated by symmetry-constrained outcoupling; Friedrich-Wintgen (FW-BIC) or accidental BICs; and Fabry-Perot BICs (FP-BICs) [8, 9, 10]. SP-BICs can withstand minor structural flaws as long as the necessary symmetry is preserved. This symmetry can be broken by changing opto-geometrical parameters, in which case the BICs shift to resonant modes with high Q-factors known as quasi-BIC (q-BIC) modes [11]. To achieve q-BIC resonances with a high Q-factor, one is required either to slightly violate the excitation-field symmetry with angled incidence or the in-plane/out-of-plane structural symmetry with normal incidence [3, 12]. While out-of-plane symmetry breaking relies on changing the heights of a dimer or trimer cluster, in-plane symmetry breaking is often achieved by deforming the structure of the meta-atom or by creating a relative tilting between two meta-atoms [13, 14, 15].
BICs of various sorts have been used in innumerable photonic systems such as photonic crystals, metamaterials/metasurfaces, plasmonic structures, hybrid plasmonic-photonic structures, and fiber Bragg gratings [16, 17, 18, 19]. In order to demonstrate the stimulation of high-Q resonances for the normal incidence of light, enhanced light focusing, wavefront and polarization control, and other applications, metasurfaces (MSs) are unique photonic frameworks with two-dimensional engineered periodic ensembles of subwavelength optical resonators [5, 20, 21]. These metasurfaces may be broadly divided into two categories: plasmonic MSs and all-dielectric MSs. Plasmonic MSs rely on metals, in which a significant fraction of the captured light is converted into heat rather than photocarriers, which commonly degrades performance [22]. To counter the drawbacks of metallic MSs, high-refractive-index (HRI) dielectric MSs have currently gained popularity owing to their benefits of minimal non-radiative losses and very high melting temperatures, with silicon being one of the viable HRI materials [23]. Due to the significant scattering signals in Si-based dielectric nanostructures, including guided forward and backward scattering, electric dipole (ED) and magnetic dipole (MD) resonances have been examined [24, 25]. In a Si-based MS, ED and MD resonances can be generated to spectrally converge and oscillate in phase with one another without any re-emission of the electromagnetic (EM) field in the reverse direction, resulting in scattering cancellation in the reversed direction, known as the Kerker condition [26, 27, 28]. By modifying certain parameters of the MS, as well as the shapes and orientations of the dielectric materials on the MS, it is possible to control the overlap between the ED and MD resonances, which can pave the way for destructive interference between the scattered field and the incident field in the forward direction, resulting in zero transmission in the spectrum [23, 29, 26]. Recent research has explored the use of these coupled-dipole compositions in MSs with the help of BIC to improve sensing, EM-induced transparency, lasing, optical switching, filtering, and chirality [30, 31]. BIC-supported MSs or dielectric nanoantennas, moreover, show intriguing applications in wave guiding, on-chip communications, beam steering, nonlinear harmonic production, photodetection, and even imaging [3, 32]. Recent studies have also explored BIC-supported low- and high-contrast dielectric gratings [33, 34, 35]. Several investigations on plasmonic-photonic MSs that demonstrate critical coupling with BIC and qBIC formations have also been revealed [36, 37, 38]. All-dielectric MSs with BIC and qBIC support have become one of the most popular and extensively explored topics due to high Q factors in resonances, improved tunability and sensitivity, and a wide variety of applications that have already been discussed [39, 40, 41, 32]. Some investigations on BIC-supported MSs center their attention on ED and MD qBIC resonances, as well as their interactions with one another and the coupling of these resonances in a homogeneous medium [42, 43, 22]. 
Although they are all groundbreaking in a sense, a significant number of the earlier research works that have been brought out so far have either concentrated on an ordinary elliptical-shaped dielectric MS coupled with an in-depth BIC-related theoretical analysis or have concentrated on the applications of various MS nanostructures [43, 44, 45, 5, 22]. Several investigations that focused on ED and MD qBIC resonances failed to achieve exceptionally high Q-factors, exhibit novel MS designs, or demonstrate potential applications in a computational context [42, 22, 46, 23]. This work introduces an unconventional MS design consisting of four nanobars shaped like domes in a unit cell, where dipole coupling resonances for achieving zero transmission are demonstrated, along with two essential applications of BIC-supported MSs. Angle perturbation was introduced to break the symmetry of the MS, which converts the BIC into qBIC modes. When the periodicity of the MS is adjusted in the y direction, where zero transmission occurs, crossing between the ED-qBIC and MD-qBIC resonances is observed. Furthermore, avoided crossing between both resonances was detected when the period in the y-axis, \(\rm P_{y}\), was varied from 900 nm to 1060 nm and the refractive index value of the superstrate was reduced to 1.42 from its initial value. Additionally, we reported two vital applications of the MS, including RI-based sensing of different glucose concentrations in a water-glucose solution and third harmonic generation with our proposed MS. Besides that, frequently employed dielectric materials (such as GaP, InP, and GaAs) are used in dome-shaped nanobars to assess their sharpness and near-zero transmission capability. Finally, we have investigated the Q-factor of the ED and MD qBIC resonances by exploring other methods of breaking symmetry in silicon nanobars, such as diagonally breaking the length and width symmetry of the nanobars. ## 2 Methodology and Design ### Theoretical Model Our proposed MS consists of dome-shaped elements, and the BIC modes it supports can be analyzed by employing coupled-mode theory. The system can be modeled as two resonances (ED-qBIC and MD-qBIC) coupled to two ports, as expressed by the equation below [42]. \[\frac{\mathrm{d}}{\mathrm{d}t}\left[\begin{array}{c}a_{1}\\ a_{2}\end{array}\right]=j\left[\begin{array}{cc}\omega_{01}+j\gamma_{1}&k+j\gamma_{12}\\ k+j\gamma_{21}&\omega_{02}+j\gamma_{2}\end{array}\right]\left[\begin{array}{c}a_{1}\\ a_{2}\end{array}\right]+\left[\begin{array}{cc}k_{11}&k_{12}\\ k_{21}&k_{22}\end{array}\right]\left[\begin{array}{c}s_{1+}\\ s_{2+}\end{array}\right] \tag{1}\] Here, \([a_{1},a_{2}]^{T}\) represents the time-dependent amplitudes of the ED-qBIC and MD-qBIC resonances, and \(\omega_{01}\) and \(\omega_{02}\) are their resonant frequencies, respectively. Also, \(k\) is the direct coupling rate of these two resonances. The \(\gamma_{1}\) and \(\gamma_{2}\) are the radiative loss rates of the two modes and are directly related to the radiative Q-factors (\(\rm Q_{R}\)) by the equation \(\gamma_{n}=\frac{\omega_{0n}}{2Q_{\mathrm{R}n}}\). Due to the symmetry of the system, \(\gamma_{12}=\gamma_{21}=\sqrt{\gamma_{1}\gamma_{2}}\). The \(k_{ij}\) is the coupling coefficient between mode i and port j (\(i,j\in\{1,2\}\)). The \([s_{1+},s_{2+}]^{T}\) and \([s_{1-},s_{2-}]^{T}\) denote the input and output wave amplitudes of the excited resonance modes at ports 1 and 2, respectively, as demonstrated in Eq. (2). 
\[\left[\begin{array}{c}s_{1-}\\ s_{2-}\end{array}\right]=\left[\begin{array}{cc}r_{d}&t_{d}\\ t_{d}&r_{d}\end{array}\right]\left[\begin{array}{c}s_{1+}\\ s_{2+}\end{array}\right]+\left[\begin{array}{cc}d_{11}&d_{12}\\ d_{21}&d_{22}\end{array}\right]\left[\begin{array}{c}a_{1}\\ a_{2}\end{array}\right] \tag{2}\] Here, \(r_{d}\) and \(t_{d}\) are the direct reflection and transmission coefficients between the ports in the absence of the resonant modes, and \(d_{ij}\) is the coupling coefficient between port j and mode i. When the input wave is incident from port 1, the transmission coefficient from port 1 to port 2 may be calculated by \(t_{21}(\omega)=\frac{s_{2-}}{s_{1+}}\). The coupling of qBICs is extensively explored in the results section, with particular emphasis on the coupling coefficients. We also analyze the proposed metasurface structure for nonlinear harmonic generation. The nonlinear optical interaction in a silicon medium is caused by induced polarization, which is governed by the equation below [47]. \[\tilde{P}^{(3)}(t)=\varepsilon_{0}\left[\tilde{\chi}^{(3)}E^{3}(t)\right] \tag{3}\] Here, \(\varepsilon_{0}\) is the vacuum permittivity, E(t) is the strength of the electric field, and \(\tilde{\chi}^{(3)}\) is the third-order nonlinear optical susceptibility tensor. We considered a diagonal anisotropy tensor for \(\tilde{\chi}^{(3)}\) with a value of 2.45 \(\times\) 10\({}^{-19}\) m\({}^{2}\)V\({}^{-2}\)[48]. The third harmonic generation (THG) process is associated with the annihilation of three photons of the pump frequency and the subsequent creation of a single photon with three times the frequency. When light is pumped onto the surface of the structure, its electromagnetic field excites the unbound electrons, which leads them to oscillate about their ionic cores. This oscillation introduces a nonlinear shift that leads to an anharmonic response of the electron's motion to the applied electric field [49]. ### Metasurface Structure To support symmetry-protected BIC, we proposed a periodic asymmetric dome-shaped all-dielectric metasurface with in-plane symmetry. Figure 1(a) and Fig. 1(b) illustrate the conceptual 2-D and 3-D views of the structure. The metasurface has a two-layer structure consisting of a periodic array of silicon (Si) domes atop a silica (SiO\({}_{2}\)) substrate. The refractive index values of both materials, together with a discussion of the simulation environment, are briefly analyzed in supplement document 1. In comparison to other materials, silicon structures are considerably more advantageous due to their high transmission rate [50, 51], minimal losses [52, 53], and well-established fabrication methods [54, 55]. Silica, on the other hand, functions as the substrate for this structure because of its transparency in the visible and infrared range, enabling efficient light transmission. This facilitates simple light manipulation and control [56]. In order to simulate the structure using the finite-difference time-domain (FDTD) method, we considered a unit cell with a period of P\({}_{\rm x}\) = P\({}_{\rm y}\) = 860 nm in the x and y directions and a substrate thickness, t = 220 nm. Four domes are placed on top of a unit cell. These domes are placed inside the superstrate, a material of refractive index n = 1.5. Each dome has h = 175 nm, r = 70 nm, and \(\mathrm{W=2\times r=140}\) nm. The orientation of each dome is characterized by a rotation angle of \(\theta=9^{\circ}\). This design may initially appear difficult to fabricate. 
Nevertheless, modern fabrication techniques have advanced significantly, and even more complex designs have been fabricated in the past [57, 58, 59, 60]. As depicted in Fig. 1(a), a normally incident plane wave propagates along the z-axis with the electric field E polarized along y. Since the ideal symmetry-protected BIC may be converted into a quasi-BIC with high-Q-factor resonances, we incorporated a rotation angle (\(\theta\)) as an in-plane symmetry-breaking perturbation to study the characteristics of the BIC or quasi-BIC. Prior to analyzing the outcomes of our metasurface, we conducted a simulation verification process. This involved comparing our simulation results, obtained by reproducing a design described in a prior study, with the experimental findings provided in that study, as discussed in supplement document 1. Figure 1: (a) Schematic of the all-dielectric metasurface with the additional indication of a unit cell with \(\mathrm{P_{x}=P_{y}=860}\) nm, \(\theta=9^{\circ}\) and \(\mathrm{L=350}\) nm. The unit cell consists of four dome-shaped Silicon (Si) nanobars. It exhibits in-plane symmetry in the x-y plane and \(\theta\) was used to break the symmetry of the metasurface. (b) A 2-D representation of the unit cell where Silica (\(\mathrm{SiO_{2}}\)) has been used as the substrate with a thickness (t) = 220 nm, and a material of refractive index n = 1.5 was used as the superstrate. ## 3 Results and Discussion ### Formation of BIC and qBIC with Field Distribution Bound states in the continuum (BIC) are characterized by destructive interference, which occurs when the coupling constants with all radiating waves disappear by accident as a result of continuous adjustment of parameters. Figure 2(a) depicts the transmission spectra of the dome-shaped MS for different values of the rotational angle, \(\theta\). When \(\theta=0^{\circ}\), which means that the MS maintains its in-plane symmetry, there is no spectral linewidth that can be noticed in the transmission curve. This indicates that the Q factor is infinite, which corresponds to the formation of BIC modes. Changing the geometric characteristics of the system is necessary in order to transition from the BIC state to the qBIC state. This enables the extraction of energy from the BIC state while maintaining a long-lived state with narrow linewidths and high Q-factor values. In order to disrupt the in-plane symmetry, the angle of the domes was altered from \(0^{\circ}\) to \(16^{\circ}\) to analyze the transmission behavior of the quasi-BIC as a function of \(\theta\). Transmission spectra in Fig. 2(a) display two dips at 1110.10 nm and 1154.28 nm corresponding to the ED-qBIC and MD-qBIC resonances, respectively. Both qBIC resonances are susceptible to angle perturbations, as evidenced by the increasing linewidth and deepening dips with increasing \(\theta\). In contrast to the situation in which \(\theta=0^{\circ}\), the net ED-qBIC and MD-qBIC resonances are now capable of being excited by a y-polarized incident plane wave and of coupling to free space radiation due to the fact that the symmetry protection has been broken. Figure 2(b) shows an illustration of the field distributions at 1110.1 nm and 1154.28 nm corresponding to the ED-qBIC and MD-qBIC resonances for \(\theta=9^{\circ}\) in the x-y plane inside one unit cell. These field distributions reveal that both dipole quasi-BIC resonances display reasonably high field intensity enhancement. 
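The two transmission dips can also be reproduced qualitatively from the coupled-mode model of Eqs. (1)-(2). The following sketch is a minimal numerical illustration in normalized units; all parameter values and the choice of port-coupling phases are our own assumptions (chosen so that energy conservation and the symmetry relation \(\gamma_{12}=\sqrt{\gamma_{1}\gamma_{2}}\) hold), not values fitted to the device.

```python
import numpy as np

# Illustrative, normalized parameters (assumed, not fitted to the metasurface)
w1, w2 = 1.000, 1.040            # resonance frequencies omega_01, omega_02
g1, g2 = 2.0e-3, 1.5e-3          # radiative loss rates gamma_1, gamma_2
k = 0.0                          # direct coupling rate between the two modes
C = np.array([[0.0, 1.0],        # direct-path scattering: r_d = 0, t_d = 1
              [1.0, 0.0]])
# Port-coupling matrix d[port, mode]; the phases satisfy C d* = -d and
# |d|^2 summed over ports equals 2*gamma per mode (energy conservation).
d = 1j * np.array([[np.sqrt(g1), np.sqrt(g2)],
                   [np.sqrt(g1), np.sqrt(g2)]])

Omega = np.array([[w1 + 1j*g1, k + 1j*np.sqrt(g1*g2)],
                  [k + 1j*np.sqrt(g1*g2), w2 + 1j*g2]])

s_in = np.array([1.0, 0.0])      # excite port 1 only
freqs = np.linspace(0.97, 1.07, 4001)
T = np.empty_like(freqs)
for i, w in enumerate(freqs):
    # steady state of Eq. (1) with time dependence exp(j*w*t)
    a = np.linalg.solve(1j*w*np.eye(2) - 1j*Omega, d.T @ s_in)
    s_out = C @ s_in + d @ a     # Eq. (2)
    T[i] = abs(s_out[1])**2      # |t21|^2

print(T.min())                   # two deep dips near w1 and w2, as in Fig. 2(a)
```

With \(k=0\) the two dips stay essentially independent; making \(k\) nonzero mixes the modes, which is the mechanism behind the avoided crossing discussed later.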
The generated electric and magnetic dipole moments are decoupled and display opposite symmetry about the mirror plane. When viewed in relation to the x-y plane, the moments of the electric dipoles that oscillate along the x-axis are equal and symmetric. Therefore, magnetic dipole moments in the x-z plane that form a displacement current loop are antisymmetric (odd) with respect to the x-y plane. Figure 2: (a) Transmission spectrum of the MS depicting the impact of different rotation angles (\(\theta\)) of the domes. Introducing non-zero values of \(\theta\) causes two dips in the transmission curve corresponding to the ED-qBIC and MD-qBIC, respectively. At \(\theta=9^{\circ}\), the dips are located at 1110.1 nm and at 1154.28 nm. (b) - I Electric field distribution of the unit cell for \(\theta=9^{\circ}\) at 1110.1 nm, as the ED-qBIC resonance happened at this wavelength. (b) - II Magnetic field distribution of the unit cell for \(\theta=9^{\circ}\) at 1154.28 nm, as the MD-qBIC resonance happened at this wavelength. Vector fields are denoted by the blue arrow lines in both figures. Figure 3 shows the radiative Q-factor in log scale of the dome-shaped MS, where \(\rm Q_{RE}\) and \(\rm Q_{RM}\) indicate the radiative Q-factors of the ED-qBIC and MD-qBIC resonances, respectively. For our suggested configuration where \(\theta=9^{\circ}\), the Q-factor values are 10680.7 for \(\rm Q_{RE}\) and 12410 for \(\rm Q_{RM}\). Increases in \(\theta\) result in a reduction of the radiative Q-factors of the ED-qBIC and MD-qBIC resonances and a widening of the spectral lines caused by their coupling to free space. The \(\rm Q_{R}\) scales with the asymmetry as \(\rm Q_{R}\propto\alpha^{-2}\), where \(\alpha\) is the asymmetry parameter defined by \(\alpha=\sin(\theta)\). Increasing the value of the asymmetry parameter induces a non-zero dipole moment, which in turn transforms the BIC into the accessible q-BIC. Figure 3: Radiative Q-factor in logarithm scale as a function of the angle (\(\theta\)) of the metasurface, where \(\rm Q_{RE}\) and \(\rm Q_{RM}\) represent the Q-factors of the ED-qBIC and MD-qBIC resonance, respectively. ### Crossing and Anti-crossing of Transmission The progression of transmission spectra for both ED-qBIC and MD-qBIC with respect to \(\rm P_{y}\) at \(\theta=9^{\circ}\) is illustrated in Fig. 4. It can be seen that even though the MD-qBIC resonance wavelength remained nearly identical throughout the entire spectra, the ED-qBIC resonance wavelength shifts to longer wavelengths as \(\rm P_{y}\) increases, getting closer to the MD-qBIC resonance. At \(\rm P_{y}=925\) nm, both resonances overlap with each other as indicated by the circle in Fig. 4(a). The crossing region confirms that the coupling coefficient, k, is zero, proving that both resonances are orthogonal. It is consistent with the concept of an extreme Huygens' metasurface because the transmittance approaches unity with a large quality factor, Q, in the crossing location [46, 52, 61]. However, when the superstrate is replaced by a material of refractive index n = 1.42, the vertical symmetry between the silica substrate and the superstrate is broken. This causes a nonzero value of the coupling coefficient. In this case, the metasurface exhibits omega-type bianisotropy, which leads to coupling between the MD-qBIC and ED-qBIC resonances, where both modes strongly exchange energy [62]. At \(\theta=9^{\circ}\), varying \(\rm P_{y}\) from 1080 nm to 1220 nm, the corresponding outcome of this modification 
can be visualized in Fig. 4(b), where instead of overlapping, an avoided crossing was observed for P\({}_{\text{y}}\) = 960 nm. The supplement document 1 provides a comprehensive depiction of the transmission curves pertaining to both the crossing and anti-crossing phenomena seen in the interaction between the resonances. ### Influence of Different Dielectric Materials We attempted to evaluate the performance of other commonly used materials (GaAs, InP, and GaP) for this structure by incorporating them into different dome components. A comparison has been made in terms of crossing wavelength, crossing P\({}_{\text{y}}\), lowest transmission, and FWHM (Full Width at Half Maximum) of the qBIC resonance. Table 1 demonstrates that although all material combinations have broadly similar crossing wavelengths and crossing P\({}_{\text{y}}\), the FWHM for the Si-GaAs, GaAs-Si, and InP-InP combinations is substantially larger, with values of 4.101 nm, 5.21 nm, and 4.36 nm, respectively. Since we are aware that a larger FWHM leads to a lower Q-factor [13], only the Si-Si and Si-GaP pairings, with FWHM values of 2.123 nm and 1.61 nm, respectively, ought to be considered when constructing an MS with a high Q-factor value. Although the FWHM of the Si-Si combination is greater than that of the Si-GaP combination, the Si-Si combination allows for the lowest T (0.001) to be obtained, making it the preferable material combination for this MS. The supplement document 1 contains visual representations of the transmission curves corresponding to the various material implementations in the dielectric nanobars. \begin{table} \begin{tabular}{l c c c c} \hline **Dome structure (material combination)** & **Crossing wavelength (nm)** & **Crossing \(P_{y}\) (nm)** & **Lowest T** & **FWHM (nm)** \\ \hline Si-Si & 1150.4 & 925 & 0.001 & 2.123 \\ Si-GaAs & 1142.3 & 920 & 0.009 & 4.101 \\ GaAs-Si & 1146.3 & 920 & 0.207 & 5.21 \\ InP-InP & 1103.3 & 897 & 0.206 & 4.36 \\ Si-GaP & 1097.9 & 900 & 0.07 & 1.61 \\ \hline \end{tabular} \end{table} Table 1: Comparative analysis of inserting different dielectric materials into the dome-shaped nanobars. Figure 4: (a) Transmission spectrum of the MS at \(\theta=9^{\circ}\) as a function of period P\({}_{\text{y}}\) shows a crossing of the ED-qBIC and MD-qBIC for P\({}_{\text{y}}\) = 925 nm at 1152 nm wavelength. (b) Transmission spectrum of the MS when the superstrate's refractive index was changed from 1.5 to 1.42. Avoided crossing of the ED-qBIC and MD-qBIC was observed at \(P_{y}=960\) nm. Both crossing and anti-crossing are marked by a small circle. ### Applications #### 3.4.1 Third Harmonic Generation Figure 5(a) depicts the normalized THG outcome for the metasurface structure together with the normalized intensity of the pump. We pumped a plane wave into the structure with a 2.2 THz bandwidth at 1110 nm and 1147 nm near the ED-qBIC and MD-qBIC resonances, respectively, and observed peaks in intensity at 370.46 nm and 381.15 nm, respectively, in the electric field spectra, which clearly indicates the generation of the third harmonic in both cases. Figure 5: (a) Normalized intensity spectra of plane waves which were pumped at 1110 nm and 1147 nm as well as generated third harmonics at 370.46 nm and 381.15 nm, respectively, for those pump spectra. (b) TH efficiency as a function of pump power for our proposed MS and an MS comprised of rectangular elements, with all other parameters remaining unchanged. 
The THG efficiency was calculated using the equation \(\eta_{\rm THG}=\frac{\rm P_{TH}}{\rm P_{Pump}}\). In order to determine the pump and TH power, the Poynting vector was integrated over the x-y plane. In terms of \(\eta_{THG}\) as a function of pump power, Fig. 5(b) illustrates a comparison between our proposed metasurface and a metasurface consisting of rectangular elements, with all other parameters kept the same as in our proposed metasurface. The supplement document 1 delivers a brief analysis of the nonlinear simulation. A nearly quadratic dependence of \(\rm P_{TH}=aP_{Pump}^{b}\) was found by curve fitting the data, which was expected for THG. At the initial point of \(0.5\times 10^{-21}\) mWHz\({}^{-2}\) pulse power, the dome-shaped MS and the planar rectangular MS display TH efficiencies of \(3.9\times 10^{-4}\%\) and \(3.0\times 10^{-6}\%\), respectively. As we raised the pump power for both MS structures, the efficiency improved and reached \(17.9\times 10^{-4}\%\) and \(9.6\times 10^{-6}\%\) for \(3.0\times 10^{-21}\) mWHz\({}^{-2}\). At \(1.75\times 10^{-21}\) mWHz\({}^{-2}\) pulse power, it was observed that the TH efficiency is approximately 300 times higher for the dome-shaped MS compared to the planar rectangular silicon MS of the same thickness, therefore being more appropriate for THG. #### 3.4.2 Refractive Index (RI) based Sensing The confined light of the qBIC modes has an evanescent tail that usually propagates into the substrate medium, and the interaction with and any modifications in the substrate medium have a significant effect on the resonance properties [63]. The addition of a target analyte to the substrate can alter the dielectric environment. Alteration in the local dielectric composition can be detected by observing shifts in the transmission or reflection curve at resonant wavelengths. In this study, we investigated whether the proposed metasurface can detect various glucose concentrations in a glucose-water solution. As shown in Fig. 6(a), a hollow chamber with dimensions of 840 nm \(\times\) 840 nm \(\times\) 110 nm was established at the top of the substrate to insert and support the analyte. Figure 6: (a) 2D view of the unit cell depicting the formation of a chamber on the upper surface of the substrate to accommodate the dielectric analyte. The yellow rectangle denotes the chamber filled with water-glucose solution. The chamber has properties of thickness, \(\rm t_{g}\) = 110 nm, width, \(\rm W_{g}\) = 840 nm, and length = 840 nm. (b) Transmission spectra of the metasurface when introducing different glucose levels (0%, 10%, 20%, 50%, and 60%) in glucose-water solution as the sensing material. Then, glucose-water solutions with concentrations of 0%, 10%, 20%, 50%, and 60% were incorporated into the system one by one. It can be seen from Fig. 6(b) that as the glucose level increases, the resonant wavelength shifts to longer wavelengths, exhibiting distinct peaks at 1078 nm, 1081.50 nm, 1083 nm, 1085 nm, and 1087.5 nm, respectively, for the various solutions. These outcomes validate the RI-based sensing capability of the MS. ### Asymmetry Analysis Two types of additional asymmetry were analyzed: the diagonal width asymmetry and the diagonal length asymmetry, which are illustrated in Fig. 7(a) and Fig. 7(b), respectively. For the diagonal width asymmetry, the symmetric width of the four domes was taken as 140 nm initially, and then the widths of the diagonally placed domes were changed from 142 nm to 164 nm to break the symmetry of the meta-atoms. 
For the diagonal length asymmetry, the lengths of the diagonally placed domes varied from 175 nm to 185 nm. Here, the asymmetry parameter \(\alpha\) was defined by \(\Delta\)L/L and \(\Delta\)W/W, respectively. Figure 7(c) and Fig. 7(d) show the radiative Q-factors of both the ED-qBIC and MD-qBIC for the diagonal width and diagonal length asymmetry. From the figure, we can see that the Q-factors are almost proportional to \(\alpha^{-2}\) for both the MD-qBIC and ED-qBIC resonances. Figure 7: (a-b) 3D schematic of the MS with diagonal width and length asymmetry of the silicon nanobars, respectively. Red and green surrounding areas indicate the added and removed perturbation, respectively. (c-d) Radiative Q-factor in logarithm scale of width and length asymmetry for varying asymmetry parameter (\(\alpha\)). Here, \(\mathrm{Q_{RE}}\) and \(\mathrm{Q_{RM}}\) refer to the radiative Q-factors of the ED-qBIC and MD-qBIC, respectively. ## 4 Conclusion In conclusion, we propose an asymmetric dome-shaped dielectric metasurface to explore the different properties of BIC. We discovered two dips that correspond to the ED-qBIC and MD-qBIC in the transmission spectra of the in-plane symmetry-broken MS in the near-IR region. The behavior of these qBICs was investigated in great detail, with particular focus placed on asymmetry characteristics such as the rotation angle \(\theta\) and the period along the y-axis, \(\mathrm{P_{y}}\). We detected two distinct types of transmission spectrum behavior as a consequence of \(\mathrm{P_{y}}\) variation, depending on the superstrate material refractive index: crossing and avoided-crossing of both qBIC resonances. Later on, we discussed two potential applications of the suggested MS, third harmonic generation and RI-based sensing, and we reported that both applications can be implemented using the MS. In the end, we demonstrated two different types of asymmetry analyses, namely diagonal length asymmetry and diagonal width asymmetry. We believe that MS designs with these forms of asymmetry may have the potential to be investigated further for a variety of applications in the future. ## 5 Acknowledgement The authors appreciate Fawjia Shakhi from the Department of Electrical and Electronic Engineering, Shahjalal University of Science and Technology for the design-related illustration. ## 6 Disclosures The authors declare no conflicts of interest. ## 7 Data availability Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
2309.05439
An axially symmetric spacetime with causality violation in Ricci-inverse gravity
In this paper, Ricci-inverse gravity is investigated. It is an alternative theory of gravity that introduces into the Einstein-Hilbert action an anti-curvature scalar that is obtained from the anti-curvature tensor which is the inverse of the Ricci tensor. An axially symmetric spacetime with causality violation is studied. Two classes of the model are discussed. Different sources of matter are considered. Then a direct relation between the content of matter and causality violation is shown. Our results confirm that Ricci-inverse gravity allows the existence of Closed Time-like Curves (CTCs) that lead to the violation of causality. Furthermore, a comparison is made between the results of general relativity and Ricci-inverse gravity. Other spacetimes, such as Gödel and Gödel-type universes, which are exact solutions of general relativity and allow for causality violations, are also explored in the Ricci-inverse gravity framework.
J. C. R. de Souza, A. F. Santos
2023-09-11T13:25:22Z
http://arxiv.org/abs/2309.05439v1
# An axially symmetric spacetime with causality violation in Ricci-inverse gravity ###### Abstract In this paper, Ricci-inverse gravity is investigated. It is an alternative theory of gravity that introduces into the Einstein-Hilbert action an anti-curvature scalar that is obtained from the anti-curvature tensor which is the inverse of the Ricci tensor. An axially symmetric spacetime with causality violation is studied. Two classes of the model are discussed. Different sources of matter are considered. Then a direct relation between the content of matter and causality violation is shown. Our results confirm that Ricci-inverse gravity allows the existence of Closed Time-like Curves (CTCs) that lead to the violation of causality. Furthermore, a comparison is made between the results of general relativity and Ricci-inverse gravity. Other spacetimes, such as Gödel and Gödel-type universes, which are exact solutions of general relativity and allow for causality violations, are also explored in the Ricci-inverse gravity framework. ## I Introduction General relativity is an extraordinary gravitational model that describes physical reality very well and has undergone numerous tests since its formulation in 1915 [1; 2]. Although the theory and observational data confirm the success of general relativity, it is not a complete theory. There are two fundamental problems that require attention and solution. The classical version of gravity is complete, but a consistent quantum version has yet to be built. This fact shows that the gravitational interaction is different from the other fundamental interactions described by the standard model of particle physics, since they have a well-known quantum version. Another open question in general relativity is: how to explain the recent accelerated expansion of the universe? The current accelerated cosmic expansion is a widely accepted fact in the scientific community, confirmed by several observational sources [3; 4; 5; 6; 7; 8; 9]. In an attempt to find solutions to these problems, two different ways have been proposed in the literature: (i) introducing an exotic component of energy, called dark energy, into general relativity or (ii) modifying the Lagrangian of general relativity without resorting to dark energy. In this work, the second option is considered. Alternative theories to general relativity have been a hot topic since Einstein proposed his model. A historical review of the first attempts to extend general relativity, for different motivations, is presented in reference [10]. For an overview of modified gravity theories, see references [11; 12]. Here, Ricci-inverse gravity [13] is the alternative theory of gravity chosen for investigation. In this model, the modification consists of adding the anti-curvature tensor \(A^{\mu\nu}\) to the Einstein-Hilbert action. The tensor \(A^{\mu\nu}\) is defined as the inverse of the Ricci tensor \(R_{\mu\nu}\). With this tensor the anti-curvature scalar \(A=g_{\mu\nu}A^{\mu\nu}\) is defined. It is important to emphasize that \(A\neq R^{-1}\). In reference [14], the Ricci-inverse theory is generalized and two classes of Ricci-inverse gravity are defined: Class I and Class II. In Class I, the Lagrangian is proportional to a function \(f(R,A)\) that depends on the Ricci scalar \(R\) and the anti-curvature scalar \(A\), while in Class II the function takes the form \(f(R,A^{\mu\nu}A_{\mu\nu})\), which is a function of the Ricci scalar and the square of the anti-curvature tensor. 
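Since the anti-curvature scalar is built from the matrix inverse of the Ricci tensor rather than from the reciprocal of the Ricci scalar, the statement \(A\neq R^{-1}\) is easy to check on a toy example. The following short sketch uses an arbitrary nondegenerate diagonal Ricci tensor with purely illustrative numbers, not a solution of any field equations:

```python
import numpy as np

# Toy, purely illustrative numbers (not a solution of any field equations):
g = np.diag([-1.0, 1.0, 1.0, 1.0])     # metric g_{mu nu}
Ric = np.diag([3.0, 1.0, 2.0, 6.0])    # Ricci tensor R_{mu nu}, det != 0

A = np.linalg.inv(Ric)                 # anti-curvature: A^{mu nu} R_{nu sigma} = delta^mu_sigma
g_inv = np.linalg.inv(g)

R = np.einsum('mn,mn->', g_inv, Ric)   # Ricci scalar R = g^{mu nu} R_{mu nu}
A_scalar = np.einsum('mn,mn->', g, A)  # anti-curvature scalar A = g_{mu nu} A^{mu nu}

print(R, A_scalar, 1.0 / R)            # 6.0, 1.333..., 0.1666...: indeed A != 1/R
```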
Ricci-inverse gravity has been investigated in several contexts; for example, anisotropic compact structures have been explored [15], the matter-antimatter asymmetry through baryogenesis in the realm of \(f(R,A)\) theory has been analyzed [16], a non-relativistic static and spherically symmetric cosmic structure embedded into a de Sitter cosmology has been investigated [17] and a no-go theorem for inflation in an extended Ricci-inverse gravity model has been studied [18; 19]. Among the various topics investigated in this theory, an investigation into causality, its violation, and the existence of Closed Time-like Curves (CTCs) is missing. Here such a study is carried out. To explore causality and the presence of CTCs in Ricci-inverse gravity, an axially symmetric metric is considered [20; 21; 22]. The main feature of this spacetime is the possibility of CTCs, which are trajectories that allow objects to return to a point in their past. Another important point of this spacetime is that it satisfies the energy conditions. In such a study, different matter contents are analyzed. Furthermore, it is worth noting that the existence of CTCs is not exclusive to axially symmetric metrics. Other widely studied solutions that lead to the violation of causality are the Gödel metric [23] and the Gödel-type metric [24], which are general relativity solutions with rotating matter and cosmological constant. These metrics are also analyzed in the context of Ricci-inverse gravity. The present paper is organized as follows. In section II, an introduction to Ricci-inverse gravity is made. In section III, an axially symmetric metric in Ricci-inverse gravity is studied. The conditions for the existence of CTCs are presented. The set of field equations is solved in both general relativity and Ricci-inverse gravity. The results of the two theories are compared. It is shown that this alternative theory allows for causality violation. Different matter contents are considered, such as a scalar field and an electromagnetic field. A discussion about Gödel-type universes in Ricci-inverse gravity is made. In section IV, remarks and conclusions are presented. ## II Ricci-inverse gravity In this section, a brief introduction to Ricci-inverse gravity is presented. This is a new way to modify the Einstein-Hilbert action, proposed in [13]. This modification of Einstein's theory consists of the introduction of the anti-curvature tensor (\(A^{\mu\nu}\)) defined as \(A^{\mu\nu}R_{\nu\sigma}=\delta^{\mu}_{\sigma}\). In reference [14], two classes of Ricci-inverse gravity are introduced. Class I: the gravitational action is characterized by the function \(f(R,A)\) which depends on the Ricci (\(R\)) and anti-curvature (\(A\)) scalars. Class II: the theory is described by the function \(f(R,A^{\mu\nu}A_{\mu\nu})\) which is a function of the Ricci scalar and the square of the anti-curvature tensor. Here, Class I is considered. Then the action describing Ricci-inverse gravity is given as \[S=\int d^{4}x\sqrt{-g}\left[f(R,A)-2\Lambda+\mathcal{L}_{m}\right]. \tag{1}\] Taking \(f(R,A)=R+\kappa A\), Eq. (1) becomes \[S=\int d^{4}x\sqrt{-g}\left[(R+\kappa A-2\Lambda)+\mathcal{L}_{m}\right], \tag{2}\] where \(g\) is the metric determinant, \(\kappa\) is the coupling constant, \(R\) is the Ricci scalar, \(A=g_{\mu\nu}A^{\mu\nu}\) is the anti-curvature scalar, \(\Lambda\) is the cosmological constant and \(\mathcal{L}_{m}\) is the matter Lagrangian. 
This action can be written as \[S=\int d^{4}x\sqrt{-g}\left[(g_{\mu\nu}R^{\mu\nu}+\kappa g_{\mu\nu}A^{\mu\nu}-2\Lambda)+\mathcal{L}_{m}\right]. \tag{3}\] Varying the action with respect to the metric leads to the field equations given by \[R^{\mu\nu}-\frac{1}{2}Rg^{\mu\nu}+\Lambda g^{\mu\nu}-\kappa A^{\mu\nu}-\frac{\kappa}{2}Ag^{\mu\nu}+\frac{\kappa}{2}\Big{[}2g^{\rho\mu}\nabla_{\alpha}\nabla_{\rho}(A^{\sigma\alpha}A^{\nu}_{\sigma})-\nabla^{2}(A^{\sigma\mu}A^{\nu}_{\sigma})-g^{\mu\nu}\nabla_{\alpha}\nabla_{\beta}(A^{\sigma\alpha}A^{\beta}_{\sigma})\Big{]}=T^{\mu\nu}, \tag{4}\] where \(\nabla_{\mu}\) denotes the covariant derivative and \(T^{\mu\nu}\) is the energy-momentum tensor defined as \[T^{\mu\nu}=\frac{1}{\sqrt{-g}}\frac{\delta(\sqrt{-g}\mathcal{L}_{m})}{\delta g_{\mu\nu}}. \tag{5}\] Using that \(A^{\alpha}_{\sigma}A^{\nu\sigma}=A^{\alpha\tau}g_{\tau\sigma}A^{\sigma\nu}=A^{\alpha\tau}A^{\nu}_{\tau}=A^{\alpha\sigma}A^{\nu}_{\sigma}=A^{\nu}_{\sigma}A^{\alpha\sigma}\), the field equations become \[R^{\mu\nu}-\frac{1}{2}Rg^{\mu\nu}+\Lambda g^{\mu\nu}-\kappa A^{\mu\nu}-\frac{\kappa}{2}Ag^{\mu\nu}+\frac{\kappa}{2}\Big{[}2g^{\rho\mu}\nabla_{\alpha}\nabla_{\rho}(A^{\alpha}_{\sigma}A^{\nu\sigma})-\nabla^{2}(A^{\mu}_{\sigma}A^{\nu\sigma})-g^{\mu\nu}\nabla_{\alpha}\nabla_{\rho}(A^{\alpha}_{\sigma}A^{\rho\sigma})\Big{]}=T^{\mu\nu}. \tag{6}\] For simplicity, the Ricci-inverse gravity equations are written as \[R^{\mu\nu}-\frac{1}{2}Rg^{\mu\nu}+\Lambda g^{\mu\nu}+M^{\mu\nu}=T^{\mu\nu}, \tag{7}\] where \(M^{\mu\nu}\) contains the terms that modify the equations of general relativity due to the anti-curvature tensor and is defined as \[M^{\mu\nu}=\kappa\left[A^{\mu\nu}+\frac{A}{2}g^{\mu\nu}-\frac{1}{2}\left(2g^{\rho\mu}\nabla_{\alpha}\nabla_{\rho}(A^{\alpha}_{\sigma}A^{\nu\sigma})-\nabla^{2}(A^{\mu}_{\sigma}A^{\nu\sigma})-g^{\mu\nu}\nabla_{\alpha}\nabla_{\rho}(A^{\alpha}_{\sigma}A^{\rho\sigma})\right)\right]. \tag{8}\] Our aim is to investigate cosmological solutions associated with causality violation. However, it is important to note that Ricci-inverse gravity is constructed assuming that there is an anti-curvature tensor, i.e., \(A^{\mu\nu}=R_{\mu\nu}^{-1}\). Therefore, such a study is possible only in spacetimes where this quantity exists; it is defined as \[A^{\mu\nu}=\frac{1}{det[R_{\mu\nu}]}adj[R_{\mu\nu}]. \tag{9}\] Note that, if the determinant of the Ricci tensor is null, there is no way to invert \(R_{\mu\nu}\). Therefore, \(det(R_{\mu\nu})\neq 0\) is a necessary condition to investigate a metric as a possible solution of Ricci-inverse gravity. In the next section, an axially symmetric spacetime is studied in Ricci-inverse gravity. The main objective is to investigate whether this gravitational model allows solutions that lead to causality violation. In addition to the axially symmetric metric, other solutions of general relativity, such as the Gödel and Gödel-type solutions, are discussed in the approach described by Ricci-inverse gravity. ## III Axially symmetric metric in Ricci-inverse gravity Here, an axially symmetric metric is investigated in Ricci-inverse gravity. The main feature of this solution is the presence of Closed Timelike Curves (CTCs), as discussed in references [20; 21; 22]. 
The line element that describes this spacetime at \((t,r,\phi,z)\) coordinates is given as \[\mathrm{d}s^{2}=\frac{\mathrm{d}r^{2}}{\alpha^{2}r^{2}}+r^{2}\mathrm{d}z^{2}+\left(-2r^{2}\mathrm{d}t+\frac{\beta z\mathrm{d}r}{r^{2}}-tr^{2}\mathrm{d}\phi\right)\mathrm{d}\phi, \tag{10}\] where \(\alpha\) and \(\beta\) are non-zero constants, with \(\beta>0\). It should be noted from Eq. (10) that this spacetime has a coordinate singularity at \(r=0\). The most notable characteristic of this metric is its ability to display CTCs. These curves can be obtained by considering \(r=r_{0}\), \(z=z_{0}\) and \(t=t_{0}\), with \(r_{0},z_{0},t_{0}=\mathrm{const.}\) in Eq. (10). Then \[\mathrm{d}s^{2}=-t_{0}r_{0}^{2}\mathrm{d}\phi^{2}. \tag{11}\] This leads to three different curves: (i) a null curve for \(t_{0}=0\); (ii) a space-like curve for \(t_{0}<0\) and (iii) a time-like curve for \(t_{0}>0\). Therefore, in this spacetime, CTCs appear at an instant of time \(t=t_{0}>0\). In order to study this axially symmetric metric in Ricci-inverse gravity and compare the results with those obtained in general relativity, let us first review this solution in Einstein's theory. For this purpose, some geometric elements associated with the metric are necessary. Explicitly, the metric components are given as \[g_{02} = g_{20}=-r^{2},\] \[g_{11} = \frac{1}{\alpha^{2}r^{2}},\] \[g_{12} = g_{21}=\frac{\beta z}{2r^{2}},\] \[g_{22} = -r^{2}t,\] \[g_{33} = r^{2}, \tag{12}\] and their inverses are \[g^{00} = \frac{\alpha^{2}\beta^{2}z^{2}}{4r^{6}}+\frac{t}{r^{2}},\] \[g^{01} = g^{10}=\frac{\alpha^{2}\beta z}{2r^{2}},\] \[g^{02} = g^{20}=-\frac{1}{r^{2}},\] \[g^{11} = \alpha^{2}r^{2},\] \[g^{33} = \frac{1}{r^{2}}. \tag{13}\] These metric components lead to the non-zero Christoffel symbols \[\Gamma^{0}_{01} = \frac{1}{r},\qquad\Gamma^{0}_{02}=\frac{\alpha^{2}\beta z+r}{2r},\qquad\Gamma^{0}_{11}=\frac{\beta z}{2r^{5}},\] \[\Gamma^{0}_{12} = -\frac{\alpha^{2}\beta^{2}z^{2}}{4r^{5}},\qquad\Gamma^{0}_{13}=-\frac{\beta}{4r^{4}},\] \[\Gamma^{0}_{22} = \frac{\alpha^{2}\beta^{2}z^{2}+4\alpha^{2}\beta r^{3}tz+4r^{4}t}{8r^{4}},\qquad\Gamma^{0}_{23}=\frac{\alpha^{2}\beta^{2}z}{8r^{4}},\] \[\Gamma^{0}_{33} = -\frac{\alpha^{2}\beta z}{2r},\qquad\Gamma^{1}_{02}=\alpha^{2}r^{3},\qquad\Gamma^{1}_{11}=-\frac{1}{r},\] \[\Gamma^{1}_{12} = -\frac{\alpha^{2}\beta z}{2r},\qquad\Gamma^{1}_{22}=\frac{1}{4}\alpha^{2}\left(\beta z+4r^{3}t\right),\] \[\Gamma^{1}_{23} = \frac{\alpha^{2}\beta}{4},\qquad\Gamma^{1}_{33}=-\alpha^{2}r^{3},\qquad\Gamma^{2}_{12}=\frac{1}{r},\] \[\Gamma^{2}_{22} = -\frac{1}{2},\qquad\Gamma^{3}_{12}=-\frac{\beta}{4r^{4}},\qquad\Gamma^{3}_{13}=\frac{1}{r}.\] Thus, the non-zero components of the Ricci tensor are \[R_{02} = 3\alpha^{2}r^{2},\] \[R_{11} = -\frac{3}{r^{2}},\] \[R_{12} = -\frac{3\alpha^{2}\beta z}{2r^{2}},\] \[R_{22} = \frac{\alpha^{2}\left(\beta^{2}+24r^{6}t\right)}{8r^{4}},\] \[R_{33} = -3\alpha^{2}r^{2}, \tag{14}\] and in the contravariant form become \[R^{00} = \frac{\alpha^{2}\left[\beta^{2}\left(1-6\alpha^{2}r^{2}z^{2}\right)-24r^{6}t\right]}{8r^{8}},\] \[R^{01} = -\frac{3\alpha^{4}\beta z}{2r^{2}},\] \[R^{02} = \frac{3\alpha^{2}}{r^{2}},\] \[R^{11} = -3\alpha^{4}r^{2},\] \[R^{33} = -\frac{3\alpha^{2}}{r^{2}}. \tag{15}\] The Ricci scalar is given as \[R=g_{\mu\nu}R^{\mu\nu}=-12\alpha^{2}.\] With these ingredients, the field equations of general relativity, \[R^{\mu\nu}-\frac{1}{2}Rg^{\mu\nu}+\Lambda g^{\mu\nu}=T^{\mu\nu}, \tag{16}\] can be solved. 
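All of the curvature data above can be verified mechanically. The following sympy sketch, an illustrative verification script written for this text rather than taken from the paper, builds the metric components (12), computes the Christoffel symbols and the Ricci tensor from their standard definitions, and recovers \(R=-12\alpha^{2}\) and \(det[R_{\mu\nu}]=-81\alpha^{6}r^{4}\); the anti-curvature tensor then follows by matrix inversion as in Eq. (9).

```python
import sympy as sp

t, r, phi, z = sp.symbols('t r phi z', real=True)
alpha, beta = sp.symbols('alpha beta', positive=True)
x = [t, r, phi, z]

# metric components (12) of the axially symmetric line element (10)
g = sp.zeros(4, 4)
g[0, 2] = g[2, 0] = -r**2
g[1, 1] = 1 / (alpha**2 * r**2)
g[1, 2] = g[2, 1] = beta * z / (2 * r**2)
g[2, 2] = -r**2 * t
g[3, 3] = r**2
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
           + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d])) / 2
           for d in range(4))) for c in range(4)] for b in range(4)]
         for a in range(4)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                     + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    expr = sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
               + sum(Gamma[a][a][d] * Gamma[d][b][c]
                     - Gamma[a][c][d] * Gamma[d][b][a] for d in range(4))
               for a in range(4))
    return sp.simplify(expr)

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))
print(sp.simplify(sum(ginv[i, j] * Ric[i, j]
                      for i in range(4) for j in range(4))))  # -> -12*alpha**2
print(sp.factor(Ric.det()))   # -> -81*alpha**6*r**4, nonzero, so Eq. (9) applies
A = Ric.inv()                 # anti-curvature tensor A^{mu nu}
print(sp.simplify(A[0, 2]))   # -> 1/(3*alpha**2*r**2), matching Eq. (25)
```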
For such an analysis, pure radiation is chosen as the matter content whose energy-momentum tensor is defined as \[T^{\mu\nu}=\rho\zeta^{\mu}\zeta^{\nu}, \tag{17}\] with \(\zeta^{\mu}=(1,0,0,0)\) being a null vector. Then the field equations of general relativity are given as \[\rho = \frac{6\alpha^{4}\beta^{2}r^{2}z^{2}+\alpha^{2}\left[\beta^{2}\left(2\Lambda r^{2}z^{2}+1\right)+24r^{6}t\right]+8\Lambda r^{6}t}{8r^{8}}, \tag{18}\] \[0 = \frac{\alpha^{2}\beta z\left(3\alpha^{2}+\Lambda\right)}{2r^{2}}, \tag{19}\] \[0 = -\frac{3\alpha^{2}+\Lambda}{r^{2}}, \tag{20}\] \[0 = \alpha^{2}r^{2}\left(3\alpha^{2}+\Lambda\right), \tag{21}\] \[0 = \frac{3\alpha^{2}+\Lambda}{r^{2}}. \tag{22}\] This set of equations leads to the solution \[\rho = \frac{\alpha^{2}\beta^{2}}{8r^{8}},\] \[\Lambda = -3\alpha^{2}. \tag{23}\] Therefore, the axially symmetric metric is a solution of general relativity with a negative cosmological constant and an energy density that decreases with \(r\) [20; 21]. Now the goal is to study this cosmological solution in Ricci-inverse gravity and discuss whether CTCs are present in this gravitational theory. To calculate the anti-curvature tensor using Eq. (9), it is first necessary to analyze the determinant of the Ricci tensor. For the axially symmetric metric one finds that \[det[R_{\mu\nu}]=-81\alpha^{6}r^{4}. \tag{24}\] Then, as the determinant of the Ricci tensor is not zero, it is possible to determine all components of the anti-curvature tensor. The non-zero components are \[A^{00} = -\frac{6\alpha^{2}\beta^{2}r^{2}z^{2}+\beta^{2}+24r^{6}t}{72\alpha^{2}r^{8}},\] \[A^{01} = -\frac{\beta z}{6r^{2}},\] \[A^{02} = \frac{1}{3\alpha^{2}r^{2}},\] \[A^{11} = -\frac{r^{2}}{3},\] \[A^{33} = -\frac{1}{3\alpha^{2}r^{2}}, \tag{25}\] and in the covariant form become \[A_{02} = \frac{r^{2}}{3\alpha^{2}},\] \[A_{11} = -\frac{1}{3\alpha^{4}r^{2}},\] \[A_{12} = -\frac{\beta z}{6\alpha^{2}r^{2}},\] \[A_{22} = -\frac{\beta^{2}-24r^{6}t}{72\alpha^{2}r^{4}},\] \[A_{33} = -\frac{r^{2}}{3\alpha^{2}}. \tag{26}\] And the anti-curvature scalar is given as \[A=g_{\mu\nu}A^{\mu\nu}=-\frac{4}{3\alpha^{2}}. \tag{27}\] With these quantities and considering pure radiation given in the energy-momentum tensor (17) as the source of matter, the Ricci-inverse gravity field equations, Eq. (7), become \[\rho = \frac{1}{216\alpha^{2}r^{8}}\left\{162\alpha^{6}\beta^{2}r^{2}z^{2}+27\alpha^{4}\left[\beta^{2}\left(2\Lambda r^{2}z^{2}+1\right)+24r^{6}t\right]+54\alpha^{2}\left(\beta^{2}\kappa r^{2}z^{2}+4\Lambda r^{6}t\right)-35\beta^{2}\kappa+216\kappa r^{6}t\right\}, \tag{28}\] \[0 = \frac{\beta z\left(3\alpha^{4}+\alpha^{2}\Lambda+\kappa\right)}{2r^{2}}, \tag{29}\] \[0 = -\frac{3\alpha^{4}+\alpha^{2}\Lambda+\kappa}{\alpha^{2}r^{2}}, \tag{30}\] \[0 = r^{2}\left(3\alpha^{4}+\alpha^{2}\Lambda+\kappa\right), \tag{31}\] \[0 = \frac{3\alpha^{4}+\alpha^{2}\Lambda+\kappa}{\alpha^{2}r^{2}}. \tag{32}\] Solving this set of equations for the energy density \(\rho\) and the cosmological constant \(\Lambda\), we get \[\rho = \frac{\beta^{2}\left(27\alpha^{4}-35\kappa\right)}{216\alpha^{2}r^{8}},\] \[\Lambda = -\frac{3\alpha^{4}+\kappa}{\alpha^{2}}. \tag{33}\] With the above results, it can be stated that the axially symmetric metric is a solution of the Ricci-inverse gravity field equations. Therefore, this gravitational theory presents CTCs as defined in Eq. (11), and as a consequence, causality is violated. Comparing with the results of general relativity, Eq. 
(23), it is noted that in both cases the cosmological constant assumes a negative value. With regard to the energy density, some observations must be made. In general relativity, the energy density satisfies the energy conditions and is always positive [20; 21]. In Ricci-inverse gravity, to obtain a positive energy density, a relation arises between the constant \(\alpha\) and the coupling constant \(\kappa\), i.e., \(\kappa<\frac{27\alpha^{4}}{35}\). Also, it is important to note that the results of general relativity are recovered when the coupling constant \(\kappa\) becomes zero. In addition to these results that lead to a non-causal universe in Ricci-inverse gravity, other analyses have been developed considering different sources of matter. Two matter contents have been chosen: (i) a scalar field whose energy-momentum tensor is given as \[T^{\mu\nu(S)}=\partial^{\mu}\phi\partial^{\nu}\phi-\frac{1}{2}g^{\mu\nu}g_{\rho\lambda}\partial^{\rho}\phi\partial^{\lambda}\phi, \tag{34}\] and (ii) an electromagnetic field described by the energy-momentum tensor \[T^{\mu\nu(EM)}=-F^{\mu\alpha}F^{\nu}_{\alpha}+\frac{1}{4}g^{\mu\nu}F_{\beta\alpha}F^{\alpha\beta}. \tag{35}\] Using Eqs. (34) and (35) in Eq. (7), two sets of field equations are obtained. However, there are no solutions to these sets of equations. Therefore, these matter sources prevent the axially symmetric metric given in Eq. (10) from being a solution of Ricci-inverse gravity. This implies that the causality violation generated by this metric is avoided for a proper matter content. Although the main objective has been to study the axially symmetric metric with causality violation, other metrics that present CTCs have also been investigated in the context of Ricci-inverse gravity. The Gödel metric [23] is an exact solution of general relativity, but it is not a solution in Ricci-inverse gravity. The same result occurs for the Gödel-type metric [24]. In fact, for these metrics the determinant of the Ricci tensor is zero, i.e., the inverse of the Ricci tensor is not defined. This means that the anti-curvature tensor for both metrics cannot be calculated. Therefore, these metrics fail the first test to be studied in Ricci-inverse gravity. ## IV Conclusion An alternative theory of gravity has been considered. An anti-curvature tensor, defined as the inverse of the Ricci tensor, is used to construct Ricci-inverse gravity. In this theory an axially symmetric spacetime is studied. This metric allows the existence of CTCs, closed curves in time that permit travel to the past. As a consequence, the violation of causality arises. Considering pure radiation as the matter content, our results show that the axially symmetric metric is a solution in Ricci-inverse gravity. Then this gravitational model allows for the violation of causality. It is found that the cosmological constant has a negative value, as obtained in general relativity. However, a positive energy density, which satisfies the energy conditions, requires a condition involving the Ricci-inverse gravity coupling constant and the constant \(\alpha\), a free parameter of the metric. These results are consistent with Class I of the model, which is defined by a function of the form \(f(R,A)\). Class II, which is characterized by a function of the Ricci scalar and the square of the anti-curvature tensor, is also analyzed. But the axially symmetric metric is not a solution for this set of equations. 
For different sources of matter, such as a scalar field and an electromagnetic field, it is shown that this spacetime is not a solution of the gravitational model. Therefore, our results indicate that there is a relation between the matter content of the universe and the existence of CTCs or causality violation. Furthermore, the Gödel and Gödel-type universes were also investigated in Ricci-inverse gravity. Our analysis shows that these metrics lead to \(det(R_{\mu\nu})=0\), so there is no inverse of the Ricci tensor. Thus, it is impossible to construct an anti-curvature tensor associated with the Gödel-type universes. Therefore, these exact solutions of general relativity do not satisfy the mandatory condition to be investigated in Ricci-inverse gravity. Here a natural question arises: how can metrics that display \(det(R_{\mu\nu})=0\) be investigated in Ricci-inverse gravity? Are they simply excluded, or are they a problem of this alternative theory of gravity? These are open questions and are under investigation. ###### Acknowledgements. This work by A. F. S. is partially supported by National Council for Scientific and Technological Development - CNPq project No. 313400/2020-2. J. C. R. S. thanks CAPES for financial support.
2309.09203
Using Artificial Neural Networks to Determine Ontologies Most Relevant to Scientific Texts
This paper provides an insight into the possibility of how to find ontologies most relevant to scientific texts using artificial neural networks. The basic idea of the presented approach is to select a representative paragraph from a source text file, embed it into a vector space by a pre-trained fine-tuned transformer, and classify the embedded vector according to its relevance to a target ontology. We have considered different classifiers to categorize the output from the transformer, in particular random forest, support vector machine, multilayer perceptron, k-nearest neighbors, and Gaussian process classifiers. Their suitability has been evaluated in a use case with ontologies and scientific texts concerning catalysis research. The results show that random forest performs worst, while the support vector machine classifier achieves the best results in this task.
Lukáš Korel, Alexander S. Behr, Norbert Kockmann, Martin Holeňa
2023-09-17T08:08:50Z
http://arxiv.org/abs/2309.09203v1
# Using Artificial Neural Networks to Determine Ontologies Most Relevant to Scientific Texts ###### Abstract This paper provides an insight into the possibility of how to find ontologies most relevant to scientific texts using artificial neural networks. The basic idea of the presented approach is to select a representative paragraph from a source text file, embed it into a vector space by a pre-trained fine-tuned transformer, and classify the embedded vector according to its relevance to a target ontology. We have considered different classifiers to categorize the output from the transformer, in particular random forest, support vector machine, multilayer perceptron, k-nearest neighbors, and Gaussian process classifiers. Their suitability has been evaluated in a use case with ontologies and scientific texts concerning catalysis research. The obtained results confirm support vector machines as a promising classifier, but surprisingly show random forest as unsuitable for this task. _Keywords:_ ontology; text data; text preprocessing; text representation learning; text classification ## 1 Introduction A domain ontology defines a set of representational primitives with which to model a domain of knowledge or discourse. The representational primitives are typically classes, attributes, and relationships. The definitions of the representational primitives include information about their meaning and constraints on their logically consistent application. Classes can be defined in two ways: by annotating their definitions, or by connecting classes with each other and with properties. Each domain ontology typically uses domain-specific definitions of terms denoting its primitives. The FAIR research data management (Findable, Accessible, Interoperable, and Reusable) needs a consistent data representation in ontologies, particularly for representing the data structure in the specific domain [34]. Since different ontologies are written by different people, they are often incompatible, even within the same domain. As systems that rely on domain ontologies expand, it is often necessary to merge domain ontologies by manual tuning. The same is true for enhancing an ontology with information available in domain-related texts. Merging and enhancing ontologies is thus a largely manual process and therefore time-consuming and expensive. Finding a suitable ontology for an input text can help in classifying the information presented within the text as well as in connecting the input text with data. This would allow for automated selection of ontologies and a respective classification of the text. Different text data could thus be compared automatically in an understandable way and connected with corresponding research data. Ontologies represent "a formal specification of a shared conceptualization" [7] and can thus be used to express knowledge and data in a formalized, standardized description language to specify terms and relations between those terms. Current ontology recommenders, such as the NCBO ontology recommender [8], score annotations based on words similar to preferred and alternate labels of ontology classes and on term frequency. In contrast, this work aims to use text representation learning in order not only to search for words also contained in ontologies but also to find concepts with similar semantic meaning shared between the text and the ontology. 
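As a minimal sketch of this idea, consider scoring an input sentence against ontology class labels in embedding space instead of by shared surface words. The encoder name and the class labels below are illustrative placeholders, not the fine-tuned transformer or the catalysis ontologies actually used in this work:

```python
from sentence_transformers import SentenceTransformer, util

# Placeholder pre-trained encoder and made-up ontology class labels
model = SentenceTransformer("all-MiniLM-L6-v2")
class_labels = ["heterogeneous catalyst", "reaction rate", "crystal lattice"]
text = "The conversion over the supported Pt catalyst doubled at 500 K."

# cosine similarity in embedding space, nonzero even without shared words
scores = util.cos_sim(model.encode(text), model.encode(class_labels))
print(scores)
```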
This paper is devoted to a specific problem encountered during the enhancement of ontologies and sometimes during their merging: to decide which of several available ontologies is most relevant to a given domain-related piece of text. Our solution to the problem relies primarily on artificial neural networks (ANNs), in particular on natural language processing (NLP). The next section surveys the applicability of artificial neural networks to ontologies. Section 3 recalls the employed methods of text preprocessing: modules are used for extracting text from PDF files, for transforming the extracted files to plain text, and for eliminating irrelevant paragraphs. In that section, text representation learning is also described, as well as the principles of the employed classifiers. In section 4, an application of the proposed methodology to catalysis is described and evaluated. With regard to the sources surveyed in section 2 of this article, we are not aware that classifiers trained on the results of representation learning have ever been used to determine the most relevant of a given set of ontologies.
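A minimal end-to-end sketch of the described pipeline, with a generic pre-trained encoder and toy paragraphs standing in for the fine-tuned transformer and the preprocessed catalysis corpus, could look as follows; the support vector machine is used since it performed best in the evaluation:

```python
from sentence_transformers import SentenceTransformer
from sklearn.svm import SVC

# Toy paragraphs and ontology indices (0 = catalysis ontology, 1 = materials
# ontology); all texts and labels here are illustrative stand-ins.
paragraphs = [
    "Palladium nanoparticles catalyze the hydrogenation of alkenes.",
    "The turnover frequency of the zeolite catalyst was measured at 350 K.",
    "The tensile strength of the alloy increases after annealing.",
    "Grain boundaries dominate the fracture behavior of the ceramic.",
]
labels = [0, 0, 1, 1]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder
X = encoder.encode(paragraphs)                     # paragraph embeddings

clf = SVC(kernel="rbf")                            # SVM classifier
clf.fit(X, labels)
query = encoder.encode(["Acid sites govern the catalytic cracking activity."])
print(clf.predict(query))                          # -> most relevant ontology index
```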
2309.06130
JOADAA: joint online action detection and action anticipation
Action anticipation involves forecasting future actions by connecting past events to future ones. However, this reasoning ignores the real-life hierarchy of events, which is considered to be composed of three main parts: past, present, and future. We argue that considering these three main parts and their dependencies could improve performance. On the other hand, online action detection is the task of predicting actions in a streaming manner. In this case, one has access only to the past and present information. Therefore, in online action detection (OAD) the existing approaches miss semantics or future information, which limits their performance. To sum up, for both of these tasks, the complete set of knowledge (past-present-future) is missing, which makes it challenging to infer action dependencies and therefore leads to low performance. To address this limitation, we propose to fuse both tasks into a single uniform architecture. By combining action anticipation and online action detection, our approach can cover the missing dependencies of future information in online action detection. This method, referred to as JOADAA, presents a uniform model that jointly performs action anticipation and online action detection. We validate our proposed model on three challenging datasets: THUMOS'14, which is a sparsely annotated dataset with one action per time step, CHARADES, and Multi-THUMOS, two densely annotated datasets with more complex scenarios. JOADAA achieves SOTA results on these benchmarks for both tasks.
Mohammed Guermal, Francois Bremond, Rui Dai, Abid Ali
2023-09-12T11:17:25Z
http://arxiv.org/abs/2309.06130v1
# JOADAA: joint online action detection and action anticipation ###### Abstract Action anticipation involves forecasting future actions by connecting past events to future ones. However, this reasoning ignores the real-life hierarchy of events, which is considered to be composed of three main parts: past, present, and future. We argue that considering these three main parts and their dependencies could improve performance. On the other hand, online action detection is the task of predicting actions in a streaming manner. In this case, one has access only to the past and present information. Therefore, in online action detection (OAD) the existing approaches miss semantics or future information, which limits their performance. To sum up, for both of these tasks, the complete set of knowledge (past-present-future) is missing, which makes it challenging to infer action dependencies and leads to low performance. To address this limitation, we propose to fuse both tasks into a single uniform architecture. By combining action anticipation and online action detection, our approach can cover the missing dependencies of future information in online action detection. This method, referred to as JOADAA, presents a uniform model that jointly performs action anticipation and online action detection. We validate our proposed model on three challenging datasets: THUMOS'14, which is a sparsely annotated dataset with one action per time step, and CHARADES and Multi-THUMOS, two densely annotated datasets with more complex scenarios. JOADAA achieves SOTA results on these benchmarks for both tasks. ## I Introduction Envisioning upcoming occurrences plays a vital role in human intelligence, as it aids in making choices while engaging with the surroundings. Humans possess an inherent skill to predict future happenings in diverse situations involving interactions with the environment. Likewise, the capacity to anticipate events is imperative for advanced AI systems operating in intricate settings, including interactions with other agents or individuals. The goal of online action detection (OAD) is to accurately pinpoint ongoing actions in streaming media, by predicting impending events. Action anticipation, in turn, advances OAD and imitates the capacity of human cognition to anticipate events before they occur. Therefore, OAD and action anticipation are two important areas of research in computer vision, with numerous applications in security surveillance, home care, sports analysis, self-driving cars, and online danger detection. Human perception of actions can be viewed as a continuous cycle in which prior knowledge is used to forecast future behavior, and then present knowledge is used to revise and update future predictions. To tackle action detection, we propose a unified framework of action anticipation and online action detection. Our predictions proceed in two steps: first, we anticipate upcoming actions based on past information; second, we update the anticipation by introducing the present information. By doing so, we improve online action detection by introducing the anticipated actions as pseudo-future information. In addition, this improves action anticipation by comparing the prediction to the present information, thus combining the two tasks to improve both. Transformer networks such as [1, 2, 3] have had a significant impact on computer vision and video understanding. This is due to their ability to capture long-range dependencies.
LSTR [4], TesTra [5], and FUTR [6] have benefited from transformer backbones to address the tasks of OAD and AA (action anticipation). However, OAD and AA suffer from limited information, as they do not have access to future information or global knowledge of the scene. This limited information restricts the ability of transformers to capture long-range dependencies and to learn significant relations between events. This can be demonstrated by comparing the effectiveness of models for offline action detection with online action detection. Offline, one has access to all pieces of information and a clear knowledge of the past, present, and future. Furthermore, complex densely annotated datasets (such as Multi-THUMOS [7]) have not been explored for online action detection and anticipation. It is challenging to recognize and foresee activities in such datasets. Most OAD architectures are only validated on sparsely annotated activity datasets. Such sparsely annotated datasets are less challenging. First, these datasets do not have co-occurring actions. Second, they rarely have dependencies between actions in distant time steps. Furthermore, actions in densely annotated datasets have many possible outcomes. An example of these complex dependencies is given in Figure 1. Due to these challenges, OAD methods are only validated on simple datasets. Therefore, even with the help of transformers, it is difficult to build knowledge of these long-range dependencies without having access to complete information. In the past, OAD and action anticipation have been treated as separate tasks. To tackle the above challenges, we propose JOADAA (Joint Online Action Detection and Action Anticipation), which addresses OAD and AA together. We create a pseudo-future when performing online action detection. By leveraging cross-attention between the real frame features and the anticipated frames, we enhance the quality of the features, thus improving the accuracy of the predictions by making the present aware of a pseudo-future. Next, we propose to extract two types of information from these updated features: local dependencies using TCNs (temporal convolution networks) and global dependencies using MHA (multi-head attention). Finally, we fuse both pieces of information to make online action detection predictions. In this paper, following previous work, we extract features from video clips using 3D convolutional neural networks (3D CNNs). We use I3D [8], pre-trained on the Kinetics dataset [9], as a backbone. We store these extracted features in a memory bank. JOADAA consists of three main parts: i) a **Past Processing Block**, ii) an **Anticipation Prediction Block**, and iii) an **Online Action Prediction Block**. First, we capture past information using a transformer encoder. The encoder output is first passed through a classification layer, which helps improve the quality of the embedding by making it class-dependent. Next, in the anticipation prediction part, we assume that the current frame has not yet been received. A transformer decoder is employed to learn from the last layer of the past embeddings to anticipate the upcoming actions in the next frame. This is carried out by introducing a set of learnable queries, called _anticipation queries_. Finally, the online action prediction part uses the anticipation embedding and current frame features to enhance the quality of the current frame. The new enhanced present frame features are fused with past features.
Finally, global and local information is extracted using MHA and TCN layers, respectively, yielding a new enhanced feature map. Based on the challenges discussed, we propose the following main contributions: * We propose a new architecture, **JOADAA**, to jointly perform online action detection and action anticipation. * We tackle both tasks for two different types of datasets, a densely annotated dataset and a simple activity dataset. * We validate our proposed method on three benchmark datasets and achieve new SOTA results for online action detection and action anticipation. ## II Related work **Online Action Detection** is the task of localizing action instances in time. We distinguish two types of action detection, i.e., offline and online. In offline action detection, the model has access to the entire video [10, 11, 12, 13, 14]. Online action detection, on the other hand, occurs in real time and has access to the past and the present only. RED [15] uses a reinforcement loss to encourage early recognition of activities. IDN [16] learns discriminative features and stores only knowledge that is relevant in the present. To achieve optimal features, LAP-Net [17] presents an adaptive sampling technique. PKD [18] uses curriculum learning to transfer information from offline to online models. Shou et al. [19], similar to early action detection, focus on online detection of action start (ODAS). StartNet [20] divides ODAS into two stages and learns using a policy gradient. WOAD [21] employs video-level labeling and weakly-supervised learning. LSTR [4] uses a set of encoder-decoder architectures to capture the relations between long-term and short-term actions. They achieve state-of-the-art results on sparsely annotated datasets but perform poorly on densely labeled datasets such as Multi-THUMOS [7]. **Action Anticipation** is the task of predicting future actions given a limited observation of a video. In the past, many strategies have been proposed to solve next-action anticipation, forecasting a single future action a few seconds ahead. Recently, the idea of anticipating long-term activities from a long-range video has been put forward. Girdhar and Grauman [22] introduced the anticipative video transformer (AVT), which anticipates the following action using a self-attention decoder; this was further improved by FUTR [6] for minutes-long future actions. However, their architecture is suitable only for simple activities and simple datasets, which is not applicable to real-world scenarios that have multiple actions occurring at the same time. Finally, in the study of mixing action anticipation and online action prediction, the authors in [5] use the same architecture for both action anticipation and online action detection tasks. However, they dissociate these tasks, while we tackle both tasks jointly to improve both of them. Furthermore, the architecture in [5] is very similar to [4]; therefore, the same limitations apply here as well. Fig. 1: An example of human non-sequential dependencies. For instance, the actions _Run_ and _OneHanded Catch_ are highly correlated but distant. Also, the same start action _Run_ can lead to many different actions and scenarios. Therefore, it is very hard for online action detection or action anticipation to detect such relations without access to the future. In JOADAA, we propose to tackle this limitation by introducing pseudo-future information, combining action anticipation and online action detection in the same task.
In summary, to have adequate predictions, we need to build a well-descriptive hierarchy of information consisting of past, present, and future. Unfortunately, tasks such as online action detection or action anticipation do not have access to this global knowledge. In our work, we suggest combining OAD and AA in order to create pseudo-full knowledge that can improve action anticipation accuracy and produce comparable results for online action detection. ## III Proposed method The whole architecture consists of three main parts, i) a Past Processing Block, ii) an Anticipation Prediction Block, and iii) an Online Action Prediction Block, as shown in Figure 2. First, a short-term past transformer encoder enhances the features. Second, an anticipation transformer decoder anticipates the actions in the upcoming frames, using the embedding output from the previous block and a set of learnable queries, which we call anticipation queries. Finally, a transformer decoder uses the anticipation results and past information to predict the actions for the current frame (online action detection). Each module is explained in the following. ### _Past Processing Block_ To enhance the ongoing action prediction, the initial stage in our model is to infer prior information. We employ a transformer encoder that accepts the embedding of previous frames as input. This enables us to highlight salient and robust frames by leveraging attention mechanisms, making our features more descriptive of previous activities. It can be challenging to identify which activity a person is performing based solely on the raw embedding of the current frame. For instance, if the current frame shows the person _holding a bottle_, we are not sure whether the ongoing action will be _picking up the bottle, placing the bottle, drinking water, or pouring water_. However, if we know from the past that one of the previous actions was _opening the bottle_, we can be more confident that the person is more likely to _drink water_. These features are later used to anticipate future actions. Following [1], the equations below sum up the first block of our architecture: \[F^{\prime}=ATTENTION(F) \tag{1}\] \[ATTENTION(F)=Softmax(QK^{T}/\sqrt{d_{k}})V \tag{2}\] \[Q=W_{q}\times X,\quad K=W_{k}\times X,\quad V=W_{v}\times X \tag{3}\] \[X=F+PE(F) \tag{4}\] Here, PE stands for positional encoding, \(F\in\mathbb{R}^{T\times D}\) are the features extracted using the pre-trained I3D model [8], and \(W_{q}\), \(W_{k}\), and \(W_{v}\) are learnable weights. Furthermore, we propose different approaches for the use of past information. Following [4], we use long-term and short-term past information. Experimentally, the usefulness of long-term versus short-term past information is highly dependent on the type of dataset. The first intuition is that more information is always good for a neural network, as it provides a more detailed description of events in a video. Especially with the use of transformers, we can capture long-range dependencies to learn all the steps that lead to the current actions. However, in our study, we find that this is not always true. For instance, very long-range past knowledge may sometimes harm performance, especially for densely annotated datasets. In scenarios where many actions co-occur, it is challenging to learn significant long-term relations, and these long-term features may thus act as noise to the model. Further experimental details are provided in Section IV-D.
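To make this block concrete, the following PyTorch sketch mirrors Eqs. (1)-(4): a positional encoding is added to the I3D clip features, which then pass through multi-head self-attention. It is a minimal sketch under our own assumptions (a sinusoidal form for PE, a single attention layer); the head count and hidden size follow the implementation details given later, but this is not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class PastEncoder(nn.Module):
    """Minimal sketch of Eqs. (1)-(4): X = F + PE(F), then self-attention."""
    def __init__(self, dim=1024, heads=16, max_len=512):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        pos = torch.arange(max_len, dtype=torch.float32).unsqueeze(1)
        div = torch.exp(torch.arange(0, dim, 2, dtype=torch.float32)
                        * (-math.log(10000.0) / dim))
        pe = torch.zeros(max_len, dim)          # assumed sinusoidal PE(F)
        pe[:, 0::2] = torch.sin(pos * div)
        pe[:, 1::2] = torch.cos(pos * div)
        self.register_buffer("pe", pe)

    def forward(self, feats):                   # feats: (batch, T, dim) from I3D
        x = feats + self.pe[: feats.size(1)]    # Eq. (4)
        out, _ = self.attn(x, x, x)             # Eqs. (1)-(3), learned W_q/W_k/W_v
        return out                              # F' in Eq. (1)
```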
### _Anticipation prediction Block_ Inspired by [6], this module takes a feature map \(F^{\prime}\in\mathbb{R}^{T\times D}\) and a set of learnable anticipation queries \(LQ\in\mathbb{R}^{N_{q}\times D}\) as inputs. Here, \(N_{q}\) represents the number of queries and \(D\) is the embedding dimension, which is the same as for the feature map. Action anticipation can be achieved in two different ways. The first way is to proceed directly with a transformer encoder, which sees only a glimpse of the past and learns to predict the future. Alternatively, one can utilize a transformer decoder. In this approach, the strength of using learnable queries with a transformer decoder is that each query learns a specific feature for a specific frame in the future. The positional encoding indicates to the transformer the order of these learnable queries and helps the model relate each query to a corresponding point in the future. Additionally, by having these learnable queries, the model learns to adapt to each clip, since the queries are conditioned on the past information of each clip. Therefore, these learnable queries learn to be aware of the past. JOADAA uses these learnable queries as a link between past events and possible future ones. \[N_{q}=1+N_{f} \tag{5}\] In Eq. (5), the 1 accounts for the upcoming frame that represents the ongoing action (shown in red in Figure 2); since we do not yet have access to this frame, it is also anticipated. \(N_{f}\) is the number of frames to anticipate in the future, to which we have no access. Information from the past, present, and future is connected by these learnable queries to improve both tasks efficiently. Later, these anticipation queries act as a pseudo-future for the prediction of the ongoing action; see Section III-C. ### _Online action prediction Block_ At this stage, we feed the features of the current frame, together with the previously learned features of potential actions in the current and subsequent time steps, into a decoder. Our model can classify the current frame more accurately because it has pseudo-future knowledge. Modeling information this way has two effects. The prediction of the current frame is initially optimized by employing the anticipation queries, and since we can access the current frame, we can also refine the learned query for the current frame, which benefits our anticipation module. In addition, our local-to-global layers improve the performance of JOADAA. Adding a TCN layer (1D temporal convolution) helps the model capture local information. Transformers have proven to be a good tool to capture global and long-range dependencies. However, as explained earlier, this huge amount of information is not always helpful and may act as noise. Therefore, by mixing transformers with TCNs, our model learns complementary information from an updated feature map, which we pass through an FC (fully connected) layer for classification. Notably, in all classification layers (past, future, and present), we utilize a Softmax layer for datasets with only one action at a time and a Sigmoid layer for datasets with co-occurring actions. Note that we use three different concatenation layers in our architecture; they are detailed after the sketch below.
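As a rough sketch of the anticipation block just described, the snippet below pairs \(N_{q}=1+N_{f}\) learnable queries (Eq. (5)) with a standard transformer decoder over the past embeddings. The single decoder layer, the random initialization of the queries, and \(N_{f}=4\) are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class AnticipationDecoder(nn.Module):
    """Sketch: N_q = 1 + N_f learnable anticipation queries (Eq. (5))
    cross-attend to the past embeddings through a transformer decoder."""
    def __init__(self, dim=1024, heads=16, n_future=4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(1 + n_future, dim))
        layer = nn.TransformerDecoderLayer(dim, heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=1)

    def forward(self, past):                    # past: (batch, T, dim) = F'
        q = self.queries.unsqueeze(0).expand(past.size(0), -1, -1)
        return self.decoder(q, past)            # (batch, 1 + N_f, dim)
```

Each row of the output is the embedding of one anticipated frame; the first row corresponds to the not-yet-observed ongoing frame.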
The first concatenation is between past frame features and anticipated frame features; the aim of this concatenation is to provide the decoder with pseudo-complete information (past and pseudo-future), which is the main idea of our paper (use AA to enhance OAD). The second concatenation is between past frames and the updated current feature (which is now aware of past and possible future actions). Here we only concatenate past and present because online action detection is our main objective, which is why there is no further need for future information. The last concatenation combines the local information learned through the TCNs and the global information from the transformer decoder, which allows us to make better predictions, as shown in the ablation study in Table VIII. We also use the same decoder for future frame anticipation and current frame prediction. Experiments showed that using different decoders does not improve the accuracy and sometimes leads to a slight decrease in accuracy. Hence, to keep the model lighter and obtain better predictions, we share the weights. As for the encoders, the two of them are different; the last encoder is part of our proposed classification head, where we use a TCN to capture local dependencies and a transformer encoder to capture long-range dependencies. Therefore, our intuition was not to share the weights between the encoders, as they have separate functions in our architecture. Fig. 2: Proposed JOADAA architecture with three units: i) Past Processing, ii) Anticipation Prediction, and iii) Online Action Prediction. Each stage is highlighted by a color for better understanding. Each block is explained in detail in Section III. ## IV Experiments In this section, we discuss the experiments carried out for the online action detection and action anticipation tasks on two different types of datasets. First, we briefly describe the datasets used and explain the implementation of the experiments. Second, we compare JOADAA with existing SOTA methods for both online action detection and action anticipation. Finally, we explore the effectiveness of each module of our approach by performing an ablation study. More qualitative results are provided in the supplementary materials. ### _Datasets_ In this section, we briefly describe the datasets used in our experiments. We experiment on two types of datasets: i) a sparsely annotated dataset (THUMOS'14 [23]), and ii) densely annotated datasets (Multi-THUMOS [7] and CHARADES [24]). Each of them is described below. **THUMOS'14**: contains 413 untrimmed videos with 20 categories of actions. The dataset is divided into two subsets: the validation set and the test set. The validation set contains 200 videos, and the test set contains 213 videos. Following common practice, we use the validation set for training and report results on the test set. More details are available in [23]. **Multi-THUMOS**: contains dense, multilabel, frame-level action annotations for 30 hours across 400 videos from the THUMOS'14 [23] action detection dataset. It consists of 38,690 annotations of 65 action classes, with an average of 1.5 labels per frame and 10.5 action classes per video. More details can be found in [7]. **CHARADES**: is composed of 9,848 videos of daily indoor activities with an average length of 30 seconds, involving interactions with 46 object classes in 15 types of indoor scenes, and containing a vocabulary of 30 verbs leading to 157 action classes.
Readers can find more details in [24]. ### _Implementation Details_ We implement our proposed model in PyTorch [25]. All experiments are performed on a system with 3 Nvidia V100 graphics cards. For all transformer units, we set the number of heads to 16 and the hidden units to 1024 dimensions. To learn the weights of the model, we use the Adam optimizer [26] with weight decay \(5\times 10^{-5}\). The learning rate increases linearly from zero to \(5\times 10^{-5}\) over the first 40% of the training iterations and then decreases to zero following a cosine schedule. Our models are optimized with a batch size of 16 and trained for 25 epochs. **Evaluation protocol**: We follow previous work and use per-frame mean average precision (mAP) to evaluate performance. ### _Comparison with the SoTA_ #### IV-C1 OAD comparison on the simple dataset (THUMOS'14) Table I presents the results of online action detection. For the THUMOS'14 [23] dataset, we achieve state-of-the-art results by a margin of **1.4%**. GateHUB [31] held the SOTA results for OAD on the THUMOS'14 dataset. However, they provide two results on this dataset, one with TSN as the backbone feature extractor and one with TimeSformer [33]. Upon careful examination, we noticed the following points: 1) our accuracy still surpasses theirs; 2) the GateHUB method was not compared with TesTra, which demonstrated better accuracy with the same settings; 3) GateHUB achieves SOTA results only when TimeSformer [33] is used as the RGB feature extractor, making it difficult to determine whether the results are due to the extractor or to their proposed solution. In conclusion, while the GateHUB paper argues for capturing relevant information from the past to the present, our JOADAA method, which employs a simple implementation of transformers, outperforms it along with TesTra [5]. #### IV-C2 OAD comparison on densely annotated datasets We evaluate JOADAA on more complex datasets such as Multi-THUMOS [7] and CHARADES [24]. We train LSTR [4], TesTra [5], and TRN [29] on these datasets to build baseline methods, as there are no validated online methods on these datasets to compare JOADAA against. JOADAA improves on the baselines by **1.5%** on CHARADES [24] and **2.2%** on the Multi-THUMOS [7] dataset. The main difference between our approach and the baseline methods [4] and [5] is the introduction of pseudo-future knowledge into our online action prediction. It helps make more precise predictions by having knowledge of different possible outcomes. #### IV-C3 OAD comparison using offline methods For further comparison, we adapt offline methods to online settings. We use PDAN [14] and MS-TCT [30], two SOTA methods on CHARADES and Multi-THUMOS in offline action detection. We outperform these two methods on all three datasets: THUMOS'14, Multi-THUMOS, and CHARADES. #### IV-C4 AA SOTA comparison Similarly, our model achieves SOTA results on action anticipation, as noted in Table II. When increasing the number of anticipated frames from 1 to 6, TesTra's [5] accuracy drops by **13.6%** on the THUMOS'14 dataset, whereas our model's decreases by only **8.4%**, which showcases the robustness of our proposed solution. Also, JOADAA performs much better on the more complex datasets (CHARADES and Multi-THUMOS). In Table III, we examine how far we can foresee the future. We notice that, in general, the further we anticipate, the better the accuracy of online action detection (blue), until it reaches a level where the accuracy stops increasing.
Such behavior makes sense because the model can learn more action dependencies by inferring more information about upcoming events. On the other hand, the action anticipation results (red) decrease when the anticipation period increases, because the model has more space to explore. ### _Ablation study_ In this section, we discuss how the different modules contribute to JOADAA. #### IV-D1 Ablation on the past processing block First, we analyze the use of long-range past features on different datasets. As discussed in Section III, past information can be used in two manners: using only the short-term past (32 frames) or the long- and short-term past (512+32 frames). This past information is used to infer the pseudo-future in our approach. In Tables IV and V, we observe that our model is more robust when it comes to using only short-term past information (it decreases by **2%**) on THUMOS'14 [23], unlike LSTR [4], where the accuracy decreases by **4.1%**. One important result of our study is that long-range past knowledge is more important for simple actions (single-action datasets) than for complex actions (densely annotated datasets). This is because, in densely annotated datasets, numerous actions may occur simultaneously without being connected, making it more challenging to infer relations from them. As a result, including information from the distant past can skew model predictions. Recently, transformers have been widely used, since they outperform existing approaches such as 3D-CNNs and RNNs. In fact, 3D-CNNs are known to be good general feature extractors, as they can capture overall visual appearance in a video. However, their CNN filters capture pixel-level information in a local neighborhood but struggle with long-term dependencies. Therefore, we limit the use of 3D-CNNs to extracting video clip features for our architecture. Furthermore, action detection tasks require a strong grasp of long-range temporal dependencies, and transformers excel at capturing long-term information compared to RNNs. Therefore, transformers are the best choice for OAD and AA tasks. However, most recent papers use transformers based on the previous intuition without any justification. Table VI presents a comparison between RNNs (LSTMs [34]) and transformers. We replace our first encoder for past information processing with 3 blocks of LSTM and a convolution layer to reduce the feature map size. The results show that transformers are better suited for capturing long-range dependencies and produce far better results, which justifies our design choice. #### IV-D2 Ablation on the action anticipation module Another ablation study is presented in Table VII. We conduct two main experiments: one with the full JOADAA model and one without the action anticipation (AA) module. We can see that the AA module enhances online action detection, which supports our claim that combining AA and OAD leads to better results. #### IV-D3 Ablation on the OAD prediction layer Table VIII shows the effect of fusing local and global knowledge, in contrast to directly using the output of the decoder on the current frame, which carries only global information. By doing so, our results increase by **2.9%**. As argued earlier, this is due to the fact that TCNs can extract local changes and better detect relations between neighboring frames, whereas baseline transformers capture long-range dependencies that are sometimes not suited to predicting current-frame events.
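The local-global fusion examined in this ablation can be sketched as follows in PyTorch. The kernel size, the single-layer branches, and the 65-class output (the Multi-THUMOS vocabulary) are our illustrative choices, not the exact configuration used in the paper.

```python
import torch
import torch.nn as nn

class LocalGlobalHead(nn.Module):
    """Sketch of the prediction head: a 1D temporal convolution (local cues)
    and multi-head self-attention (global cues), concatenated for classification."""
    def __init__(self, dim=1024, heads=16, num_classes=65):
        super().__init__()
        self.tcn = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.mha = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fc = nn.Linear(2 * dim, num_classes)

    def forward(self, x):                                     # x: (batch, T, dim)
        local = self.tcn(x.transpose(1, 2)).transpose(1, 2)   # TCN branch
        glob, _ = self.mha(x, x, x)                           # attention branch
        fused = torch.cat([local, glob], dim=-1)              # fuse local + global
        return self.fc(fused)    # per-frame logits; Sigmoid/Softmax applied outside
```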
### _Qualitative Analysis_ In this section, we analyze the effectiveness of our method on densely annotated datasets. We study the anticipation improvement on six different actions from the Multi-THUMOS dataset, chosen according to their complexity, as shown in Figure 3. We observe that the gain for some of these actions can reach 37%, while for some others it is almost zero. In fact, our prediction block anticipates the upcoming frame alongside future frames. By having access to the current frame, our model can correlate the anticipated action with the real action; hence, we can learn to better anticipate the current frame, leading to a better-performing anticipation module. Upon closer examination of these actions, we find that the improvement is particularly important for activities that have multiple dependencies, or when the activity is interconnected with many other actions. The action **Run**, for instance, has correlations with up to seven other activities, as illustrated in Figure 1. The qualitative results in Figure 3 demonstrate the robustness of JOADAA for complex correlated activities. This opens doors for future studies to analyze OAD and action anticipation on complex dense datasets. ## V Conclusion Online action detection and anticipation are important fields in computer vision with many real-world applications. These two tasks are highly correlated, which is why we design JOADAA to address both tasks jointly, improving one using the other and vice versa. Furthermore, we discuss the limitations of OAD and action anticipation for sparsely and densely annotated datasets. Our model is limited in terms of effectively using long-range past features, especially for densely annotated datasets. Past knowledge undoubtedly adds to current knowledge and should lead to improvements. However, as demonstrated in this study, simply adding pre-extracted features to transformers can also introduce noise. In the future, we are interested in tackling this limitation by modeling past features more effectively. One possible solution is to use an intermediate filter to learn only important features [35], or to learn the dependencies using a graph model that models only relevant features, following [36]. Fig. 3: Action anticipation accuracy improvement on six actions w.r.t. the TesTra model. This is performed on the Multi-THUMOS dataset, using 4 frames as the anticipation length.
2309.07548
Proximal Bellman mappings for reinforcement learning and their application to robust adaptive filtering
This paper aims at the algorithmic/theoretical core of reinforcement learning (RL) by introducing the novel class of proximal Bellman mappings. These mappings are defined in reproducing kernel Hilbert spaces (RKHSs) to benefit from the rich approximation properties and inner product of RKHSs; they are shown to belong to the powerful Hilbertian family of (firmly) nonexpansive mappings, regardless of the values of their discount factors, and possess ample degrees of design freedom to even reproduce attributes of the classical Bellman mappings and to pave the way for novel RL designs. An approximate policy-iteration scheme is built on the proposed class of mappings to solve the problem of selecting online, at every time instance, the "optimal" exponent $p$ in a $p$-norm loss to combat outliers in linear adaptive filtering, without training data and without any knowledge of the statistical properties of the outliers. Numerical tests on synthetic data showcase the superior performance of the proposed framework over several non-RL and kernel-based RL schemes.
Yuki Akiyama, Konstantinos Slavakis
2023-09-14T09:20:21Z
http://arxiv.org/abs/2309.07548v1
# Proximal Bellman mappings for reinforcement learning ###### Abstract This paper aims at the algorithmic/theoretical core of reinforcement learning (RL) by introducing the novel class of proximal Bellman mappings. These mappings are defined in reproducing kernel Hilbert spaces (RKHSs) to benefit from the rich approximation properties and inner product of RKHSs; they are shown to belong to the powerful Hilbertian family of (firmly) nonexpansive mappings, regardless of the values of their discount factors, and possess ample degrees of design freedom to even reproduce attributes of the classical Bellman mappings and to pave the way for novel RL designs. An approximate policy-iteration scheme is built on the proposed class of mappings to solve the problem of selecting online, at every time instance, the "optimal" exponent \(p\) in a \(p\)-norm loss to combat outliers in linear adaptive filtering, without training data and without any knowledge of the statistical properties of the outliers. Numerical tests on synthetic data showcase the superior performance of the proposed framework over several non-RL and kernel-based RL schemes. Yuki Akiyama, Konstantinos Slavakis (Tokyo Institute of Technology, Japan, Department of Information and Communications Engineering; emails: {akiyama.y.am, slavakis.k.aa}@m.titech.ac.jp) ## 1 Introduction In reinforcement learning (RL) [1], an agent takes a decision/action based on feedback provided by the surrounding environment on the agent's past actions. RL is a sequential-decision-making framework with the goal of minimizing the long-term loss/price \(Q\) to be paid by the agent for its own decisions. RL has deep roots in dynamic programming [1, 2] and a far-reaching range of applications, which extend from autonomous navigation, robotics, resource planning, sensor networks, and biomedical imaging even to gaming [1]. This paper aims at the algorithmic/theoretical core of RL by introducing the _novel class_ of _proximal Bellman mappings_ (3), defined in reproducing kernel Hilbert spaces (RKHSs), which serve as approximating spaces for the one-step loss \(g\) and the long-term loss \(Q\) in RL, and are well known for their rich properties, such as the reproducing property of their inner product [3, 4]. This study stands as the _first_ stepping stone toward **(i)** a simple, flexible, and general framework, which can even reproduce attributes of the classical Bellman mappings (2), such as their fixed-point sets or the mappings themselves (see Proposition 1), and **(ii)** the exciting combination of arguments from RKHSs and nonparametric-approximation theory [5] with the powerful Hilbertian toolbox of nonexpansive mappings [6]; see Theorem 1. Usually in RL, \(g\) and \(Q\) are considered as points of the Banach space \(\mathscr{B}\) of all (essentially) bounded functions [7], equipped with the \(\ell_{\infty}\)-norm. As such, Bellman mappings operate from \(\mathscr{B}\) to \(\mathscr{B}\); they are shown to be contractions [6] by appropriately constraining the values of their discount factors (see (2)), and thus possess unique fixed points [6]. Unfortunately, \(\mathscr{B}\) lacks an inner product by definition. To overcome this inconvenience, the popular strategy in RL is to assume that \(g,Q\) are spanned by a basis of vectors, usually learned from training data, with a fixed and finite cardinality, which amounts to saying that \(g,Q\) belong to a Euclidean vector space of fixed dimension.
These modeling assumptions can be met in almost all currently popular RL frameworks, from temporal difference (TD) [1] and least-squares (LS)TD [8, 9, 10] to Bellman-residual (BR) methodologies [11] and kernel-based RL (KBRL) [11, 12, 13, 14, 15, 16, 17]. Notwithstanding, the Bellman mappings introduced in [12, 13] are still defined on Banach spaces, with no guarantees that they operate from an RKHS \(\mathscr{H}\) to \(\mathscr{H}\). Although [18, 19] utilize RKHSs, they do not discuss Bellman mappings. Proximal mappings have been used in [20] only in the popular context of minimizing loss functions, without any consideration of using proximal mappings _directly_ as Bellman ones. Unlike typical contraction-based designs in Banach spaces, the novel proximal Bellman mappings (3) are shown to be (firmly) nonexpansive, with potentially non-unique fixed points in RKHSs, _regardless_ of the value of their discount factors (see Theorem 1). This result improves upon the result of the predecessor [21] of this work, where the nonexpansivity of the introduced Bellman mappings was established via appropriate conditions on their discount factors. Moreover, the benefit of using potentially infinite-dimensional RKHSs comes also from the freedom of allowing \(Q\)-function representations by dynamically changing bases, with variable and even growing cardinality, to accommodate online-learning scenarios where the basis vectors are not learned solely from (offline) training data, but may be continuously and dynamically learned from streaming (online) test data. To highlight such online-learning settings, this study considers _robust adaptive filtering_ [22] as the application domain of the proximal Bellman mappings (3). The goal is to combat outliers in the classical data-generation model \(y_{n}=\boldsymbol{\theta}_{*}^{\intercal}\mathbf{x}_{n}+o_{n}\), where \(n\in\mathbb{N}\) denotes discrete time (\(\mathbb{N}\) is the set of all non-negative integers), \(\boldsymbol{\theta}_{*}\) is the \(L\times 1\) vector whose entries are the system parameters that need to be identified, \((\mathbf{x}_{n},y_{n})\) stands for the input-output pair of available data, where \(\mathbf{x}_{n}\) is an \(L\times 1\) vector and \(y_{n}\) is real-valued, and \(\intercal\) denotes vector/matrix transposition. Outliers \(o_{n}\) are defined as contaminating data that do not adhere to a nominal data-generation model [23], and are often modeled as random variables (RVs) with non-Gaussian heavy-tailed distributions, e.g., \(\alpha\)-stable ones [24]. Since the least-squares (LS) error criterion is notoriously sensitive to outliers [23], non-LS criteria, such as least mean \(p\)-power (LMP) [25, 26, 27, 28, 29, 30, 31] and maximum correntropy (MC) [32], have been studied instead of LS ones in robust adaptive filtering. To avoid a lengthy exposition, this work focuses on the LMP criterion and algorithm [25], which, for an arbitrarily fixed \(\boldsymbol{\theta}_{0}\), generates estimates \((\boldsymbol{\theta}_{n})_{n\in\mathbb{N}}\) of \(\boldsymbol{\theta}_{*}\) as follows: \[\boldsymbol{\theta}_{n+1}:=\boldsymbol{\theta}_{n}+\rho p|e_{n}|^{p-2}e_{n}\mathbf{x}_{n}\;, \tag{1}\] where \(e_{n}:=y_{n}-\mathbf{x}_{n}^{\intercal}\boldsymbol{\theta}_{n}\), \(\rho\) is the learning rate (step size), and \(p\) is a _fixed_ user-defined real-valued number within the interval \([1,2]\) to ensure that the \(p\)-norm loss \(|y_{n}-\mathbf{x}_{n}^{\intercal}\boldsymbol{\theta}|^{p}\) is a convex function of \(\boldsymbol{\theta}\) [25].
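For concreteness, a minimal NumPy sketch of one LMP iteration (1) follows; the small constant `eps` guarding the non-differentiable point \(e_{n}=0\) when \(p<2\) is our own safeguard and is not part of (1).

```python
import numpy as np

def lmp_step(theta, x, y, p=1.5, rho=1e-3, eps=1e-12):
    """One LMP update, Eq. (1): theta <- theta + rho * p * |e|^(p-2) * e * x."""
    e = y - theta @ x                                    # prior error e_n
    return theta + rho * p * (abs(e) + eps) ** (p - 2) * e * x
```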
Notice that if \(p=1\) or \(2\), then (1) boils down to the classical sign-LMS or LMS, respectively [22]. Combinations of adaptive filters with different forgetting factors but the same fixed \(p\)-norm [29], as well as with different \(p\)-norms [33], have also been considered. This paper provides an _online_ and _data-driven_ solution to the problem of _dynamically_ selecting \(p\) by using the proposed proximal Bellman mappings, _without_ any prior knowledge of the statistical properties of \(o_{n}\). It is worth mentioning that this work and its predecessor [21] are the first attempts in the literature to apply RL arguments to robust adaptive filtering. Algorithm 1 offers an approximate policy-iteration (API) strategy, where the underlying state space is considered to be the low-dimensional \(\mathbb{R}^{4}\), independent of the dimension \(L\) of \(\boldsymbol{\theta}_{*}\), whereas [21] uses the high-dimensional \(\mathbb{R}^{2L+1}\). The action space is considered to be discrete: an action is a value of \(p\) taken from a finite grid of the interval \([1,2]\). Moreover, experience replay [34] (past-data reuse) is introduced, unlike the classical Bellman operators, where information on the transition probabilities of a Markov decision process is required in advance [1]. Note that [21] employs rollout for data reuse and exploration [1]. Numerical tests on synthetic data showcase the superior performance of the advocated framework over several non-RL and RL schemes. Due to space limitations, proofs, the convergence analysis of the proposed algorithm, as well as further RL designs and numerical tests will be reported elsewhere. ## 2 Proximal Bellman mappings First, some key RL concepts are in order [1]. The state space is denoted in general by \(\mathfrak{S}\subset\mathbb{R}^{D}\), with a state vector \(\mathbf{s}\in\mathfrak{S}\), and the action space by \(\mathfrak{A}\), with action \(a\in\mathfrak{A}\). For convenience, the state-action tuple is defined as \(\mathbf{z}\coloneqq(\mathbf{s},a)\in\mathfrak{Z}\coloneqq\mathfrak{S}\times\mathfrak{A}\). The classical Bellman mappings, _e.g._, [35], \[(T^{\diamond}_{\mu}Q)(\mathbf{s},a)\coloneqq g(\mathbf{s},a)+\alpha\mathbb{E}_{\mathbf{s}^{\prime}|(\mathbf{s},a)}\{Q(\mathbf{s}^{\prime},\mu(\mathbf{s}^{\prime}))\}\,, \tag{2a}\] \[(T^{\diamond}Q)(\mathbf{s},a)\coloneqq g(\mathbf{s},a)+\alpha\mathbb{E}_{\mathbf{s}^{\prime}|(\mathbf{s},a)}\{\inf_{a^{\prime}\in\mathfrak{A}}Q(\mathbf{s}^{\prime},a^{\prime})\}\,, \tag{2b}\] quantify the total loss (= one-step loss \(g\) + expected long-term loss \(Q\)) that the agent suffers whenever it takes action \(a\) at state \(\mathbf{s}\). Both \(g\) and \(Q\) map a state-action tuple \((\mathbf{s},a)\) to a real number. In (2), \(\mathbb{E}_{\mathbf{s}^{\prime}|(\mathbf{s},a)}\{\cdot\}\) stands for the conditional expectation over all possible subsequent states \(\mathbf{s}^{\prime}\) of \(\mathbf{s}\), conditioned on \((\mathbf{s},a)\), and \(\alpha\) is the discount factor, with typical values in \((0,1)\). Mapping (2a) refers to the case where the agent takes actions according to the stationary policy \(\mu(\cdot):\mathfrak{S}\rightarrow\mathfrak{A}:\mathbf{s}\mapsto\mu(\mathbf{s})\), while (2b) stands as the greedy version of (2a). As explained in Section 1, typically in RL, \(T^{\diamond}_{\mu},T^{\diamond}\) map points of \(\mathscr{B}\) to \(\mathscr{B}\).
Because \(\alpha\in(0,1)\), \(T^{\diamond}_{\mu},T^{\diamond}\) are shown to be contractions [1], and thus their fixed-point sets \(\operatorname{Fix}T^{\diamond}_{\mu}\) and \(\operatorname{Fix}T^{\diamond}\) are singletons, where \(\operatorname{Fix}T\coloneqq\{Q\in\mathscr{B}\mid TQ=Q\}\) for a mapping \(T:\mathscr{B}\rightarrow\mathscr{B}\). In quest of an inner product, the blanket assumption of this study is that \(g,Q\) belong to an RKHS \(\mathscr{H}\), with well-known properties [3, 4], a reproducing kernel \(\kappa(\cdot,\cdot):\mathfrak{Z}\times\mathfrak{Z}\rightarrow\mathbb{R}\), with \(\kappa(\mathbf{z},\cdot)\in\mathscr{H},\forall\mathbf{z}\in\mathfrak{Z}\), and an inner product which satisfies the reproducing property: \(Q(\mathbf{z})=\langle Q\mid\kappa(\mathbf{z},\cdot)\rangle_{\mathscr{H}}\), \(\forall Q\in\mathscr{H},\forall\mathbf{z}\in\mathfrak{Z}\). Space \(\mathscr{H}\) may be infinite dimensional; e.g., whenever \(\kappa(\cdot,\cdot)\) is the Gaussian kernel [3, 4]. For compact notations, let \(\varphi(\mathbf{z})\coloneqq\kappa(\mathbf{z},\cdot)\) and \(\langle Q\mid Q^{\prime}\rangle\coloneqq\langle Q\mid Q^{\prime}\rangle_{\mathscr{H}}\), \(\forall Q,Q^{\prime}\in\mathscr{H}\). This study introduces the following novel class of _proximal Bellman mappings_: for a user-defined set of proper and lower-semicontinuous convex functions \(\{f_{i}\colon\mathscr{H}\rightarrow\mathbb{R}\cup\{+\infty\}\}_{i=1}^{I}\) [6], define \(T\coloneqq T_{\{f_{i}\}_{i=1}^{I}}\colon\mathscr{H}\rightarrow\mathscr{H}\colon Q\mapsto T_{\{f_{i}\}_{i=1}^{I}}Q\) as \[TQ\coloneqq g+\alpha\sum\nolimits_{i=1}^{I}w_{i}\operatorname{Prox}_{f_{i}}(\tfrac{Q-g}{\alpha})\,, \tag{3}\] where the proximal mapping \(\operatorname{Prox}_{f_{i}}(Q)\coloneqq\operatorname*{argmin}_{Q^{\prime}\in\mathscr{H}}f_{i}(Q^{\prime})+(1/2)\|Q^{\prime}-Q\|_{\mathscr{H}}^{2}\) [6], and the coefficients \(\{w_{i}\}_{i=1}^{I}\subset[0,1]\) satisfy \(\sum\nolimits_{i=1}^{I}w_{i}=1\). The user-defined \(\{f_{i}\}_{i=1}^{I}\) introduce ample degrees of design freedom, as the following proposition demonstrates. Under conditions, mappings (3) can reproduce \(\operatorname{Fix}T^{\diamond}_{\mu}\) in Proposition 1(i), and even replicate (2b) in Proposition 1(ii). Moreover, (3) opens the door to novel RL designs, as Proposition 1(iii) exhibits. **Proposition 1**.: (i) _Consider a stationary policy_ \(\mu(\cdot)\) _and define_ \(\bar{\varphi}^{\mu}(\mathbf{z})\coloneqq\mathbb{E}_{\mathbf{s}^{\prime}|\mathbf{z}}\{\varphi(\mathbf{s}^{\prime},\mu(\mathbf{s}^{\prime}))\}\)_,_ \(\forall\mathbf{z}\in\mathfrak{Z}\)_, and let_ \(h^{\mu}(\mathbf{z})\coloneqq\varphi(\mathbf{z})-\alpha\bar{\varphi}^{\mu}(\mathbf{z})\)_. Assume also that_ \(\mathbb{E}_{\mathbf{s}^{\prime}|\mathbf{z}}\{\cdot\}\) _interchanges with the inner product_ \(\langle\cdot\mid\cdot\rangle_{\mathscr{H}}\)_. Further, define_ \[C^{\mu}\coloneqq\bigcap_{\mathbf{z}\in\mathfrak{Z}}\left\{Q\in\mathscr{H}\mid\langle Q\mid h^{\mu}(\mathbf{z})\rangle_{\mathscr{H}}=\mathbb{E}_{\mathbf{s}^{\prime}|\mathbf{z}}\{g(\mathbf{s}^{\prime},\mu(\mathbf{s}^{\prime}))\}\right\}, \tag{4}\] _with_ \(C^{\mu}\neq\emptyset\)_, let_ \(I=1\) _in (3), and set_ \(f_{1}\coloneqq\iota_{C^{\mu}}\)_, where_ \(\iota_{A}\) _stands for the indicator function of a set_ \(A\subset\mathscr{H}\)_, that is,_ \(\iota_{A}(Q)=0\) _if_ \(Q\in A\)_, and_ \(\iota_{A}(Q)=+\infty\) _if_ \(Q\notin A\) _[6]. Then,_ \(\operatorname{Fix}T=\operatorname{Fix}T^{\diamond}_{\mu}\)_._
(ii) _Define_ \(h_{Q}(\mathbf{s},a)\coloneqq\mathbb{E}_{\mathbf{s}^{\prime}|(\mathbf{s},a)}\{\inf_{a^{\prime}\in\mathfrak{A}}Q(\mathbf{s}^{\prime},a^{\prime})\}\)_,_ \(\forall(\mathbf{s},a)\in\mathfrak{S}\times\mathfrak{A}\)_, and assume that_ \(h_{Q}\in\mathscr{H}\)_,_ \(\forall Q\in\mathscr{H}\)_. For_ \(I=1\)_, the form_ \(TQ\coloneqq g+\alpha\operatorname{Prox}_{\iota_{\{h_{Q}\}}}((Q-g)/\alpha)\) _of (3) coincides with (2b)._ (iii) _For each_ \(i\) _in (3), consider a number_ \(N_{i}^{\text{av}}\) _of sampling points_ \(\{(\mathbf{s}_{ij}^{\text{av}},a_{ij}^{\text{av}},\mathbf{s}_{ij}^{\text{av}\prime})\}_{j=1}^{N_{i}^{\text{av}}}\)_, and let_ \(\mathbf{z}_{ij}^{\text{av}}\coloneqq(\mathbf{s}_{ij}^{\text{av}},a_{ij}^{\text{av}})\)_. Consider also a stationary policy_ \(\mu(\cdot)\) _and define_ \(h_{i}^{\mu}\coloneqq\varphi(\mathbf{z}_{i1}^{\text{av}})-\alpha\sum_{j=1}^{N_{i}^{\text{av}}}d_{ij}\varphi(\mathbf{s}_{ij}^{\text{av}\prime},\mu(\mathbf{s}_{ij}^{\text{av}\prime}))\)_, for some real-valued averaging weights_ \(\{d_{ij}\}\)_, where_ \(\mathbf{z}_{i1}^{\text{av}}\) _is selected arbitrarily as a reference point in the definition of_ \(h_{i}^{\mu}\)_. Moreover, letting_ \(g_{ij}^{\mu}\coloneqq g(\mathbf{s}_{ij}^{\text{av}},\mu(\mathbf{s}_{ij}^{\text{av}}))\)_, define the hyperslabs_ \[H_{i}^{\mu}\coloneqq\bigl{\{}Q\in\mathscr{H}\mid|\langle Q\mid h_{i}^{\mu}\rangle_{\mathscr{H}}-\textstyle\sum_{j=1}^{N_{i}^{\text{av}}}d_{ij}g_{ij}^{\mu}|\leq\epsilon_{i}\bigr{\}}\,, \tag{5}\] _for user-defined tolerances_ \(\epsilon_{i}\geq 0\)_; setting_ \(f_{i}\coloneqq\iota_{H_{i}^{\mu}}\) _in (3) then yields novel proximal Bellman mappings built from sampled data._ ## 3 Application to robust adaptive filtering In this section, mappings (3) are applied to the problem of robust adaptive filtering of Section 1. An RL solution to this problem appeared for the first time in the literature in the predecessor [21] of this study. Since online-learning solutions are of interest, the iteration index of the proposed sequential-decision RL framework coincides with the time index \(n\) of the streaming data \((\mathbf{x}_{n},y_{n})_{n\in\mathbb{N}}\) of (1). To this end, the arguments of Section 2 are adequately adapted to include hereafter the extra time dimension \(n\), which will be indicated by the super-/sub-scripts \([n]\), \((n)\), or \(n\) in the notation. ### Defining the state-action space The action space \(\mathfrak{A}\) is defined as any finite grid of the range \([1,2]\) of the exponent \(p\) in the \(p\)-norm loss. The goal of the proposed RL framework is to generate a sequence of actions \((a_{n}=p_{n})_{n\in\mathbb{N}}\) in some "optimal" sense. The state space \(\mathfrak{S}\) is assumed to be continuous. Unlike [21], where \(\mathfrak{S}\) is the high-dimensional \(\mathbb{R}^{2L+1}\), this study considers \(\mathfrak{S}\coloneqq\mathbb{R}^{4}\), rendering the dimension of \(\mathfrak{S}\) independent of \(L\) to address the "curse of dimensionality" observed in [21]. Due to the streaming nature of \((\mathbf{x}_{n},y_{n})_{n\in\mathbb{N}}\), the state vectors \((\mathbf{s}_{n}\coloneqq[s_{1}^{(n)},s_{2}^{(n)},s_{3}^{(n)},s_{4}^{(n)}]^{\mathsf{T}})_{n\in\mathbb{N}}\) are defined inductively by the following heuristic rules: \[s_{1}^{(n)}\coloneqq\log_{10}(y_{n}-\boldsymbol{\theta}_{n}^{\intercal}\mathbf{x}_{n})^{2}\,, \tag{6a}\] \[s_{2}^{(n)}\coloneqq\tfrac{1}{M_{\mathsf{w}}}\sum_{m=1}^{M_{\mathsf{w}}}\log_{10}\frac{\left(y_{n-m}-\boldsymbol{\theta}_{n}^{\intercal}\mathbf{x}_{n-m}\right)^{2}}{\|\mathbf{x}_{n-m}\|_{2}^{2}}\,, \tag{6b}\] \[s_{3}^{(n)}\coloneqq\log_{10}\|\mathbf{x}_{n}\|_{2}\,, \tag{6c}\] \[s_{4}^{(n)}\coloneqq\varpi s_{4}^{(n-1)}+(1-\varpi)\log_{10}\frac{\|\boldsymbol{\theta}_{n}-\boldsymbol{\theta}_{n-1}\|_{2}}{\rho}\,, \tag{6d}\] where \(M_{\mathsf{w}}\in\mathbb{N}_{*}\) and \(\varpi\in(0,1)\) are user-defined parameters, and \(\rho\) comes from (1).
The classical _prior loss_ [22] is used in (6a); an \(M_{\mathsf{w}}\)-length sliding-window sampling average of the _posterior loss_ [22] is provided in (6b), normalized by the norm of the input signal to remove its effect on the error as much as possible; the instantaneous norm of the input signal appears in (6c); and a smoothing auto-regressive process is used in (6d) to monitor the consecutive displacement of the estimates \((\boldsymbol{\theta}_{n})_{n\in\mathbb{N}}\). The reason for including \(\rho\) in (6d) is to remove \(\rho\)'s effect from \(s_{4}^{(n)}\). Owing to (1), the initial value \(s_{4}^{(0)}\) in (6d) is set equal to \(\log_{10}[(1/\rho)\|\boldsymbol{\theta}_{1}-\boldsymbol{\theta}_{0}\|_{2}]=\log_{10}p_{0}+(p_{0}-1)s_{1}^{(0)}+s_{3}^{(0)}\). The \(\log_{10}(\cdot)\) function is employed to decrease the dynamic range of the positive values in (6). ### Approximate policy iteration (API) The setting of Proposition 1(iii) is considered here. Since that setting can be viewed as an approximation of the classical Bellman mappings in (2) (see Propositions 1(i) and 1(ii)), this section offers the approximate-policy-iteration (API) Algorithm 1 for the problem at hand, based on the standard PI strategy [1]. With \(Q_{n}\) available, the stationary policy \(\mu_{n}(\cdot)\), which is necessary in Line 4 of Algorithm 1 and for constructing (4) and (5), is defined according to the standard greedy rule [1]: \[\mu_{n}(\mathbf{s})\coloneqq\arg\min_{a\in\mathfrak{A}}Q_{n}(\mathbf{s},a)\,,\quad\forall\mathbf{s}\in\mathfrak{S}\,. \tag{7}\] Variations of policy improvement via rollout can also be considered; see, for example, [21]. To avoid lengthy arguments, only two hyperslabs \(\{H_{i}^{\mu_{n}}\}_{i=1}^{2}\) of (5) are utilized; that is, \(I\coloneqq 2\). Hyperslab \(H_{1}^{\mu_{n}}\) employs currently and recently sampled data, while \(H_{2}^{\mu_{n}}\) employs sampled data from the "remote past." Samples \(\{\mathbf{z}_{1j}^{\mu_{n}}[n],\mathbf{s}_{1j}^{\mu_{n}\prime}[n]\}_{j=1}^{N_{1}^{\mu_{n}}[n]}\) are defined as follows: \(\mathbf{z}_{11}^{\mu_{n}}[n]\coloneqq\mathbf{z}_{n-1}=(\mathbf{s}_{n-1},a_{n-1})\) and \(\mathbf{s}_{11}^{\mu_{n}\prime}[n]\coloneqq\mathbf{s}_{n}\), while \(\{(\mathbf{z}_{1j}^{\mu_{n}}[n],\mathbf{s}_{1j}^{\mu_{n}\prime}[n])\}_{j=2}^{N_{1}^{\mu_{n}}[n]}\coloneqq\{((\mathbf{s}_{\tau},a_{\tau}),\mathbf{s}_{\tau+1})\}_{\tau\in\mathcal{M}[n]}\), where \[\mathcal{M}[n]\coloneqq\{\tau\in\{n-1-N_{\mathsf{w}},\ldots,n-2\}\mid\kappa(\mathbf{z}_{n-1},\mathbf{z}_{\tau})>c\}\,, \tag{8}\] for a user-defined sliding-window size \(N_{\mathsf{w}}\) and \(c>0\). The value \(g(\mathbf{z}_{11}^{\mu_{n}}[n])\) of the one-step loss needed in (5) is defined as \[g(\mathbf{z}_{11}^{\mu_{n}}[n])\coloneqq\tfrac{1}{M_{\mathsf{w}}}\sum_{m=1}^{M_{\mathsf{w}}}\log_{10}\frac{\left(y_{n-m}-\boldsymbol{\theta}_{n}^{\intercal}\mathbf{x}_{n-m}\right)^{2}}{\|\mathbf{x}_{n-m}\|_{2}^{2}}\,. \tag{9}\] Hyperslab \(H_{2}^{\mu_{n}}\) reuses samples \(\{\mathbf{z}_{2j}^{\mu_{n}}[n],\mathbf{s}_{2j}^{\mu_{n}\prime}[n]\}_{j=1}^{N_{2}^{\mu_{n}}[n]}\) from the "remote past," along the lines of [34]. Arbitrarily sampling the state-action space to receive feedback from the surrounding environment (exploration) led to slow adaptation and unstable performance. This is the reason why samples from the remote past are used here.
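A minimal NumPy sketch of the state construction (6) and the greedy rule (7) follows; the windowed-data interface and the callable \(Q\) are hypothetical conveniences for illustration, not the actual Algorithm 1.

```python
import numpy as np

ACTIONS = np.array([1.0, 1.25, 1.5, 1.75, 2.0])   # finite grid of exponents p

def state_vector(theta, theta_prev, x_win, y_win, s4_prev, rho, varpi=0.3):
    """Heuristic state features (6a)-(6d); index 0 of x_win/y_win holds the
    current sample, indices 1..M_w the past sliding window (assumed layout)."""
    s1 = np.log10((y_win[0] - theta @ x_win[0]) ** 2)                    # (6a)
    post = [(y_win[m] - theta @ x_win[m]) ** 2 / np.linalg.norm(x_win[m]) ** 2
            for m in range(1, len(x_win))]
    s2 = np.mean(np.log10(post))                                         # (6b)
    s3 = np.log10(np.linalg.norm(x_win[0]))                              # (6c)
    s4 = varpi * s4_prev + (1 - varpi) * np.log10(
        np.linalg.norm(theta - theta_prev) / rho)                        # (6d)
    return np.array([s1, s2, s3, s4])

def greedy_action(Q, s):
    """Greedy policy (7): pick the exponent p minimizing Q(s, a) over the grid."""
    return ACTIONS[int(np.argmin([Q(s, a) for a in ACTIONS]))]
```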
For illustration, only a single "remote-past" datum is used, that is, \(N_{2}^{\mu_{n}}[n]\coloneqq 1\): \(\mathbf{z}_{21}^{\mu_{n}}[n]\coloneqq\mathbf{z}_{\nu-1}=(\mathbf{s}_{\nu-1},a_{\nu-1})\) and \(\mathbf{s}_{21}^{\mu_{n}\prime}[n]\coloneqq\mathbf{s}_{\nu}\), for some \(\nu<n\). The value \(g(\mathbf{z}_{21}^{\mu_{n}}[n])\) of the one-step loss is defined as in (9), but with \(\nu\) in the place of \(n\). Recalling Propositions 1(i) and 1(iii), the coefficients \(\mathbf{d}_{i}[n]\coloneqq[d_{i1}[n],\ldots,d_{iN_{i}^{\mu_{n}}[n]}[n]]^{\mathsf{T}}\), which are needed in (4) and (5), were introduced so that \(\sum_{j=1}^{N_{i}^{\mu_{n}}[n]}d_{ij}[n]\varphi(\mathbf{s}_{ij}^{\mu_{n}\prime}[n],\mu_{n}(\mathbf{s}_{ij}^{\mu_{n}\prime}[n]))\) approximates \(\bar{\varphi}^{\mu_{n}}(\mathbf{z})=\mathbb{E}_{\mathbf{s}^{\prime}|\mathbf{z}}\{\varphi(\mathbf{s}^{\prime},\mu_{n}(\mathbf{s}^{\prime}))\}\). Although there are several ways to determine \(\mathbf{d}_{i}[n]\), motivated by the offline-learning context in [36], \(\mathbf{d}_{i}[n]\) is put forth here as the solution to a ridge-regression (regularized least-squares) problem over the sampled kernel features, with regularization weights \(\sigma_{i}\) (10); the \(Q\)-function estimate is then updated via a relaxed fixed-point iteration of the proximal Bellman mapping, with relaxation parameters \(\lambda_{n}\) (11). Note that computing with the feature map \(\varphi(\cdot)\) is intractable whenever \(\mathscr{H}\) is infinite dimensional; see, for example, the Gaussian-kernel case [4]. To address this "curse of dimensionality," random Fourier features (RFF) [37] are employed here. Avoiding most of the details due to space limitations, the feature map \(\varphi(\mathbf{z})\) is approximated by the following Euclidean vector \[\tilde{\varphi}(\mathbf{z})\coloneqq(\tfrac{2}{D_{\mathrm{F}}})^{1/2}[\cos\left(\mathbf{v}_{1}^{\intercal}\mathbf{z}+b_{1}\right),\ldots,\cos\left(\mathbf{v}_{D_{\mathrm{F}}}^{\intercal}\mathbf{z}+b_{D_{\mathrm{F}}}\right)]^{\intercal}, \tag{12}\] with \(D_{\mathrm{F}}\in\mathbb{N}_{*}\) being a user-defined dimension, while \(\{\mathbf{v}_{k}\}_{k=1}^{D_{\mathrm{F}}}\) and \(\{b_{k}\}_{k=1}^{D_{\mathrm{F}}}\) are Gaussian and uniform RVs, respectively. ## 4 Numerical Tests In all tests, the action space \(\mathfrak{A}\coloneqq\{1,1.25,1.5,1.75,2\}\). Figures 1 and 2 demonstrate the performance of Algorithm 1 against **(i)** (1), where \(p\in\mathfrak{A}\) is kept fixed throughout all iterations; **(ii)** [29], which uses a combination of adaptive filters with different forgetting factors but with the same fixed \(p\)-norm; **(iii)** [33], which uses a combination of LMP (1) (\(p=1\) and \(p=2\)) iterations; **(iv)** the kernel-based TD(0) [18] with experience replay and RFF; **(v)** the online-Bellman-residual (OBR) method [11] with experience replay and RFF; **(vi)** the kernel-based (K)LSPI [10] with experience replay; and **(vii)** the predecessor of this work [21]. Tests were also run to examine the effect of several of Algorithm 1's parameters on performance; see Figure 3. Due to the similarity of OBR with (5), additional realizations of OBR are also shown in Figure 3. The metric of performance is the normalized deviation from the desired \(\boldsymbol{\theta}_{*}\); see the vertical axes in all figures. The Gaussian kernel [4] was considered, approximated by RFF as in (12). The dimension \(L\) of \(\mathbf{x}_{n},\boldsymbol{\theta}_{*}\) in (1) is \(100\), and the learning rate \(\rho=10^{-3}\).
Both \(\mathbf{x}_{n}\) and \(\boldsymbol{\theta}_{*}\) are generated from the Gaussian distribution \(\mathcal{N}(\mathbf{0},\mathbf{I}_{L})\), with \((\mathbf{x}_{n})_{n\in\mathbb{N}}\) and the entries of \(\boldsymbol{\theta}_{*}\) designed to be independent. Moreover, \(M_{\mathrm{w}}\coloneqq 300\) and \(\varpi\coloneqq 0.3\) in (6), \(w_{1}\coloneqq w_{2}\coloneqq 0.5\) and \(\epsilon_{1}\coloneqq 0,\epsilon_{2}\coloneqq 0.05\) in (5), while \(\lambda_{n}\coloneqq 0.25\), \(\forall n\in\mathbb{N}\), in (11). In (10), \(\sigma_{1}\coloneqq 10^{-3}\) and \(\sigma_{2}\coloneqq 0\). In (8), \(N_{\mathrm{w}}=10\) and \(c=0.95\). Two types of outliers were considered. First, \(\alpha\)-stable outliers [38] were generated, with parameters \(\alpha_{\text{stable}}=1\), \(\beta_{\text{stable}}=0.5\), \(\sigma_{\text{stable}}=1\), which yield a considerably heavy-tailed distribution. Second, "sparse" outliers were also generated, with values taken from the interval \([-100,100]\) via the uniform distribution. Sparse outliers appear in a randomly selected \(10\%\) of the total number of time instances in Figures 1 to 3, whereas Gaussian noise with \(\text{SNR}=30\)dB appears at every time instance \(n\). As is customary in adaptive filtering, the system \(\boldsymbol{\theta}_{*}\) is changed randomly at time \(20{,}000\) to test the tracking ability of Algorithm 1. Each test is repeated independently \(100\) times, and uniformly averaged curves are reported. As can be verified in Figures 1 to 3, Algorithm 1 outperforms all competing methods. KLSPI [10] fails to provide a fast convergence speed. The convergence speed of [21] is hindered by the high dimensionality of its adopted state space \(\mathbb{R}^{2L+1}\). TD(0) [18] converges fast, but with subpar performance when compared with Algorithm 1. More tests on several other scenarios will be reported in the journal version of the paper. ## 5 Conclusions The novel class of proximal Bellman mappings was introduced to offer a simple, flexible, and general framework for reinforcement learning (RL). The proposed framework possesses ample degrees of design freedom, allowing not only for reproducing attributes of the classical Bellman mappings, widely used in RL, but also for opening the door to novel RL designs. The paper also provided the exciting connection between the advocated proximal Bellman mappings and the powerful Hilbertian toolbox of nonexpansive and monotone mappings. As a non-trivial application of the proposed class of mappings, the problem of robust adaptive filtering was considered, which appears to be addressed in the light of RL for the first time in the literature by this study and its predecessor. Numerical tests showcase the superior performance of the proposed design over non-RL and kernel-based RL schemes.
2310.00350
Cluster-Persistence for Weighted Graphs
Persistent homology is a natural tool for probing the topological characteristics of weighted graphs, essentially focusing on their $0$-dimensional homology. While this area has been substantially studied, we present a new approach to constructing a filtration for cluster analysis via persistent homology. The key advantages of the new filtration are that (a) it provides richer signatures for connected components by introducing non-trivial birth times, and (b) it is robust to outliers. The key idea is that nodes are ignored until they belong to sufficiently large clusters. We demonstrate the computational efficiency of our filtration, its practical effectiveness, and explore its properties when applied to random graphs.
Omer Bobrowski, Primoz Skraba
2023-09-30T12:14:49Z
http://arxiv.org/abs/2310.00350v1
# Cluster-Persistence for Weighted Graphs ###### Abstract Persistent homology is a natural tool for probing the topological characteristics of weighted graphs, essentially focusing on their \(0\)-dimensional homology. While this area has been substantially studied, we present a new approach to constructing a filtration for cluster analysis via persistent homology. The key advantages of the new filtration are that (a) it provides richer signatures for connected components by introducing non-trivial birth times, and (b) it is robust to outliers. The key idea is that nodes are ignored until they belong to sufficiently large clusters. We demonstrate the computational efficiency of our filtration, its practical effectiveness, and explore its properties when applied to random graphs. ## 1 Introduction **Clustering** data is a fundamental task in unsupervised machine learning and exploratory data analysis. It has been the subject of countless studies over the last 50 years, with many definitions and algorithms proposed, e.g., [13, 15]. **Persistent homology** [8, 22] is a powerful topological tool that provides multi-scale structural information about data. Given an increasing sequence of spaces (filtration), persistent homology tracks the formation of connected components (\(0\)-dimensional cycles), holes (\(1\)-dimensional cycles), cavities (\(2\)-dimensional cycles), and their higher-dimensional extensions. The information encoded in persistent homology is often represented by a _persistence diagram_ - a collection of points in \(\mathbb{R}^{2}\), representing the birth and death of homology classes, and providing an intuitive numerical representation for topological information (see Figure 1). The connection between clustering and \(0\)-dimensional persistent homology has been well-established under various scenarios, including the relationship with functoriality [4, 5], and density-based methods [2, 6]. An important motivating factor for connecting these methods is _stability_. Namely, under small perturbations of the input data, persistent homology can provide guarantees on the number of output clusters. One important drawback of this topological approach is that statistical tests for persistent homology, and for clustering based on persistence, have been lacking. Recently, persistent homology based on distance filtrations was experimentally shown to exhibit a strong sense of universal behavior in dimensions \(1\) and above (i.e., excluding connected components) [3]. Suppose we are given as input a point-cloud generated by some unknown distribution. If we compute the distance-based persistent homology, then under an appropriate transformation, the distribution of persistence values was shown to be independent of the original point-cloud distribution. This phenomenon was then used to develop a statistical test to detect statistically significant homology classes. A key point in [3] is that in order to obtain such universal behavior, the measure of persistence is given by the value of \(\mathrm{death}\,/\,\mathrm{birth}\), which makes the measure of persistence scale-invariant. However, in distance-based filtrations, the \(0\)-dimensional persistent homology (tracking clusters) does not fit into this universality framework, as the birth time of all the \(0\)-dimensional homology classes is set to \(0\).
To address this issue, and to enable the study of universality in the context of clustering, we introduce a new filtration, which we call the \(k\)-cluster filtration. This is a novel, non-local construction, where vertices become "alive" only once they belong to a sufficiently large cluster. In other words, while traditional persistent homology considers every vertex as an individual cluster and tracks its evolution, in the \(k\)-cluster filtration we only consider components of \(k\) or more vertices as 'meaningful' clusters. We note that while the motivation for this new filtration comes from distance-based filtrations, the \(k\)-cluster filtration can be constructed over any weighted graph. It generally provides two key advantages over the traditional filtration. Firstly, it results in a 'richer' persistence diagram, in the sense that components have non-trivial birth times. This improves our ability to compare the different features within the same diagram, or across different diagrams. Secondly, the \(k\)-cluster filtration provides a more 'focused' view on connected components, by discarding those that are considered small (as determined by the application). In particular, it easily allows removing outliers from the persistence diagram. The paper is organized as follows. Section 2 provides essential background about persistent homology. In Section 3 we introduce the \(k\)-cluster filtration and give some preliminary properties. In Section 4 we provide an algorithm for computing the filtration and corresponding persistence diagram in a single pass. In Section 5 we show some experimental results comparing the clustering method to some other approaches. Finally, in Section 6 we discuss some probabilistic aspects of this filtration, in comparison with known properties of random graphs and simplicial complexes. Remark. As we deal exclusively with \(0\)-dimensional homology, we phrase all the statements in this paper in terms of weighted graphs rather than simplicial complexes. ## 2 Graph Filtrations and Persistent Homology In this section, we introduce the required topological notions. As we focus on the special case of graphs and connected components, i.e. \(0\)-dimensional homology, we restrict our definitions to this case. For a general description of \(k\)-dimensional homology we refer the reader to [11, 16]. Let \(G=(V,E)\) be an undirected graph. Our main object of study is a _graph filtration_, or an increasing sequence of graphs. This can be constructed by defining a function \(\tau:(V\cup E)\to[0,\infty)\), under the restriction that if \(e=(u,v)\in E\), then \(\tau(e)\geq\max(\tau(u),\tau(v))\). This restriction ensures that the sublevel sets of \(\tau\) define a subgraph. The filtration \(\left\{\mathcal{G}_{t}\right\}_{t\geq 0}\) is then defined via \[\mathcal{G}_{t}=\left\{\sigma\in V\cup E:\tau(\sigma)\leq t\right\}.\] As we increase \(t\) from \(0\) to \(\infty\), we can track connected components of \(\mathcal{G}_{t}\) as they appear and merge, events which are referred to as _births_ and _deaths_, respectively. When two components merge, we use the 'elder rule' to determine that the later-born component is the one that dies. Note that at least one component has an infinite death time in any graph filtration. We refer the reader to [8] for further details on this. These birth-death events can be tracked by an algebraic object called the \(0\)-dimensional persistent homology.
Its most common visualization is via a _persistence diagram_ - a collection of points in \(\mathbb{R}^{2+}\), where each point corresponds to a single connected component. The coordinates of a point encode the information, with the \(x\)-coordinate representing the birth time, and the \(y\)-coordinate representing the death time. An example for a function on a line-graph is shown in Figure 1. Note that one component is infinite, which we denote with a dashed line at the top of the diagram. In a more general context, given a filtration of higher-dimensional objects (e.g., simplicial complexes), we can study the \(k\)-dimensional persistent homology. This object tracks the formation of \(k\)-dimensional cycles (various types of holes), and its definition is a natural extension of the \(0\)-dimensional persistent homology we study here. We refer the reader to [8] for more information. ## 3 The \(k\)-Cluster Filtration Let \(G=(V,E,W)\) be an undirected weighted graph. In computing \(0\)-dimensional persistent homology, the filtration values are commonly taken to be \(\tau(v)=0\) for all \(v\in V\), and \(\tau(e)=W(e)\) for all \(e\in E\). We will denote this filtration by \(\mathcal{G}_{t}^{*}\). In other words, we assume all vertices are present at time zero, and edges are gradually added according to the weight function \(W\). This has been the practice in the TDA literature in almost all studies, and in particular in the geometric settings where \(W\) represents the distance between points (i.e., the geometric graph, which is the skeleton of both the Cech and Vietoris-Rips complexes). Figure 1: An example of a graph filtration on a line-graph. The filtration values of the vertices are given by \(\tau\) (the \(y\)-axis). The filtration value of each edge is taken as the higher of the values of its two endpoints. The bars in the middle represent the tracking of the components. The vertices which are local minima, i.e. \(a\), \(c\), \(f\), and \(i\), generate new components, and so \(\tau(a)\), \(\tau(c)\), \(\tau(f)\), and \(\tau(i)\) correspond to birth times. The first merge occurs at \(\tau(b)=\tau((a,b))=\tau((b,c))\), merging \(\{a\}\) with \(\{c,d\}\). In this case we declare the latter as dead, since \(\tau(a)<\tau(c)\). Next, at \(\tau((d,e))\), the components \(\{a,b,c,d\}\) and \(\{e,f\}\) are merged, and the latter dies. Finally, at \(\tau((g,h))\), the components \(\{a,b,c,d,e,f,g\}\) and \(\{h,i\}\) are merged, killing the former. The component containing \(i\) has the earliest birth time, and thus is declared infinite. While in many models this choice of \(\tau\) seems reasonable, it has two significant drawbacks: * The produced persistence diagrams are _degenerate_, as the birth times of all 0-cycles are \(t=0\). This significantly reduces the amount of information we can extract from persistence diagrams. * The generated persistence diagrams are _superfluous_, in the sense that they contain a point for each vertex in \(V\), while obviously not all vertices contribute significant structural information. In this paper we propose a modification to the standard graph filtration that resolves both of these issues, and leads to more concise and informative persistence diagrams. We will first define the filtration value for the vertices. For every vertex \(v\) and value \(t>0\), we define \(N_{t}(v)\) to be the number of vertices in the connected component of \(\mathcal{G}_{t}^{*}\) that contains \(v\). Fix \(k\geq 1\), and define \[\tau_{k}(v):=\inf\left\{t:N_{t}(v)\geq k\right\}. \tag{3.1}\]
The edge values are then \[\tau_{k}((u,v))=\max(\tau_{k}(u),\tau_{k}(v),W((u,v))). \tag{3.2}\] Denoting the corresponding filtration by \(\mathcal{G}_{t}^{(k)}\), note that \(\mathcal{G}_{t}^{(1)}\equiv\mathcal{G}_{t}^{*}\). In other words, compared to \(\mathcal{G}_{t}^{*}\), in \(\mathcal{G}_{t}^{(k)}\) we delay the vertices' appearance until the first time each vertex is contained in a component with at least \(k\) vertices (and adjust the edge appearance to be compatible). Effectively, the assignment of the new filtration values to the vertices introduces the following changes to the persistence diagrams: 1. All the points that are linked to components of size smaller than \(k\) are removed. 2. Each birth time corresponds to an edge merging two components \(C_{1},C_{2}\) in \(\mathcal{G}_{t}^{*}\), such that \(|C_{1}|,|C_{2}|<k\), and \(|C_{1}|+|C_{2}|\geq k\). 3. Each death time corresponds to an edge merging two components of size at least \(k\). We call this filtration the '\(k\)-cluster filtration', to represent the fact that it tracks the formation and merging of clusters of size at least \(k\). The parameter \(k\) determines what we consider as a sufficiently meaningful cluster. In \(\mathcal{G}_{t}^{*}\), every vertex is considered a cluster, but statistically speaking, this is overkill. The chosen value of \(k\) should depend on the application as well as the sample size. We conclude this section by showing that the \(k\)-cluster filtrations are decreasing (in a set sense) as we increase \(k\). This can be useful, for example, in the context of multi-parameter persistence, which we briefly mention but leave for future work. **Lemma 3.1**.: _The filtrations \(\mathcal{G}_{t}^{(k)}\) are decreasing in \(k\), i.e.,_ \[\tau_{k-1}(x)\leq\tau_{k}(x),\quad\forall x\in V\cup E.\] Proof.: For any vertex \(v\in V\), if \(N_{t}(v)\geq k\), then \(N_{t}(v)\geq k-1\). From (3.1) we therefore have that \(\tau_{k-1}(v)\leq\tau_{k}(v)\). Using (3.2), we have \(\tau_{k-1}(e)\leq\tau_{k}(e)\) for all \(e\in E\). ## 4 Algorithm In this section, we describe an efficient one-pass algorithm for computing the filtration and persistence diagram at the same time. The time complexity of the algorithm is \(O(|E|\times\alpha(|V|))\), where \(\alpha(\cdot)\) is the inverse Ackermann function [7]. This is the same complexity as computing the 0-dimensional persistence diagram if we were given the filtration as input. We begin with the (standard) terminology and data structures. For simplicity of the description, we assume that the weights on the edges are unique and the vertices have a lexicographical order. We first define a total order on the vertices as follows: the filtration function determines the ordering, where undefined filtration values are taken to be \(\infty\). If the function is the same or undefined for both vertices, the order is then determined by lexicographical ordering. It is straightforward to check that this is a total ordering. **Remark 4.1**.: _In the case of a total ordering, one can choose a representative of 0-dimensional persistent homology classes - notably, in the total ordering a unique vertex is the earliest generator for the homology class (i.e., the cluster), which we denote as the canonical representative of the persistent component._ To track components as we proceed incrementally through the filtration, we use the union-find data structure, which supports two operations: * ROOT(\(v\)): returns the canonical representative for the connected component containing \(v\).
* MERGE(\(u\), \(v\)): merges the connected components containing \(u\) and \(v\), including updating the root. We augment the data structure by keeping track of two additional records: * SIZE(\(v\)): returns the size of the connected component containing \(v\). * COMPONENT(\(v\)): returns the list of vertices in the same component as \(v\). To track the size of the component, we store the size at the root (i.e., the canonical representative) of each component, updating it each time a merge occurs. To access a connected component, recall that the union-find data structure is implemented as a rooted tree. For each vertex, we store a list of children in the tree. To recover the list of vertices in the component, we perform a depth-first search of the tree starting from the root (although any other traversal method could be used). All update operations have \(O(1)\) cost (cf. [7]). Note that when \(k=1\), the filtration value for all vertices is \(0\) and so the problem reduces to finding the minimum spanning tree of a weighted graph. Hence, we will assume that \(k>1\). Initially, we set the filtration function \(\tau(v)=0\) for all vertices, and \(\tau(e)=W(e)\) for all edges, and assume the edges are sorted by increasing weight. Note that if this is not the case, this step will be the bottleneck, with a cost of \(O(|E|\log|E|)\). Thus, we begin with a forest where each component is a single vertex, i.e. all components are initially born at 0. We proceed as in the case of standard 0-dimensional persistence, adding edges incrementally. Since no new components are ever created, we are only concerned with merges, and the problem reduces to updating the birth times as we proceed, by keeping track of "active" components (i.e., those with at least \(k\) vertices). We omit points in the persistence diagram which are on the diagonal (birth=death), but these can be included with some additional book-keeping. Assume we are adding the edge \(e=(u,v)\). If \(e\) is internal to a connected component (i.e., \(\text{ROOT}(u)=\text{ROOT}(v)\)), then it does not affect the \(0\)-persistence. Otherwise, it connects two components denoted \(C_{u},C_{v}\). There are a few cases to consider: 1. \(|C_{u}\cup C_{v}|<k\): The merged component is too small to affect the persistence diagram. We only perform a merge of the components. 2. \(|C_{u}\cup C_{v}|\geq k\) and \(|C_{u}|<k\): In this case, \(C_{u}\) becomes active. Thus, we merge the components, and the update \[\tau(x)\gets W(e)\qquad\forall x\in C_{u}\] is performed. We take similar action if \(|C_{v}|<k\) (or if both are less than \(k\)). 3. \(|C_{u}|,|C_{v}|\geq k\): Both components are already active, and so a new point \((\text{birth},\text{death})\) is added to the persistence diagram, with \[\text{birth} =\max\left\{\tau(\text{ROOT}(u)),\tau(\text{ROOT}(v))\right\},\] \[\text{death} =W(e).\] The components are again merged. We note that for any \(v\), \[\tau(\text{ROOT}(v))=\min_{x\in C_{v}}\tau(x).\] The full procedure is given in Algorithm 1 below. Note that we only compute the filtration for the vertices, as the correct edge values can then be computed by Equation 3.2.
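As a complement to the formal pseudocode of Algorithm 1 below, the case analysis above can be rendered as a short, self-contained Python sketch. The function and variable names here are ours, and a path-compressed union-find stands in for the full data structure; this is an illustration, not the authors' implementation.

```python
import math

def k_cluster_persistence(n, edges, k):
    """Sketch of the one-pass algorithm, assuming k > 1 as in the paper.
    `edges` is a list of (w, u, v) tuples sorted by increasing weight w;
    vertices are 0, ..., n-1. Returns the diagram and the vertex
    filtration values tau of (3.1)."""
    parent = list(range(n))
    size = [1] * n                     # SIZE, stored at the root
    birth = [0.0] * n                  # birth time, stored at the root
    members = [[v] for v in range(n)]  # COMPONENT lists, stored at the root
    tau = [math.inf] * n               # undefined filtration values are inf
    dgm = []

    def root(v):                       # ROOT with path compression
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for w, u, v in edges:
        ru, rv = root(u), root(v)
        if ru == rv:
            continue  # internal edge: no effect on 0-persistence
        if size[ru] + size[rv] >= k:
            if size[ru] >= k and size[rv] >= k:
                # Case 3: both active; the later-born component dies.
                dgm.append((max(birth[ru], birth[rv]), w))
            else:
                # Case 2: a small side becomes active now; set tau as in (3.1).
                for r in (ru, rv):
                    if size[r] < k:
                        for x in members[r]:
                            tau[x] = w
                        birth[r] = w
        # MERGE (Case 1 reaches here directly); keep records at the new root.
        parent[rv] = ru
        size[ru] += size[rv]
        members[ru].extend(members[rv])
        members[rv] = []
        birth[ru] = min(birth[ru], birth[rv])  # elder rule keeps earlier birth
    return dgm, tau
```

The component that never dies is simply never emitted, matching the convention that at least one component in any graph filtration is infinite.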
```
1: \(G=(V,E,W)\)
2: \(\tau:V\rightarrow[0,\infty)\)
3: Initialize union-find data structure: \(\text{ROOT}(v)=v\) for all \(v\in V\)
4: \(\text{Dgm},\text{MST}=\emptyset\)
5: for \(e=(u,v)\in E\) do
6:   if \(\text{ROOT}(u)\neq\text{ROOT}(v)\) then
7:     \(\text{MST}\leftarrow\text{MST}\cup e\)
8:     if \(\text{SIZE}(u)+\text{SIZE}(v)\geq k\) then
9:       if \(\text{SIZE}(u)<k\) then
10:        for \(x\in\text{COMPONENT}(u)\) do
11:          \(\tau(x)\gets W(e)\)
12:        end for
13:      end if
14:      if \(\text{SIZE}(v)<k\) then
15:        for \(x\in\text{COMPONENT}(v)\) do
16:          \(\tau(x)\gets W(e)\)
17:        end for
18:      end if
19:      if \(\text{SIZE}(u),\text{SIZE}(v)\geq k\) then
20:        \(\text{birth}=\max\{\tau(\text{ROOT}(u)),\tau(\text{ROOT}(v))\}\)
21:        \(\text{death}=W(e)\)
22:        \(\text{Dgm}\leftarrow\text{Dgm}\cup(\text{birth},\text{death})\)
23:      end if
24:    end if
25:    \(\text{MERGE}(u,v)\)
26:  end if
27: end for
28: return \(\text{Dgm},\text{MST},\tau\)
```
**Algorithm 1** One-pass Algorithm Proof of Correctness. We first argue that the function \(\tau\) is correctly computed. This follows directly from the fact that the algorithm explicitly tests when the component contains at least \(k\) vertices. The fact that the persistence diagram is correctly computed is a consequence of the following result. **Lemma 4.2**.: _The minimum spanning tree for \(k=1\) is a minimum spanning tree for any \(k\)._ Proof.: The key observation is that until a component contains \(k\) vertices, any spanning tree is a minimum spanning tree, as all of its edges will be assigned the same filtration value when the component becomes active. The remaining edges do not have their values changed and so remain in the MST. The equivalence of the MST and the persistence diagram [21] then implies correctness of the algorithm. Proof of Running Time. The analysis of the merging carries over verbatim from the standard analysis of the union-find data structure. As described above, the update to the size of the component and updating the list of children in the merge are \(O(1)\) operations. All that remains is to bound the cost of updating the function \(\tau\). We observe that each vertex is only updated once. This therefore has a total cost of \(O(|V|)\), and the edges can be updated at a cost of \(O(1)\) per edge (however, there is no practical need for that). This implies the overall running time is \(O(|E|\times\alpha(|V|))\). Extracting the Clusters. To obtain clusters, we can use the algorithm in [6]. This algorithm extracts the \(\ell\) most persistent clusters by performing merges only when the resulting persistence is less than a threshold. This threshold can be chosen such that there are only \(\ell\) points above the threshold in the diagram. Finally, we note that the cluster extraction can be done on the MST rather than the full graph. ## 5 Experiments and Applications ### Simulated point-clouds We start by generating point-clouds from a mixture of Gaussians, resulting in several blobs of points (Figure 2). We first show the effect of the parameter \(k\) on the filtration function and the corresponding persistence diagrams. For the two point-clouds in Figure 2, we show the resulting persistence diagrams for the \(k\)-cluster filtrations in Figure 3. Notice that the correct number of persistent clusters is evident, especially for \(k=10,20,\) and \(50\). An important phenomenon that is evident in the figures is that higher values of \(k\) filter out more of the 'noise'. Figure 2: Two example point-clouds, consisting of an i.i.d. sampling from a mixture of three and four Gaussians and consisting of 1000 and 2000 points, respectively. Figure 3: The persistence diagrams with death/birth on the \(y\)-axis, with different choices of \(k\), for the points sampled from the two mixtures of Gaussians: (top row) 3 blobs, (bottom row) 4 blobs. Note that the number of outstanding features in the diagrams corresponds to the number of clusters in the data.
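To give a sense of how such an experiment can be set up, here is a minimal sketch (illustrative sizes and parameters; the paper's actual scripts may differ) that samples Gaussian blobs as in Figure 2 and extracts the sorted MST edge list, which by Lemma 4.2 is a valid input for the one-pass algorithm:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)

# A mixture of three Gaussians ("blobs"); centers and scales are illustrative.
centers = np.array([[0.0, 0.0], [5.0, 0.0], [2.5, 4.0]])
points = np.vstack([c + rng.normal(scale=0.6, size=(300, 2)) for c in centers])

# Pairwise Euclidean distances are the edge weights of the complete graph.
D = squareform(pdist(points))

# By Lemma 4.2, the MST of the k=1 filtration is an MST for every k,
# so sorting its edges by weight is all the preprocessing that is needed.
mst = minimum_spanning_tree(D).tocoo()
edges = sorted(zip(mst.data, mst.row.astype(int), mst.col.astype(int)))

# `edges` can now be fed to the one-pass sketch from Section 4, e.g.
# dgm, tau = k_cluster_persistence(len(points), edges, k=10)
```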
To place the behaviour of the persistence diagrams into further context, we compare the \(k\)-cluster filtration with a related construction from the applied topology literature, which has been suggested for dealing with outliers in clustering (and in higher homological dimensions) - the \(k\)-degree Vietoris-Rips filtration [14]. Given a weighted graph \(G=(V,E,W)\), we define the \(k\)-degree filtration \(\delta_{k}:(V\cup E)\rightarrow[0,\infty)\) as follows. For every vertex \(v\in V\) we take \(\delta_{k}(v)\) to be its \(k\)-nearest neighbor distance. The values of the edges are then determined the same way as in (3.2). The \(k\)-degree filtration has been used in the context of multi-parameter persistence, with the bifiltration induced by decreasing \(k\) and increasing the edge weight (commonly, Euclidean distance). In this paper, we do not explore the multi-parameter setting. Rather, we focus on the properties of the persistence diagrams for a fixed \(k\). We make two observations before investigating the differences: 1. The \(k\)-degree filtration function is determined completely by the local neighborhood of a vertex (i.e., its immediate neighbors in the graph). The same is not true for the \(k\)-cluster filtration. 2. For a fixed value of \(k\) we have \(\tau_{k}(v)\leq\delta_{k-1}(v)\) for all \(v\in V\). In other words, the value of the \(k\)-cluster function is less than or equal to the value of the \((k-1)\)-degree function. This follows from the fact that if a vertex has \(k-1\) neighbors, then it is part of a cluster of at least \(k\) vertices. In Figure 4, we show the relative persistence diagrams for two non-convex clusters for both the \(k\)-degree and \(k\)-cluster filtrations, for different values of \(k\). In this example, especially for larger \(k\), the persistent clusters are much more prominent in the \(k\)-cluster filtration compared to the \(k\)-degree filtration. This may be explained by the fact that a much larger radius is needed to obtain the required number of neighbors. Figure 4: A comparison of the \(k\)-cluster and \(k\)-degree filtrations for the two moons data set. On the right we have the death/birth ratios for different values of \(k\). In Figure 5, we show the same comparison for relative persistence diagrams for 3 and 4 blobs, where the difference between the two methods is less clear. However, Figure 6 highlights an additional difference in the behaviors of the two filtrations. In this figure, we compare the persistence (death/birth) for the second most persistent cluster, for a wide range of \(k\) values. In the left and center plots, the second most persistent cluster corresponds to a true cluster in the data. We observe that the persistence value decays much more slowly for the \(k\)-cluster filtration, i.e. the true cluster remains more persistent for increasing values of \(k\). The plot on the right presents the same comparison, but for uniformly distributed random points. In this case, the second most persistent cluster is by construction noise (i.e., not a real cluster in the data).
Here, although the \(k\)-cluster filtration decays more slowly, it is comparable to the \(k\)-degree filtration. Hence we can conclude that persistent clusters show a more stable behavior over ranges of \(k\) for the \(k\)-cluster filtration compared to the \(k\)-degree filtration. Figure 5: (top row) 3 blobs, (bottom row) 4 blobs. The relative persistence diagrams for each point cloud, with the \(k\)-degree filtration in yellow and the \(k\)-cluster filtration in blue, for \(k=5,10,20,\) and \(50\). Figure 6: The effect on the second most persistent cluster for different values of \(k\). On the left and center, this corresponds to a true cluster (left – two moons, center – mixture of 3 Gaussians). On the right – uniform random points. Here the noise cluster drops nearly as quickly in both cases. ### Universality In [3], we published a comprehensive experimental study, showing that the distribution of persistence values is universal. We consider a persistence diagram as a finite collection of points in \(\mathbb{R}^{2}\), \(\mathrm{dgm}=\{(b_{1},d_{1}),\ldots,(b_{M},d_{M})\}\). For each point \(p_{i}=(b_{i},d_{i})\) we consider the multiplicative persistence value \(\pi(p_{i})=d_{i}/b_{i}\). Our goal is to study the distribution of the \(\pi\)-values across an entire diagram. Our results in [3] are divided into two main parts. Given a point cloud of size \(n\), we compute the persistence diagram for either the Cech or the Vietoris-Rips filtrations. In _weak universality_ we consider the empirical measure \[\Pi_{n}:=\frac{1}{|\mathrm{dgm}_{k}|}\sum_{p\in\mathrm{dgm}_{k}}\delta_{\pi(p)},\] and we conjecture that for iid samples, we have \[\lim_{n\to\infty}\Pi_{n}=\Pi_{d,\mathcal{T},k}^{*},\] where \(d\) is the dimension of the point-cloud, \(k\) is the degree of homology, and \(\mathcal{T}\) is the filtration type (i.e., Cech or Vietoris-Rips). In other words, the limiting distribution for the \(\pi\)-values depends on \(d,k,\mathcal{T}\), but is independent of the probability distribution generating the point-cloud. In _strong universality_ we present a much more powerful and surprising conjecture. Here, we define \(\ell(p):=A\log\log(\pi(p))+B\) (the values of \(A\) and \(B\) are specified in [3]), and the empirical measure \[\mathcal{L}_{n}:=\frac{1}{|\mathrm{dgm}_{k}|}\sum_{p\in\mathrm{dgm}_{k}}\delta_{\ell(p)}.\] Our conjecture is that for a wide class of random point-clouds (including non-iid and real data), we have \[\lim_{n\to\infty}\mathcal{L}_{n}=\mathcal{L}^{*},\] where \(\mathcal{L}^{*}\) is a unique universal limit. Furthermore, we conjecture that \(\mathcal{L}^{*}\) might be the left-skewed Gumbel distribution. The results in [3] originally did not apply to the \(0\)-th persistence diagram of random point-clouds, as the birth times are all zero. However, once we replace the standard filtration with the \(k\)-cluster filtration, we obtain new persistence diagrams with non-trivial birth times that we can study. In Figure 7 we demonstrate both weak and strong universality properties for the \(k\)-cluster persistent homology. We generated iid point-clouds across different dimensions, with different distributions (uniform in a box, exponential, normal). The results show that both weak and strong universality hold in these cases as well. We note that for weak universality, the limiting distribution depends on both \(d\) (dimension of point-cloud) and \(k\) (minimum cluster size).
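The diagram-to-\(\ell\)-values transformation used above is simple enough to state in a few lines of Python. The sketch below treats the normalization constants \(A\) and \(B\) (specified in [3] and not reproduced here) as given inputs, so it is an illustration rather than a complete recipe.

```python
import numpy as np
from scipy import stats

def ell_values(dgm, A, B):
    """Map a finite diagram {(b_i, d_i)} to ell(p) = A*log(log(d/b)) + B.
    Assumes finite points with 0 < b < d (so pi = d/b > 1); A and B are
    the normalization constants of the strong-universality conjecture."""
    dgm = np.asarray(dgm, dtype=float)
    pi = dgm[:, 1] / dgm[:, 0]   # multiplicative persistence death/birth
    return A * np.log(np.log(pi)) + B

# Given a k-cluster diagram `dgm` (e.g., from the Section 4 sketch), the
# strong-universality conjecture can be probed with a goodness-of-fit test
# against the left-skewed Gumbel distribution:
#   stats.kstest(ell_values(dgm, A, B), stats.gumbel_l.cdf)
# Self-check that the test call behaves as expected on Gumbel samples:
print(stats.kstest(stats.gumbel_l.rvs(size=1000, random_state=0),
                   stats.gumbel_l.cdf))
```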
### Clustering As mentioned in the introduction, a key motivation for this work was to apply the \(k\)-cluster filtration to clustering. To obtain a clustering from a 0-dimensional persistence diagram, we use the algorithm proposed in [6]. Roughly speaking, given a threshold \(\alpha\), it extracts all clusters which are more than \(\alpha\)-persistent. We note that the original measure for persistence in [6] was given by \(d-b\); however, the change to using \(d/b\) in the algorithm is trivial. Statistical Testing. An important consequence of the universality results in Section 5.2 is that the limiting distribution (after normalization) appears to be a known distribution, i.e. the left-skewed Gumbel. We can thus perform statistical testing on the number of clusters as in [3]. The null hypothesis, denoted by \(\mathcal{H}_{0}^{(i)}\), is that the \(i\)-th most persistent cluster is due to noise. Assuming the universality conjectures hold, the null hypothesis is given in terms of the \(\ell\)-values as \[\mathcal{H}_{0}^{(i)}\ :\ \ell(p_{i})\sim\mathrm{LGumbel},\] where \(p_{i}\) represents the \(i\)-th most persistent cluster in terms of death/birth. The corresponding p-value is given by \[\text{p-value}_{i}=\mathbb{P}\left(\ell(p_{i})\geq x\mid\mathcal{H}_{0}^{(i)}\right)=e^{-e^{x}},\] where \(x\) is the observed value of \(\ell(p_{i})\). Note that since we are testing sorted values, we must use a multiple hypothesis testing correction. In the experiments we describe below, we use the Bonferroni correction. In Figure 8, we compare the \(k\)-cluster filtration and the \(k\)-degree filtration, using persistence-based clustering from [6], with other common algorithms for clustering. For the other approaches, we used the standard implementations found in [17], which have associated techniques for choosing the number of clusters. In the cases of the \(k\)-cluster filtration and the \(k\)-degree filtration, the number of clusters was chosen using the statistical testing described above. Figure 7: Universal distribution for \(k\)-cluster persistence. The labels in the legend are structured as distribution/\(d\)/\(k\), where \(d\) is the point-cloud dimension, and \(k\) is the cluster size. The distributions taken are uniform in a unit box, exponential, and normal. The first two plots show that weak universality holds, and that the limit depends on \(d,k\), but not on the distribution. The rightmost plot demonstrates that strong universality holds under a proper normalization. We also included the left-skewed Gumbel distribution (dashed line) for comparison. Figure 8: A comparison of standard clustering examples for different clustering approaches. In the case of the \(k\)-cluster filtration (PD) and \(k\)-degree filtration (Deg), the number of clusters was chosen using statistical significance testing. Note that since the number of points in the standard examples was quite small, we limited \(k\) to \(5\) and \(10\). The best result is for the \(k\)-cluster filtration with \(k=10\) (\(k=5\) fails to identify one of the clusters in the third example). The \(k\)-degree filtration performs well, but the additional "noise" points in the diagram mean that some clusters are not identified as significant. Clustering on Trees. As a second example, we describe clustering on weighted trees. We generated a uniform random tree on \(n\) vertices, and assigned uniformly distributed random weights on the edges (between \(0\) and \(1\)). We show an example in Figure 9; a sketch of this construction is given below.
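The tree experiment is straightforward to replicate. The following sketch (our own construction, not the paper's code) samples a uniformly random labeled tree by decoding a uniformly random Prüfer sequence, and attaches i.i.d. Uniform(0,1) edge weights:

```python
import heapq
import numpy as np

def uniform_random_weighted_tree(n, seed=None):
    """Uniform random labeled tree on n >= 2 vertices via Prüfer decoding,
    with i.i.d. Uniform(0,1) edge weights; returns (weight, u, v) edges."""
    rng = np.random.default_rng(seed)
    prufer = rng.integers(0, n, size=n - 2)
    degree = np.ones(n, dtype=int)
    for x in prufer:
        degree[x] += 1
    # Repeatedly attach the smallest current leaf to the next Prüfer entry.
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for x in prufer:
        leaf = heapq.heappop(leaves)
        edges.append((float(rng.uniform()), leaf, int(x)))
        degree[x] -= 1
        if degree[x] == 1:
            heapq.heappush(leaves, int(x))
    # The two remaining degree-1 vertices form the last edge.
    u, v = heapq.heappop(leaves), heapq.heappop(leaves)
    edges.append((float(rng.uniform()), u, v))
    return edges

# Sorted by weight, the edges are again valid input for the one-pass sketch.
edges = sorted(uniform_random_weighted_tree(50, seed=0))
```

Since Prüfer sequences are in bijection with labeled trees, a uniform sequence yields a uniform tree, matching the construction described above.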
The method seems to capture certain structure of the tree, although we leave further investigation of this structure as future work. Note that in the tree case, it is often impossible to use \(k\)-degree filtrations, as the tree will have vertices with degree smaller than \(k\) that will never be included in the filtration, whereas for the \(k\)-cluster filtration, all nodes are included as long as the underlying graph is connected (or all components have at least \(k\) vertices). We note that it is possible to use an alternative definition for the \(k\)-degree filtrations, by embedding the tree into a metric space (i.e., using the graph metric induced by the weights). However, this is similar to studying a complete graph induced by the metric, which is somewhat different from studying the graph directly. We use this method in the rightmost plot of Figure 9. Figure 9: A clustering on a uniform random tree. The threshold with \(k\)-clustering gives 4 clusters, while only 3 with the (metric) \(k\)-degree. ## 6 Probabilistic Analysis In this section we wish to revisit some of the fundamental results known for the (persistent) homology of random graphs and simplicial complexes, and show that analogous statements hold for our new \(k\)-cluster filtration. We provide here the main statements. Proofs are available in the appendix. ### Connectivity We will consider two models here. In the \(G(n,p)\) random graph we have \(n\) vertices, and each edge is placed independently with probability \(p\). In the \(G(n,r)\) random geometric graph, we take a homogeneous Poisson process \(\mathcal{P}_{n}\) on the \(d\)-dimensional flat torus, with rate \(n\). Edges are then placed between vertices that are less than distance \(r\) apart. In both models, connectivity results are tied to the expected degree. For the \(G(n,p)\) model we define \(\Lambda=np\), and for the \(G(n,r)\) model we take \(\Lambda=n\omega_{d}r^{d}\). Then in [9] and [19] the following was proved. **Theorem 6.1**.: _Let \(G_{n}\) be either \(G(n,p)\) or \(G(n,r)\). Then_ \[\lim_{n\to\infty}\mathbb{P}\left(\text{$G_{n}$ is connected}\right)=\begin{cases}1&\Lambda=\log n+w(n),\\ 0&\Lambda=\log n-w(n).\end{cases}\] A key element in proving connectivity (for either model) is to show that around \(\Lambda=\log n\), the random graph consists of a single giant component, a few isolated vertices, and nothing else. Thus, connectivity is achieved when the last isolated vertex gets connected. Our goal in this section is to analyze connectivity in the \(G(n,p)\) and \(G(n,r)\) models, via our new \(k\)-cluster filtration. Note that for a fixed \(n\), we can view both models as filtrations over the complete graph. For the \(G(n,p)\) model, the weights of the edges are independent random variables, uniformly distributed in \([0,1]\). For the \(G(n,r)\) model, the weight of an edge is given by the distance between the corresponding points in the torus. We define \(G^{(k)}(n,p)\) and \(G^{(k)}(n,r)\) to be the random filtrations generated by changing the filtration function to be \(\tau_{k}\). Our goal here is to explore the phase transition for \(k\)-cluster connectivity. As opposed to connectivity in the original random graphs, the results here differ between the models.
**Theorem 6.2**.: _For the \(G^{(k)}(n,p)\) filtered graph we have,_ \[\lim_{n\to\infty}\mathbb{P}\left(\text{$G^{(k)}(n,p)$ is connected}\right)=\begin{cases}1&\Lambda=\frac{1}{k}(\log n+(k-1)\log\log n)+w(n),\\ 0&\Lambda=\frac{1}{k}(\log n+(k-1)\log\log n)-w(n),\end{cases}\] _for any \(w(n)=o(\log\log n)\) such that \(w(n)\to\infty\)._ For the \(G^{(k)}(n,r)\) model, proving connectivity is a much more challenging task, and is beyond the scope of this paper. The following statement, however, is relatively straightforward to prove. **Proposition 6.3**.: _Let \(N_{k}=N_{k}(n,r)\) be the number of connected components of size \(k\) in \(G(n,r)\). Then,_ \[\lim_{n\to\infty}\mathbb{P}\left(N_{k}=0\right)=\begin{cases}1&\Lambda=\log n-(d-1)(k-1)\log\log n+w(n),\\ 0&\Lambda=\log n-(d-1)(k-1)\log\log n-w(n).\end{cases}\] _for any \(w(n)=o(\log\log n)\) such that \(w(n)\to\infty\)._ From this proposition we conclude that when \(\Lambda=\log n-(d-1)(k-1)\log\log n-w(n)\) the graph \(G(n,r)\) has components of size \(k\), which implies that \(G^{(k)}(n,r)\) is not connected. On the other hand, when \(\Lambda=\log n-(d-1)(k-1)\log\log n+w(n)\), we have \(N_{j}=0\) for all fixed \(j\geq k\), which indicates that \(G^{(k)}(n,r)\) should be connected. This leads to the following conjecture. **Conjecture 6.4**.: _For the \(G^{(k)}(n,r)\) filtered graph we have,_ \[\lim_{n\to\infty}\mathbb{P}\left(G^{(k)}(n,r)\text{ is connected}\right)=\begin{cases}1&\Lambda=\log n-(d-1)(k-1)\log\log n+w(n),\\ 0&\Lambda=\log n-(d-1)(k-1)\log\log n-w(n).\end{cases}\] Note that both phase transitions occur before the ones for the original graph models. This is due to the fact that for \(k>1\) the \(k\)-cluster filtration does not allow having any isolated vertices. Also note that upon taking \(k=1\), both results coincide with Theorem 6.1. ### Limiting Persistence Diagrams In [12], it was shown that for stationary point processes, persistence diagrams have a non-random limit (in the vague convergence of measures). A similar statement holds for the \(k\)-cluster persistence diagrams. Let \(\mathrm{Dgm}^{(k)}(\mathcal{P})\) be the \(k\)-cluster persistence diagram for a point-cloud \(\mathcal{P}\). We define the discrete measure on \(\mathbb{R}^{2}\), \[\xi^{(k)}(\mathcal{P}):=\sum_{(b,d)\in\mathrm{Dgm}^{(k)}(\mathcal{P})}\delta_{(b,d)}.\] Let \(Q_{L}=[-L/2,L/2]^{d}\). The following is an analogue of Theorem 1.5 in [12]. **Theorem 6.5**.: _Assume that \(\mathcal{P}\) is a stationary point process in \(\mathbb{R}^{d}\) with all finite moments. For any \(k\), there exists a deterministic measure \(\mu_{k}\), such that_ \[\lim_{L\to\infty}\frac{1}{L^{d}}\mathbb{E}\left\{\xi^{(k)}(\mathcal{P}\cap Q_{L})\right\}=\mu_{k},\] _where the limit is in the sense of vague convergence. Furthermore, if \(\mathcal{P}\) is ergodic, then almost surely_ \[\lim_{L\to\infty}\frac{1}{L^{d}}\xi^{(k)}(\mathcal{P}\cap Q_{L})=\mu_{k}.\] ### Maximal Cycles In [1] the largest cycles in persistence diagrams were studied. Specifically, for every point \(p=(b,d)\) in a diagram, we compute the so-called \(\pi\)-value, \(\pi(p)=d/b\), as a scale-invariant measure of size. Considering the homogeneous Poisson process \(\mathcal{P}_{n}\), we define \(\Pi_{k,\max}\) as the largest \(\pi\)-value in the \(k\)-th persistent homology.
The main result in [1] then states that with high probability \[A_{k}\Delta_{k}(n)\leq\Pi_{k,\max}\leq B_{k}\Delta_{k}(n),\] where \(A_{k},B_{k}>0\) are constants, and \[\Delta_{k}(n)=\left(\frac{\log n}{\log\log n}\right)^{1/k}.\] For the \(k\)-cluster persistence, we will show that the largest \(\pi\)-value has a completely different scaling. **Theorem 6.6**.: _Let \(\mathcal{P}_{n}\) be a homogeneous Poisson process in the flat torus, with rate \(n\). Let \(\Pi_{\max}^{(k)}\) denote the maximum \(\pi\)-value in the \(k\)-cluster persistence diagram (excluding the infinite cluster). Then, for every \(\epsilon>0\) we have_ \[\lim_{n\to\infty}\mathbb{P}\left(n^{\frac{1}{d(k-1)}-\epsilon}\leq\Pi_{k,\max}^{(k)}\leq n^{\frac{1}{d(k-1)}+\epsilon}\right)=1.\] **Remark 6.7**.: _We observe that the largest \(\pi\)-value in the \(k\)-cluster persistence is significantly larger than that of the \(k\)-dimensional homology. The main reason for that is the following. In [1], our upper bound for \(\Pi_{k,\max}\) used an iso-perimetric inequality, which implies that large \(\pi\)-values require large connected components. However, the \(\pi\)-values in the \(k\)-cluster persistence only require a cluster of size \(k\) to be formed, and thus can be generated by much smaller connected components._ ## Appendix In this appendix we provide the proofs for the statements made in Section 6. ## Appendix A Connectivity Proof of Theorem 6.2.: Note that for the \(k\)-cluster filtration, connectivity is equivalent to the original \(G(n,p)\) graph having no components of size \(j\) for any \(k\leq j\leq n/2\). Let \(N_{j}=N_{j}(n,p)\) be the number of components of size \(j\) in \(G(n,p)\). Taking similar steps to the proof of connectivity for random graphs (e.g., [10]), we have \[\mathbb{E}\left\{N_{j}\right\}\leq\binom{n}{j}j^{j-2}p^{j-1}(1-p)^{j(n-j)}. \tag{A.1}\] For \(k+1\leq j<4k\), we have \[\mathbb{E}\left\{N_{j}\right\}\leq Cn^{j}\left(\frac{\Lambda}{n}\right)^{j-1}e^{-j(n-4k)(\Lambda/n)},\] for some \(C>0\). Taking \(\Lambda=\frac{1}{k}(\log n+(k-1)\log\log n)+c\), we have \[\mathbb{E}\left\{N_{j}\right\}\leq Cn^{-1/k}(\log n)^{j-1}e^{4j\log n/n}.\] For \(4k\leq j\leq n/2\), we have \[\mathbb{E}\left\{N_{j}\right\}\leq\left(\frac{ne}{j}\right)^{j}j^{j-2}\left(\frac{\Lambda}{n}\right)^{j-1}e^{-j\Lambda/2}\leq\frac{n}{j^{2}}\left(\frac{e^{1-c/2}\log n}{n^{1/2k}}\right)^{j}\leq\frac{n}{j^{2}}n^{-j/3k}.\] Therefore, \[\sum_{j=4k}^{n/2}\mathbb{E}\left\{N_{j}\right\}\leq\frac{1}{8k^{2}}n^{-1/3}.\] To conclude, we showed that \[\lim_{n\to\infty}\sum_{j=k+1}^{n/2}\mathbb{E}\left\{N_{j}\right\}=0.\] This implies that for \(\Lambda=\frac{1}{k}(\log n+(k-1)\log\log n)+c\), we have \[\mathbb{P}\left(G^{(k)}(n,p)\text{ is connected}\right)\approx\mathbb{P}\left(N_{k}=0\right).\] Similar estimates to the ones above show that \[\mathbb{E}\left\{N_{k}\right\}\approx e^{-kc}.\] Therefore, when \(c=w(n)\to\infty\), we have \(\mathbb{P}(N_{k}>0)\to 0\). Together with a second-order argument, we can similarly show that when \(c=-w(n)\), we have \(\mathbb{P}\left(N_{k}>0\right)\to 1\). This concludes the proof. Proof of Proposition 6.3.: Recall that \(N_{k}\) is the number of components of size \(k\) in \(G(n,r)\). In [20, Theorem 3.3], it was shown that \[\mathbb{E}\left\{N_{k}\right\}\approx\mathrm{Var}\left(N_{k}\right)\approx C_{k}n\Lambda^{-(d-1)(k-1)}e^{-\Lambda},\] for some constant \(C_{k}>0\).
When \(\Lambda=\log n-(d-1)(k-1)\log\log n+w(n)\), we have \(\mathbb{E}\left\{N_{k}\right\}\to 0\), implying that \(\mathbb{P}\left(N_{k}>0\right)\to 0\). When \(\Lambda=\log n-(d-1)(k-1)\log\log n-w(n)\), we can use Chebyshev's inequality, \[\mathbb{P}\left(N_{k}=0\right)\leq\mathbb{P}\left(\left|N_{k}-\mathbb{E}\left\{N_{k}\right\}\right|\geq\mathbb{E}\left\{N_{k}\right\}\right)\leq\frac{\mathrm{Var}\left(N_{k}\right)}{(\mathbb{E}\left\{N_{k}\right\})^{2}}\approx\frac{1}{\mathbb{E}\left\{N_{k}\right\}}\to 0.\] This completes the proof. ## Appendix B Maximal \(\pi\)-value Proof of Theorem 6.6.: Let \(r,R\) denote the birth and death radii of a cluster in the \(k\)-cluster persistence diagram, and recall that \(\pi=R/r\). For an upper bound, we denote by \(N_{k}(r)\) the number of connected subsets of size \(k\) at radius \(r\). Using Mecke's formula (cf. [18]), \[\mathbb{E}\left\{N_{k}(r)\right\} =\frac{n^{k}}{k!}\int_{(\mathbb{T}^{d})^{k}}\mathbbm{1}\left\{G(\mathbf{x},r)\text{ is connected}\right\}d\mathbf{x}\] \[=\frac{n\lambda^{k-1}}{k!}\int_{(\mathbb{R}^{d})^{k-1}}\mathbbm{1}\left\{G((0,\mathbf{y}),1)\text{ is connected}\right\}d\mathbf{y}\] \[=C_{k}n\lambda^{k-1},\] where \(C_{k}\) is a positive constant, \(\lambda=nr^{d}\), and we used the change of variables \(x_{i}\to x_{1}+ry_{i}\) (\(i=2,\ldots,k\)). For any \(\epsilon>0\), if \(\lambda=n^{-1/(k-1)-\epsilon}\) then \(\mathbb{E}\left\{N_{k}(r)\right\}\to 0\). Thus, we can assume that with high probability the birth times of all \(k\)-clusters satisfy \(\lambda\geq n^{-1/(k-1)-\epsilon}\). In addition, from Theorem 6.1, if we denote \(\Lambda=nR^{d}\), then when \(\Lambda=C\log n\) the graph \(G(n,R)\) is connected. This implies that with high probability all death times of \(k\)-clusters satisfy \(\Lambda\leq C\log n\). These bounds together imply that with high probability, for all the points in the \(k\)-cluster persistence diagram, for any \(\epsilon>0\), we have \[\pi=\left(\frac{\Lambda}{\lambda}\right)^{1/d}\leq\left(\frac{C\log n}{n^{-1/(k-1)-\epsilon}}\right)^{1/d}.\] Therefore, for any \(\epsilon>0\), we have \[\lim_{n\to\infty}\mathbb{P}\left(\Pi_{\max}^{(k)}\leq n^{\frac{1}{d(k-1)}+\epsilon}\right)=1.\] For the lower bound, we denote by \(\hat{N}_{k}(r,R)\) the number of components of size \(k\), born before \(r\), that are isolated at radius \(R\) (and hence die after \(R\)). Then \[\mathbb{E}\{\hat{N}_{k}(r,R)\} =\frac{n^{k}}{k!}\int_{(\mathbb{T}^{d})^{k}}\mathbbm{1}\left\{G(\mathbf{x},r)\text{ is connected}\right\}e^{-n\operatorname{Vol}(B_{R}(\mathbf{x}))}d\mathbf{x}\] \[=\frac{n\lambda^{k-1}}{k!}\int_{(\mathbb{R}^{d})^{k-1}}\mathbbm{1}\left\{G((0,\mathbf{y}),1)\text{ is connected}\right\}e^{-n\operatorname{Vol}(B_{R}(0,r\mathbf{y}))}d\mathbf{y},\] where \(B_{R}(\mathbf{x})\) is the union of balls of radius \(R\) around \(\mathbf{x}\). We will apply the dominated convergence theorem, using the fact that when \(r/R\to 0\), we have \[\lim_{n\to\infty}\frac{\operatorname{Vol}(B_{R}(0,r\mathbf{y}))}{\omega_{d}R^{d}}=1.\] This leads to \[\mathbb{E}\{\hat{N}_{k}(r,R)\}\approx C_{k}n\lambda^{k-1}e^{-\omega_{d}\Lambda}.\] Taking \(\Lambda=C>0\), and \(\lambda=n^{-1/(k-1)+\epsilon}\), we have \[\mathbb{E}\{\hat{N}_{k}(r,R)\}\rightarrow\infty.\] A second moment argument then shows that for all \(\epsilon>0\) \[\mathbb{P}\left(\Pi_{\max}^{(k)}\geq n^{\frac{1}{d(k-1)}-\epsilon}\right)\to 1,\] completing the proof.
## Appendix C Limiting Persistence Diagram The key part of the proof in [12] is bounding the add-one cost of the _persistent_ Betti numbers. Let \(G=(V,E,W)\) be a weighted graph, and \(\{G_{t}^{(k)}\}\) be the corresponding \(k\)-cluster filtration. Define \(\beta_{0}^{r,s}(G^{(k)})\) as the \(0\)-th persistent Betti number, i.e., the number of components born in \(t\in[0,r]\) that die at \((s,\infty]\) (for a formal definition, see [12]). Fix an edge \(e_{0}\not\in E\), with a given weight \(W(e_{0})=w_{0}\), and let \(\tilde{G}=(V,\tilde{E},\tilde{W})\) be a weighted graph with \(\tilde{E}=E\cup\{e_{0}\}\), and \[\tilde{W}(e):=\begin{cases}W(e)&e\neq e_{0},\\ w_{0}&e=e_{0}.\end{cases}\] Let \(\{\tilde{G}_{t}^{(k)}\}\) denote the corresponding \(k\)-cluster filtration. The entire proof of Theorem 6.5 follows verbatim from the proofs in [12], provided that we prove the following lemma. **Lemma C.1**.: \[\left|\beta_{0}^{r,s}(\tilde{G}^{(k)})-\beta_{0}^{r,s}(G^{(k)})\right|\leq 1.\] In other words, if we add a single edge to the filtration, the number of persistent clusters can change by at most \(1\). Note that the proof here is not a straightforward application of Lemma 2.10 in [12], since in our case, when a single edge is added to the filtration, the filtration values of other vertices and edges might be affected. Proof.: Let \(e_{0}=(u,v)\) with \(W(e_{0})=w_{0}\). Let \(C_{u}\) and \(C_{v}\) denote the components of the endpoints of \(e_{0}\) at \(w_{0}\) in the original filtration \(\{G_{t}^{(k)}\}\). There are three possible cases which can occur. **Case I:** Both \(|C_{u}|<k\) and \(|C_{v}|<k\). Note that in this case \(\tau_{k}(u),\tau_{k}(v)>w_{0}\). Let \(C_{u}^{\prime}\) be the cluster of \(u\) at \(\tau_{k}(u)\), so that it is the component of \(u\) when it first appears in \(G_{t}^{(k)}\). Similarly define \(C_{v}^{\prime}\). Note that aside from \(C_{u}^{\prime}\cup C_{v}^{\prime}\), the filtration values of all other vertices remain unchanged by adding \(e_{0}\). Without loss of generality, suppose that \(w_{0}<\tau_{k}(u)<\tau_{k}(v)\). Then, comparing the persistence diagrams for \(G_{t}^{(k)}\) and \(\tilde{G}_{t}^{(k)}\), only two differences can occur: 1. The point representing \(C_{v}^{\prime}\) in \(G_{t}^{(k)}\) is removed, since \(C_{v}^{\prime}\) is no longer a connected component in \(\tilde{G}_{t}^{(k)}\) (as it is merged with \(C_{u}^{\prime}\)). 2. The point representing \(C_{u}^{\prime}\) in \(G_{t}^{(k)}\) may get an earlier birth time in \(\tilde{G}_{t}^{(k)}\), in the interval \([w_{0},\tau_{k}(u))\). For a given \(r,s\), the first change might decrease \(\beta_{0}^{r,s}\) by \(1\), while the second change might increase it by \(1\). In any case, the total difference between \(\beta_{0}^{r,s}(\tilde{G}^{(k)})\) and \(\beta_{0}^{r,s}(G^{(k)})\) is no more than one. **Case II:** \(|C_{u}|\geq k\), and \(|C_{v}|<k\). Defining \(C_{v}^{\prime}\) the same as above, note that in this case the filtration values of all points outside \(C_{v}^{\prime}\) remain unchanged by adding \(e_{0}\). The only change that will occur in this case is that the point in the diagram of \(G_{t}^{(k)}\) corresponding to \(C_{v}^{\prime}\) will be removed in \(\tilde{G}_{t}^{(k)}\), since it is now merged with \(C_{u}\). Therefore, the difference in the persistent Betti numbers is at most 1. **Case III:** Both \(|C_{u}|\geq k\) and \(|C_{v}|\geq k\). If \(C_{u}=C_{v}\) then adding \(e_{0}\) creates a \(1\)-cycle (loop) and does not affect the \(k\)-cluster persistence diagram.
If \(C_{u}\neq C_{v}\), then both \(C_{u}\) and \(C_{v}\) are represented by different points in the persistence diagram of \(G_{t}^{(k)}\). Adding \(e_{0}\) will cause one of these components to die earlier. In this case \(\beta_{0}^{r,s}\) may be decreased by \(1\) (if \(s>w_{0}\)).
2302.00141
Revisiting Bellman Errors for Offline Model Selection
Offline model selection (OMS), that is, choosing the best policy from a set of many policies given only logged data, is crucial for applying offline RL in real-world settings. One idea that has been extensively explored is to select policies based on the mean squared Bellman error (MSBE) of the associated Q-functions. However, previous work has struggled to obtain adequate OMS performance with Bellman errors, leading many researchers to abandon the idea. To this end, we elucidate why previous work has seen pessimistic results with Bellman errors and identify conditions under which OMS algorithms based on Bellman errors will perform well. Moreover, we develop a new estimator of the MSBE that is more accurate than prior methods. Our estimator obtains impressive OMS performance on diverse discrete control tasks, including Atari games.
Joshua P. Zitovsky, Daniel de Marchi, Rishabh Agarwal, Michael R. Kosorok
2023-01-31T23:14:25Z
http://arxiv.org/abs/2302.00141v2
# Revisiting Bellman Errors for Offline Model Selection ###### Abstract Offline model selection (OMS), that is, choosing the best policy from a set of many policies given only logged data, is crucial for applying offline RL in real-world settings. One idea that has been extensively explored is to select policies based on the mean squared Bellman error (MSBE) of the associated Q-functions. However, previous work has struggled to obtain adequate OMS performance with Bellman errors, leading many researchers to abandon the idea. Through theoretical and empirical analyses, we elucidate why previous work has seen pessimistic results with Bellman errors and identify conditions under which OMS algorithms based on Bellman errors will perform well. Moreover, we develop a new estimator of the MSBE that is more accurate than prior methods and obtains impressive OMS performance on diverse discrete control tasks, including Atari games. We open-source our data and code to enable researchers to conduct OMS experiments more easily. Keywords: Offline Reinforcement Learning, Deep Reinforcement Learning, Model Selection, Hyperparameter Tuning, Bellman Errors ## 1 Introduction Offline reinforcement learning (RL) (Ernst et al., 2005; Levine et al., 2020) focuses on training an agent solely from a fixed dataset of environment interactions. By not requiring any online interactions, offline RL can be applied to real-world settings, such as autonomous driving (Yu et al., 2020) and healthcare (Shortreed et al., 2010), where online data collection may be expensive or unsafe but large amounts of previously-logged interactions are available. While there has been a recent surge of methods that can train an agent offline (Fu et al., 2020; Gulcehre et al., 2020), such methods typically tune their hyperparameters using online interactions, which undermines the aim of offline RL. To this end, we focus on the problem of _offline model selection (OMS)_, that is, selecting the best policy from a set of many policies using only logged data. Common OMS approaches are based on offline policy evaluation (OPE) algorithms that estimate the expected returns under a target policy using only offline data. Unfortunately, such estimates are often inaccurate (Fu et al., 2021). As an alternative, many works have explored using empirical Bellman errors to perform OMS, but have found them to be poor predictors of value model accuracy (Irpan et al., 2019; Paine et al., 2020). This has led to a belief among many researchers that Bellman errors are not useful for OMS (Geron, 2019; Fujimoto et al., 2022). To this end, we propose a new algorithm, _Supervised Bellman Validation (SBV)_, that provides a better proxy for the true Bellman errors than empirical Bellman errors. SBV achieves strong and robust performance on diverse offline datasets ranging from simulated clinical trials (Klasnja et al., 2015) to Atari games (Bellemare et al., 2013; Agarwal et al., 2020). In contrast, competing baselines suffer from limitations that hinder real-world applicability and perform no better than random chance on certain tasks. Moreover, in addition to demonstrating the potential utility of Bellman errors in OMS, we also investigate _when_ they are effective by exploring the factors most predictive of their performance. Our theoretical and empirical investigations help explain why Bellman errors have achieved mixed performance in the past, provide guidance on how to achieve better performance with these errors, and highlight several avenues for future work.
Finally, we open-source our data and code at [https://github.com/jzitovsky/SBV](https://github.com/jzitovsky/SBV). To help others conduct OMS experiments on Atari, our repository includes over 1000 trained Q-functions as well as efficient implementations for several deep OMS algorithms (Appendix A). ## 2 Preliminaries ### Offline Reinforcement Learning In offline RL, we have a static dataset \(\mathcal{D}=\{(s,a,r,s^{\prime})\}\) of transitions, where we observe the reward \(r\) and next state \(s^{\prime}\) after taking action \(a\) on state \(s\). We assume the data comes from a Markov decision process (MDP) \(\mathcal{M}=(\mathcal{S},\mathcal{A},T,R,d_{0},\gamma)\)(Puterman, 1994) with state and action space \(\mathcal{S}\) and \(\mathcal{A}\), transition probabilities \(T(s^{\prime}|a,s)\), rewards \(R(s,a,s^{\prime})=r\), initial state probabilities \(d_{0}(s_{0})\) and discount factor \(\gamma\in[0,1)\). Throughout we assume that \(\mathcal{A}\) is discrete: the case where \(\mathcal{A}\) is continuous is discussed in Appendix F.1. We assume that the observed state-action pairs in \(\mathcal{D}\) are identically distributed as \(P^{\mu}(s,a)=d^{\mu}(s)\mu(a|s)\) where \(\mu\) is the behavioral policy and \(d^{\mu}\) is the marginal distribution of states over time points induced by policy \(\mu\) and MDP \(\mathcal{M}\)(Levine et al., 2020). A real-valued function of state-action pairs is known as a _Q-function_. One such Q-function is the action-value function for policy \(\pi\), \(Q^{\pi}(s,a)=\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}R_{t}|S_{0}=s,A_{0 }=a]\), where \(\mathbb{E}_{\pi}\) denotes expectation over MDP \(\mathcal{M}\) and policy \(\pi\). The optimal policy \(\pi^{*}\) is the policy whose action-value function equals the optimal action-value function \(Q^{*}\). It is well-known that \(\pi^{*}=\pi_{Q^{*}}\) where \(\pi_{Q}(s)=\text{argmax}_{a}Q(s,a)\) is the greedy policy of Q-function \(Q\). Our paper focuses on the **offline model selection (OMS)** problem where we have a _candidate set_\(\mathcal{Q}=\{Q_{1},...,Q_{M}\}\), or set of estimates for \(Q^{*}\), and our goal is to choose the "best" among them based on some criterion. For example, \(Q_{1},...,Q_{M}\) can be obtained by running a deep Q-learning (DQL) algorithm for \(M\) iterations and evaluating the Q-Network after each iteration, or it can be obtained by running \(M\) different DQL algorithms to convergence (Mnih et al., 2015). Moreover, the number of candidates \(M\) need not be fixed in advance. For example, if one evaluated all the value models in \(\mathcal{Q}\) and determined that none were adequate, one could then augment \(\mathcal{Q}\) with more Q-functions obtained by running more DQL training algorithms and evaluate those Q-functions as well. ### Bellman Errors For any Q-function \(Q\), the Bellman operator \(\mathcal{B}^{*}\) satisfies: \[\mathcal{B}^{*}Q(s,a)=\mathbb{E}\left[R_{t}+\gamma\max_{a^{\prime}\in \mathcal{A}}Q(S_{t+1},a^{\prime})|S_{t}=s,A_{t}=a\right] \tag{1}\] It is known that \(Q=\mathcal{B}^{*}Q\) if and only if \(Q=Q^{*}\). The function \((\mathcal{B}^{*}Q)(s,a)\) is known as the _Bellman backup_ of Q-function \(Q\) and \((Q-\mathcal{B}^{*}Q)(s,a)\) is known as its _Bellman error_. 
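To ground these definitions, the following self-contained sketch (ours, not from the paper) builds a small random MDP with known dynamics, applies the Bellman operator \(\mathcal{B}^{*}\) of (1) via value iteration, and checks that the Bellman error \(Q-\mathcal{B}^{*}Q\) vanishes exactly at \(Q^{*}\):

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9

# A random known MDP: T[s, a] is a distribution over next states,
# and R[s, a] is the expected reward of taking action a in state s.
T = rng.dirichlet(np.ones(S), size=(S, A))   # shape (S, A, S)
R = rng.normal(size=(S, A))

def bellman_backup(Q):
    """(B*Q)(s, a) = E[r + gamma * max_a' Q(s', a')], as in (1)."""
    return R + gamma * T @ Q.max(axis=1)

# Value iteration: B* is a gamma-contraction, so iterating converges to Q*.
Q = np.zeros((S, A))
for _ in range(2000):
    Q = bellman_backup(Q)

# The Bellman error Q - B*Q is (numerically) zero exactly at Q*.
print(np.abs(Q - bellman_backup(Q)).max())            # ~ machine precision
# A perturbed Q-function has a nonzero MSBE (uniform weighting over (s, a)
# here for simplicity, rather than P^mu).
Q_bad = Q + rng.normal(scale=0.1, size=Q.shape)
print(np.mean((Q_bad - bellman_backup(Q_bad)) ** 2))  # > 0
```

The offline difficulty, discussed next, is that in practice \(T\) is unknown and \(\mathcal{B}^{*}Q\) must be estimated from sampled transitions.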
As the Bellman errors are zero uniquely for \(Q^{*}\), a reasonable approach is to assess candidates \(Q_{m},1\leq m\leq M\) via their **mean squared Bellman error (MSBE)**: \[\mathbb{E}_{(s,a)\sim P^{\mu}}\left[(Q_{m}(s,a)-(\mathcal{B}^{*}Q_{m})(s,a))^{2}\right] \tag{2}\] Unfortunately, directly estimating the MSBE from an offline dataset of transitions is not straightforward due to the _double-sampling_ problem (Baird, 1995). For example, consider the **empirical mean squared Bellman error (EMSBE)**: \[\mathbb{E}_{\mathcal{D}}\left[\left(r+\gamma\max_{a^{\prime}\in\mathcal{A}}Q_{m}(s^{\prime},a^{\prime})-Q_{m}(s,a)\right)^{2}\right] \tag{3}\] where \(\mathbb{E}_{\mathcal{D}}\) denotes empirical expectation over our observed dataset \(\mathcal{D}=\{(s,a,r,s^{\prime})\}\). _Empirical_ Bellman errors replace the true Bellman backup with a single sample bootstrapped from the observed dataset. Unless the environment is deterministic, the EMSBE will be biased for the true MSBE (Farahmand and Szepesvari, 2010). Fitted Q-Iteration (FQI) (Ernst, Geurts and Wehenkel, 2005) and the DQN algorithm (Mnih et al., 2015) perform updates: \[Q^{(k+1)}\leftarrow\underset{f}{\text{argmin}}\mathbb{E}_{\mathcal{D}}\left[\left(r+\gamma\max_{a^{\prime}}Q^{(k)}(s^{\prime},a^{\prime})-f(s,a)\right)^{2}\right]\] The terms \(r+\gamma\max_{a^{\prime}}Q^{(k)}(s^{\prime},a^{\prime})-Q^{(k+1)}(s,a)\) are often referred to as empirical Bellman errors as well, with the true Bellman error being \(\epsilon^{(k+1)}=\mathcal{B}^{*}Q^{(k)}-Q^{(k+1)}\). We refer to \(\epsilon^{(k+1)}\) as a _fixed-target_ Bellman error, as the Bellman backup \(\mathcal{B}^{*}Q^{(k)}\) remains fixed while the Q-function \(Q^{(k+1)}\) is being evaluated. In contrast, our version of the Bellman error evaluates a Q-function by taking the difference between itself and its **own** Bellman backup, i.e. it is a _variable-target_ Bellman error. Unlike variable-target Bellman errors, fixed-target Bellman errors can be reliably replaced by their empirical counterparts, at least when using them for model training. As a result, FQI updates will often do a good job at minimizing the true fixed-target Bellman errors \(\epsilon^{(k+1)}\) as well. Differences between these errors, as well as between FQI and SBV, are further discussed in Appendix E.1. Unless otherwise specified, "Bellman errors" refers to variable-target Bellman errors. ## 3 Related Work The most popular approach for OMS is to use an _offline policy evaluation (OPE)_ algorithm, which estimates the marginal expectation of returns \(J(\pi)=\mathbb{E}_{\pi}[\sum_{t=0}^{\infty}\gamma^{t}R_{t}]\) under policies of interest \(\pi\in\{\pi_{Q_{1}},\pi_{Q_{2}},...,\pi_{Q_{M}}\}\) from \(\mathcal{D}\). For example, importance sampling (IS) estimators such as per-decision IS estimators (Precup, Sutton and Singh, 2000), doubly-robust IS estimators (Jiang and Li, 2016; Thomas and Brunskill, 2016) and marginal IS estimators (Xie, Ma and Wang, 2019; Yang et al., 2020) estimate \(J(\pi)\) from \(\mathcal{D}\) by using importance weights to adjust for the distribution shift. Fitted Q-Evaluation (FQE) estimates \(Q^{\pi}\) with an off-policy RL algorithm (Le, Voloshin and Yue, 2019; Paine et al., 2020). Finally, model-based approaches estimate the underlying MDP with density estimation techniques (Sutton, 1991; Zhang et al., 2021). Existing OPE approaches often have difficulties with accurately estimating \(J(\pi)\) (Fu et al., 2021).
In particular, per-decision IS usually has prohibitively large estimation variance, FQE introduces its own hyperparameters that cannot be easily tuned offline, and model-based approaches can have great difficulty modelling the MDP in complex and high-dimensional settings. Doubly-robust and most marginal IS estimators use function approximation to reduce variance, but at the cost of introducing hyperparameter-tuning difficulties shared by FQE. The poor performance of empirical Bellman errors has led to several proposed alternatives. For example, the BErMin algorithm (Farahmand and Szepesvari, 2010) selects a candidate Q-function whose MSBE attains the minimum MSBE in the candidate set up to a multiplicative constant, plus terms that converge to zero asymptotically. As in SBV, BErMin requires the use of a regression algorithm to estimate the Bellman error. Unlike SBV, however, BErMin also requires partitioning the data into fourths as well as a tight upper bound on the excess risk of the regression algorithm (Vapnik, 1998), and empirical performance was never evaluated. The BVFT algorithm (Xie and Jiang, 2021; Zhang and Jiang, 2021) takes advantage of several theoretical properties of piecewise-constant projections, including the fact that an \(L_{2}\) piecewise-constant projection of the Bellman operator will still be an \(L_{\infty}\) contraction with the same fixed point under restrictive conditions. Instead of trying to estimate the Bellman error directly, BVFT calculates a related criterion that has its own theoretical guarantees (see Appendix E.2 for details). The ModBE algorithm (Lee et al., 2022) compares candidate model classes by running FQI using one model class and then using the fixed-target empirical Bellman errors minimized at every iteration to evaluate alternative classes. In contrast to our work, ModBE performs OMS based on fixed-target Bellman errors, not variable-target Bellman errors, and can only compare between nested model classes (see Appendix E.3 for details). ## 4 Supervised Bellman Validation To understand our algorithm, consider first the case where \(Q^{*}(s,a)\) is actually **known** for state-action pairs \((s,a)\in\mathcal{D}\) and we wish to evaluate candidates \(Q_{m},1\leq m\leq M\) based on how well they estimate \(Q^{*}\). An obvious criterion in this case would be the mean squared error (MSE): \[\mathbb{E}_{(s,a)\sim P^{\mu}}\left[\left(Q^{*}(s,a)-Q_{m}(s,a)\right)^{2}\right] \tag{4}\] While \(P^{\mu}\) is unknown, we can still estimate the expectation in Equation 4 by randomly partitioning \(80\%\) of the trajectories present in \(\mathcal{D}\) into a training set \(\mathcal{D}_{T}\) and reserving the remaining \(20\%\) of trajectories as a validation set \(\mathcal{D}_{V}\). We would then generate candidates \(\mathcal{Q}=\{Q_{1},...,Q_{M}\}\) by running DQL algorithms on \(\mathcal{D}_{T}\) with \(M\) different hyperparameter configurations, and use \(\mathcal{D}_{V}\) to estimate the MSE for each \(Q_{m}\) as \(\mathbb{E}_{\mathcal{D}_{V}}\left[\left(Q^{*}(s,a)-Q_{m}(s,a)\right)^{2}\right]\). Typically, the targets \(Q^{*}(s,a),(s,a)\in\mathcal{D}\) are not known: this is what separates supervised learning from reinforcement learning. Instead of using a criterion based on Equation 4, our algorithm, _Supervised Bellman Validation (SBV)_, uses a surrogate criterion based on the MSBE (Equation 2). The relationship between estimation error and Bellman error is discussed more in Section 5.
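For concreteness, this supervised analogue reduces to a trajectory-level split plus a held-out MSE; a minimal sketch, with illustrative helper names and a hypothetical `q_star` oracle (neither is from the paper's codebase), might look as follows:

```python
import numpy as np

def split_trajectories(trajectories, rng, train_frac=0.8):
    # Partition whole trajectories (not individual transitions) into D_T and D_V.
    idx = rng.permutation(len(trajectories))
    cut = int(train_frac * len(trajectories))
    return [trajectories[i] for i in idx[:cut]], [trajectories[i] for i in idx[cut:]]

def validation_mse(q_fn, val_pairs, q_star):
    # q_fn / q_star: callables mapping a state to a vector of per-action values.
    preds = np.array([q_fn(s)[a] for s, a in val_pairs])
    targets = np.array([q_star(s)[a] for s, a in val_pairs])
    return float(np.mean((targets - preds) ** 2))  # estimates Equation 4
```

SBV keeps this exact structure and only swaps the unavailable targets \(Q^{*}(s,a)\) for estimated Bellman backups.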
Similar to the supervised learning case, SBV creates a training set \(\mathcal{D}_{T}\) and a validation set \(\mathcal{D}_{V}\) by randomly partitioning trajectories from \(\mathcal{D}\), and trains \(M\) Q-functions \(\mathcal{Q}=\{Q_{1},...,Q_{M}\}\) on \(\mathcal{D}_{T}\). Note that the MSBE actually contains **two** unknown quantities: the population density \(P^{\mu}\), and the \(M\) Bellman backup functions \(\mathcal{B}^{*}Q_{m},1\leq m\leq M\). We can see from Equation 1 that each \((\mathcal{B}^{*}Q_{m})(s,a)\) is just a conditional expectation. Therefore, the \(M\) Bellman backup functions can be estimated by running \(M\) regression algorithms on \(\mathcal{D}_{T}\), with the \(m\)th such algorithm estimating \(\mathcal{B}^{*}Q_{m}\) by fitting function \(f\) to minimize: \[\mathbb{E}_{\mathcal{D}_{T}}\left[\left(r+\gamma\max_{a^{\prime}}Q_{m}(s^{\prime},a^{\prime})-f(s,a)\right)^{2}\right] \tag{5}\] We refer to Equation 5 as the _Bellman backup MSE_ of \(Q_{m}\). Denote the fitted models from our regression algorithms as \(\widehat{\mathcal{B}}^{*}Q_{1},...,\widehat{\mathcal{B}}^{*}Q_{M}\). The MSBE for each candidate \(Q_{m}\) can then be estimated as \(\mathbb{E}_{\mathcal{D}_{V}}[(Q_{m}(s,a)-(\widehat{\mathcal{B}}^{*}Q_{m})(s,a))^{2}]\). Our algorithm is summarized in Algorithm 1. Here \(H_{m}\) fully specifies an algorithm and its relevant hyperparameters for estimating \(Q^{*}\). For DQL, this would include the training algorithm (e.g. double DQN (Hasselt, Guez and Silver, 2016) or dueling DQN (Wang et al., 2016)), the Q-network architecture and the number of training iterations. While the RL algorithm can be tuned via SBV, the regression algorithm that SBV employs to estimate the relevant Bellman backups must itself be tuned. Fortunately, regression algorithms can easily be tuned offline using MSE on a held-out validation set, which in this case would be \(\mathcal{D}_{V}\). For example, Algorithm A.1 extends Algorithm 1 to tune the regression algorithm, separately for each Bellman backup that is estimated. For complex control problems where deep RL is required, it is a good idea to estimate each \(\mathcal{B}^{*}Q_{m},1\leq m\leq M\) with a neural network approximator. For computational efficiency, the same architecture and training configuration can be used for estimating all Bellman backups. In such cases, we will refer to the neural network used by SBV as the _Bellman network_. For example, Algorithm A.2 provides a computationally efficient implementation of SBV for tuning the number of training iterations used by DQN. ## 5 Theoretical Results We begin by investigating the theoretical properties of Bellman errors, empirical Bellman errors and SBV. We assume \(|\mathcal{S}|<\infty\) here; extensions that allow \(\mathcal{S}\) to be uncountable (as well as relevant mathematical proofs) can be found in Appendix B. For any Q-function \(Q\), density \(P\) of state-action pairs and dataset of transitions \(\mathcal{S}=\{(s,a,r,s^{\prime})\}\), let \(||Q||_{P}^{2}=\mathbb{E}_{(s,a)\sim P}[Q(s,a)^{2}]\) and \(||Q||_{\mathcal{S}}^{2}=|\mathcal{S}|^{-1}\sum_{(s,a)\in\mathcal{S}}Q(s,a)^{2}\). Define the empirical Bellman backups for Q-function \(Q\) and dataset \(\mathcal{S}\) as \((\mathcal{B}_{\mathcal{S}}Q)(s,a)=r+\gamma\max_{a^{\prime}}Q(s^{\prime},a^{\prime}),\ (s,a,r,s^{\prime})\in\mathcal{S}\).
For example, the MSBE (Equation 2) and the EMSBE (Equation 3) can be re-written as \(||Q_{m}-\mathcal{B}^{*}Q_{m}||_{P^{\mu}}^{2}\) and \(||Q_{m}-\mathcal{B}_{\mathcal{D}}Q_{m}||_{\mathcal{D}}^{2}\), respectively. We first derive \(L_{\infty}\) bounds based on the Bellman error in Proposition 5.1. These bounds imply that a candidate value model \(Q_{m}\) with small Bellman error should be close to \(Q^{*}\) and have a greedy policy that gives high returns, and the same holds for estimates of the Bellman error provided the estimation is accurate. Because the Bellman operator is a \(\gamma\)-contraction in \(L_{\infty}\) space, the error and regret bounds presented in Proposition 5.1 are tight and simple to interpret. **Proposition 5.1**.: _Let \(\widehat{\mathcal{B}}^{*}Q_{m}\) be an estimate of \(\mathcal{B}^{*}Q_{m}\) and assume that \(||Q_{m}-\widehat{\mathcal{B}}^{*}Q_{m}||_{\infty}\leq\epsilon\) and \(||\widehat{\mathcal{B}}^{*}Q_{m}-\mathcal{B}^{*}Q_{m}||_{\infty}\leq\delta\). Then i) \(||Q_{m}-Q^{*}||_{\infty}\leq\frac{1}{1-\gamma}(\epsilon+\delta)\) and ii) \(||V^{\pi^{*}}-V^{\pi_{Q_{m}}}||_{\infty}\leq\frac{2}{(1-\gamma)^{2}}(\epsilon+\delta)\)._ Theorem 5.2 can be thought of as the analogue of Proposition 5.1 in \(L_{2}\) space. The derived bounds resemble those derived in Munos (2005) for fixed-target Bellman errors. Theorem 5.2 states that the candidate Q-function selected by the MSBE is guaranteed to be an accurate estimate of \(Q^{*}\) and have a high-performing greedy policy provided the MSBE of the selected policy is sufficiently small and the observed data covers the state-action space adequately. In other words, the MSBE upper bounds estimation error, lower bounds policy performance and is minimized uniquely at \(Q^{*}\). Moreover, even if the MSBE is unknown and estimated with error, the same results will hold for the estimated MSBE provided the estimation is sufficiently accurate. These properties constitute strong theoretical guarantees of the MSBE and justify its utility in OMS. **Theorem 5.2**.: _Assume \(P^{\mu}(s,a)\geq\psi\) for some \(\psi>0\) and all \((s,a)\in\mathcal{S}\times\mathcal{A}\). Let \(\hat{m}(Q_{m})\) be an estimate of \(||Q_{m}-\mathcal{B}^{*}Q_{m}||_{P^{\mu}}\) with absolute estimation error \(e(\hat{m}(Q_{m}))\) and assume that \(\hat{m}(Q_{m})\leq\epsilon\) and \(e(\hat{m}(Q_{m}))\leq\delta\). Then i) \(||Q_{m}-Q^{*}||_{P^{\mu}}\leq\frac{1}{\sqrt{\psi}(1-\gamma)}(\epsilon+\delta)\) and ii) \(J(\pi^{*})-J(\pi_{Q_{m}})\leq\frac{2}{\psi(1-\gamma)^{2}}(\epsilon+\delta)\)._ Theorem 5.2 also suggests a few reasons why the MSBE may have performed poorly in previous literature: (1) The MSBE was not estimated accurately; (2) The behavioral policy did not perform enough exploration and there was not sufficient diversity in the observed state-action pairs; (3) The evaluated Q-functions all had MSBE values that were too high. In Appendix B, we conduct a more thorough theoretical analysis of the MSBE and propose additional factors that could impact its performance. Note that the bounds on estimation error in Proposition 5.1 and Theorem 5.2 are tighter than the bounds on policy regret. This implies that Bellman errors are more closely associated with estimation error than with policy performance, and will primarily select high-quality policies by selecting accurate Q-functions. Let \(P^{\mathcal{D}}(s,a)\) be the proportion of state-action pairs in dataset \(\mathcal{D}\) equal to \((s,a)\).
When studying the theoretical performance of the EMSBE and SBV, we focus on the setting where \(|\mathcal{D}|=\infty\) or \(P^{\mathcal{D}}=P^{\mu}\). Proposition 5.3 is similar to theoretical results derived in previous work (e.g. Farahmand and Szepesvari (2010)) and states that the EMSBE is not equal to the true MSBE even with infinite samples unless the environment is deterministic. This implies that the EMSBE has estimation bias, with the degree of bias depending on the amount of noise in the MDP. **Proposition 5.3**.: _Assume that \(P^{\mathcal{D}}=P^{\mu}\). Then:_ \[||Q_{m}-\mathcal{B}_{\mathcal{D}}Q_{m}||_{\mathcal{D}}^{2}-||Q_{m}-\mathcal{ B}^{*}Q_{m}||_{P^{\mu}}^{2}=\mathbb{E}_{(S_{t},A_{t})\sim P^{\mu}}\left\{ \text{Var}\left[R_{t}+\gamma\max_{a^{\prime}\in\mathcal{A}}Q_{m}(S_{t+1},a^{ \prime})|S_{t},A_{t}\right]\right\}.\] In contrast, Proposition 5.4 states that SBV (Algorithm 1) correctly recovers the true MSBE in the asymptotic case, suggesting that it has the potential to reduce estimation bias of the EMSBE. The proof is straightforward and relies on the well-known theoretical result that the population minimizer of the MSE is the conditional expectation of the targets (Hastie, Tibshirani and Friedman, 2009). In the case of the Bellman backup MSE \(||\mathcal{B}_{\mathcal{D}_{T}}Q_{m}-f||_{\mathcal{D}_{T}}^{2}\), it is easy to see that this is equal to the Bellman backup function. **Proposition 5.4**.: _Assume \(P^{\mathcal{D}_{V}}=P^{\mathcal{D}_{T}}=P^{\mu}\) and \(\widehat{\mathcal{B}}^{*}Q_{m}=\text{argmin}_{f}||\mathcal{B}_{\mathcal{D}_{ T}}Q_{m}-f||_{\mathcal{D}_{T}}^{2}\). Then \(||Q_{m}-\widehat{\mathcal{B}}^{*}Q_{m}||_{\mathcal{D}_{V}}^{2}=||Q_{m}- \mathcal{B}^{*}Q_{m}||_{P^{\mu}}^{2}\)._ ## 6 Empirical Results ### Case Study: Toy Environment To study the empirical properties of the MSBE (Equation 2), EMSBE (Equation 3) and SBV algorithm (Algorithm 1), we consider a simple stochastic 4-state MDP with the candidate set consisting of \(Q^{*}\) as well as \(29\) Q-functions generated using ridge-regularized polynomial FQI on a small offline dataset (see Appendix C.1 for details). In Figure 1, we plot the returns of our various Q-functions and group Q-functions by their MSBE values. MSBE values greater than that of the zero function are considered "high" while estimates with MSBE values close to zero are considered "low". We can see that as the MSBE decreases, the floor of the observed return distribution increases and returns get more concentrated around the optimal return. These empirical findings are in-line with Theorem 5.2, verifying that Bellman errors lower bound the expected return. We can see from Figure 1 that the Spearman correlation between the MSBE and returns is imperfect, but this does not preclude the MSBE from selecting high-performing policies. Because high Spearman correlation is not necessary for OMS, we do not focus on this metric for our experiments. We can also see from Figure 1 that among the high MSBE Q-functions, the Q-function with smallest MSBE only has return \(0.185\), while the best Q-function still has a return of \(0.953\). The issue is that the MSBE values are all too high to be informative. However, once the Q-functions with low MSBE are included, the top Q-functions selected by the MSBE all have returns very close to that of the optimal policy. These results imply that the MSBE will be effective for OMS if our candidate set contains Q-functions with sufficiently low MSBE (again in-line with Theorem 5.2). 
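The bias identity in Proposition 5.3 can be checked numerically. Below is a minimal sketch, assuming a toy setup of our own construction (a single state-action pair with terminal transitions and Gaussian rewards, so that \(\mathcal{B}^{*}Q(s,a)=\mu\)); the gap between the EMSBE and the true MSBE then equals the reward variance exactly:

```python
import numpy as np

# One state-action pair, terminal transitions, reward R ~ N(mu, sigma^2).
# Then B*Q(s, a) = mu, so the true squared Bellman error is (Q - mu)^2,
# while the EMSBE averages (r - Q)^2 over sampled rewards.
rng = np.random.default_rng(1)
mu, sigma, q_value = 1.0, 0.5, 0.7
rewards = rng.normal(mu, sigma, size=1_000_000)  # sampled empirical backups

emsbe = np.mean((rewards - q_value) ** 2)  # empirical MSBE (Equation 3)
msbe = (mu - q_value) ** 2                 # true MSBE (Equation 2)
print(emsbe - msbe)                        # ~ sigma^2 = 0.25, per Proposition 5.3
```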
The noise in the MDP dynamics is controlled by a stochasticity parameter \(\phi\), where \(\phi=0\) corresponds to a deterministic MDP. We set \(\phi=0.25\) for generating Figure 1. We then generated offline datasets for different values of \(\phi\), and generated our candidate set similarly to before. For each candidate \(Q_{j},1\leq j\leq 30\), SBV estimated \(\mathcal{B}^{*}Q_{j}\) using polynomial ridge regression with hyperparameters tuned to minimize Bellman backup MSE on the validation set, as discussed in Algorithm A.1. We compared SBV to a modified version of the EMSBE that uses only the validation set (using the full dataset for the EMSBE yielded worse performance). From Figure 2, we can see that the EMSBE's performance declines rapidly as noise increases, while the performance of SBV remains stable. Relative to the EMSBE, SBV is more robust to environment stochasticity and reduces bias, in-line with Propositions 5.3 and 5.4. ### Robotics and Healthcare Environments We next assessed the empirical performance of SBV (Algorithm 1) on two well-known discrete control problems: The bicycle balancing control problem (Randlov and Alstrom, 1998) and the mobile health (mHealth) control problem (Luckett et al., 2020). These environments were chosen due to their diverse characteristics: the Bicycle MDP has highly nonlinear transition dynamics, sparse rewards and little environmental noise, and is typically associated with larger offline datasets. In contrast, the mHealth MDP has simple transition dynamics, dense rewards and a large amount of environmental noise, and is typically associated with very small offline datasets. In addition to comparing SBV to validation EMSBE (Equation 3), we also compared our algorithm to weighted per-decision importance sampling (WIS) (Precup, Sutton and Singh, 2000) and Fitted Q-Evaluation (FQE) (Le, Voloshin and Yue, 2019): WIS is one of the few OPE algorithms that can tune its hyperparameters offline, while FQE has achieved state-of-the-art performance in terms of model-free OMS (Fu et al., 2021). See Appendix A.2 for mathematical details on these three baselines. As doubly-robust and marginal IS estimators tend to suffer from large estimation variance like WIS (Xie, Ma and Wang, 2019) or have hyperparameters that cannot be easily tuned offline like FQE (Thomas and Brunskill, 2016; Yang et al., 2020), we conjectured that the problems observed with our selected OPE benchmarks would also be observed for these IS estimators. There are also many lesser-known OMS algorithms (e.g. Gulcehre et al. (2020); Xie and Jiang (2021); Lee et al. (2022)) which we do not evaluate due to computational and time constraints. A more comprehensive comparison with many additional benchmarks and methods would be an important avenue for future work. For the Bicycle control problem, we generated 10 offline datasets consisting of \(240\) episodes of \(500\) time steps each and our candidate Q-functions were primarily random forest functions fit using FQI, following Ernst, Geurts and Wehenkel (2005). For the mHealth control problem, we generated 10 offline datasets consisting of \(30\) episodes of \(25\) time steps each, following Luckett et al. (2020), and candidate Q-functions were primarily polynomial functions fit by Least-Squares Policy Iteration (LSPI) (Lagoudakis and Parr, 2003). When implementing SBV, each Bellman backup function was estimated using a different regression algorithm tuned to minimize MSE on the validation set, in-line with Algorithm A.1.
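For concreteness, the core loop of Algorithm 1 can be sketched in a few lines. This is a minimal illustration under assumed data structures, not the released implementation; a random forest stands in for whichever regression algorithm validation MSE would actually select for a given backup (cf. Algorithm A.1):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def sbv_scores(candidates, train, val, gamma):
    """Estimate the MSBE of each candidate Q-function (Algorithm 1, sketched).

    candidates: callables mapping a batch of states to (n, |A|) value arrays;
    train / val: dicts of transition arrays from the trajectory-level split.
    All names are illustrative stand-ins for the paper's data format.
    """
    x_train = np.concatenate([train["s"], train["a"][:, None]], axis=1)
    x_val = np.concatenate([val["s"], val["a"][:, None]], axis=1)
    scores = []
    for q_fn in candidates:
        # Regression targets: empirical backups r + gamma * max_a' Q(s', a')
        y_train = train["r"] + gamma * q_fn(train["s_next"]).max(axis=1)
        backup_hat = RandomForestRegressor(n_estimators=100).fit(x_train, y_train)
        # Validation estimate of || Q_m - B*Q_m ||^2 on held-out transitions
        q_val = q_fn(val["s"])[np.arange(len(val["a"])), val["a"]]
        scores.append(float(np.mean((q_val - backup_hat.predict(x_val)) ** 2)))
    return scores  # select the candidate with the smallest score
```

Fitting the regressor on single-sample targets is what averages away the noise that biases the EMSBE (cf. Proposition 5.4).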
More details on our environments and experimental setup can be found in Appendix C.

Figure 1: Returns vs. MSBE. Each data point represents a Q-function and return of its greedy policy. Q-functions are grouped by the size of their MSBE. The vertical bars represent the range of observed returns within each category. As the MSBE decreases, this range decreases and returns get more concentrated around the optimal return.

Figure 2: Performance vs. Stochasticity. The environment is deterministic when the stochasticity parameter is zero, and becomes noisier as the stochasticity parameter increases.

Results are given in Figure 3. For the bicycle datasets, WIS gives an identical estimate of zero for almost all policies: The estimation variance of WIS makes it difficult to account for long-term consequences of actions and rewards that occur far away from the initial state, and for the bicycle datasets, a non-zero reward is usually only observed after nearly 100 time steps (see Appendices A.2 and C.2 for more details). On the other hand, the EMSBE performs well as there is only a small amount of noise in the MDP. For the mHealth datasets, rewards are dense and long-term consequences of actions are less important, but the MDP is noisier. Therefore, WIS performs much better, while EMSBE performs much worse. Only SBV performs well on both environments. Following previous work (Fu et al., 2021), we also compared performance based on Spearman correlation in Figure G.2, and based on max top-\(k\) policy value for varying values of \(k\) in Figure G.3. We chose to focus on mean top-3 policy value instead of top-1 policy value as the former relies on more than a single Q-function, thus providing a more stable and robust measure of OMS performance. In this case, however, looking at top-1 policy values instead still yields similar conclusions (see Figure G.3). SBV only requires a regression algorithm, and its hyperparameters can easily be tuned offline with cross-validation. In contrast, FQE requires an offline RL training algorithm to estimate the action-value function, and tuning this algorithm's hyperparameters offline is not nearly as straightforward. This makes it very difficult to compare FQE to competitors, as its performance will depend entirely on the arbitrary choice of what algorithm we use to estimate the action-value function. For example, in Table 1, we find that FQE performance varies greatly depending on the algorithm utilized for estimating the action-value function. We can also see from Table 1 that FQE is biased towards \(Q^{*}\) estimation algorithms similar to its own training algorithm (FQE chose its own training algorithm as the best training algorithm for estimating \(Q^{*}\) in three out of four cases). More details about these hyperparameters can be found in Appendix C. While FQE does perform well with the right training algorithm, we would not need OMS in the first place if we knew in advance which offline RL training algorithm performed best. ### High-Dimensional Atari Environments Finally, we evaluated SBV (Algorithm 1) on 12 offline DQN-Replay datasets (Agarwal, Schuurmans and Norouzi, 2020), corresponding to three seeds each for four Atari games: Pong, Breakout, Asterix and Seaquest. Atari games have high-dimensional state spaces, making them more challenging than previous environments evaluated so far. We chose to focus on these four games in particular as they have received more attention in recent literature (Agarwal, Schuurmans and Norouzi, 2020; Kumar et al., 2020).
The performance of DQN is also sensitive to the number of training iterations for most of these games, making OMS more challenging. As in Section 6.2, we also evaluated validation EMSBE, WIS and FQE.

\begin{table}
\begin{tabular}{l l l l}
\hline \hline Dataset & FQE Training Algorithm & Top-3 Policy Value & Top-Ranked Estimator for \(Q^{*}\) \\
\hline \multirow{2}{*}{**mHealth**} & Quadratic LSPI, \(\lambda=100\) & 0 & Quadratic LSPI, \(\lambda=100\) \\
 & Quadratic LSPI, \(\lambda=0\) & 0.984 & Quadratic LSPI, \(\lambda=0\) \\
\hline \multirow{2}{*}{**Bike**} & FQI, \(n_{\min}=625,m_{\text{try}}=5\) & 0.239 & FQI, \(n_{\min}=1,m_{\text{try}}=1\) \\
 & FQI, \(n_{\min}=5,m_{\text{try}}=3\) & 0.878 & FQI, \(n_{\min}=5,m_{\text{try}}=3\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Top Policies According to FQE. Note that FQE performance (top-3 policy value) is sensitive to its training algorithm. As FQE cannot tune its hyperparameters offline, this sensitivity precludes it from being a practical OMS algorithm. SBV doesn't have this problem because it can be tuned via cross-validation.

Figure 3: Mean Top-3 Policy Values. For each dataset and method, we calculated the mean policy value of the top-3 policies and standardized to \([0,1]\). Solid bars show the mean and error bars show the std of this metric across datasets. Only SBV performs well on both environments.

Following Agarwal et al. (2020), we performed uniform sub-sampling to obtain 12 training and validation datasets with 10M and 2.5M transitions each, respectively. We used two training configurations for DQN: a _shallow_ configuration that uses the "DQN (Adam)" setup from Agarwal et al. (2020), and a _deep_ configuration that uses a deeper architecture and a slower target update frequency. For each training configuration, we ran DQN for 50 iterations (one iteration = 640k gradient steps) and evaluated the Q-network after each iteration. This resulted in evaluating 100 Q-functions for each Atari dataset. Unlike in previous experiments, the same Bellman network training configuration was used by SBV to estimate all Bellman backup functions,2 and was tuned offline so as to minimize validation error across Bellman backups and datasets. The Bellman network (see Section 4) incorporates prevalent design choices for image classification such as batch normalization (Ioffe and Szegedy, 2015), skip connections (He et al., 2016) and squeeze-and-excitation units (Chollet, 2017). The same architecture was also used to estimate the behavioral policy for WIS. See Appendix D for more details on our experimental setup.

Footnote 2: For Pong datasets, we used a simpler Bellman network and only evaluated the shallow Q-networks to speed up experiments.

From Table 2, we can see that SBV consistently performs better than competing methods with respect to its top-5 selected policies. Moreover, from Figure 4, we can see that SBV performs nearly as well as or better than no early stopping for all environments. Note that the same cannot be said for WIS and the EMSBE. This suggests that SBV is a more robust early stopping procedure than competing baselines. Results were similar for the other datasets (Figure G.5). While we only looked at the top-3 policies in Section 6.2, we chose to look at the top-5 policies here as the total number of Q-functions being evaluated was much higher. We also compared performance based on max top-\(k\) policy values in Figure G.4 and obtained similar conclusions.
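The Bellman network combines standard image-model components; a plausible building block, sketched below, would pair a residual convolution block with batch normalization and a squeeze-and-excitation unit. This is an assumed composition of the named design choices, not the architecture from the released code:

```python
import torch
import torch.nn as nn

class SEResidualBlock(nn.Module):
    """Residual conv block with batch norm and squeeze-and-excitation (a sketch
    of the design choices named in the text, not the paper's exact network)."""

    def __init__(self, channels: int, se_ratio: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.se = nn.Sequential(  # squeeze-and-excitation: global pool + gating
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // se_ratio, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // se_ratio, channels, 1),
            nn.Sigmoid(),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.body(x)
        h = h * self.se(h)      # channel-wise re-weighting
        return self.act(x + h)  # skip connection
```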
\begin{table}
\begin{tabular}{l l l l l}
\hline \hline Method & Pong & Breakout & Asterix & Seaquest \\
\hline WIS (Precup, Sutton and Singh, 2000) & 66\% (45-90\%) & 37\% (34-39\%) & 43\% (37-55\%) & 24\% (13-34\%) \\
EMSBE (Equation 3) & 87\% (77-98\%) & 64\% (43-77\%) & 60\% (51-67\%) & 47\% (44-52\%) \\
SBV (Ours) & **95\% (93-98\%)** & **81\% (73-90\%)** & **69\% (62-74\%)** & **65\% (60-71\%)** \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Standardized Top-5 Policy Mean Returns. A mean return of 0% (100%) for a dataset implies that the method chose the worst (best) five policies possible on the given dataset. The average and range of these mean returns across datasets is displayed.

Figure 4: Learning Curves from the Best Configuration for Four Datasets. Returns are standardized to \([0,1]\). The dashed horizontal line represents performance with no early stopping. The vertical lines represent the iterations where training was stopped according to different methods. SBV performs nearly as well as or better than no early stopping for all games. However, the same cannot be said for WIS and the EMSBE.

The tricks we employed to speed up computations involving SBV hindered us from calculating Spearman correlations with policy returns (see Appendix D.2), but as discussed in Section 6.1, this metric is not as important for OMS anyway. Due to the computational cost of FQE and the difficulty of tuning its hyperparameters offline, we only applied FQE to a single Seaquest dataset using the shallow DQN training algorithm (modified to perform policy evaluation instead of optimization). We found that FQE failed to identify Q-functions from the deep configuration as superior to those from the shallow configuration: the top-5 policies selected by FQE were all from the shallow configuration and had a standardized mean return of \(34.3\%\), whereas the top-5 policies selected by SBV were all from the deep configuration
In real-world settings, we can ensure that our candidate set contains Q-functions with sufficiently small MSBE by exploring a large number of offline RL training configurations or by exploring configurations which are known to achieve low Bellman error in many scenarios. In Appendix F.2, we discuss how we exploited similarities between DQL and SBV to achieve Q-functions with low Bellman error on Atari without having to explore a large number of DQL configurations. A worthwhile direction for future work would be developing new RL training algorithms that achieve smaller Bellman error than traditional off-policy RL algorithms, so as to improve effectiveness of the MSBE in OMS. Our work also opens up many other avenues for future research. For example, successfully applying SBV to more environments from the Atari benchmark would further demonstrate its potential utility in OMS. This will likely require improving the Bellman network as well as exploring DQL training algorithms that yield smaller Bellman error on the other Atari games. Moreover, while we used Bellman errors to assess estimators of \(Q^{*}\) here, they could also be used to assess estimators of other action-value functions (Appendix F.1), such as those used by FQE. Combining both SBV and FQE in a computationally feasible manner is therefore another important avenue for future research. Other worthwhile avenues for future work include extending Bellman errors to tune actor-critic algorithms for when the action space is continuous, extending Bellman errors to incorporate behavior regularization (Fujimoto, Meger and Precup, 2019) for when state-action coverage is restricted and deriving finite-sample error bounds for SBV. Finally, we believe that more theoretical and empirical investigations are needed to fully understand when Bellman errors are effective and how to maximize their performance in practice. ## 8 Conclusions In this work, we proposed a new algorithm to accurately estimate the MSBE that was effective at selecting high-performing policies across diverse offline datasets, from small simulated clinical trials to large-scale Atari datasets. Tuning SBV's regression algorithm to minimize validation MSE was critical here: This allowed SBV to choose different regression algorithms (e.g. linear models, trees, neural networks) based on what was ideal for a given offline dataset and achieve high estimation accuracy on a diverse set of tasks. In addition to demonstrating the potential utility of our proposed algorithm and of the MSBE more generally, we also investigated which factors were most predictive of Bellman error performance, developing guidelines on how to improve Bellman error performance in practice and proposing promising avenues for future work in the process. Overall, we believe that our paper challenges current beliefs regarding the utility of Bellman errors for offline model selection and will help shape future research in offline RL. ## Acknowledgement We thank Google Cloud for providing us with \(\$20,000\) worth of GCP credits to assist us in running our experiments. We thank George Tucker and Bo Dai for early review of the paper and helpful feedback. Finally, we thank Cameron Voloshin for helpful discussions. 
## Author Contributions JPZ (primary author) conceived the project, developed the methodology, and took the lead on deriving and executing the empirical experiments, writing and documenting code that implements SBV and competing methodology efficiently, deriving and proving theoretical results, and writing the initial draft of the paper. DM assisted in executing empirical experiments on Atari and making efficient implementations of SBV and competing methods. RA advised and supervised the project, assisted with deriving experiments, and took the lead on reviewing and providing critical feedback of the paper. MRK (senior author) advised and supervised the project, assisted with deriving and proving theoretical results and assisted with editing the paper.
2309.14246
Learning Risk-Aware Quadrupedal Locomotion using Distributional Reinforcement Learning
Deployment in hazardous environments requires robots to understand the risks associated with their actions and movements to prevent accidents. Despite their importance, these risks are not explicitly modeled by currently deployed locomotion controllers for legged robots. In this work, we propose a risk sensitive locomotion training method employing distributional reinforcement learning to consider safety explicitly. Instead of relying on a value expectation, we estimate the complete value distribution to account for uncertainty in the robot's interaction with the environment. The value distribution is consumed by a risk metric to extract risk sensitive value estimates. These are integrated into Proximal Policy Optimization (PPO) to derive our method, Distributional Proximal Policy Optimization (DPPO). The risk preference, ranging from risk-averse to risk-seeking, can be controlled by a single parameter, which makes it possible to adjust the robot's behavior dynamically. Importantly, our approach removes the need for additional reward function tuning to achieve risk sensitivity. We show emergent risk sensitive locomotion behavior in simulation and on the quadrupedal robot ANYmal. Videos of the experiments and code are available at https://sites.google.com/leggedrobotics.com/risk-aware-locomotion.
Lukas Schneider, Jonas Frey, Takahiro Miki, Marco Hutter
2023-09-25T16:05:32Z
http://arxiv.org/abs/2309.14246v2
# Learning Risk-Aware Quadrupedal Locomotion using Distributional Reinforcement Learning ###### Abstract Deployment in hazardous environments requires robots to understand the risks associated with their actions and movements to prevent accidents. Despite their importance, these risks are not explicitly modeled by currently deployed locomotion controllers for legged robots. In this work, we propose a risk sensitive locomotion training method employing distributional reinforcement learning to consider safety explicitly. Instead of relying on a value expectation, we estimate the complete value distribution to account for uncertainty in the robot's interaction with the environment. The value distribution is consumed by a risk metric to extract risk sensitive value estimates. These are integrated into Proximal Policy Optimization (PPO) to derive our method, Distributional Proximal Policy Optimization (DPPO). The risk preference, ranging from risk-averse to risk-seeking, can be controlled by a single parameter, which makes it possible to adjust the robot's behavior dynamically. Importantly, our approach removes the need for additional reward function tuning to achieve risk sensitivity. We show emergent risk sensitive locomotion behavior in simulation and on the quadrupedal robot ANYmal. ## I Introduction Legged robots can traverse rugged terrain inaccessible to wheeled or tracked systems. They can overcome sloped and uneven terrain, stones, stairs, and even large gaps by carefully choosing their foot placement. This versatility makes them ideal for operation in hazardous environments, such as cave systems [9, 49], forests [15, 14], or the surfaces of other planets [12, 22]. In these situations, the safe operation of the robot is paramount, as failure could result in hardware damage or mission failure. Recent developments in model-free deep RL [40, 39] have enabled legged robots to traverse difficult terrain reliably [24, 30]. While these systems are robust against perception failures, a notable limitation remains: they do not account for risk explicitly. Encoding behavior, such as avoiding dangerous obstacles or reducing locomotion speed on rough ground, requires adaptation of the reward formulation. Such adaptations are undesirable as learning reliable locomotion already requires expensive and time-consuming reward function tuning. In this work, we overcome this limitation by adapting the RL algorithm to allow risk sensitive behavior to arise without changing the reward formulation. We adopt the distributional perspective on RL [4], learning the entire distribution of returns instead of just its scalar expectation. This value distribution models the intrinsic uncertainty of the agent's interaction with its environment. While estimating the full distribution was initially proposed for more accurate value estimates, it also captures useful information, such as the probability of catastrophic and high-return events. We use the value distribution in the actor-critic framework [2] by incorporating risk sensitive value estimates into Generalized Advantage Estimation (GAE) [38]. To extract these scalar value estimates, we employ a risk metric [27] that allows us to emphasize a specific part of the value distribution, for example, unlikely but catastrophic events. The learned locomotion policy accounts for risks explicitly.
Since the metric encodes a fixed risk preference based on a metric parameter, and no single risk preference is appropriate under all circumstances, we condition the policy on this parameter [47]. Conditioning the policy allows the adjustment of the robot's behavior to either seek or avoid risks on demand by an operator or a high-level planner. For example, it enables the safe teleoperation of the robot by an amateur in risk-averse mode while recovering the full range of capabilities in risk-neutral / seeking mode. Figure 1 visualizes the behavior of different risk sensitivities of a single policy.

Fig. 1: Our robot learns to adapt its locomotion behavior in risky situations. When commanded to walk up a large step, the risk-averse policy refuses while the risk-seeking policy complies. The risk parameter, controlling the value distribution distortion, can be adapted online during deployment. Right: respective risk-metric distorted value distribution per robot.

We thoroughly analyze the risk sensitive behavior arising from our method and study the effects of using different risk metrics and parameters in simulation. We further ablate algorithmic augmentations and deploy the trained policy to the quadrupedal ANYmal robot [20], demonstrating risk sensitive behavior based on the risk parameter when climbing steps of varying heights. Our contribution is threefold. Firstly, we integrate explicit risk modeling into the locomotion controller, a capability that currently deployed controllers do not possess. Secondly, we study the emergent risk sensitive locomotion behavior in simulation and on hardware. Lastly, on the algorithmic side, we explore introducing risk sensitivity into the RL formulation through risk sensitive advantage estimates extracted from the value distribution. ## II Related Work **Legged Locomotion** Legged locomotion is a thoroughly studied research area in robotics. Early successes have been achieved by model-based methods [20, 7]. However, due to their limited ability to generalize to more complex and unpredictable environments, learned methods have recently gained interest. Several deep RL algorithms have solved simulated legged locomotion tasks [40, 39, 25, 31]. Beyond that, RL policies learned in simulation have been successfully transferred to real-world scenarios, either through high simulation fidelity and data augmentation [36, 1, 30, 52, 46, 24, 21] or test-time adaptation [42]. **Distributional Reinforcement Learning** Dist. RL algorithms learn a value distribution, as defined by the recursive distributional Bellman Equation [4], which replaces the role of the value function in classical RL. Value-based Dist. RL algorithms [4, 10, 11, 53] use this value distribution for value iteration, commonly maximizing its expectation when comparing different actions. These methods differ primarily in how they represent the value distribution: as a categorical distribution [4], by its cumulative distribution at fixed quantiles [10], or at arbitrary quantiles [11, 53]. Generally, algorithms with more accurate representations achieve better results. To extend Dist. RL to continuous actions, the value distribution is used as the critic in an actor-critic architecture [2, 23, 26, 32, 41]. Several works explore Dist. RL theoretically [5, 44, 35, 34], use the additional information contained in the value distribution to improve exploration [28], or for risk sensitivity [11, 26, 47, 6, 43, 48]. Dist.
RL has been used for algorithm discovery [13] and as an AI racing agent in the Gran Turismo 7 game [51]. Further, Dist. RL control policies have been successfully employed in the real world: for robotic grasping in a lab environment [8], robot soccer [18], and stratospheric balloon navigation [3]. **Safe Reinforcement Learning** Safe RL is a thoroughly studied research area [17]. The work of Tang, Zhang, and Salakhutdinov [47] is most closely related to ours, learning a risk-averse policy with online adaptation of risk-aversion. They model the value function as a Gaussian distribution and integrate it into an actor-critic architecture. When updating the policy, they maximize the Conditional Value at Risk (CVaR) metric and condition the policy on the confidence level. Schubert et al. [37] extend this work to adjust risk sensitivity automatically. Further, DSAC is a state-of-the-art risk sensitive RL algorithm that integrates a value distribution into the maximum entropy objective of SAC [19]. Several other works explore risk metrics in Dist. RL with fixed risk parameters [11, 6, 43, 48, 8]. ## III Method Our goal is to learn a locomotion policy \(\pi_{\phi}(a|s;\beta)\) that maps the robot's state \(s\) to desired joint positions \(a\), is conditioned on a risk parameter \(\beta\), and has learned weights \(\phi\). The state consists of proprioceptive measurements (base velocity, projected gravity, joint positions and velocities), the previous actions, samples from a local height map around the robot, and the input commands (desired linear and angular base velocities). The risk parameter is a scalar value and depends on the metric used during training. We use the robot dynamics model from Rudin et al. [36], including learned actuator dynamics. An overview of the architecture can be found in Figure 2.

Fig. 2: Architecture overview. The critic learns to predict a value distribution, used in combination with a risk metric to update the policy. The policy is conditioned on the risk parameter. The risk parameter is part of the command, set by the operator.

**Distributional Proximal Policy Optimization (DPPO)** We base our RL algorithm on the PPO algorithm [39] as it shows superior performance for learning legged locomotion in our setup, compared to DDPG [25], TD3 [16], SAC [19], and DSAC [26] (Experiment A: Section IV-B). **Critic Representation** As the distributional critic, we use the Quantile Regression Deep Q-Network (QR-DQN) [10]. It parameterizes the value distribution as a uniform distribution supported on \(\{\theta_{1}(s),...,\theta_{N}(s)\}\), which can be written as \[Z_{\theta}(s)=\frac{1}{N}\sum_{i=1}^{N}\delta_{\theta_{i}(s)}, \tag{1}\] where \(\delta_{z}\) is a Dirac at \(z\in\mathbb{R}\). The support positions \(\theta_{i}(s)\) are state-dependent and predicted by the critic's neural network. **Critic Updates** To compute reward-to-go value targets, we rely on the sample-replacement strategy SR\((\lambda)\) introduced by Nam, Kim, and Park [32]. This technique extends the temporal-difference TD\((\lambda)\) method [45] for computing multi-step value targets to Dist. RL. As the name suggests, it computes value targets \(\mathcal{T}\,Z_{\theta}(s)\) as a mixture of sample distributions. It therefore starts with samples of the distribution at the final time-step \(Z_{\theta}(s_{T})\).
Then, going backwards in time (\(t=T-1,...,0\)), this sample distribution is discounted, shifted by the obtained reward \(r_{t}\), and a fraction \(1-\lambda\) of samples is replaced with samples of the distribution \(Z_{\theta}(s_{t})\). 1-step and \(N\)-step target distributions are recovered by setting \(\lambda=0\) and \(\lambda=1\), respectively. The critic is updated by minimizing the energy distance \[\mathcal{L}=2\,\mathbb{E}_{i,j}\left|\theta_{i}-\mathcal{T}\,\theta_{j}\right|-\mathbb{E}_{i,j}\left|\mathcal{T}\,\theta_{i}-\mathcal{T}\,\theta_{j}\right|-\mathbb{E}_{i,j}\left|\theta_{i}-\theta_{j}\right| \tag{2}\] between the SR\((\lambda)\) target distribution and samples from the predicted critic distribution \(Z_{\theta}(s)\). **Rewards** We adapt the rewards from Rudin et al. [36], which primarily reward command tracking and penalize colliding / expending energy. We introduce an alive reward and modify the collision penalties to be impact force-dependent. Further, the joint motion and torque penalties are increased based on a curriculum. **Risk Quantification** To quantify the risk associated with different states, we apply a risk metric to the value distribution \(Z_{\theta}(s)\), as depicted in Figure 3. We thereby follow Majumdar and Pavone [27] who argue for using distortion risk metrics, a class of metrics with properties sensible for robotics applications. Specifically, we use the Wang [50] metric, which encodes a subjective measure of risk. We further use Conditional Value at Risk (CVaR) [33], which measures the expected worst-case return (beyond a confidence level). For the distribution produced by the QR-DQN, we compute these metrics as the expectation under a distortion \(g(\tau)\) of the distribution supports [26]. We denote the quantile fractions of the distribution as \((\tau_{i})_{i=0,...,N}\) with \(\tau_{i}=\frac{i}{N}\). The distorted expectation is computed as \[V_{\beta}(s)=\int_{0}^{1}g^{\prime}_{\beta}(\tau)Z^{\tau}_{\theta}(s)d\tau=\sum_{k=1}^{N}(g_{\beta}(\tau_{k})-g_{\beta}(\tau_{k-1}))\theta_{k}(s) \tag{3}\] where we denote the \(\tau\)-quantile of the value distribution as \(Z^{\tau}_{\theta}(s)\). We distort the quantile fractions for CVaR as \[g^{\text{CVaR}}_{\beta}(\tau)=\min\left(\frac{\tau}{\beta},1.0\right) \tag{4}\] where \(\beta\) is a scalar risk parameter. We choose \(\beta=1\) for risk-neutrality and \(0<\beta<1\) for risk aversion. The CVaR metric, in effect, computes the expectation of the tail of the value distribution, where \(\beta\) decides the cutoff. For the Wang metric, we compute the distorted quantile fractions as \[g^{\text{Wang}}_{\beta}(\tau)=\Phi(\Phi^{-1}(\tau)+\beta) \tag{5}\] where \(\Phi\) is the standard normal distribution and \(\beta\) is, again, the scalar risk parameter. We choose \(\beta=0\) for risk-neutrality, \(\beta>0\) for risk aversion, and \(\beta<0\) for risk affinity. For the CVaR metric, we train the policy on risk parameters in the range \(\beta\in[0,1]\) while for the Wang metric, we use \(\beta\in[-1.5,1.5]\). We find that the risk sensitive behavior generalizes well beyond the parameter bounds used during training. That is, using \(\beta\) outside the training range results in stable but more risk-averse or risk-affine behavior.
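Equations 3-5 amount to re-weighting the quantile supports by first differences of the distortion function. Below is a minimal sketch of that computation (using SciPy's normal CDF for \(\Phi\), with the CVaR branch assuming \(\beta\in(0,1]\)), not the implementation deployed on the robot:

```python
import numpy as np
from scipy.stats import norm

def distorted_value(theta, beta, metric="wang"):
    """Risk-distorted expectation over QR-DQN supports (Equations 3-5).

    theta: the N quantile supports theta_1(s), ..., theta_N(s).
    A sketch of the computation described in the text, not the paper's code.
    """
    theta = np.sort(np.asarray(theta))  # theta_k must match the quantile order
    n = len(theta)
    tau = np.arange(n + 1) / n          # quantile fractions tau_0, ..., tau_N
    if metric == "cvar":                # Equation 4; assumes beta in (0, 1]
        g = np.minimum(tau / beta, 1.0)
    elif metric == "wang":              # Equation 5; ppf(0)/ppf(1) = -/+inf is fine
        g = norm.cdf(norm.ppf(tau) + beta)
    else:
        raise ValueError(metric)
    return float(np.diff(g) @ theta)    # sum_k (g(tau_k) - g(tau_{k-1})) theta_k

theta = np.random.default_rng(0).normal(size=32)
print(distorted_value(theta, beta=1.0))  # Wang, risk-averse: below the sample mean
print(distorted_value(theta, beta=0.0))  # Wang, risk-neutral: ~ the sample mean
```

For the Wang metric, positive \(\beta\) shifts weight onto the lower quantiles, which matches the risk-averse convention stated above.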
**Policy Updates** We update the policy by maximizing the usual PPO clip-objective \[\mathcal{L}=\min\left(\frac{\pi_{\phi}(a|s;\beta)}{\pi_{\phi_{\text{old}}}(a|s;\beta)}A^{\pi_{\phi_{\text{old}}}}(s,a;\beta),\,g\left(\epsilon,A^{\pi_{\phi_{\text{old}}}}(s,a;\beta)\right)\right) \tag{6}\] \[\text{where}\ \ \ g(\epsilon,A)=\begin{cases}(1+\epsilon)A,&\text{if }A\geq 0;\\ (1-\epsilon)A,&\text{if }A<0.\end{cases}\] When computing the advantages \(A^{\pi}(s_{t},a_{t};\beta)\), we replace the value estimates with our risk sensitive value estimates. To estimate the advantage, we use a truncated version of GAE [38] motivated by Mnih et al. [31]. It approximates the advantage for each sample \((s_{t},a_{t},r_{t},s_{t+1})\) by rolling out the policy \(\pi(a|s;\beta)\) for \(T\) timesteps without looking beyond the \(T\)-th timestep. We compute truncated GAE estimates as \[A^{\pi}(s_{t},a_{t};\beta)=\sum_{l=0}^{T-t-1}(\lambda\gamma)^{l}\delta^{\beta}_{t+l} \tag{7}\] \[\text{where}\ \ \ \delta^{\beta}_{t}=r_{t}+\gamma V_{\beta}(s_{t+1})-V_{\beta}(s_{t})\] and \(\lambda\) is the GAE bias / variance trade-off hyperparameter.

Fig. 3: Application of a risk metric. The risk sensitivity selects how the value distribution is distorted. The module outputs the mean of the distorted distribution.

## IV Experiments We evaluate our method through a series of experiments. In Sections IV-A to IV-C, we conduct simulation experiments to study risk sensitivity and compare our method with other approaches. In Section IV-D, we conduct hardware experiments on the ANYmal robot [20]. **Training Setup** To train policies, we used the legged_gym library [36], based on the IsaacGym simulator. We used the same observation and action space, except for the additional risk sensitivity. The risk sensitivity is uniformly sampled with the command during training. The training environment comprises seven different terrains with ten difficulty levels per terrain. Robots spawn at the center of each terrain with a random offset. **Simulation Experiment Setup** The quantitative simulation experiments (Sections IV-B and IV-C) were conducted in a deterministic environment to enable a fair comparison. In both experiments, we spawned 72 robots at the center of each environment tile with initial z-rotations uniformly distributed across \(360^{\circ}\) and gave \(1\,\mathrm{ms}^{-1}\) forward commands. ### _Simulation Experiment: Obstacle Course_ To demonstrate different risk sensitivities in a teleoperation scenario, we navigated a robot through the obstacle course depicted in Figure 4. The shortest path from start to goal in this environment is to descend steep stairs (\(\mathrm{b}\)) [\(25\mathrm{cm}\) steps], cross the barrier (\(\mathrm{c}\)) [\(55\mathrm{cm}\) height], walk up the ramp, and descend into the pit (\(\mathrm{e}\)) [\(85\mathrm{cm}\) depth]. To follow this route successfully, the operator could make use of the different risk sensitivities. **Discussion** Based on risk sensitivity, the robot exhibited different locomotion behaviors. When using a risk-seeking policy, the robot generally walked faster and attempted to overcome obstacles despite the risk of falling. When commanding the robot over the barrier along the path (\(\mathrm{c}\)), the risk-seeking policy always attempted to follow the command. In most cases, it managed to surmount the barrier but, in a few cases, got stuck on the barrier or fell down the opposite side.
When sending the risk-seeking policy along path (\(\mathrm{b}\)), it sometimes managed to descend the stairs but fell in most cases. We obtained similar results when descending into the pit along the path (\(\mathrm{e}\)), where the robot complied each time but often crashed. The risk-averse policy, on the other hand, demonstrated a slow but safe walking gait. It thus safely descended the stairs along path (\(\mathrm{b}\)) on each attempt. However, the risk-averse policy refused to climb the barrier along the path (\(\mathrm{c}\)) and descend into the pit (\(\mathrm{f}\)). This experiment showed that different risk sensitivities change the behavior of the robot. This allows either an operator or a navigation system to make use of the risk sensitivity to increase safety during the mission.

Fig. 4: Robot operated remotely along an obstacle course. Depending on the chosen path, different risk sensitivities are preferable. For the gentle incline along path (a), a risk-seeking policy (\(\blacksquare\)) can be chosen for increased walking speed. Using a risk-averse policy (\(\blacksquare\)) when descending the stairs along route (\(\mathrm{b}\)) ensures the robot’s safety. A risk-averse policy (\(\blacksquare\)) won’t climb the dangerous obstacle along route (\(\mathrm{c}\)) and thus would have to walk around it, along route (\(\mathrm{d}\)). Meanwhile, setting the policy to risk-seeking (\(\blacksquare\)) allows the robot to surmount the obstacle (\(\mathrm{c}\)). To step down into the deep pit along route (\(\mathrm{e}\)), one must set the sensitivity to risk-seeking (\(\blacksquare\)). The risk-averse policy (\(\blacksquare\)) will refuse to step into the pit (\(\mathrm{f}\)) as it may lead to a crash.

### _Simulation Experiment: Algorithm Performance_ We compare the attained return of our method with PPO [39], commonly used in state-of-the-art legged locomotion approaches [1, 30], and DSAC [26], a recent distributional actor-critic algorithm that can learn risk sensitive policies. DSAC has previously been tested on legged locomotion [37] and, with some adaptations, been employed in complex control scenarios [51]. For each method, we trained \(10\) policies. In this experiment, our method was trained with "risk-neutral" expectation instead of a risk metric. Due to computation constraints, we employed MLPs for both actor and critic across all three methods. Figure 5 shows the learning curve for each method.

Fig. 5: Average return in the evaluation environment. Shaded regions indicate \(95\%\) confidence intervals across seeds and evaluation spawns. Hyperparameters for DPPO were not tuned.

**Discussion** We found that both DPPO (return of \(27.62\) after \(20\)k iterations) and PPO (\(27.86\)) significantly outperformed DSAC (\(21.1\)). Qualitatively, DSAC, which also uses a Dist. RL formulation, failed to learn a proper locomotion policy. Despite reusing the hyperparameters from Rudin et al. [36], extensively tuned for PPO, our method performs on par with PPO. We thus establish our method as competitive in terms of attained return. We highlight that the core contribution of our work is not outperforming PPO in terms of attained return, but to allow for risk sensitive behavior. ### _Simulation Experiment: Risk Sensitive Performance_ We further compare the attained return between different ablations of our method (listed in Table I) as well as non-distributional reward-shaping baselines, introduced below. Return is computed using the original reward formulation.
We show the undiscounted return in Figure 6. We also include the fraction of early terminations and the linear velocity tracking error. These two metrics provide quantitative support for our qualitative assessment from Section IV-A that behavior changes with risk sensitivity. **PPO Baseline** To contrast our distributional approach, we implement PPO baselines that modulate "risk"-preference through the reward formulation. In these baselines, risk-preference is introduced before accounting for environment dynamics, setting them apart from our Dist. RL approach. Our first approach, which we call PPO1, adapts individual terms of the reward formulation. Specifically, we scale the linear velocity tracking reward by \(\beta\in[0,2]\). This scaling aims at trading off command tracking and other rewards, most notably the alive reward. The policy should thus learn, depending on the risk parameter, to mediate differently between command compliance and keeping the robot safe. However, reward tuning is a complex process. It is often unclear how a reward term affects the learned policy. Further, individual terms cannot be tuned in isolation as they influence one another: e.g. high torque penalties may prevent the algorithm from discovering the linear velocity tracking reward. The approach of the PPO1 baseline further increases this design complexity (especially when scaling multiple reward terms). Thus, in a second approach we call PPO2, we rescale the full reward formulation in a principled manner. We treat individual reward terms as Dirac impulses defining a probability distribution (mirroring Equation 1) and compute the distorted mean by applying the Wang metric. As the risk parameter, we choose \(\beta\in[-0.25,0.25]\) as we are unable to learn a policy for a larger range. The policy \(\pi(a|s;\beta)\) learned by either baseline is conditioned on the scalar parameter \(\beta\), which emulates the risk parameter from our method. In both methods, the critic learns to predict the expected return used for GAE computation as in standard PPO. Aside from the critic, we use the same training setup, rewards, and hyperparameters as for DPPO. We expect the PPO1 baseline to learn risk-averse behavior for \(\beta\in[0,1)\) and risk-seeking behavior for \(\beta\in(1,2]\). It reverts to the initial, "risk-neutral", reward formulation for \(\beta=1\). We expect the PPO2 baseline to exhibit behavior appropriate for the Wang metric, as outlined in Section III. **Discussion** For a policy to qualify as risk sensitive, we expect the Early Terminations and the Linear Velocity Error to change appropriately across different risk settings. All methods showed risk sensitive behavior that matches our intuition: risk-seeking policies had improved command tracking but also more premature terminations; risk-averse policies sacrificed command tracking to keep the robot safe. The return generally increased with risk aversion. This trend could be due to high-reward actions which, despite often failing, would still be pursued by a risk-seeking policy. In our environment, however, such risk-seeking behavior was not rewarded. Our method significantly outperformed the reward-shaping baselines regarding return. The PPO2 baseline in particular had severely lower returns. These mirrored its high early termination rate and linear velocity tracking error.
For the PPO1 baseline, we experienced outlier behavior for the most risk-averse setting: since the velocity tracking reward was set to zero during training, the robot learned to stop walking. The undesired behavior of PPO1 and the low return of PPO2 demonstrate the difficulties of incorporating risk sensitivity through the standard RL framework. Comparing our method to its ablations, none attained a similar return. DPPO-CVaR underperformed most significantly in our experiments. This underperformance may be due to the CVaR metric disregarding an entire portion of the value distribution, appearing to result in less stable training. Thus, our design decisions are experimentally justified.

\begin{table} \begin{tabular}{l l l} \hline \hline **Name** & **Loss Function** & **Value Target** \\ \hline DPPO-Wang & Energy & \(N\)-step distribution [SR(\(\lambda\)), \(\lambda=1\)] \\ DPPO-Wang-QH & Quantile Huber & \(N\)-step distribution \\ DPPO-Wang-SRL & Energy & SR(\(\lambda\)), \(\lambda=0.95\) \\ DPPO-Wang-1Step & Energy & 1-step distribution [SR(\(\lambda\)), \(\lambda=0\)] \\ \hline \hline \end{tabular} \end{table} TABLE I: Overview of ablated components on DPPO-Wang.

Fig. 6: Average undiscounted return, fraction of episodes in which the robot terminates before episode end, and \(L2\) distance between commanded and actual linear velocity in the evaluation environment. Shaded regions indicate \(95\%\) confidence intervals across evaluation spawns. Only a single seed was trained for each method due to computation constraints.

### _Hardware Experiment_

Our learned policy seamlessly transferred to the real world with appropriate domain randomization [36]. We conducted experiments on the ANYmal robot [20] with onboard perception [29] to replicate the risk sensitive behavior observed in simulation. We commanded the robot to walk up steps of height \(27\) cm and \(41\) cm with a speed of \(1.2\) ms\({}^{-1}\) and counted _refusals_, _failures_, and _successes_ in surmounting the obstacle. A high step carries some risk of the robot falling; thus, a risk-averse policy might avoid it while a risk-seeking policy should neglect the risk. The experiments are depicted in Figure 7. We collected a total of \(45\) tries, \(8\) for each policy on the \(41\) cm step height and \(7\) on the \(27\) cm step height. The results are listed in Table II.

**Discussion** The robot's actions aligned well with the risk sensitivity of its assigned policies. When commanded to be risk-averse, the robot hesitated before climbing the step, while it tried to follow with a risk-seeking command. On the higher step (\(41\) cm), the risk-averse policy mostly refused to walk up the step. We often observed the policy standing before the obstacle, unresponsive to its forward command. The risk-seeking and neutral policies, while also failing to surmount the obstacle, exhibited more aggressive behavior. The difference between the risk-neutral and risk-seeking policies became more pronounced for the smaller step size (\(27\) cm). In this experiment, the risk-seeking policy complied every time, often surmounting the obstacle. The risk-neutral policy mostly attempted to overcome the obstacle but failed. A slight inconsistency arose between the risk-neutral and risk-averse policies, where the latter succeeded in walking up the step more often. Overall, the risk-neutral policy attempted to walk up the step more often than the risk-averse policy.
These observations mirror our findings in simulation and confirm that risk sensitivity successfully transfers to hardware.

## V Conclusion

Our work presents a distributional reinforcement learning approach for learning risk sensitive locomotion policies. These policies allow the robot to adapt its behavior to environment risks. A risk preference is encoded through a single parameter that can be modulated during deployment. To incorporate risk sensitivity into the RL formulation, we introduce risk sensitive advantage estimates. The emergent risk sensitive behavior, demonstrated in simulation and on hardware, opens up a new application domain for Dist. RL in real-world applications. Our work directly benefits teleoperation and navigation, given that a user or planner can now change the risk affinity of the robot during deployment, increasing safety while retaining full locomotion capability. Further research is needed to establish procedures for evaluating risk sensitivity and to study risk sensitive Dist. RL from a theoretical standpoint: how the choice of risk metric impacts the resulting policy, and how well the estimated value distribution matches the true return distribution, are two open questions. Additionally, integrating DPPO into a navigation system that utilizes risk sensitivity is a promising direction for future work.

\begin{table} \begin{tabular}{l|c c c|c c c} \hline \hline & \multicolumn{3}{c|}{\(27\) cm Step} & \multicolumn{3}{c}{\(41\) cm Step} \\ \hline **Risk Sensitivity** & **Refusals** & **Failures** & **Successes** & **Refusals** & **Failures** & **Successes** \\ \hline Averse (\(\beta=-1.5\)) & \(25\%\) & \(50\%\) & \(25\%\) & \(85.7\%\) & \(14.3\%\) & \(0\%\) \\ Neutral (\(\beta=0.0\)) & \(12.5\%\) & \(75\%\) & \(12.5\%\) & \(0\%\) & \(100\%\) & \(0\%\) \\ Seeking (\(\beta=+1.5\)) & \(0\%\) & \(37.5\%\) & \(62.5\%\) & \(0\%\) & \(100\%\) & \(0\%\) \\ \hline \hline \end{tabular} \end{table} TABLE II: Hardware experiment results. The robot received a \(1.2\) ms\({}^{-1}\) command to walk up the step. We counted the number of _refusals_, _failures_ (active attempts to surmount the obstacle without ultimate success), and _successes_. The given percentages indicate how often each behavior was observed during the experiments.

Fig. 7: ANYmal showed different step-up behaviors depending on risk sensitivity and step height.
2309.13529
Bubble nucleation in the two-flavor quark-meson model
We investigate the dynamics of a first-order quark-hadron transition via homogeneous thermal nucleation in the two-flavor quark-meson model. The contribution of the fermionic vacuum loop to the effective thermodynamic potential and the phase diagram, together with the location of the critical end point (CEP), have been obtained in the temperature and chemical potential plane. For a weak and a strong first-order phase transition, by taking the temperature as a variable, the critical bubble profiles and the evolutions of the surface tension and the saddle-point action in the presence of a nucleation bubble are numerically calculated in detail when fixing the chemical potentials at $\mu=306 \mathrm{MeV}$ and $\mu=309 \mathrm{MeV}$. Our results show that the system can be trapped in the metastable state for a long time as long as the temperature lies within the metastable region characterized by the upper and lower spinodal lines. Moreover, the surface tension at criticality rises to about $4 \mathrm{MeV/fm^2}$ when the chemical potential is very high. Such a small value of the surface tension would favor a mixed phase in the cores of compact stars and may have an important implication in astrophysics.
Junrong Wang, Ziwan Yu, Hong Mao
2023-09-24T02:37:47Z
http://arxiv.org/abs/2309.13529v3
# Bubble nucleation in the two-flavor quark-meson model

###### Abstract

We investigate the dynamics of a first-order quark-hadron transition via homogeneous thermal nucleation in the two-flavor quark-meson model. The contribution of the fermionic vacuum loop to the effective thermodynamic potential and the phase diagram, together with the location of the critical end point (CEP), have been obtained in the temperature and chemical potential plane. By taking the temperature as a variable, the critical bubble profiles and the evolutions of the surface tension and the saddle-point action in the presence of a nucleation bubble are calculated in detail when fixing the chemical potentials at \(\mu=306\)MeV and \(\mu=309\)MeV. Our results show that the system can be trapped in the metastable state for a long time as long as the temperature lies between the upper and lower spinodal lines, and that the surface tension at criticality rises to about \(4\)MeV/fm\({}^{2}\) when the chemical potential is very high. Such a small value of the surface tension would favor a mixed phase in the cores of compact stars and may have an important implication in astrophysics.

## I Introduction

It is widely believed that hadronic matter, characterized by confinement and broken chiral symmetry at low net baryon densities, undergoes a phase transition into a deconfined and chirally symmetric quark-gluon plasma (QGP) through a smooth crossover as the temperature is increased. At large densities, one expects a first-order phase transition line separating the hadronic matter from the QGP and possibly more exotic phases. At the end of this line, there should exist a so-called critical end point (CEP), where the transition becomes a continuous second-order one. Investigating and identifying the phase diagram is one of the most challenging problems in high energy physics and astrophysics [1; 2; 3], and the study is experimentally supported by heavy-ion collision experiments, such as those at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory and the Large Hadron Collider (LHC) at CERN. These experiments provide us with the opportunity to inspect and reveal fundamental properties of the strong interaction. Moreover, to explore a wider range of the QCD phase diagram, up to several times the normal nuclear-matter density, the new Facility for Antiproton and Ion Research at Darmstadt, the Nuclotron-based Ion Collider Facility at the Joint Institute for Nuclear Research in Dubna, and the Japan Proton Accelerator Research Complex at the Japan Atomic Energy Research Institute and Japan's National Laboratory for High Energy Physics have been scheduled and planned, and the CEP can be explored in phase II of the Beam Energy Scan program at RHIC and in upcoming experiments [4; 5]. From a theoretical point of view, Quantum Chromodynamics (QCD), the gauge theory describing strong interactions in elementary particle physics, is the appropriate framework for determining the properties of strongly interacting matter at finite temperature and density. However, due to the fermion sign problem, the _ab initio_ approach of Lattice Field Theory is severely hampered by the failure of importance sampling when a chemical potential is involved [6]. In order to describe the low-energy nonperturbative phenomena in the framework of QCD, an alternative approach is to use effective models possessing two salient features of QCD, i.e. chiral symmetry and confinement.
To mention a few, these effective models, which have been successfully utilized for many decades, are the Nambu-Jona-Lasinio (NJL) model [7; 8], the linear sigma model (LSM) [9] and their modernized extensions, the Polyakov Nambu-Jona-Lasinio model (PNJL) [10; 14] and the Polyakov quark meson model (PQM) [11; 12; 13]. Recently, after the discovery of gravitational waves by the LIGO Collaboration [15], the subject of cosmological first-order phase transitions has gathered increasing interest due to the stochastic gravitational wave background they would produce [16; 17; 18]. The stochastic gravitational wave background could be detected by current and near-future detectors. Once this signal is observed, it would provide us with the earliest known probe of the universe. Moreover, aside from the early universe phenomena of a primordial first-order phase transition, the observation of gravitational waves also sheds light on the field of astrophysics: future gravitational wave observations related to a first-order quark-hadron phase transition would not only enable probing the equation of state of matter under extreme circumstances, but also constrain the quark-hadron surface tension. In combination with other observations, astrophysics has now entered the multimessenger era [19; 20; 21; 22; 23; 24]. Therefore, understanding the dynamics of the first-order phase transition is crucial and important. With upcoming gravitational wave experiments, there is a need to explore the anticipated phenomenology tightly connected with its underlying fundamental mechanism. It is well known that the dynamics of first-order phase transitions in the early universe and in heavy-ion collisions at ultrarelativistic energies can be described through homogeneous nucleation theory [25; 26]. The modern theory, pioneered by Langer in the late 1960s in the context of classical statistical mechanics [27; 28], has been extended to relativistic quantum field theory by Callan and Coleman for zero temperature [29; 30; 31] and by Affleck [32] and Linde [33; 34] for finite temperature. The central goal of a nucleation theory is to calculate the nucleation rate of a bubble or droplet of the stable (true) vacuum inside a metastable (false) vacuum near the critical temperature. Suppose that a system is near its critical temperature; due to the thermal and quantum fluctuations present in any thermodynamic system, bubbles of the stable vacuum created by fluctuations may grow or shrink inside the homogeneous false vacuum, depending on their energy budget with regard to the false vacuum. If a droplet is too small, the free energy gained from the phase transition of the bulk is less than the energy cost of creating an interface between the two vacua; the total free energy change is positive, and the droplet will shrink and evaporate. On the other hand, if the droplet is large, the bulk free energy gain is relatively large and the surface energy cost is negligible; the droplet will tend to grow and eventually occupy the whole system, completing the phase conversion. For a strong first-order phase transition, which is usually characterized by an effective potential with a zero-temperature potential barrier, the dynamics of the quark-hadron phase conversion based on the Friedberg-Lee (FL) model [35] has been studied numerically [36], and the findings have also been compared to the analytic results obtained with the thin-wall approximation [37].
Since the FL model lacks chiral symmetry, it only predicts a first-order phase transition in the whole QCD phase diagram; this, of course, conflicts with other studies based on lattice simulations or chiral models, so the model can merely serve as a prototypical toy model for current interests. To fix this problem, it is necessary to introduce chiral symmetry into the FL model in order to provide a proper description of the hadron-quark phase transition beyond the first-order transition. The quark meson model, treated as an upgrade of the FL model, seems to fulfill the requirements both in the study of static nucleon properties and in that of the QCD phase transition [38; 39]. In the framework of the quark meson model, homogeneous bubble nucleation was initially investigated with both numerical and analytic methods in Ref. [40], but the study was constrained to temperatures below the critical temperature, and an unphysical coupling constant was chosen in order to enhance the strength of the first-order phase transition. Furthermore, within the thin-wall approximation, bubble nucleation at low temperature but high density and in a strong magnetic field was previously investigated in Refs. [41; 42]. In a first-order phase transition, a key ingredient is the surface tension; using the analytical method or the thin-wall approximation, surface tensions and the phase diagram have been obtained in the quark meson model with the Polyakov loop in Refs. [43; 44]. Since the thin-wall approximation cannot be applied when the temperature is far from the critical temperature, in this work, by adopting the realistic coupling constant, we carry out a systematic study of the first-order quark-hadron phase transition through an exact numerical method for temperatures both below and above the critical one, with the fermion vacuum fluctuation included in the present study. Furthermore, we compare our results with recent findings for the quark meson model within the thin-wall approximation [45], in order to uncover some nontrivial but important results missed by the analytical method.

The structure of the paper is as follows. In the next section we briefly describe the quark meson model. After that we discuss the effective potential at finite temperatures and densities and present a phase diagram of the QCD phase transition. In Sec. IV, we give a detailed description of homogeneous nucleation and of the methods used for both numerical and analytic computations of the critical bubble profiles. Our results and discussions are presented in Sec. V, while in the last section we give our conclusions.

## II The model

In terms of chiral fields, the Lagrangian of two massless noninteracting quarks \(u\) and \(d\) is invariant under the global \(SU(2)_{L}\times SU(2)_{R}\) chiral phase transformations \[\psi_{L,R}\rightarrow\psi^{\prime}_{L,R}=U_{L,R}\psi_{L,R}, \tag{1}\] where \(\psi^{T}_{L,R}=(u,d)_{L,R}\) and \(U_{L,R}=\exp(-i\vec{\theta}_{L,R}\cdot\frac{\vec{\tau}}{2})\). However, this chiral symmetry does not appear in the low energy particle spectrum, and the strong interaction exhibits the phenomenon of spontaneous symmetry breaking. Consequently, three Goldstone bosons appear and the constituent quarks become massive at low energy.
In describing the symmetries of the Lagrangian, it is useful to introduce the three pion mesons \(\vec{\pi}\) and a \(\sigma\) meson in terms of a matrix field as \[\Phi=\sigma\frac{\tau^{0}}{2}+i\vec{\pi}\cdot\frac{\vec{\tau}}{2}, \tag{2}\] where \(\tau^{0}\) is the unit matrix and \(\vec{\tau}\) are the three Pauli matrices. Under the \(SU(2)_{L}\times SU(2)_{R}\) chiral symmetry transformations, \(\Phi\) transforms as \[\Phi\rightarrow\Phi^{\prime}=U_{L}\Phi U_{R}^{+}. \tag{3}\] Then the renormalizable effective Lagrangian of the two-flavor quark meson model is defined as [9; 46] \[\mathcal{L}=\mathcal{L}_{\Phi}+\mathcal{L}_{q}, \tag{4}\] where \[\mathcal{L}_{\Phi}=\mathrm{Tr}[(\partial_{\mu}\Phi)^{+}(\partial^{\mu}\Phi)]-\lambda[\mathrm{Tr}(\Phi^{+}\Phi)-\frac{\vartheta^{2}}{2}]^{2}-H\mathrm{Tr}[\Phi], \tag{5}\] and \[\mathcal{L}_{q}=\overline{\psi}_{L}i\not{\partial}\psi_{L}+\overline{\psi}_{R}i\not{\partial}\psi_{R}-2g\overline{\psi}_{L}\Phi\psi_{R}+h.c.. \tag{6}\] Here we have introduced a flavor-blind Yukawa coupling \(g\) of the left-handed and right-handed quark fields to the \(\Phi\) field. The parameters of the Lagrangian \(\mathcal{L}\) are chosen by the requirement that the chiral symmetry \(SU(2)_{L}\times SU(2)_{R}\) is spontaneously broken down to \(SU(2)_{L+R}\) in the vacuum, while the \(\sigma\) field takes on a non-vanishing vacuum expectation value \(\langle\sigma\rangle=f_{\pi}=93\,\mathrm{MeV}\). This results in a massive \(\sigma\) meson and three massless Goldstone bosons, the \(\vec{\pi}\) mesons, in the chiral limit, as well as an effective mass \(m_{q}=gf_{\pi}\) for the constituent quarks. Furthermore, the chiral symmetry is explicitly broken by the last term in Eq.(5) on account of the finite current quark masses. With this additional term, the vector isospin \(SU(2)\) symmetry remains exact, but the axial \(SU(2)\) transformation is no longer an invariance. Accordingly, the constant \(H\) is fixed by the partially conserved axial vector current relation, which gives \(H=f_{\pi}m_{\pi}^{2}\), where the pion mass is taken as \(m_{\pi}=138\) MeV. Moreover, the dimensionless coupling constant \(g\) in the model is determined by the constituent quark mass in vacuum, which is about \(1/3\) of the nucleon mass and gives \(g\simeq 3.3\). The other dimensionless coupling constant \(\lambda\) is fixed by the sigma mass through \(m_{\sigma}^{2}=m_{\pi}^{2}+2\lambda f_{\pi}^{2}\), where we set \(m_{\sigma}=500\) MeV in accord with the most recent compilation of the Particle Data Group [47]. Finally, the quantity \(\vartheta\) is not a free parameter and can be formally expressed as \(\vartheta^{2}=f_{\pi}^{2}-m_{\pi}^{2}/\lambda\); a numerical summary of this parameter fixing is sketched below.

## III Effective potential and phase structure

A convenient framework for studying phase transitions and the restoration of chiral symmetry at extremely high energy is thermal field theory [18; 48]. Within this framework, the effective potential is one of the most important and powerful theoretical tools, and the standard approach for dealing with the thermodynamics of the various observables of interest relies on the grand canonical ensemble. To make things lucid, we start with a spatially uniform system in thermodynamical equilibrium at temperature \(T\) and quark chemical potential \(\mu\); from here on, we will use the chemical potential to denote the quark chemical potential.
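The parameter fixing described above amounts to a few lines of arithmetic. The following sketch is our own numerical summary of the vacuum relations of the previous section; the nucleon mass \(m_{N}\simeq 939\) MeV used for the constituent-quark estimate is our assumed input.

```python
import numpy as np

# Vacuum inputs quoted in the text (all in MeV)
f_pi, m_pi, m_sigma = 93.0, 138.0, 500.0
m_nucleon = 939.0  # assumed, for m_q ~ m_N / 3

lam = (m_sigma**2 - m_pi**2) / (2.0 * f_pi**2)  # m_sigma^2 = m_pi^2 + 2*lam*f_pi^2
g = (m_nucleon / 3.0) / f_pi                    # m_q = g*f_pi ~ m_N/3
H = f_pi * m_pi**2                              # PCAC: H = f_pi*m_pi^2
vartheta = np.sqrt(f_pi**2 - m_pi**2 / lam)     # vartheta^2 = f_pi^2 - m_pi^2/lam

print(f"lambda ~ {lam:.2f}, g ~ {g:.2f}, "
      f"H ~ {H:.3e} MeV^3, vartheta ~ {vartheta:.1f} MeV")
# -> lambda ~ 13.35, g ~ 3.37 (consistent with g ~ 3.3), vartheta ~ 85.0 MeV
```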
In general, the grand partition function is commonly given in the schematic form \[\mathcal{Z} = \mathrm{Tr}\,\exp[-(\hat{\mathcal{H}}-\mu\hat{\mathcal{N}})/T] = \int\prod_{a}\mathcal{D}\sigma\mathcal{D}\pi_{a}\int\mathcal{D}\psi\mathcal{D}\bar{\psi}\,\exp\left[\int_{X}(\mathcal{L}+\mu\bar{\psi}\gamma^{0}\psi)\right], \tag{7}\] where \(\int_{X}\equiv\int_{0}^{\beta}d\tau\int d^{3}x\), the inverse temperature is \(\beta=1/T\), and \(\mu=\mu_{B}/3\) for the homogeneous background field. In the mean-field approximation, the meson fields in the Lagrangian are replaced by their expectation values, whereas the quark and antiquark fields are still retained as quantum fields. This implies that the one-loop correction to the effective potential from the quark fields is taken into account, while the mesonic degrees of freedom are treated at tree level. Following this scheme, the integration over the fermions yields a determinant which can be calculated by standard procedures [26; 50]; it generates an effective potential for the mesons. Finally, the effective potential of the model can be obtained exactly in a closed form as \[\Omega(T,\mu)=\frac{-T\ln\mathcal{Z}}{V}=U(\sigma,\vec{\pi})+\Omega_{\bar{\psi}\psi}, \tag{8}\] where the classical potential for the \(\sigma\) and \(\vec{\pi}\) is rewritten as \[U(\sigma,\vec{\pi})=\frac{\lambda}{4}\left(\sigma^{2}+\vec{\pi}^{2}-\vartheta^{2}\right)^{2}-H\sigma, \tag{9}\] and the contribution of quarks and antiquarks is given by \[\Omega_{\bar{\psi}\psi} = \Omega^{\rm v}_{\bar{\psi}\psi}+\Omega^{\rm th}_{\bar{\psi}\psi} = -\nu\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}E -\nu T\int\frac{d^{3}\vec{p}}{(2\pi)^{3}}\left\{\ln\left[1+e^{-(E-\mu)/T}\right]+\ln\left[1+e^{-(E+\mu)/T}\right]\right\}. \tag{10}\] Here, \(\nu=2N_{f}N_{c}=12\) and \(E=\sqrt{\vec{p}^{2}+m_{q}^{2}}\) is the valence quark and antiquark energy for \(u\) and \(d\) quarks, and the minus sign is a consequence of Fermi-Dirac statistics. The constituent quark (antiquark) mass is set to \(m_{q}=g\sigma\). The first term of Eq.(10) denotes the fermionic vacuum one-loop contribution, which is ultraviolet divergent and can only be evaluated in the presence of a regulator. The divergence in Eq.(10) can be appropriately renormalized by using the dimensional regularization scheme [49; 51; 52]. After taking into account the vacuum fluctuations and the attendant renormalization issues, the renormalized fermionic vacuum one-loop contribution reads \[\Omega^{\rm v}_{\bar{\psi}\psi}=\Omega^{\rm reg}_{\bar{\psi}\psi}=-\frac{N_{c}N_{f}}{8\pi^{2}}m_{q}^{4}\ln\Big(\frac{m_{q}}{\Lambda}\Big), \tag{11}\] where \(\Lambda\) denotes the arbitrary renormalization scale. It is worth pointing out that although dimensional regularization introduces an arbitrary renormalization scale parameter, at least in the one-loop approximation the thermodynamic potential and all physical observables are independent of the choice of \(\Lambda\), and the scale dependence is neatly cancelled after a rearrangement of the parameters of the model [51; 52; 39; 53]. Equipped with the above effective potential, we can explore the phase diagram of the model at finite temperature and density by minimizing the thermodynamical potential in equation (8) with respect to the order parameter \(\sigma\). The equation of motion is then given by \[\frac{\partial\Omega(T,\mu)}{\partial\sigma}=0. \tag{12}\]
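For orientation, Eqs. (8)-(11) and the minimization of Eq. (12) can be assembled into a short numerical routine. The sketch below is our own illustration: the renormalization scale is set to \(\Lambda=gf_{\pi}\) purely for definiteness (as noted above, observables are \(\Lambda\)-independent only after the parameters are rearranged, so the numbers produced here are indicative rather than the paper's), and a simple grid scan replaces a local minimizer because the potential develops two competing minima in the first-order region.

```python
import numpy as np
from scipy.integrate import quad

g, lam, vartheta, H, nu = 3.3, 13.4, 85.0, 93.0 * 138.0**2, 12.0
Lambda = g * 93.0  # illustrative renormalization scale (MeV)

def softplus(x):
    # log(1 + e^x) evaluated without overflow at large |x|
    return np.maximum(x, 0.0) + np.log1p(np.exp(-np.abs(x)))

def omega(sigma, T, mu):
    """Mean-field grand potential Omega(sigma; T, mu) of Eq. (8), in MeV^4."""
    mq = g * abs(sigma) + 1e-12
    tree = 0.25 * lam * (sigma**2 - vartheta**2) ** 2 - H * sigma      # Eq. (9)
    vac = -(6.0 / (8.0 * np.pi**2)) * mq**4 * np.log(mq / Lambda)      # Eq. (11), Nc*Nf = 6
    thermal = quad(lambda p: p**2 * (softplus(-(np.hypot(p, mq) - mu) / T)
                                     + softplus(-(np.hypot(p, mq) + mu) / T)),
                   0.0, 25.0 * max(T, mu, mq), limit=300)[0]           # Eq. (10), thermal part
    return tree + vac - nu * T / (2.0 * np.pi**2) * thermal

def order_parameter(T, mu):
    """Solve Eq. (12) by locating the global minimum of omega on a sigma grid."""
    grid = np.linspace(0.5, 120.0, 240)
    vals = np.array([omega(s, T, mu) for s in grid])
    return grid[vals.argmin()]

# Scanning T at fixed mu in the first-order region should show the minimum
# jumping between a low-sigma and a high-sigma branch near T_c; the precise
# numbers depend on the parameter set and scale choice.
for T in (15.0, 20.0, 23.0):
    print(T, order_parameter(T, mu=306.0))
```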
The solution of the equation of motion determines the behavior of the chiral order parameter \(\sigma\) as a function of \(T\) and \(\mu\), as well as the phase diagram of the model. As we know, the thermodynamical state of equilibrium is given by the value of the order parameter at the global minimum of the effective potential; once the order parameter is obtained for each given \(T\) and \(\mu\), any equilibrium thermodynamical quantity, such as the pressure, the entropy density, the energy density, or the speed of sound, can be calculated. In figure 1, we present the phase diagram of the two-flavor quark meson model calculated with the fermionic vacuum fluctuation included. The temperature behavior of the chiral condensate \(\sigma\) shows that the system experiences a smooth crossover transition at low chemical potential, while there is a first-order phase transition at larger chemical potential, since the chiral order parameter jumps across the gap of the condensate near the critical temperature \(T_{c}\). Normally, the temperature derivative of the chiral condensate \(\sigma\) for quarks has a peak at some specific temperature, which is taken as the critical temperature for the chiral phase transition. Because the temperature derivative of the chiral condensate has only one peak, we cannot tell from it when and where the crossover converts into a first-order transition at the critical end point (CEP) with a second-order phase transition [45; 46]. In order to locate the CEP in the phase diagram, the quark number susceptibility \(\chi_{q}=\partial^{2}\Omega(T,\mu)/\partial\mu^{2}\) can be introduced, which is believed to diverge at the CEP [4; 5]. Instead of calculating the quark number susceptibility \(\chi_{q}\), in the present work we prefer to use the shapes of the effective potential at various temperatures and chemical potentials to determine the position of the CEP. In the case of the first-order phase transition, along the critical line at temperature \(T\simeq T_{c}\), the thermodynamical potential \(\Omega(T,\mu)\) has two minima of equal depth separated by a potential barrier. With the reduction of the chemical potential, the height of the barrier decreases and finally vanishes at the CEP, where the phase transition is of second order. In our calculation the corresponding CEP is located at \((T_{E},\mu_{E})\simeq(30,301)\) MeV in Fig.1. As shown in Fig.2, in the region of the first-order phase transition, a typical effective potential displays a local minimum at a low sigma value \(\sigma_{l}\), separated by a potential barrier from another local minimum at a relatively larger value \(\sigma_{h}\). At the critical temperature \(T_{c}\), these two minima are degenerate. For \(T<T_{c}\), the minimum of the effective potential at \(\sigma=\sigma_{h}\) is the absolute or global minimum, which is regarded as the stable (true) vacuum, whereas the minimum at \(\sigma=\sigma_{l}\) is treated as the metastable (false) vacuum. In this case the chiral symmetry is broken, so the constituent quarks become massive. On the contrary, when the temperature \(T\) rises above the critical value \(T_{c}\), these two vacua flip over: the global minimum is now at \(\sigma=\sigma_{l}\), and the local minimum is at \(\sigma=\sigma_{h}\).
Since the chiral symmetry is approximately restored and the quarks become almost massless, the system for \(T>T_{c}\) is then considered the quark phase. The case \(T<T_{c}\) is taken as the hadron phase, and the critical lines therefore divide the whole phase diagram into two categories: the hadron phase and the quark phase. Normally, apart from the critical temperature \(T_{c}\), there are two other temperatures of interest in a first-order phase transition. These two temperatures, \(T_{c1}\) and \(T_{c2}\), are named the lower and upper spinodal points, respectively. A typical example is shown in Fig.2, where the evolution of the potential for several temperatures at fixed chemical potentials \(\mu=306\) MeV and \(\mu=309\) MeV is exhibited. For the left panel of Fig.2 at \(\mu=306\) MeV, when the temperature is around \(T_{c}\simeq 20.6\) MeV, the shape of the potential exhibits two degenerate minima. However, with increasing temperature, the second minimum of the potential at \(\sigma=\sigma_{h}\) disappears at a higher temperature \(T_{c2}\simeq 23.1\) MeV. Meanwhile, when the temperature falls below the critical temperature \(T_{c}\), the first minimum of the potential at \(\sigma=\sigma_{l}\) vanishes around \(T_{c1}\simeq 14.7\) MeV. Between these two specific temperatures, metastable states or false vacua exist, and the system can exhibit supercooling or superheating.

Figure 1: (Color online) The phase diagram in the \(T-\mu\) plane for the two-flavor quark meson model. The dashed line is the critical line for the conventional chiral phase transition in the crossover region. The solid line indicates the first-order phase transition, and the solid circle indicates the critical end point for the chiral phase transition of \(u\) and \(d\) quarks. The dash-dotted line and the dash-dot-dotted line are the lower and upper spinodal lines.

Figure 2: (Color online) (a) The grand canonical potential \(\Omega\) as a function of the chiral order parameter \(\sigma\) for \(\mu=306\) MeV at various temperatures. (b) The grand canonical potential \(\Omega\) as a function of the chiral order parameter \(\sigma\) for \(\mu=309\) MeV at various temperatures.

For \(\mu=309\) MeV, one can also observe the characteristic pattern of a first-order phase transition: two minima corresponding to phases of restored and broken chiral symmetry are separated by a potential barrier and become degenerate when the temperature is at \(T_{c}\simeq 13.3\) MeV. Chiral symmetry is approximately restored for \(T>T_{c}\), where the minimum at \(\sigma=\sigma_{l}\) becomes the absolute minimum, as shown in the right panel of Fig.2. Similarly to the previous case of \(\mu=306\) MeV, when the temperature \(T\) crosses above the critical line and rises further, the potential barrier between the two minima gradually decreases and shrinks to zero at the moment when the second minimum of the potential at \(\sigma=\sigma_{h}\) vanishes, at a spinodal temperature \(T_{c2}\simeq 18.8\) MeV. On the other hand, for \(T<T_{c}\), the shapes of the effective potential at various low temperatures display quite different behaviors compared with the previous case of \(\mu=306\) MeV. The barrier between the two minima of the effective potential persists even when the temperature \(T\) is very close to zero.
This means that the first minimum of the effective potential at \(\sigma=\sigma_{l}\) can always exist in the hadron phase. Therefore, the phase transition can be identified as a strongly first-order phase transition, which is usually induced by an effective potential with a nonvanishing zero-temperature potential barrier. To give a complete description of a first-order phase transition, the two particular lines of spinodal points, which delimit the regions of spinodal instability for the first-order phase transition at high density, are illustrated in Fig.1. Similarly to the critical line in Fig.1, both the lower and upper spinodal lines rise with the reduction of the chemical potential \(\mu\), but the gap between these two spinodal lines becomes smaller and smaller; in the end, the two spinodal lines and the critical line terminate at the same point, the CEP. Moreover, since the lower spinodal line ends at the point \(\mu_{c}\simeq 308\) MeV on the chemical potential axis, the region of the first-order phase transition can be technically split into a weakly first-order and a strongly first-order phase transition, according to the above discussion. Therefore, for a weakly first-order chiral phase transition with \(\mu<\mu_{c}\) and \(T<T_{c}\), the thermodynamic potential exhibits a local minimum aside from the global minimum; when the temperature decreases from \(T_{c}\) to a specific value \(T_{c1}\), the local minimum gradually disappears at a point of inflection known as the spinodal instability. In contrast, for \(\mu>\mu_{c}\), the chiral phase transition is considered a strongly first-order one, because the local minimum remains for \(T<T_{c}\) and there is no spinodal temperature anymore. The critical chemical potential for the transition from a weak first-order phase transition to a strong one is then identified as \(\mu_{c}\simeq 308\) MeV in the hadron phase [36; 54].

## IV Homogeneous thermal nucleation

If a first-order phase transition takes place, its dynamics is nontrivial: there is a discontinuity in the energy density, known as the latent heat, and the basic mechanism of the first-order phase transition is bubble nucleation. In terms of the effective potential, the general setting is illustrated in Fig.2. For a first-order phase transition, the effective potential has at least two minima separated by a potential barrier. When the temperature is around its critical value, the effective potential exhibits degenerate minima. However, as soon as the temperature \(T\) deviates from its critical value \(T_{c}\), one of the minima becomes a local minimum, whereas the other turns into the global minimum. Traditionally, the local minimum is referred to as the metastable state or false vacuum, and the global minimum is taken as the stable state or physical vacuum. The false vacuum is classically stable, but quantum mechanically it is only a metastable state and can decay via the nucleation of bubbles of the stable state inside the unstable state. This decay can be triggered by either quantum or thermal fluctuations, depending on what kind of physics we are interested in. In the following discussion, we will mostly be concerned with the regime in which thermal fluctuations are much larger than quantum fluctuations.
The mechanism of nucleation theory can be used to study the probability of forming a bubble or droplet of the stable vacuum in a system initially trapped in the metastable vacuum near the critical temperature \(T_{c}\). For a pure system, the formation of bubbles originates from intrinsic thermodynamic fluctuations; this kind of nucleation mechanism is commonly called homogeneous nucleation. On the contrary, when impurities cause the formation of bubbles or droplets, the mechanism is known as heterogeneous nucleation. In the everyday world, external agents play the role of nucleating centers, such as dust or ions in the atmosphere, leading to a much more efficient increase of the nucleation rate. Nevertheless, for the physical questions related to our study, homogeneous nucleation theory is appropriate, and we will use this basic theoretical apparatus to describe the decay of a metastable vacuum of a system interacting with a heat bath at temperature \(T\). A quantum field theoretical description of metastable vacuum decay was initiated with the work of Coleman and collaborators in the late 1970s [29; 30]. These methods, built upon the earlier work of Langer [27; 28], were subsequently extended to finite temperature by Linde [33; 34] and Affleck [32]. In the framework of homogeneous thermal nucleation, in the limit where thermal fluctuations dominate quantum fluctuations, the nucleation rate per unit time per unit volume is given by \[\Gamma=\mathcal{P}\exp\left[-\frac{S_{3}}{T}\right], \tag{13}\] where \(T\) is the temperature of the system in equilibrium with the thermal bath, \(S_{3}\) is the three-dimensional action associated with the \(O(3)\)-symmetric critical bubble or droplet, and \(\mathcal{P}\) is the exponential prefactor. For the mechanism of bubble nucleation, to leading order, the nucleation rate is controlled by the exponent of the three-dimensional action evaluated on the critical bubble. The sub-leading corrections to the leading-order bubble action are included in the prefactor \(\mathcal{P}\), which can be technically expressed as \[\mathcal{P}=\frac{\omega_{-}}{2\pi}\left(\frac{S_{3}}{2\pi T}\right)^{3/2}\left[\frac{\text{Det}(-\nabla^{2}+\Omega_{FV}^{\prime\prime})}{\text{Det}^{\prime}(-\nabla^{2}+\Omega_{B}^{\prime\prime})}\right]^{1/2}. \tag{14}\] Here \(\omega_{-}\) is the eigenvalue of the negative mode, the terms \(\Omega_{FV}^{\prime\prime}\) and \(\Omega_{B}^{\prime\prime}\) are abbreviations for \(\Omega^{\prime\prime}\) evaluated in the false vacuum and at the critical bubble, and the prime on the determinant signifies that the zero eigenvalues associated with the translational symmetry of the bubble are omitted. \(\Omega^{\prime\prime}\) is the second derivative of the effective potential \(\Omega(T,\mu)\) with respect to the order parameter \(\sigma\), which here represents the field configuration extremizing the three-dimensional Euclidean action, more specifically the critical bubble or bounce. Usually, the calculation and evaluation of this prefactor is a nontrivial matter; a rough estimate can be obtained by dimensional analysis, and it can be approximately expressed as \(T^{4}\) or \(T_{c}^{4}\) for simplicity [25; 40]. The result presented in equation (13) is a semi-classical contribution based on a saddle-point approximation around the bounce solution.
By taking the scalar field \(\sigma\) as the order parameter, in finite-temperature field theory the Euclidean action we are interested in is \[S_{E}(\sigma)=\int_{0}^{\beta}d\tau\int d^{3}r\left[\frac{1}{2}\left(\frac{\partial\sigma}{\partial\tau}\right)^{2}+\frac{1}{2}\left(\nabla\sigma\right)^{2}+\Omega(\sigma;T,\mu)\right], \tag{15}\] in which the subscript \(E\) denotes Euclidean and the integral is over Euclidean space. For the sake of convenience, in the following discussions we will keep the \(\sigma\) field in the effective potential \(\Omega\) of Eq.(8) explicit. As argued by Linde [34], at sufficiently high temperature, i.e. on length scales large compared to \(\beta\), the relevant number of dimensions is \(d=3\), and the Euclidean action becomes \[S_{E}(\sigma)\equiv\frac{S_{3}}{T}, \tag{16}\] where \(S_{3}\) is the three-dimensional saddle-point action associated with the formation of a critical-sized bubble or droplet; in what follows it is called the saddle-point action for brevity. Therefore, the bounce is an \(O(3)\)-symmetric solution to the classical equation of motion that extremizes the Euclidean action \(S_{3}\). In particular, for a scalar field \(\sigma\), the bounce satisfies the nonlinear ordinary differential equation \[\frac{d^{2}\sigma(r)}{dr^{2}}+\frac{2}{r}\frac{d\sigma(r)}{dr}=\frac{\partial\Omega(\sigma;T,\mu)}{\partial\sigma}, \tag{17}\] with boundary conditions \(\lim\limits_{r\rightarrow\infty}\sigma(r)=\sigma_{FV}\) and \(\frac{d\sigma(r)}{dr}|_{r=0}=0\). The first boundary condition arises because the bubble is embedded in the homogeneous false vacuum: outside the bubble, the \(\sigma\) field should approach its false vacuum value \(\sigma\simeq\sigma_{FV}\). The second one is set by the requirement of finite energy at the origin. The solution of this equation of motion with the above boundary conditions is a saddle-point solution, or bounce, \(\sigma_{b}\). It is a nontrivial \(O(3)\)-symmetric field configuration that starts, at the center of the bubble, on the far side of the potential barrier with zero velocity and approaches the false vacuum at infinity. In this work we will use the AnyBubble package [55] to determine the bounce. Once the solution \(\sigma_{b}\) is obtained, the \(S_{3}\) exponent in Eq.(13) can be evaluated on the bounce solution \(\sigma_{b}\) as \[S_{3}=\int d^{3}r\left[\frac{1}{2}\left(\nabla\sigma\right)^{2}+\Omega(\sigma;T,\mu)\right], \tag{18}\] and the surface tension of the nucleation bubble interface between the false vacuum and the true vacuum is defined accordingly as \[\Sigma=\int dr\left[\frac{1}{2}\left(\frac{d\sigma}{dr}\right)^{2}+\Omega(\sigma;T,\mu)\right]. \tag{19}\] It is worth noting that in practical calculations, if the false vacuum has a non-zero potential energy, an additional term \(-\Omega(\sigma_{FV};T,\mu)\) should be included in the \(S_{3}\) action and in the surface tension \(\Sigma\). For a generic effective potential, the equation of motion for the bounce with these boundary conditions usually cannot be solved analytically, and we must rely on numerical methods, as in the sketch below.
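While the bounce is obtained here with AnyBubble, the overshoot-undershoot logic invoked later in Sec. V is easy to sketch directly. The following is a minimal shooting-method illustration, our own construction rather than the paper's code; it assumes the top-down orientation \(\sigma_{FV}<\sigma_{TV}\) and bisects the release value \(\sigma(0)\).

```python
import numpy as np
from scipy.integrate import solve_ivp

def bounce(dW, s_fv, s_tv, r_max=300.0, bisections=60):
    """Shooting solver for the O(3) bounce of Eq. (17).

    dW(s) is dOmega/dsigma at fixed (T, mu); s_fv and s_tv are the false
    and true vacuum values, with s_fv < s_tv assumed.
    """
    def rhs(r, y):
        s, v = y
        damping = 2.0 * v / r if r > 0.0 else 0.0
        return [v, dW(s) - damping]

    # Overshoot: sigma passes the false vacuum (too much initial "energy").
    hit_fv = lambda r, y: y[0] - s_fv
    hit_fv.terminal, hit_fv.direction = True, -1.0
    # Undershoot: the field turns around before reaching the false vacuum.
    turn = lambda r, y: y[1]
    turn.terminal, turn.direction = True, 1.0

    lo, hi, sol = s_fv, s_tv, None
    for _ in range(bisections):
        s0 = 0.5 * (lo + hi)
        sol = solve_ivp(rhs, (1e-8, r_max), [s0, 0.0], events=(hit_fv, turn),
                        rtol=1e-11, atol=1e-12, dense_output=True)
        if sol.t_events[1].size:   # undershoot -> release closer to the true vacuum
            lo = s0
        else:                      # overshoot (or still rolling) -> release lower
            hi = s0
    return s0, sol
```

Given a converged profile, Eq. (18) follows by quadrature, e.g. \(S_{3}=4\pi\int dr\,r^{2}\,[\frac{1}{2}\sigma^{\prime 2}+\Omega(\sigma)-\Omega(\sigma_{FV})]\) evaluated on `sol.sol(r)`; the bottom-up orientation is handled by exchanging the roles of the two vacua.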
However, when the system is very close to the critical coexistence line, the bubble radius \(R\) is much larger than the wall thickness (\(\triangle R\sim m_{\sigma}^{-1}\)); the damping force \(2\sigma^{\prime}/r\) in the field equation then becomes negligible, the thin-wall approximation is applicable, and the equation of motion (17) reduces to the field equation for a typical one-dimensional soliton, \[\frac{d^{2}\sigma(r)}{dr^{2}}=\frac{d\Omega}{d\sigma}. \tag{20}\] This static field equation implies that \[\frac{d\sigma(r)}{dr}=\pm\sqrt{2\Omega}. \tag{21}\] Integrating Eq.(21) yields \[r = \int_{\sigma}^{\sigma_{v}}\frac{d\sigma}{\sqrt{2\Omega}}. \tag{22}\] Therefore an approximate solution for the bounce can be obtained for an arbitrary potential with two or more degenerate minima. Moreover, within the thin-wall approximation, the surface tension of the bubble can be calculated as \[\Sigma_{\rm tw}=\int_{0}^{\infty}dr\left[\frac{1}{2}\left(\frac{d\sigma}{dr}\right)^{2}+\Omega\right]=\int_{0}^{\sigma_{v}}d\sigma\sqrt{2\Omega}, \tag{23}\] and the saddle-point action \(S_{3}\) is given by \[S_{3}=\frac{16\pi}{3}\frac{\Sigma_{\rm tw}^{3}}{\varepsilon^{2}}. \tag{24}\] The quantity \(\varepsilon=\Omega(\sigma_{f};T,\mu)-\Omega(\sigma_{t};T,\mu)\) is the difference between the values of the effective potential at the false vacuum and at the true vacuum (a numerical sketch of these thin-wall estimates is given below).

## V Results and Discussion

In this section, we numerically solve the equation of motion in Eq.(17) with the boundary conditions \(\sigma\rightarrow\sigma_{FV}\) as \(r\rightarrow\infty\) and \(\sigma^{\prime}(0)=0\). Since a first-order phase transition necessitates a discontinuity in the scalar field \(\sigma\), the transition does not take place exactly at the critical temperature \(T_{c}\) along the critical coexistence line of the phase diagram in figure 1. Consequently, the false vacuum \(\sigma_{FV}\) should be specified properly before we obtain the exact numerical solutions of the equation of motion. As discussed in Sec.III, for the first-order phase transition the typical potential of the \(\sigma\) field exhibits three distinct extrema. One is the maximum of the effective potential, which represents the barrier between the quark and hadron phases, while the others are the two minima of the effective potential corresponding to the false and true vacua. In order to distinguish these two vacua, in the former section we denoted the two local minima as \(\sigma_{l}\) and \(\sigma_{h}\), respectively. Here, \(\sigma_{l}\) is the value of the \(\sigma\) field at the minimum of the potential close to the origin, whereas \(\sigma_{h}\) represents the other minimum of the potential at a relatively larger sigma. Traditionally, \(\sigma_{l}\) is referred to as the quark phase, due to the fact that the chiral symmetry is approximately restored and the quarks lose most of their constituent masses at \(\sigma=\sigma_{l}\). On the other hand, \(\sigma_{h}\) stands for the hadron phase, because the chiral symmetry is spontaneously broken and the quarks have larger constituent masses. Thus the quark phase is the stable vacuum and the hadron phase is the metastable vacuum for \(T>T_{c}\). On the contrary, when \(T<T_{c}\), the hadron phase becomes stable and the original quark phase becomes metastable, and these two vacua are exchanged at \(T=T_{c}\).
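For later reference, the thin-wall quantities of Eqs. (23) and (24), together with the rate of Eq. (13), can be estimated with a few lines of quadrature. The sketch below is our own: it assumes the dimensional-analysis prefactor \(T^{4}\), subtracts the false-vacuum energy as noted above, and converts the surface tension to \(\mathrm{MeV/fm^{2}}\) with \(\hbar c\simeq 197.33\) MeV fm; it is meaningful only close to \(T_{c}\).

```python
import numpy as np
from scipy.integrate import quad

HBARC = 197.327  # MeV*fm

def thin_wall(W, s_fv, s_tv, T):
    """Thin-wall surface tension (Eq. 23), action (Eq. 24), rate (Eq. 13).

    W(s) = Omega(s; T, mu) in MeV^4, sigma in MeV; valid only near T_c,
    where the two minima of W are nearly degenerate.
    """
    dW = lambda s: max(W(s) - W(s_fv), 0.0)   # clip tiny negatives away from T_c
    sigma_tw = quad(lambda s: np.sqrt(2.0 * dW(s)), s_fv, s_tv, limit=200)[0]
    eps = W(s_fv) - W(s_tv)                   # free-energy gain of the true vacuum
    S3 = 16.0 * np.pi / 3.0 * sigma_tw**3 / eps**2
    rate = T**4 * np.exp(-S3 / T)             # dimensional-analysis prefactor
    return sigma_tw / HBARC**2, S3, rate      # Sigma in MeV/fm^2, S3 in MeV

# Usage with the omega(...) sketch of Sec. III (vacuum positions illustrative):
# W = lambda s: omega(s, T=20.0, mu=306.0)
# print(thin_wall(W, s_fv=10.0, s_tv=80.0, T=20.0))
```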
In order to get an intuitive description of false vacuum decay at finite temperature and density, the dynamics of a first-order phase transition at metastability is classified into two categories: top-down and bottom-up. More specifically, by "top-down" we mean that we study the false vacuum decay at finite temperature and density for temperatures \(T<T_{c}\), with the initial false vacuum of the system prepared as the quark phase at \(\sigma=\sigma_{l}\) for \(T\simeq T_{c}\) when the system is cooling down from very high temperature. On the other side, by "bottom-up" we mean that the original false vacuum at \(T\simeq T_{c}\) is the hadron phase at \(\sigma=\sigma_{h}\), and we study the decay of the hadron phase for temperatures \(T>T_{c}\) in the event that the system is heated up from the low energy state.

### Top-down

As the system cools down from very high temperature to \(T\simeq T_{c}\), the initial false vacuum can be causally set as the quark phase, since the first-order transition does not happen exactly at \(T_{c}\) but upon lowering the temperature. The appearance of a bubble of the hadron phase (the stable state) inside the quark phase (the metastable state) is a natural consequence of the thermal fluctuations of the thermodynamical system sufficiently close to the coexistence line of the phase diagram. To study the dynamics of a first-order phase transition in this "top-down" scenario for \(T<T_{c}\), we numerically solve the equation of motion in Eq.(17) with the specific boundary conditions \(\sigma\rightarrow\sigma_{l}\) as \(r\rightarrow\infty\) and \(d\sigma(0)/dr=0\). Here, the false vacuum \(\sigma_{l}\) is temperature-dependent, because the local minimum of the effective potential shifts with increasing temperature at fixed chemical potential. For \(\mu=306\) MeV, the exact numerical solutions for the temperatures \(T=15\), 16, 17, 18, 19, 20 MeV are plotted in the left panel of Fig.3. It is shown that, as the temperature decreases from \(T_{c}\simeq 20.6\) MeV, all curves approach their false vacuum \(\sigma_{l}\) at large radius \(r\), whereas \(\sigma(r)\) at the center of the bubble deviates significantly from the stable vacuum value \(\sigma=\sigma_{h}\). When the temperature is sufficiently close to the critical temperature, the \(\sigma\) field at the center of the bubble only slightly deviates from its stable vacuum value at \(\sigma=\sigma_{h}\); however, as soon as \(T<T_{c}\), the field \(\sigma(0)\) is visibly different from the stable vacuum value. Such a deviation can be qualitatively explained by the "overshoot-undershoot" argument due to Coleman [29; 30]. According to this argument, the equation of motion (17) is reinterpreted as the equation for a particle moving in the "upside-down" potential energy \(\Omega\), with the second term in the field equation acting as a damping force. When the system is very close to the critical coexistence line, the bubble radius \(R\) is very large, so the damping force can be neglected; the field \(\sigma\) can then start to roll down from the top of the inverted effective potential around \(\sigma\simeq\sigma_{h}\) and come to rest at the false vacuum \(\sigma_{l}\).
However, as the temperature descends, the radius of the bubble decreases accordingly and the damping force in the field equation becomes important; consequently, the field \(\sigma(0)\) deviates from its true vacuum value more and more dramatically. In other words, the thin-wall approximation mentioned above is expected to become invalid, and any extension of the thin-wall approximation to temperatures away from \(T_{c}\) should be checked very carefully. A similar discussion applies to the second case, where the chemical potential is fixed at \(\mu=309\) MeV. The critical bubble profiles at different temperatures are illustrated in the right panel of Fig.3, where the temperatures are taken as \(T=1\), 4, 6, 8, 10, 12 MeV from left to right. The evolution of \(\sigma(r)\) at different temperatures shows that the typical radius of the critical bubble increases with the temperature, and \(\sigma(r)\) approaches its false vacuum value \(\sigma=\sigma_{l}\) as \(r\rightarrow\infty\). Also, from this figure, as long as the temperature is lower than the critical temperature \(T_{c}\), \(\sigma(0)\) departs significantly from the stable vacuum value \(\sigma=\sigma_{h}\); this nontrivial behavior of \(\sigma(r)\) at the center of the bubble can also be interpreted as a limit to the applicability of the thin-wall approximation. Once the bubble profiles have been solved, the surface tension of the nucleation bubble interface between the false vacuum and the stable vacuum follows as a function of the temperature from the definition in Eq.(19); the results are shown in Fig.4 for the chemical potentials \(\mu=306\) MeV and \(\mu=309\) MeV.

Figure 3: (Color online) (a) Critical bubble profiles for different temperatures when fixing the chemical potential at \(\mu=306\) MeV for \(T<T_{c}\). From left to right, the curves correspond to \(T=15\), 16, 17, 18, 19 and 20 MeV. (b) Critical bubble profiles for different temperatures when fixing the chemical potential at \(\mu=309\) MeV for \(T<T_{c}\). From left to right, the curves correspond to \(T=1\), 4, 6, 8, 10 and 12 MeV.

For both cases, with increasing temperature, the surface tension \(\Sigma(T)\) first grows and reaches a maximum at a certain temperature, after which it inflects and slopes downwards. These nontrivial behaviors of \(\Sigma(T)\) were also reported for a weak first-order phase transition [54] and a strong first-order phase transition [36] using the exact numerical method, but these nontrivial properties are completely washed out by the thin-wall approximation. As shown in the left panel of Fig.2, for a weak first-order phase transition there exists a spinodal temperature \(T_{c1}\) at which the small barrier between the two minima of the potential disappears. As a consequence, there is no bubble solution anymore for \(T<T_{c1}\), and the surface tension decreases rapidly to zero as \(T\to T_{c1}\). But for a strong first-order phase transition, according to the standard criterion guaranteeing the existence of a stable bounce or soliton, it is indispensable for the potential of the order parameter \(\sigma\) to exhibit three distinct extrema. Hence, we have a nontrivial solution to the equation of motion (17) for any temperature \(T<T_{c}\). The surface tension then declines to a nonzero value with decreasing temperature, as depicted in the right panel of Fig.4.
This is the main difference between a strong first-order phase transition and a weak one, and it is the reason why we separated the first-order region of the phase diagram into two categories, a strong one and a weak one, in the previous discussion. We now determine the \(S_{3}/T\) exponent in Eq.(13), which is the saddle-point action evaluated on the bounce solution. Because the decay rate per unit volume is what we are interested in, the argument of the exponential, \(S_{3}/T\), is more important than the prefactor \(\mathcal{P}\) whenever \(S_{3}/T\) is larger than unity.

Figure 4: (Color online) (a) Surface tension as a function of temperature \(T\) for \(T\leq T_{c}\) at \(\mu=306\) MeV. (b) Surface tension as a function of temperature \(T\) for \(T\leq T_{c}\) at \(\mu=309\) MeV.

Figure 5: (Color online) (a) The saddle-point action as a function of temperature \(T\) for \(T\leq T_{c}\) at \(\mu=306\) MeV. (b) The saddle-point action as a function of temperature \(T\) for \(T\leq T_{c}\) at \(\mu=309\) MeV.

As shown in the following discussions, for most of the studies in the present work, an estimate of the prefactor based on dimensional analysis is sufficient. To show the saddle-point action due to the appearance of the critical bubble and its crucial role in the nucleation rate for the first-order phase transition, the \(S_{3}/T\) exponent as a function of the temperature \(T\) at different chemical potentials is plotted in Fig.5. As shown in the left panel of Fig.5, for a weak first-order phase transition at \(\mu=306\) MeV, \(S_{3}/T\) starts from zero when the temperature is at the spinodal temperature \(T_{c1}\), because there the barrier disappears and only a trivial solution of the field equation for the \(\sigma\) field remains. It then climbs up very quickly with increasing temperature and tends to diverge near the critical temperature \(T_{c}\). By the exponential form of equation (13), \(\Gamma\) is strongly suppressed by the saddle-point action, and the system is likely to stay in the metastable vacuum for a relatively long time as long as \(S_{3}/T>1\). Therefore, for a weak first-order phase transition, the system can be trapped in the quark phase even when the temperature is below the critical temperature \(T_{c}\), down to the temperature at which \(S_{3}/T\simeq 1\). Below that, the exponential factor is unimportant and the probability of false vacuum decay through the barrier is essentially enhanced by the thermodynamical fluctuations. From the left panel of Fig.5, for \(\mu=306\) MeV, the remarkable temperature is about \(T\simeq 16\) MeV, where \(S_{3}/T\simeq 1\); it is very close to the spinodal temperature \(T_{c1}\simeq 14.7\) MeV. This indicates that the quark phase can survive safely down to temperatures near the lower spinodal temperature \(T_{c1}\) for a weak first-order phase transition. When fixing the chemical potential at \(309\) MeV, the resulting plot of the \(S_{3}/T\) exponent as a function of the temperature \(T\) is shown in the right panel of Fig.5. In this case, \(S_{3}/T\) first decreases with increasing temperature, then reaches a minimum and starts to grow again very quickly. As the temperature approaches the critical temperature \(T_{c}\), it becomes divergent.
In comparison with the former case of a weak first-order phase transition, for a strong one the saddle-point action as a function of the temperature \(T\) usually exhibits a non-monotonic behavior with increasing temperature; this can be taken as one of the salient properties of the strong first-order phase transition [36]. Besides the nontrivial behavior of the saddle-point action, there is another obvious difference from the weak first-order case. For \(T<T_{c}\), no matter what temperature we take, \(S_{3}/T\) is always larger than unity. Thus, for a strong first-order phase transition in the present model, the conversion of the quark phase into the hadron phase appears to be exponentially suppressed for \(T\leq T_{c}\) and \(\mu\geq\mu_{c}\) with \(\mu_{c}\simeq 308\) MeV, and the system is likely to stay in the quark phase rather than the hadron phase; this could in turn induce various exotic structures in the phase of strongly interacting matter at high density and low temperature, due to the presence of free quarks.

### Bottom-up

By "bottom-up" we mean that the system is heated up from low energy to very high energy and the starting point is set as the hadron phase at \(T\simeq T_{c}\). With a further increase of the temperature, for \(T>T_{c}\), the appearance of a bubble of the quark phase (the stable vacuum) inside the hadron phase (the false vacuum) is also treated as a natural consequence of the thermal fluctuations of the thermodynamical system sufficiently near the coexistence line of the phase diagram. Therefore, in the "bottom-up" scenario with \(T>T_{c}\), we numerically solve the equation of motion in Eq.(17) with the specific boundary conditions \(\sigma\rightarrow\sigma_{h}\) as \(r\rightarrow\infty\) and \(d\sigma(0)/dr=0\), because the false vacuum is now located at the hadron phase and the quark phase is the stable state. In what follows, we begin by showing the critical bubble profiles obtained from the exact numerical solution of Eq.(17) with the boundary conditions \(\sigma(\infty)=\sigma_{h}\) and \(\sigma^{\prime}(0)=0\) when fixing the chemical potentials at \(\mu=306\) MeV and \(\mu=309\) MeV in Fig.6. Unlike the results presented in the "top-down" scenario for \(T<T_{c}\), where two different kinds of first-order phase transition appear, for \(T>T_{c}\) the results in both panels of Fig.6 exhibit quite similar features. For both cases, as the temperature increases from the critical temperature toward the upper spinodal line at \(T=T_{c2}\), the typical size of the bubble, which can be approximately estimated from the maximal value of the quantity \(|\sigma^{\prime}(r)|\), decreases rapidly to zero, because the barrier between the two minima of the potential disappears and there is no stable bubble solution anymore for \(T\geq T_{c2}\). Furthermore, the structures of the critical bubbles also share similar properties. When the temperature is close to the critical temperature \(T_{c}\), the critical bubble has an obvious "core" structure with \(\sigma\simeq\sigma_{l}\), separated by a relatively thin wall from the region \(\sigma\simeq\sigma_{h}\). On the other side, when the temperature comes up to the other critical point at \(T\simeq T_{c2}\), the critical bubble usually develops a "coreless" structure, since the thickness of the critical bubble becomes of the same order as its radius and the field at the origin, \(\sigma(r=0)\), departs largely from its true vacuum \(\sigma_{l}\).
At last, the curves in Fig.6 can be explained qualitatively according to the "overshoot-undershoot" argument given by Coleman. When the temperature is very close to the critical temperature \(T_{c}\), the potential has two nearly degenerate vacua and the damping force is negligibly small, so the field at the escape point \(\sigma(r=0)\) starts at the top of the effective potential around \(\sigma\simeq\sigma_{l}\). On the contrary, as the temperature rises, the two vacua become non-degenerate and the damping force takes effect; the field \(\sigma(r=0)\) then deviates dramatically from its vacuum value, especially as \(T\to T_{c2}\). In other words, the thin-wall approximation is not expected to remain valid, and any extension of it to higher temperatures should be checked very carefully, particularly when the temperature is close to the up spinodal line. From the definition of the surface tension in Eq.(19), we plot the temperature dependence of the surface tension for \(T\geq T_{c}\) at fixed chemical potentials \(\mu=306\) MeV and \(\mu=309\) MeV in Fig.7. For \(\mu=306\) MeV, the values of the surface tension lie between about 1.6 MeV and 0 MeV, while for the larger chemical potential \(\mu=309\) MeV they range from about \(2.8\) MeV down to zero. The largest values of the surface tension occur near the critical line, since this domain is characterized by large barriers and a small energy difference between the true and false vacua. Besides, the result in Fig.7 also implies that the surface tension near the critical line at \(T\simeq T_{c}\) increases with the chemical potential. Moreover, in both cases the surface tension declines continuously to zero as the temperature approaches the up spinodal line at \(T\simeq T_{c2}\). Therefore, in the bottom-up scenario \(\Sigma(T)\) is a monotonically decreasing function of \(T\), whereas in the top-down scenario it is a non-monotonic function and has a nontrivial behavior. The nontrivial evolution of \(\Sigma(T)\) implies that the temperature-dependent surface tension has a maximum at a specific temperature, which can be taken as a limit on the applicability of the thin-wall approximation [36; 54]. Thus, for the bottom-up scenario, we need to develop an alternative method to estimate the range in which the thin-wall approximation is valid.

Figure 6: (Color online) (a) Critical bubble profiles for different temperatures when fixing the chemical potential at \(\mu=306\) MeV for \(T>T_{c}\). From right to left, the curves correspond to \(T=20.7\), \(21\), \(22\) and \(23\) MeV. (b) Critical bubble profiles for different temperatures when fixing the chemical potential at \(\mu=309\) MeV for \(T>T_{c}\). From right to left, the curves correspond to \(T=14\), \(15\), \(16\) and \(17\) MeV.

Figure 7: (Color online) (a) Surface tension as a function of temperature \(T\) for \(T\geq T_{c}\) at \(\mu=306\) MeV. (b) Surface tension as a function of temperature \(T\) for \(T\geq T_{c}\) at \(\mu=309\) MeV.

To study the dynamics of a first-order phase transition, the last important quantity to be evaluated is the saddle-point action \(S_{3}/T\) due to the activation of a nucleation bubble, which is an essential ingredient of the nucleation rate per unit time per unit volume in Eq.(13).
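Given a bubble profile, the surface tension and saddle-point action follow by simple quadrature. The sketch below assumes the standard forms \(\Sigma=\int_{0}^{\infty}\mathrm{d}r\,(\mathrm{d}\sigma/\mathrm{d}r)^{2}\) and \(S_{3}=4\pi\int_{0}^{\infty}\mathrm{d}r\,r^{2}\left[\frac{1}{2}(\mathrm{d}\sigma/\mathrm{d}r)^{2}+V(\sigma)-V_{\mathrm{false}}\right]\) as stand-ins for Eq.(19) and for the action entering Eq.(13):

```python
import numpy as np

def surface_tension(r, sigma):
    # Sigma = int_0^infty dr (dsigma/dr)^2 -- the form assumed for Eq.(19)
    ds = np.gradient(sigma, r)
    return np.trapz(ds**2, r)

def saddle_action(r, sigma, V, V_false):
    # S_3 = 4 pi int dr r^2 [ (dsigma/dr)^2 / 2 + V(sigma) - V(false vacuum) ]
    ds = np.gradient(sigma, r)
    return np.trapz(4.0 * np.pi * r**2 * (0.5 * ds**2 + V(sigma) - V_false), r)

# Quick check on an analytic kink profile (a placeholder, not a solution
# of the model): sigma(r) = tanh((r - R)/w) has surface tension 4/(3w).
r = np.linspace(1e-6, 60.0, 4000)
sigma = np.tanh((r - 20.0) / 1.0)
print(surface_tension(r, sigma))  # ~ 1.333
```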
In Fig.8, \(S_{3}/T\) is plotted as a function of temperature \(T\) for \(T\geq T_{c}\) when fixing the chemical potentials at \(\mu=306\) MeV and \(\mu=309\) MeV. As the temperature approaches the critical temperature \(T_{c}\) from above, \(S_{3}/T\) increases very quickly and diverges near \(T_{c}\). In the opposite direction, as the temperature approaches the spinodal critical temperature \(T_{c2}\), \(S_{3}/T\) decreases rapidly to zero. During this process the quantity of interest is the moment when \(S_{3}/T\) is of order unity, because for \(S_{3}/T>1\) the nucleation rate \(\Gamma\) is strongly suppressed by the exponential factor and the system is likely to stay in the false vacuum for a very long time. From the left panel of Fig.8, for \(\mu=306\) MeV, \(S_{3}/T\simeq 1\) at a temperature of about 22.8 MeV, so the system is likely to remain in the hadron phase as long as the temperature stays below 22.8 MeV. For \(\mu=309\) MeV, this specific temperature is approximately 18.4 MeV; this indicates that the hadron phase can likewise maintain its existence as long as the temperature is below 18.4 MeV. Since these two specific temperatures are very close to their spinodal critical temperatures \(T_{c2}\), as a rough estimate we can simply take the spinodal critical line in the phase diagram as the boundary for the stable existence of the false vacuum in a first-order phase transition.

## VI Conclusion

In this work we have computed the effective potential of the two-flavor quark-meson model at finite temperature and density in the presence of a fermionic vacuum term. From the in-medium effective potential, the phase diagram together with the critical end point has been obtained, and the up and low spinodal lines have been calculated explicitly for the first-order hadron-quark phase transition. Based on the low spinodal line, the first-order phase transition can be further divided into strong and weak ones in the phase diagram when the temperature is below the critical coexistence line. The critical chemical potential is \(\mu\simeq 308\) MeV, where the low spinodal line terminates. Thus, for \(T<T_{c}\), the transition is a weak first-order one for \(\mu<308\) MeV, whereas it is a strong first-order one for \(\mu>308\) MeV. Given the temperature-dependent effective potential, the problem of homogeneous bubble nucleation in a first-order phase transition can then be investigated. For convenience, we have separated our discussion into two scenarios: the top-down case and the bottom-up case. By "top-down", we consider the quark phase as a metastable (false) vacuum and the hadron phase as a stable (true) vacuum when \(T<T_{c}\). On the contrary, by "bottom-up", we mean that the false vacuum is the hadron phase whereas the true vacuum is the quark phase for \(T>T_{c}\). Then, for the former case the boundary condition at infinite radius is \(\sigma=\sigma_{l}\), whereas it is \(\sigma=\sigma_{h}\) for the latter case. With these specific boundary conditions, a saddle-point solution of the field equation was obtained, yielding the exact bubble profiles.
Usually, when the temperature is close to the critical temperature \(T_{c}\), the bubble profile shows a "core" structure with the sigma field at its true vacuum value \(\sigma\simeq\sigma_{TV}\) separated by a relatively thin wall from the false vacuum at \(\sigma\simeq\sigma_{FV}\). However, when the temperature approaches the spinodal critical temperature, the bubble profile exhibits a coreless structure, since then the thickness of the critical bubble is of the same order as its radius and the \(\sigma\) field inside the bubble departs significantly from its true vacuum value.

Figure 8: (Color online) (a) The saddle-point action as a function of temperature \(T\) for \(T\geq T_{c}\) at \(\mu=306\) MeV. (b) The saddle-point action as a function of temperature \(T\) for \(T\geq T_{c}\) at \(\mu=309\) MeV.

The calculation of the surface tension between the quark phase and the hadron phase was also presented for these two different scenarios. In the "top-down" context, the surface tension first increases to a maximum value and then decreases as the temperature increases. The maximum of the surface tension can be taken as a limit on the reliability of the thin-wall approximation, because at this point the bubble profile shows the largest deviation from the thin-wall form. On the other hand, in the "bottom-up" context the surface tension is monotonic in the temperature, descending continuously from its maximal value at the critical temperature \(T_{c}\) to zero as \(T\to T_{c2}\). As is well known, the surface tension plays an important role in nuclear physics and astrophysics and has attracted much attention recently. To provide a comprehensive reference for related research, the surface tension along the critical coexistence line is presented in Fig.9; we believe the surface tension in the remaining domain of the first-order phase transition can be readily extracted and estimated using the method of this work. Consequently, the present model predicts a surface tension of \(\Sigma\sim 0-4\) MeV/fm\({}^{2}\). Our results are very close to those recently found for the same model in the thin-wall approximation in Ref.[45], since the thin-wall approximation is a reliable tool for \(T\simeq T_{c}\). Note that most effective models also predict small values, \(\Sigma\leq 30\) MeV/fm\({}^{2}\); examples include the MIT bag model [56], the NJL model [57; 58], the three-flavor PQM model [43], the nucleon-meson model [59] and the Friedberg-Lee model [36]. Such a small value of the surface tension would allow quark-matter formation during core-collapse supernova explosions and favors a mixed phase in the cores of compact stars, so this reasonably small surface tension could provide an observable signal of the first-order phase transition within compact stars and play an important role in astrophysics. For a weak first-order phase transition, our results show that the saddle-point action of critical bubbles decreases rapidly from infinity at the critical temperature to zero at the spinodal critical temperature. This implies that for a weak first-order phase transition there is always a moment at which \(S_{3}/T\simeq 1\). However, for a strong first-order phase transition, the saddle-point action of critical bubbles shows quite different characteristics.
As the temperature increases, it first decreases to a minimum value, then rises and diverges as the temperature approaches the critical temperature \(T_{c}\). This non-monotonic behavior of \(S_{3}/T\) with increasing temperature was also reported in a previous study based on the Friedberg-Lee model [36]; thus it can be taken as a salient feature of a strong first-order phase transition in comparison with weak ones. Another interesting feature of \(S_{3}/T\) is that, for a strong first-order phase transition, the saddle-point action never gets the chance to reach unity. This result can be roughly understood from the earlier study in Fig.6 of Ref.[36], where the evolution of \(S_{3}/T\) as a function of the chemical potential shows that, at fixed temperature, the saddle-point action increases and quickly crosses unity when the chemical potential rises to \(\sim 231\) MeV. Hence \(S_{3}/T\) is believed to satisfy the condition \(S_{3}/T>1\) in the present study, since we have a lower temperature and a larger chemical potential. Given the exponential dependence of \(\Gamma\) on \(S_{3}/T\), the decay of the false vacuum is exponentially suppressed and the system is likely to stay in the metastable state for a relatively long time when \(S_{3}/T>1\). Thus the false vacuum can survive as a metastable state as long as the temperature lies between the up and low spinodal lines; only when the temperature comes close to the spinodal critical line does \(S_{3}/T\) descend through unity. More specifically, "conventional" hadron matter between the critical coexistence line and the low spinodal line could potentially be treated as quark matter, and then the exotic structures of strong-interaction matter, such as quarkyonic matter [60; 61], pion superfluidity [62] and color superconductors [63; 64], could be present there too.

Figure 9: (Color online) Surface tension as a function of the quark chemical potential when \(T\simeq T_{c}\). The solid line is for the "top-down" scenario and the dashed line is for the "bottom-up" scenario.

###### Acknowledgements.
We thank Ang Li, Jinshuang Jin, Ken D. Olum and Xinjian Wen for valuable comments and discussions. This work is supported in part by the National Natural Science Foundation of China (NSFC) under No.11675048.
2308.16778
Norm Convergence Rate for Multivariate Quadratic Polynomials of Wigner Matrices
We study Hermitian non-commutative quadratic polynomials of multiple independent Wigner matrices. We prove that, with the exception of some specific reducible cases, the limiting spectral density of the polynomials always has a square root growth at its edges and prove an optimal local law around these edges. Combining these two results, we establish that, as the dimension $N$ of the matrices grows to infinity, the operator norm of such polynomials $q$ converges to a deterministic limit with a rate of convergence of $N^{-2/3+o(1)}$. Here, the exponent in the rate of convergence is optimal. For the specific reducible cases, we also provide a classification of all possible edge behaviours.
Jacob Fronk, Torben Krüger, Yuriy Nemish
2023-08-31T14:56:05Z
http://arxiv.org/abs/2308.16778v1
# Norm Convergence Rate for Multivariate Quadratic Polynomials of Wigner Matrices ###### Abstract We study Hermitian non-commutative quadratic polynomials of multiple independent Wigner matrices. We prove that, with the exception of some specific reducible cases, the limiting spectral density of the polynomials always has a square root growth at its edges and prove an optimal local law around these edges. Combining these two results, we establish that, as the dimension \(N\) of the matrices grows to infinity, the operator norm of such polynomials \(q\) converges to a deterministic limit with a rate of convergence of \(N^{-2/3+o(1)}\). Here, the exponent in the rate of convergence is optimal. For the specific reducible cases, we also provide a classification of all possible edge behaviours. 1 Footnote 1: Partially supported by VILLUM FONDEN research grant no. 29369 E-mail addresses: [email protected] (J. Fronk), [email protected] (T. Krüger), [email protected] (Yu. Nemish) _Keywords: polynomials of random matrices, local laws, Dyson equation, extreme eigenvalues_ **AMS Subject Classification: 60B20, 15B52** ## 1 Introduction The empirical spectral distribution of a random matrix is typically well approximated by a deterministic measure as its dimension grows to infinity. A clear contender for the most famous example of such a convergence is the celebrated semi-circle law. It states that the spectral measure of a Wigner matrix, a Hermitian \(N\times N\)-matrix \(\mathbf{X}\) with centered i.i.d. entries \(x_{ij}\) above the diagonal and \(\mathbb{E}|x_{ij}|^{2}=N^{-1}\), converges to the semi-circle distribution, supported on the interval \([-2,2]\)[51]. In particular, the largest and smallest eigenvalues of \(\mathbf{X}\) converge to the respective edges of the support, implying the convergence \(\|\mathbf{X}\|\to 2\) of the operator norm, provided the fourth moments of the entries of \(\sqrt{N}\mathbf{X}\) are finite [7, 8]. For non-commutative Hermitian polynomials \(\mathbf{Q}=q(\mathbf{X}_{1},\ldots,\mathbf{X}_{l})\) in several independent Wigner matrices \(\mathbf{X}_{i}\) an analogous statement holds. In this setup the limit of the eigenvalue distribution equals the distribution associated to the polynomial \(\mathfrak{q}=q(\mathfrak{s}_{1},\ldots,\mathfrak{s}_{l})\), where the matrices are replaced by free semicircular random variables within a non-commutative probability space, i.e. \(\mathfrak{s}_{i}\) can be interpreted as operators acting on an infinite dimensional Hilbert space. This result was first established for Gaussian random matrices in [50] and extended to Wigner matrices in [18]. Similarly, the convergence of the norms \(\|\mathbf{Q}\|\to\|\mathfrak{q}\|\) was first shown by Haagerup-Thorbjornsen [31] in the Gaussian case and the Wigner case was proven by Anderson in [2]. Such results have also been shown when some \(\mathbf{X}_{i}\) are replaced by non-random matrices [12, 39]. For non-Hermitian polynomials convergence of the spectral measure to the limiting Brown measure, predicted by free probability theory, is known only for very specific cases, e.g. for products of random matrices [29, 41] and for quadratic polynomials [17]. Determining the limiting spectral measure \(\rho\) of Hermitian polynomials \(\mathbf{Q}\), or equivalently the distribution of \(\mathfrak{q}\), becomes a nontrivial task beyond particular computable cases. Several works have been devoted to the analysis of the regularity properties of \(\rho\). 
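As a quick numerical illustration of this classical statement (a sanity check, not part of the proofs below), one can sample a Wigner matrix with the normalization \(\mathbb{E}|x_{ij}|^{2}=N^{-1}\) and observe the operator norm approaching the spectral edge at \(2\):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2000
# Complex Ginibre with unit-variance entries, then Hermitized so that
# the Wigner normalization E|x_ij|^2 = 1/N holds.
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
X = (A + A.conj().T) / np.sqrt(2 * N)
evals = np.linalg.eigvalsh(X)
print(evals.max(), evals.min())  # both within O(N^{-2/3}) of +/- 2
```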
It has been shown that \(\rho\) has a single interval support [30] and does not contain any atoms [38, 44]. The cumulative distribution function of \(\rho\) is Hölder-continuous [9], and if \(q\) is a monomial or a homogeneous quadratic polynomial then \(\rho\) is even absolutely continuous [15, 21]. In the present paper we consider the case when \(q\) is a general polynomial of degree two, i.e. we study self-adjoint polynomials in \(l\) independent Wigner matrices \(\mathbf{X}_{1},\ldots,\mathbf{X}_{l}\) of the form \[q(\mathbf{X}_{1},\ldots,\mathbf{X}_{l})=\sum_{i,j=1}^{l}\mathbf{X}_{i}A_{ij}\mathbf{X}_{j}+\sum_{i=1}^{l}b_{i}\mathbf{X}_{i}+c, \tag{1.1}\] where \(0\neq A=(A_{ij})\in\mathbb{C}^{l\times l}\) is a Hermitian matrix, \(b=(b_{i})\in\mathbb{R}^{l}\) and \(c\in\mathbb{R}\). For these polynomials we classify the edge behaviour of \(\rho\) and show (see Proposition 2.7 below) that at both edges the limiting spectral measure is absolutely continuous and, apart from specific reducible cases, its density exhibits a square root growth. The reducible cases are, up to a shift and change in sign, of the form \(\mathbf{Y}^{*}\mathbf{Y}\), where \(\mathbf{Y}\) is an affine combination of the underlying Wigner matrices. Such polynomials still have a square root edge at the rightmost point of the spectrum, but have a density blow-up at the leftmost point if it is equal to zero. All these cases are classified in Proposition 2.8 below. The square root growth of the limiting spectral distribution is a well-known phenomenon that is already present in the semicircle law for Wigner matrices \(\mathbf{X}\). In this setup the rate of convergence for the norm of \(\mathbf{X}\) is \(\|\mathbf{X}\|=2+\mathcal{O}(N^{-2/3+o(1)})\) with very high probability [20] and several tail estimates have been established [5, 6, 23]. In fact, for Wigner matrices the distribution of the largest eigenvalue is known to be universal and given by the Tracy-Widom law [25, 45, 46]. This distribution was first identified by Tracy and Widom for the Gaussian ensembles [48, 49], and necessary and sufficient conditions for its universality in the context of Wigner matrices were identified in [37]. Edge universality has been extended to many other Hermitian random matrix models, including invariant ensembles [14], covariance matrices [26, 43], deformed Wigner matrices [36], deterministic matrices with Gaussian perturbations [35] and models with correlations [1, 42]. Such universality results often rely on control of the eigenvalue location on mesoscopic scales between \(O(1)\) and \(O(N^{-1})\), i.e. on a local law. Local laws arose in the context of Wigner matrices [20, 47] and have subsequently been extended to the more complex models listed above. For non-commuting polynomials of several random matrices local laws are known only in specific cases, starting with Anderson's work on the anti-commutator \(\mathbf{X}_{1}\mathbf{X}_{2}+\mathbf{X}_{2}\mathbf{X}_{1}\) of Wigner matrices [3], which controls the deviation of bulk eigenvalues from their expected position on the scale \(\mathcal{O}(N^{-1/2})\). For general polynomials that satisfy certain checkable conditions, an optimal local law in the bulk regime was proved in [21]. Related results are [10] and [11], where it was shown that for two random matrices satisfying a local law and whose eigenvectors are in generic direction to each other also their sum satisfies a local law at the edge and in the bulk. This result covers e.g.
\(\mathbf{X}_{1}^{2}+\mathbf{X}_{2}^{3}\) if one of these matrices is a Gaussian unitary ensemble. For non-Hermitian polynomials the results [40] and [28] cover products of independent matrices with i.i.d. entries, and [17] covers quadratic polynomials. Currently the best estimate on the convergence rate of the norm for general polynomials of GUE matrices is \(-N^{-\varepsilon}\leq\|\mathbf{Q}\|-\|\mathfrak{q}\|\leq CN^{-1/4}\), for some \(\varepsilon<\frac{1}{3}\) and \(C>0\), established in [16]. Our work improves this bound for polynomials \(\mathbf{Q}=q(\mathbf{X}_{1},\ldots,\mathbf{X}_{l})\) of the form (1.1) to the optimal rate of \(N^{-2/3+o(1)}\) with square root growth at the edges of the spectral density and extends the result to Wigner matrices. The main novelty here is a detailed analysis of the Dyson equation, describing a generalized resolvent of the linearization matrix associated with the polynomial in the limit \(N\to\infty\) and, consequently, the resolvent of \(\mathbf{Q}\) itself. The idea of linearising polynomials of random matrices in this way stems from [30, 31] and has been used in many works since, in particular in [2, 4, 13, 32, 33]. In particular, we perform a comprehensive stability analysis of the Dyson equation that allows us (i) to prove a square root growth of the limiting spectral density \(\rho\) and (ii) to establish an optimal bound on the difference between the solution to the Dyson equation and the generalized resolvent by using a modification of the bound on the random error matrix in the Dyson equation from [22]. The main insight is that the matrix Dyson equation for the linearization, which has a linear self-energy term, can be reduced to a scalar equation for a function \(m=m(z)\) of the form \[-\frac{1}{m}=z+\gamma(m), \tag{1.2}\] where the self-energy term \(\gamma(m)\) is now a non-linear function of \(m\). This representation allows us to identify the values of the spectral edges in terms of the coefficients of \(q\) and study the quadratic singularity at these edge points.

### Notations

In this section, we introduce some definitions commonly used throughout the paper. The standard scalar product of vectors \(v,w\in\mathbb{C}^{n}\) will be denoted by \(\langle v,w\rangle\) and the standard euclidean norm by \(\|v\|=\sqrt{\langle v,v\rangle}\). A vector \(v\) is called normalized if \(\|v\|=1\). Matrices \(R\in\mathbb{C}^{k\times n}\) for (fixed) \(k,n\in\mathbb{N}\) are usually denoted by non-boldfaced roman letters and matrices \(\mathbf{R}\in\mathbb{C}^{kN\times nN}\) are usually denoted by boldfaced roman letters. In particular, we denote the identity matrix on \(\mathbb{C}^{k\times k}\) by \(I_{k}\) and the identity matrix on \(\mathbb{C}^{kN\times kN}\) by \(\mathbf{I}_{kN}\). For any \(k,n\in\mathbb{N}\) we embed \(\mathbb{C}^{k\times n}\) in \(\mathbb{C}^{k\times n}\otimes\mathbb{C}^{N\times N}\) by identifying \(R\in\mathbb{C}^{k\times n}\) with \(R\otimes\mathbf{I}_{N}\in\mathbb{C}^{k\times n}\otimes\mathbb{C}^{N\times N}\cong\mathbb{C}^{kN\times nN}\) and write compactly \[R=R\otimes\mathbf{I}_{N}\in\mathbb{C}^{kN\times nN}. \tag{1.3}\] Matrices \(R\in\mathbb{C}^{k\times n}\), which are embedded into \(\mathbb{C}^{kN\times nN}\), are still denoted by non-boldfaced letters. For \(R,T\in\mathbb{C}^{n\times n}\) we denote the normalized trace by \(\langle R\rangle=\frac{1}{n}\operatorname{Tr}R\) and define a scalar product by \(\langle R,T\rangle:=\langle R^{*}T\rangle\).
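For readers who wish to experiment numerically, the embedding (1.3) and the normalized trace have direct counterparts in code (a small illustration, not used in the proofs):

```python
import numpy as np

k, N = 2, 4
R = np.array([[1.0, 2.0], [3.0, 4.0]])  # R in C^{k x k}
R_emb = np.kron(R, np.eye(N))           # R tensor I_N in C^{kN x kN}

def ntr(M):
    # normalized trace <M> = Tr(M) / dim
    return np.trace(M) / M.shape[0]

print(ntr(R_emb), ntr(R))               # equal: the embedding preserves <.>
```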
We use the standard operator norm and the Hilbert-Schmidt norm, which are given by \[\|R\|=\sup_{\|x\|\leq 1}\|Rx\|\quad\text{and}\quad\|R\|_{\text{hs}}=\sqrt{\langle R^{*}R\rangle}. \tag{1.4}\] For vectors of matrices \(V=(V_{i})_{i\in[\![l]\!]}\in(\mathbb{C}^{n\times n})^{l}\) we denote by \(\|V\|\) the maximum of the operator norms of the entries, i.e. \[\|V\|=\max_{i\in[\![l]\!]}\|V_{i}\|. \tag{1.5}\] For random matrices \(\mathbf{R}\in\mathbb{C}^{kN\times kN}\) the isotropic and averaged p-norms are defined by \[\|\mathbf{R}\|_{p}:=\sup_{\|\mathbf{x}\|,\|\mathbf{y}\|\leq 1}\left(\mathbb{E}|\langle\mathbf{x},\mathbf{R}\mathbf{y}\rangle|^{p}\right)^{\frac{1}{p}}\quad\text{and}\quad\|\mathbf{R}\|_{p}^{\text{av}}:=\sup_{\|\mathbf{B}\|\leq 1}\left(\mathbb{E}|\langle\mathbf{B}\mathbf{R}\rangle|^{p}\right)^{\frac{1}{p}}. \tag{1.6}\] For a block matrix \(\mathbf{R}\in\mathbb{C}^{kN\times nN}\) with blocks \(\mathbf{R}_{ij}\in\mathbb{C}^{N\times N}\), \(i\in[\![k]\!]\) and \(j\in[\![n]\!]\), we define the blockwise (averaged) trace \(\underline{\mathbf{R}}\in\mathbb{C}^{k\times n}\) by \[\underline{\mathbf{R}}_{ij}=\langle\mathbf{R}_{ij}\rangle. \tag{1.7}\] A matrix \(R\in\mathbb{C}^{n\times n}\) is said to be positive definite if \(\langle v,Rv\rangle>0\) for all \(v\in\mathbb{C}^{n}\setminus\{0\}\) and we write \(R>0\). It is called positive semi-definite if \(\langle v,Rv\rangle\geq 0\) for all \(v\in\mathbb{C}^{n}\setminus\{0\}\) and we write \(R\geq 0\). \(R\) is called negative (semi-)definite, denoted by \(R<0\) (\(R\leq 0\)), if \(-R\) is positive (semi-)definite. For \(S\in\mathbb{C}^{n\times n}\) Hermitian we write \(S\geq R\) if \(S-R\geq 0\) and \(S>R\) if \(S-R>0\). For linear operators acting on matrix spaces, we denote by \(\|\cdot\|_{\text{sp}}\) the norm induced by the Hilbert-Schmidt norm, \(\|\cdot\|_{\text{hs}}\). The identity map between matrix spaces is denoted by \(\mathbb{1}\), i.e. \(\mathbb{1}[R]=R\) for all \(R\in\mathbb{C}^{k\times n}\). The upper half-plane will be denoted by \(\mathbb{H}\), i.e. \[\mathbb{H}=\{z\in\mathbb{C}:\operatorname{Im}z>0\} \tag{1.8}\] and \([\![n]\!]:=\{1,2,\ldots,n\}\) is used for the natural numbers up to \(n\). If \(X\) and \(Y\) are positive quantities, we use the notation \(X\lesssim Y\) if there is some constant \(c>0\) such that \(X\leq cY\). The constant will in general depend on the coefficients of \(q\). If it also depends on some other parameters \(\alpha\), we write \(X\lesssim_{\alpha}Y\). In particular, the constant will never depend on the dimension of our random matrices, \(N\). If both \(X\lesssim Y\) and \(Y\lesssim X\) hold true, we write \(X\sim Y\).

### Acknowledgements

We thank Peter Henning Thomsen Krarup for his valuable contributions in the early stages of the project.

## 2 Main results

**Assumption**.: _Let \(\zeta_{0}\) be a real-valued and \(\zeta_{1}\) be a complex-valued random variable and let \(\zeta_{0}\) and \(\zeta_{1}\) be independent. For \(i=0,1\), they are to satisfy_ \[\mathbb{E}[\zeta_{i}]=0,\quad\mathbb{E}[|\zeta_{i}|^{2}]=1\quad\text{and}\quad\mathbb{E}[|\zeta_{i}|^{p}]\leq C_{p} \tag{2.1}\] _for all \(p\in\mathbb{N}\) and some constants \(C_{p}>0\), depending on \(p\). Let \(\mathbf{W}\in\mathbb{C}^{N\times N}\) be a Hermitian random matrix characterized by its entry distribution:_ 1. _The diagonal entries_ \(\{w_{ii}:\,i\in\llbracket N\rrbracket\}\) _and off-diagonal entries_ \(\{(w_{ij},w_{ji}):\,i,j\in\llbracket N\rrbracket,\,i<j\}\) _are independent;_ 2.
\(\{w_{ii}:\,i\in\llbracket N\rrbracket\}\) _consists of independent copies of_ \(\frac{1}{\sqrt{N}}\zeta_{0}\)_,_ 3. \(\{(w_{ij},w_{ji}):\,i,j\in\llbracket N\rrbracket,\,i<j\}\) _consists of independent copies of_ \(\frac{1}{\sqrt{N}}(\zeta_{1},\bar{\zeta}_{1})\)_._ _For a fixed \(l\in\mathbb{N}\) we define \(\mathbf{X}=(\mathbf{X}_{i})_{i\in\llbracket l\rrbracket}\in(\mathbb{C}^{N\times N})^{l}\), a vector of random matrices, where each \(\mathbf{X}_{i}\), \(i\in\llbracket l\rrbracket\), is an independent copy of \(\mathbf{W}\)._ To present our results we first need to distinguish between shifted reducible and non-reducible quadratic polynomials. **Definition 2.1** (Shifted reducible and non-reducible second degree polynomial).: _We call any non-commutative quadratic polynomial of the matrices \(\mathbf{X}=(\mathbf{X}_{i})_{i\in\llbracket l\rrbracket}\) which is of the form_ \[q_{r}(\mathbf{X})=\alpha(v^{*}\mathbf{X}-\xi)(v^{*}\mathbf{X}-\xi)^{*}-\beta \tag{2.2}\] _for some \(\alpha,\beta,\xi\in\mathbb{R}\), \(\alpha\neq 0\), \(\xi\geq 0\) and \(v\in\mathbb{C}^{l}\) with \(\|v\|=1\) a **shifted reducible quadratic polynomial**. Any polynomial not of this form is called a **non-reducible quadratic polynomial**._ **Remark**.: _Shifted reducible quadratic polynomials are exactly the polynomials of the form (1.1) with coefficients \(A=\alpha vv^{*}\), \(b=-\alpha\xi(v+\bar{v})\) and \(c=\alpha|\xi|^{2}-\beta\) for some \(\alpha,\beta,\xi\in\mathbb{R}\) with \(\alpha\neq 0\), \(\xi\geq 0\) and normalised \(v\in\mathbb{C}^{l}\). Note that our definition of a shifted reducible polynomial also allows for polynomials of the form (2.2) with \(\xi\in\mathbb{C}\), since \((v^{*}\mathbf{X}-\xi)(v^{*}\mathbf{X}-\xi)^{*}=(e^{\mathrm{i}\varphi}v^{*}\mathbf{X}-e^{\mathrm{i}\varphi}\xi)(e^{\mathrm{i}\varphi}v^{*}\mathbf{X}-e^{\mathrm{i}\varphi}\xi)^{*}\) for all \(\varphi\in\mathbb{R}\), and we used this invariance to restrict to \(\xi\in\mathbb{R}_{\geq 0}\) w.l.o.g._ Shifted reducible polynomials are those where the edge characteristics reduce to understanding the singular value statistics of some (not necessarily Hermitian) first order polynomial in \(\mathbf{X}\), whereas non-reducible polynomials are those where such a simplification is not possible. The main focus of this work is non-reducible polynomials, but we also characterize the limiting spectral measure of the shifted reducible polynomials. For non-reducible polynomials, we prove the convergence of the norm of \(q(\mathbf{X})\) to a deterministic \(\tau_{*}\) in the following sense: **Theorem 2.2** (Convergence of the matrix norm).: _Let \(q\) be a non-reducible quadratic polynomial of the form (1.1). There is a deterministic \(\tau_{*}>0\), only depending on the coefficients \(A,b,c\) of \(q\), such that for all \(\varepsilon,D>0\) the operator norm of \(q(\mathbf{X})\) satisfies the estimate_ \[\mathbb{P}\left(\left|\|q(\mathbf{X})\|-\tau_{*}\right|\geq N^{-\frac{2}{3}+\varepsilon}\right)\leq C_{\varepsilon,D}N^{-D}.\] **Remark**.: _The deterministic value \(\tau_{*}\) in Theorem 2.2 is the value of the norm as predicted by the limiting spectral measure_ \[\rho:=\lim_{N\to\infty}\frac{1}{N}\sum_{\mu\in\mathrm{Spec}(q(\mathbf{X}))}\delta_{\mu}, \tag{2.3}\] _where \(\delta_{\mu}\) denotes the Dirac measure at point \(\mu\) and the sum runs over all eigenvalues of \(q(\mathbf{X})\), accounting also for multiplicity.
That is, we have \(\tau_{*}=\max\{|\tau_{+}|,|\tau_{-}|\}\), where \(\mathrm{supp}(\rho)=[\tau_{-},\tau_{+}]\) (see Definition 2.5 below). The points \(\tau_{+}\) and \(\tau_{-}\) can be obtained by solving an explicit polynomial equation, for details see Lemma 3.5 below._ To obtain the main theorem we need several intermediate results. They establish that the eigenvalue density of any non-reducible polynomial \(q(\mathbf{X})\) approximately shows a square root behaviour around its edge. For reducible polynomials, we classify the different edge behaviours. The central object of our interest is the Stieltjes transform of the limiting spectral measure \(\rho\), which we denote by \(m.\) The function \(m\) is uniquely defined by the following proposition. **Proposition 2.3** (Existence and uniqueness of the Stieltjes transform).: _Let \(A\in\mathbb{C}^{l\times l}\), \(A=A^{*}\), \(A\neq 0\), \(b\in\mathbb{R}^{l}\) and \(c\in\mathbb{R}\). There is a unique function \(m:\mathbb{H}\to\mathbb{H}\) such that_ 1. \(m\) _is complex analytic on all of_ \(\mathbb{H}\)_;_ 2. \(\lim_{z\to\infty}zm(z)=-1\)_;_ 3. _For all_ \(z\in\mathbb{H}\) _the equation_ \[-m^{-1}=z+\gamma(m),\] (2.4) _with_ \[\gamma(m):=-\operatorname{Tr}A(I_{l}+Am)^{-1}+mb^{t}\left((I_{l}+2m\widehat{A })^{-2}(I_{l}+m\widehat{A})\right)b-c,\] (2.5) _is satisfied for_ \(m=m(z)\)_. Here, the notation_ \(\widehat{A}=\frac{1}{2}(A+A^{t})\) _was used._ The proof of Proposition 2.3 is deferred to Appendix A.3. **Definition 2.4** (Self-consistent density of states).: _By Conditions 1 and 2 the function \(m\) in Proposition 2.3 is the Stieltjes transform of a unique probability measure \(\rho\) on the real line, i.e._ \[m(z)=\int_{\mathbb{R}}\frac{\rho(\mathrm{d}x)}{x-z} \tag{2.6}\] _for all \(z\in\mathbb{H}.\) We call \(\rho\) the self-consistent density of states corresponding to \(q(\mathbf{X})\)._ **Remark**.: _By the global law, [21, Proposition 2.17], the self-consistent density of states is indeed the limiting spectral measure._ Note that since \(A\) is Hermitian, the matrix \(\widehat{A}=\frac{1}{2}(A+A^{t})\) denotes the entrywise real part of \(A\). This should not be confused with the algebraic definition of the real part of a matrix, \(\operatorname{Re}R=\frac{1}{2}(R+R^{*})\). Let \(I\subset\mathbb{R}\) be an interval and \(E\in I\). Due to the Stieltjes inversion formula, the limiting spectral measure and its Stieltjes transform are related by the equation \[\rho(E)=\lim_{\eta\searrow 0}\frac{1}{\pi}\operatorname{Im}m(E+\mathrm{i}\eta), \tag{2.7}\] whenever that limit exists for all \(E\in I\) (see e.g. [27, Equation (1.4)]). It was shown in [44, Theorem 1.1 (3)] that \(\operatorname{supp}(\rho)\) is a single compact interval on the real line. In particular, this means that \(\rho\) has no internal edges. We use the following definition. **Definition 2.5** (Edges of the limiting spectral measure).: _Let \(\tau_{+}\) denote the position of the right edge of \(\rho\) and let \(\tau_{-}\) denote the position of the left edge of \(\rho\), i.e., we have_ \[\operatorname{supp}(\rho)=[\tau_{-},\tau_{+}] \tag{2.8}\] We also introduce the notion of a regular edge. **Definition 2.6**.: _Let \(\tau_{0}\in\{\tau_{-},\tau_{+}\}\). 
The limiting spectral measure \(\rho\) is said to have a regular edge at \(\tau_{0}\) if \(\rho(\mathrm{d}E)=\rho(E)\mathrm{d}E\) has a Lebesgue density (also denoted by \(\rho\)) in a neighborhood of \(\tau_{0}\) and_ \[\lim_{\begin{subarray}{c}E\in\operatorname{supp}(\rho)\\ E\to\tau_{0}\end{subarray}}\frac{\rho(E)}{\sqrt{|\tau_{0}-E|}} \tag{2.9}\] _exists and does not equal 0._ In other words, regular edges are those that show a square root decay of the density \(\rho\). Our next two results concern the edge characterization of reducible and non-reducible polynomials. **Proposition 2.7** (Edges of non-reducible polynomials).: _Let \(q\) be a non-reducible quadratic polynomial and \(\rho\) its associated limiting spectral measure defined in (2.3). Then the measure \(\rho\) has regular edges both at \(\tau_{-}\) and at \(\tau_{+}\)._ Next, we consider shifted reducible polynomials of the form \[q_{r}(\mathbf{X})=(v^{*}\mathbf{X}-\xi)(v^{*}\mathbf{X}-\xi)^{*} \tag{2.10}\] for some \(\xi\in\mathbb{R}_{\geq 0}\) and normalized \(v\in\mathbb{C}^{l}\). **Remark**.: _Compared to the general case defined in (2.2) we restrict here to \(\alpha=1\) and \(\beta=0\). As these constants only constitute a scaling and a shift respectively, the following result, Proposition 2.8, generalizes in a straightforward manner to all \(\alpha\neq 0\) and \(\beta\in\mathbb{R}\). For \(\alpha<0\) the roles of the left and the right edge are reversed._ We introduce the quantities \(\sigma=\|\operatorname{Re}v\|\), \(\mu=\langle\operatorname{Re}v,\operatorname{Im}v\rangle\) and for \(\sigma\in(0,1)\) we define \(s>0\) with \[s^{2}:=\begin{cases}\big((\sigma^{2}+a_{+}^{2}(1-\sigma^{2})+2a_{+}\mu)(\sigma^{2}+a_{+}\mu)\big)^{-1}&\text{if }\mu\neq 0,\\ \sigma^{-4}&\text{if }\mu=0,\end{cases} \tag{2.11}\] where for \(\mu\neq 0\) we also set the constant \(a_{\pm}\in\mathbb{R}\) to \[a_{\pm}=\frac{1-2\sigma^{2}}{2\mu}\pm\sqrt{\left(\frac{1-2\sigma^{2}}{2\mu}\right)^{2}+1}. \tag{2.12}\] The edges of the shifted reducible polynomial are then characterized as follows. **Proposition 2.8** (Edges of shifted reducible polynomials).: _Let \(q_{r}\) be a shifted reducible polynomial as in (2.10) for some \(\xi\in\mathbb{R}_{\geq 0}\) and normalized \(v\in\mathbb{C}^{l}\). Then the limiting spectral measure \(\rho\) has a regular edge at \(\tau_{+}=\max\operatorname{supp}(\rho)\). Furthermore there is a \(\kappa>0\) such that the behaviour of \(\rho\) on \((\tau_{-},\tau_{-}+\kappa)\) with \(\tau_{-}=\min\operatorname{supp}(\rho)\) is given by_ \[\rho(E)\sim\begin{cases}(E-\tau_{-})^{-\frac{1}{2}}&\text{if }\xi=0\\ (E-\tau_{-})^{-\frac{1}{2}}&\text{if }v\in\mathbb{R}^{l}\text{ and }\xi<2\\ (E-\tau_{-})^{-\frac{1}{4}}&\text{if }v\in\mathbb{R}^{l}\text{ and }\xi=2\\ (E-\tau_{-})^{-\frac{1}{2}}&\text{if }v\in\mathbb{C}^{l}\setminus e^{\mathrm{i}\varphi}\mathbb{R}^{l}\text{ for all }\varphi\in(-\pi,\pi]\text{ and }s\xi<2\\ (E-\tau_{-})^{-\frac{1}{3}}&\text{if }v\in\mathbb{C}^{l}\setminus e^{\mathrm{i}\varphi}\mathbb{R}^{l}\text{ for all }\varphi\in(-\pi,\pi]\text{ and }s\xi=2.\end{cases} \tag{2.13}\] _In all other cases the left edge is also regular, i.e. \(\rho(E)\sim(E-\tau_{-})^{1/2}\)._ **Remark**.: _The reason behind this result is that the edge behaviour near the left edge follows from the distribution of small singular values of \(v^{*}\mathbf{X}-\xi\).
It follows that \(\rho\) has singularities precisely if \(\xi\) is in the asymptotic spectrum of \(v^{*}\mathbf{X}\), with stronger singularities being observed if \(\xi\) is in the bulk of the spectrum and weaker ones if \(\xi\) is on the edge of the spectrum._ Secondly we prove that \(m\mathbf{I}_{N}\), where \(\mathbf{I}_{N}\) is the identity matrix on \(\mathbb{C}^{N\times N}\), well approximates the resolvent \(\mathbf{g}=(q(\mathbf{X})-z\mathbf{I}_{N})^{-1}\in\mathbb{C}^{N\times N}\) of \(q(\mathbf{X})\) around any regular edge and away from the spectrum. More precisely we prove uniform convergence of the quadratic form of \(\mathbf{g}\) on the set \[\mathbb{D}_{\gamma}^{\kappa_{0}}:=\{E+\mathrm{i}\eta\in\mathbb{H}:\,|E-\tau_{0}|\leq\kappa_{0},\,N^{-1+\gamma}\leq\eta\leq 1\} \tag{2.14}\] for some \(\kappa_{0}\sim 1\) and for all \(\gamma>0\), as well as on the set \[\mathbb{C}_{\gamma}^{C,\eta_{0}}:=\{E+\mathrm{i}\eta\in\mathbb{H}:\,C^{-1}\leq\mathrm{dist}(E,\operatorname{supp}(\rho))\leq C,\,N^{-1+\gamma}\leq\eta\leq\eta_{0}\} \tag{2.15}\] for all \(C,\gamma>0\) and some \(\eta_{0}\) depending on \(C\). **Theorem 2.9** (Local law for regular edges of polynomials).: _Let \(q\) be a polynomial of the form (1.1) and let \(\rho\) have a regular edge at \(\tau_{0}\in\{\tau_{-},\tau_{+}\}\). There is a \(\kappa_{0}>0\), depending only on the coefficients of \(q\), such that for all \(\varepsilon,\gamma,D>0\) and \(z=E+\mathrm{i}\eta\in\mathbb{D}_{\gamma}^{\kappa_{0}}\) the isotropic local law_ \[\mathbb{P}\left(|\langle\mathbf{x},(\mathbf{g}-m)\mathbf{y}\rangle|>\|\mathbf{x}\|\|\mathbf{y}\|N^{\varepsilon}\left(\frac{\operatorname{Im}m}{\sqrt{N\eta}}+\frac{1}{N\eta}\right)\right)\lesssim_{\varepsilon,\gamma,D}N^{-D} \tag{2.16}\] _holds for all deterministic \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{N}\). Moreover, the averaged local law_ \[\mathbb{P}\left(\left|\frac{1}{N}\operatorname{Tr}(\mathbf{B}(\mathbf{g}-m))\right|>\|\mathbf{B}\|\frac{N^{\varepsilon}}{N\eta}\right)\lesssim_{\varepsilon,\gamma,D}N^{-D} \tag{2.17}\] _holds for all deterministic \(\mathbf{B}\in\mathbb{C}^{N\times N}\). If additionally \(E\notin\operatorname{supp}(\rho)\), an improved averaged local law of the form_ \[\mathbb{P}\left(\left|\frac{1}{N}\operatorname{Tr}(\mathbf{B}(\mathbf{g}-m))\right|>\|\mathbf{B}\|N^{\varepsilon}\left(\frac{1}{N|z-\tau_{0}|}+\frac{1}{(N\eta)^{2}\sqrt{|z-\tau_{0}|}}\right)\right)\lesssim_{\varepsilon,\gamma,D}N^{-D} \tag{2.18}\] _holds._ Away from the spectrum, we will also prove the following form of an averaged local law. **Proposition 2.10** (Local law away from the spectrum).: _Let \(q\) be a polynomial of the form (1.1). For all \(C>0\) there is an \(\eta_{0}>0\), depending only on the coefficients of \(q\), such that an averaged local law of the form_ \[\mathbb{P}\left(\left|\frac{1}{N}\operatorname{Tr}(\mathbf{B}(\mathbf{g}-m))\right|>\|\mathbf{B}\|N^{\varepsilon}\left(\frac{1}{N}+\frac{1}{(N\eta)^{2}}\right)\right)\lesssim_{\varepsilon,\gamma,D,C}N^{-D} \tag{2.19}\] _holds true for all deterministic \(\mathbf{B}\in\mathbb{C}^{N\times N}\), \(\gamma,\varepsilon,D>0\) and \(z\in\mathbb{C}^{C,\eta_{0}}_{\gamma}\)._ The local laws, Theorem 2.9 and Proposition 2.10, are proved in Section 5.
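As a crude numerical illustration of the approximation \(\mathbf{g}\approx m\) (a sanity check under simplifying assumptions, not part of the proofs), one can take the anticommutator \(q(\mathbf{X}_{1},\mathbf{X}_{2})=\mathbf{X}_{1}\mathbf{X}_{2}+\mathbf{X}_{2}\mathbf{X}_{1}\), i.e. (1.1) with \(A_{12}=A_{21}=1\), \(A_{11}=A_{22}=0\), \(b=0\), \(c=0\), for which the self-energy \(\gamma\) of (2.5) reduces to \(\gamma(m)=2m/(1-m^{2})\), and compare the normalized trace of the resolvent with the solution \(m\) of (2.4):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1000

def wigner(N):
    # GUE-type Wigner matrix with E|x_ij|^2 = 1/N
    A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
    return (A + A.conj().T) / np.sqrt(2 * N)

X1, X2 = wigner(N), wigner(N)
Q = X1 @ X2 + X2 @ X1                  # anticommutator polynomial

z = 0.2 + 1.0j                         # Im z = 1: far from local scales
g_trace = np.trace(np.linalg.inv(Q - z * np.eye(N))) / N

gamma = lambda m: 2 * m / (1 - m * m)  # reduced self-energy for this q
m = -1.0 / z
for _ in range(500):                   # damped fixed point for (2.4)
    m = 0.5 * (m - 1.0 / (z + gamma(m)))
print(abs(1.0 / m + z + gamma(m)))     # residual of (2.4); should be tiny
print(abs(g_trace - m))                # ~ O(1/(N*eta)) fluctuation
```

Probing the mesoscopic regime \(\eta\ll 1\) of (2.16)-(2.17) requires substantially larger \(N\); the check above only illustrates the leading behaviour.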
These local laws give us very precise control over the spectral properties of \(q(\mathbf{X})\) close to any regular edge, as can be seen from the following corollaries. **Corollary 2.11** (Edge eigenvector delocalization).: _Let the assumptions of Theorem 2.9 be satisfied and \(\mathbf{v}\) be a normalized eigenvector of \(q(\mathbf{X})\) with eigenvalue \(\lambda\). Then there is a \(\kappa_{0}>0\), only depending on the coefficients of \(q\), such that if \(|\lambda-\tau_{0}|\leq\kappa_{0}\) then we have for all \(\varepsilon,D>0\) that_ \[\sup_{\mathbf{x}\in\mathbb{C}^{N},\|\mathbf{x}\|=1}\mathbb{P}\left(|\langle\mathbf{x},\mathbf{v}\rangle|\geq N^{-\frac{1}{2}+\varepsilon}\right)\lesssim_{D,\varepsilon}N^{-D} \tag{2.20}\] **Corollary 2.12** (Eigenvalue rigidity).: _Denote the classical index of the eigenvalue close to energy \(E\in\operatorname{supp}(\rho)\) by_ \[k(E):=\left\lceil N\int_{-\infty}^{E}\rho(E^{\prime})\mathrm{d}E^{\prime}\right\rceil, \tag{2.21}\] _with \(\lceil\cdot\rceil\) being the ceiling function. Let \(\rho\) have a regular edge at \(\tau_{0}\). All eigenvalues around \(\tau_{0}\) are close to their classical position in the following sense. There is a \(\kappa_{0}>0\), only depending on the coefficients of \(q\), such that_ \[\mathbb{P}\left(\sup_{E}|\lambda_{k(E)}-E|\geq\min\left\{\frac{N^{\varepsilon}}{N|E-\tau_{0}|},\frac{N^{\varepsilon}}{N^{\frac{2}{3}}}\right\}\right)\lesssim_{D,\varepsilon}N^{-D} \tag{2.22}\] _holds for all \(\varepsilon,D>0\) and \(E\in\operatorname{supp}(\rho)\) with \(|E-\tau_{0}|\leq\kappa_{0}\)._ Proof of Theorem 2.2.: As the norm of any Hermitian matrix \(H\) with non-decreasing eigenvalues \(\lambda_{i}\), \(i\in[\![n]\!]\), is given by \(\|H\|=\max\{|\lambda_{1}|,|\lambda_{n}|\}\), Theorem 2.2 follows as a special case of Corollary 2.12 in conjunction with Proposition 2.7, which states that all edges of non-reducible polynomials are regular.

## 3 Properties of the spectral density

We first state two propositions which describe the behaviour of the Stieltjes transform in the upper half-plane close to the edges of the spectrum. The first proposition concerns non-reducible polynomials, whereas the second one covers shifted reducible polynomials. Subsequently, we conclude Propositions 2.7 and 2.8 from Propositions 3.1 and 3.2 and state Corollary 3.3. It summarizes important properties of the Stieltjes transform close to any regular edge and is used to prove the local law, Theorem 2.9. Afterwards, we prove Propositions 3.1 and 3.2 in the remainder of the section. **Proposition 3.1** (Stieltjes transform for non-reducible polynomials).: _Let \(q\) be a non-reducible quadratic polynomial. Then we have the following behaviour of \(m\) close to the edges._ 1. _There are_ \(m_{+}<0\)_,_ \(c_{+}>0\) _and_ \(u>0\) _such that for all_ \(z\in\mathbb{H}\) _with_ \(|z-\tau_{+}|\leq u\) _the Stieltjes transform of the limiting spectral density_ \(m=m(z)\) _satisfies_ \[m-m_{+}=c_{+}\sqrt{z-\tau_{+}}+\mathcal{O}(|z-\tau_{+}|). \tag{3.1}\] 2.
_There are_ \(m_{-}>0\)_,_ \(c_{-}>0\) _and_ \(u>0\) _such that for all_ \(z\in\mathbb{H}\) _with_ \(|z-\tau_{-}|\leq u\) _the Stieltjes transform of the limiting spectral density_ \(m=m(z)\) _satisfies_ \[m-m_{-}=-c_{-}\sqrt{\tau_{-}-z}+\mathcal{O}(|\tau_{-}-z|). \tag{3.2}\] _Here, \(\sqrt{\cdot}\) denotes the square root function that maps the positive real axis to itself and with a branch cut along the negative real axis._ **Proposition 3.2** (Stieltjes transform for shifted reducible polynomials).: _Let \(q\) be a shifted reducible polynomial of the form (2.10). Then we have the following behaviour of \(m\) close to the edges._ 1. _There are_ \(m_{+}<0\)_,_ \(c_{+}>0\) _and_ \(u>0\) _such that for all_ \(z\in\mathbb{H}\) _with_ \(|z-\tau_{+}|\leq u\) _the function_ \(m=m(z)\) _satisfies_ \[m-m_{+}=c_{+}\sqrt{z-\tau_{+}}+\mathcal{O}(|z-\tau_{+}|). \tag{3.3}\] 2. _There are_ \(c_{-}>0\) _and_ \(u>0\) _such that for all_ \(z\in\mathbb{H}\) _with_ \(|z-\tau_{-}|\leq u\) _the function_ \(m=m(z)\) _satisfies_ \[m=\begin{cases}c_{-}(\tau_{-}-z)^{-\frac{1}{2}}+\mathcal{O}(1)&\text{if }\xi=0\\ c_{-}(\tau_{-}-z)^{-\frac{1}{2}}+\mathcal{O}(1)&\text{if }v\in\mathbb{R}^{l}\text{ and }\xi<2\\ c_{-}(\tau_{-}-z)^{-\frac{1}{4}}+\mathcal{O}(1)&\text{if }v\in\mathbb{R}^{l}\text{ and }\xi=2\\ c_{-}(\tau_{-}-z)^{-\frac{1}{2}}+\mathcal{O}(1)&\text{if }v\in\mathbb{C}^{l}\setminus\mathrm{e}^{\mathrm{i}\varphi}\mathbb{R}^{l}\text{ for all }\varphi\in(-\pi,\pi]\text{ and }s\xi<2\\ c_{-}(\tau_{-}-z)^{-\frac{1}{3}}+\mathcal{O}(1)&\text{if }v\in\mathbb{C}^{l}\setminus\mathrm{e}^{\mathrm{i}\varphi}\mathbb{R}^{l}\text{ for all }\varphi\in(-\pi,\pi]\text{ and }s\xi=2,\end{cases} \tag{3.4}\] _where_ \(s\) _was defined in (2.11). If none of the above conditions is satisfied, there is additionally an_ \(m_{-}>0\) _such that for all_ \(z\in\mathbb{H}\) _with_ \(|z-\tau_{-}|\leq u\) _we have_ \[m-m_{-}=-c_{-}\sqrt{\tau_{-}-z}+\mathcal{O}(|\tau_{-}-z|). \tag{3.5}\] _The function_ \(\zeta\mapsto\zeta^{p}\) _is chosen such that the positive real axis is mapped to itself and with a branch cut along the negative real axis for all_ \(p\in\mathbb{R}\setminus\mathbb{Z}\)_._ **Corollary 3.3**.: _Let \(q(\mathbf{X})\) have a regular edge at \(\tau_{0}\) and \(m_{0}=m(\tau_{0})\). There is a \(u>0\) such that the function \(m=m(z)\) satisfies_ \[|z-\tau_{0}|\sim|m-m_{0}|^{2} \tag{3.6}\] _and_ \[\operatorname{Im}m\sim\begin{cases}\sqrt{|E-\tau_{0}|+\eta}&\text{if }E\in\operatorname{supp}(\rho)\\ \frac{\eta}{\sqrt{|E-\tau_{0}|+\eta}}&\text{if }E\notin\operatorname{supp}(\rho)\end{cases} \tag{3.7}\] _for all \(z=E+\mathrm{i}\eta\in\mathbb{H}\) with \(|z-\tau_{0}|\leq u\)._ Proof of Proposition 2.7 and Proposition 2.8.: By (2.7) we have \(\rho(E)=\lim_{\eta\searrow 0}\pi^{-1}\operatorname{Im}m(E+\mathrm{i}\eta)\) on any interval \(I\subset\mathbb{R}\) on which the limit exists for all \(E\in I\). For the cases in Proposition 3.1 and Proposition 3.2 where \(m-m_{0}\) shows a square root behaviour around an edge \(\tau_{0}\), we take the limit on \((\tau_{0}-u,\tau_{0}+u)\) to prove that the edge is regular. For the cases in Proposition 3.2 where \(m\) diverges at \(\tau_{-}\), we take the limit on \((\tau_{-}-u,\tau_{-}+u)\setminus\{\tau_{-}\}\) to obtain the respective asymptotic behaviour. Let \(\mu_{i}\in\mathbb{R}\) denote the eigenvalues of \(A\), \(\widehat{\mu}_{i}\in\mathbb{R}\) the eigenvalues of \(\widehat{A}\) and let \(w_{i}\in\mathbb{R}^{l}\) be an orthonormal set of eigenvectors of \(\widehat{A}\) corresponding to the \(\widehat{\mu}_{i}\).
By (2.5) the function \(\gamma\) is defined in terms of these quantities as \[\gamma(m)=-\sum_{i=1}^{l}\frac{\mu_{i}}{1+m\mu_{i}}+\sum_{i=1}^{l}\frac{|\langle w_{i},b\rangle|^{2}(1+m\widehat{\mu}_{i})}{(1+2m\widehat{\mu}_{i})^{2}}m-c. \tag{3.8}\] From now on we consider \(\gamma\) to be defined on its maximal domain, \(\mathbb{C}\setminus\mathscr{S}(\gamma)\), where \[\mathscr{S}(\gamma):=\{-\mu_{i}^{-1}\in\mathbb{R}:\,\mu_{i}\neq 0\}\cup\{-(2\widehat{\mu}_{i})^{-1}\in\mathbb{R}:\,\widehat{\mu}_{i}\neq 0\text{ and }\langle w_{i},b\rangle\neq 0\} \tag{3.9}\] denotes the poles of \(\gamma\). We also require the function \(h\), which we define as follows. **Definition 3.4**.: _Let \(h:\mathbb{C}\setminus\mathscr{S}(h)\to\mathbb{C}\) be given by_ \[h(m):=\frac{1}{m^{2}}-\gamma^{\prime}(m)=\frac{1}{m^{2}}-\sum_{i=1}^{l}\frac{\mu_{i}^{2}}{(1+m\mu_{i})^{2}}-\sum_{i=1}^{l}\frac{|\langle w_{i},b\rangle|^{2}}{(1+2m\widehat{\mu}_{i})^{3}}. \tag{3.10}\] _Here, \(\mathscr{S}(h):=\mathscr{S}(\gamma)\cup\{0\}\) denotes the set of poles of \(h\) and \(\gamma^{\prime}(m)\) is the derivative of \(\gamma\) with respect to \(m\)._ Proposition 2.3 asserts that \(m\) is a Stieltjes transform of the measure \(\rho\), which has real and compact support. Thus \(m\) is analytic and has positive imaginary part on \(\mathbb{H}\). We take the derivative of (2.4) with respect to \(z\) on \(\mathbb{H}\) to find \[m^{\prime}=\frac{1}{h(m)}. \tag{3.11}\] As \(m\) is a Stieltjes transform of \(\rho\), its analyticity extends to \(\mathbb{C}\setminus\operatorname{supp}(\rho)\) and the above equation holds on \(\overline{\mathbb{H}}\setminus\operatorname{supp}(\rho)\), where \(\overline{\mathbb{H}}=\mathbb{H}\cup\mathbb{R}\). Next, we define \[m_{-}^{*}=\begin{cases}\min((0,\infty)\cap\mathscr{S}(h))&\text{if }(0,\infty)\cap\mathscr{S}(h)\neq\emptyset\\ \infty&\text{otherwise},\end{cases} \tag{3.12}\] \[m_{+}^{*}=\begin{cases}\max((-\infty,0)\cap\mathscr{S}(h))&\text{if }(-\infty,0)\cap\mathscr{S}(h)\neq\emptyset\\ -\infty&\text{otherwise}.\end{cases} \tag{3.13}\] In other words, \((m_{+}^{*},0)\) and \((0,m_{-}^{*})\) are the maximal intervals to the left and the right of the origin where \(h\) is continuous. The following lemmata describe the existence and characterization of roots of \(h\) both for the shifted reducible and the non-reducible case on \((m_{+}^{*},0)\) and \((0,m_{-}^{*})\). **Lemma 3.5**.: _Let \(q\) be a non-reducible quadratic polynomial. Then we have the following._ 1. _The function_ \(h\) _has a unique root_ \(m_{-}\) _in_ \((0,m_{-}^{*})\) _and_ \(h\) _has a unique root_ \(m_{+}\) _in_ \((m_{+}^{*},0)\)_. Both of them are of first order. More precisely, they satisfy_ \[\pm h^{\prime}(m_{\pm})>0. \tag{3.14}\] 2. _The positions of the edges_ \(\tau_{\pm}\)_, defined in Definition 2.5, are given in terms of_ \(m_{\pm}\) _by_ \[\tau_{\pm}=-m_{\pm}^{-1}-\gamma(m_{\pm}). \tag{3.15}\] **Lemma 3.6**.: _Let \(h\) be the function introduced in Definition 3.4 for a reducible polynomial \(q=q_{r}\) of the form (2.10) with normalized \(v\in\mathbb{C}^{l}\) and \(\xi\geq 0\). Then the following holds true._ 1. _The function_ \(h\) _has a unique root_ \(m_{+}\) _in_ \((m_{+}^{*},0)\)_, which is of first order and satisfies_ \[h^{\prime}(m_{+})>0. \tag{3.16}\] 2. _The function_ \(h\) _is continuous on_ \((0,\infty)\)_. It has no root in_ \((0,\infty)\) _if and only if one of the following holds true:_ 1. \(\xi=0\)_;_ 2.
_or_ \(v\in\mathbb{C}^{l}\setminus e^{i\varphi}\mathbb{R}^{l}\) _for all_ \(\varphi\in(-\pi,\pi]\) _and_ \(s\xi\leq 2\) _If none of the above conditions are satisfied, \(h\) does have a unique root \(m_{-}\in(0,\infty)\), which is of first order and satisfies_ \[h^{\prime}(m_{-})<0. \tag{3.17}\] _If \(h\) does not have any roots in \((0,\infty)\), the following asymptotic behaviour is observed for \(m\to\infty\):_ \[h(m)\sim\begin{cases}m^{-3}&\text{if $\xi=0$}\\ m^{-3}&\text{if $v\in\mathbb{R}^{l}$ and $\xi<2$}\\ m^{-5}&\text{if $v\in\mathbb{R}^{l}$ and $\xi=2$}\\ m^{-3}&\text{if $v\in\mathbb{C}^{l}\setminus\mathrm{e}^{i\varphi}\mathbb{R}^{l}$ for all $\varphi\in(-\pi,\pi]$ and $s\xi<2$}\\ m^{-4}&\text{if $v\in\mathbb{C}^{l}\setminus\mathrm{e}^{i\varphi}\mathbb{R}^{l}$ for all $\varphi\in(-\pi,\pi]$ and $s\xi=2$}.\end{cases} \tag{3.18}\] _Here \(s\) was defined in (2.11)._ 3. _The positions of the edges_ \(\tau_{\pm}\)_, defined in Definition_ 2.5_, are given by_ \[\tau_{+}=-m_{+}^{-1}-\gamma(m_{+})\] (3.19) _and_ \[\tau_{-}=\begin{cases}-m_{-}^{-1}-\gamma(m_{-})&\text{if $m_{-}>0$ with $h(m_{-})=0$ exists;}\\ 0&\text{otherwise.}\end{cases}\] (3.20) The proof of the lemmata is deferred until after the proof of Propositions 3.1 and 3.2. Proof of Propositions 3.1 and 3.2.: In both the non-reducible and the shifted reducible cases the function \(h\) has a unique root \(m_{+}\) in \((m_{+}^{*},0)\). For all \(m\in(m_{+},0)\) the derivative of the inverse \(z=z(m)\) of \(m=m(z)\) with respect to \(m\) is given by \[\frac{\mathrm{d}}{\mathrm{d}m}z=h(m). \tag{3.21}\] Then, by Lemma 3.5 and Lemma 3.6, there is an \(r\sim 1\) and a \(c_{+}>0\), only depending on the coefficients of \(q\), such that for all \(m\in(m_{+},m_{+}+r)\) we have \[\frac{\mathrm{d}}{\mathrm{d}m}z=h(m)=2c_{+}^{2}(m-m_{+})(1+\mathcal{O}(m-m_{+ })) \tag{3.22}\] and thus \[z-\tau_{+}=c_{+}^{2}(m-m_{+})^{2}(1+\mathcal{O}(m-m_{+})). \tag{3.23}\] Taking the square root of the above equation, we obtain \[m-m_{+}=c_{+}\sqrt{z-\tau_{+}}+\mathcal{O}(|z-\tau_{+}|) \tag{3.24}\] for all \(z\in(\tau_{+},\tau_{+}+u)\) for some \(u>0.\) As \(m\) is holomorphic on \(\mathbb{H}\cup(\tau_{+},\infty)\) and has positive imaginary part on \(\mathbb{H}\), the relation (3.24) also holds for all \(z\in\mathbb{H}\) with \(|z-\tau_{+}|\leq u\) for some \(u>0\). In all cases, where \(h\) has a root \(m_{-}\) in \((0,m_{-}^{*})\), we find by the same argument that \[m-m_{-}=-c_{-}\sqrt{\tau_{-}-z}+\mathcal{O}(|\tau_{-}-z|) \tag{3.25}\] for all \(z\in\mathbb{H}\) with \(|z-\tau_{+}|\leq u\) for some \(u,c_{-}>0\). Finally, in the cases where \(h\) does not have a root in \((0,\infty)\), it is continuous on the whole interval and we have that for all \(m\in(0,\infty)\) the derivative of \(z\) with respect to \(m\) is given by \[\frac{\mathrm{d}}{\mathrm{d}m}z=h(m). \tag{3.26}\] Then, by Lemma 3.6, there is an \(R\sim 1\) and a \(c_{0}>0\), only depending on the coefficients of \(q\), such that for all \(m\in(R,\infty)\) we have \[\frac{\mathrm{d}}{\mathrm{d}m}z=h(m)=c_{0}m^{-p}(1+\mathcal{O}(m^{-1}))\quad \text{and thus}\quad z=-\frac{c_{0}}{p-1}m^{-(p-1)}(1+\mathcal{O}(m^{-1})), \tag{3.27}\] with \(p\in\{3,4,5\}\). Inverting the relation, we obtain \[m=c_{-}(-z)^{-\frac{1}{p-1}}+\mathcal{O}(1) \tag{3.28}\] for all \(z\in(-u,0)\) and some \(u,c_{-}>0\). 
Again, as \(m\) is holomorphic on \(\mathbb{H}\cup(-\infty,0)\) and has positive imaginary part on \(\mathbb{H}\), the relation (3.28) also holds for all \(z\in\mathbb{H}\) with \(|z|\leq u\) for some \(u>0\). Proof of Lemma 3.5 and Lemma 3.6.: We prove both lemmata simultaneously and remark where it becomes necessary to distinguish between the different polynomials \(q\). We first investigate the existence and order of roots of \(h\). Let \(h_{q}\) denote the rational function (3.10) with the dependence on \(q\) being made explicit. We have \(h_{q}(-m)=h_{-q}(m)\). Therefore any root of \(h_{q}\) on \((0,\infty)\) corresponds to a root of \(h_{-q}\) on \((-\infty,0)\). For simplicity, the proofs are thus only formulated for \(m\in(-\infty,0)\). To obtain the corresponding result for \(m\in(0,\infty)\) for the function \(h_{q}\), we consider the function \(h_{-q}\) on \((-\infty,0)\) instead. We now prove for all polynomials \(q\) as in (1.1) that \(h^{\prime}(m)>0\) for all \(m\in(m_{+}^{*},0)\) with \(h(m)\geq 0\), and thus \(h\) can only have first-order roots on the interval. Note that on \((m_{+}^{*},0)\) the inequality \(h\geq 0\) is equivalent to \[\sum_{i=1}^{l}\frac{(m\mu_{i})^{2}}{(1+m\mu_{i})^{2}}+\sum_{\begin{subarray}{c}i=1,\\ \langle w_{i},b\rangle\neq 0\end{subarray}}^{l}(m|\langle w_{i},b\rangle|)^{2}\frac{1}{(1+2m\widehat{\mu}_{i})^{3}}\leq 1 \tag{3.29}\] and that on \((m_{+}^{*},0)\) the inequality \(h^{\prime}>0\) is equivalent to \[\sum_{i=1}^{l}\frac{(m\mu_{i})^{3}}{(1+m\mu_{i})^{3}}+\sum_{\begin{subarray}{c}i=1,\\ \langle w_{i},b\rangle\neq 0\end{subarray}}^{l}(m|\langle w_{i},b\rangle|)^{2}\frac{3(m\widehat{\mu}_{i})}{(1+2m\widehat{\mu}_{i})^{4}}<1. \tag{3.30}\] For \(m\in(m_{+}^{*},0)\) we also have that \(m\mu_{i}>-1\) for all \(i\in\llbracket l\rrbracket\) and \(m\widehat{\mu}_{i}>-\frac{1}{2}\) for all \(i\in\llbracket l\rrbracket\) such that \(\langle w_{i},b\rangle\neq 0\), by the definition of \(m_{+}^{*}\) in (3.13). Furthermore, Lemma A.1 ensures that \(\max_{i\in\llbracket l\rrbracket}\{m\mu_{i}\}\geq\max_{i\in\llbracket l\rrbracket}\{m\widehat{\mu}_{i}\}\). Thus the desired implication, \(h\geq 0\) implies \(h^{\prime}>0\), follows directly from the following lemma by identifying (3.31) and (3.32) right below with (3.29) and (3.30) respectively. **Lemma 3.7**.: _Let \(n,k\in\mathbb{N}\) with \(n\geq k\). Let \(y_{i}\), \(i\in\llbracket n\rrbracket\), and let \(\widehat{y}_{j}\), \(j\in\llbracket k\rrbracket\), be collections of real numbers, both of them sorted in non-increasing order, \(y_{1}\geq y_{2}\geq\ldots\geq y_{n}\) and \(\widehat{y}_{1}\geq\widehat{y}_{2}\geq\ldots\geq\widehat{y}_{k}\). Suppose they satisfy_ 1. \(y_{1}\geq\widehat{y}_{1}\)_;_ 2. \(y_{n}>-1\) _and_ \(\widehat{y}_{k}>-\frac{1}{2}\)_._ _Furthermore let \(c_{j}>0\), \(j\in\llbracket k\rrbracket\). Then the inequality_ \[\sum_{i=1}^{n}\frac{y_{i}^{2}}{(y_{i}+1)^{2}}+\sum_{j=1}^{k}c_{j}^{2}\frac{1}{(2\widehat{y}_{j}+1)^{3}}\leq 1 \tag{3.31}\] _implies_ \[\sum_{i=1}^{n}\frac{y_{i}^{3}}{(y_{i}+1)^{3}}+3\sum_{j=1}^{k}c_{j}^{2}\frac{\widehat{y}_{j}}{(2\widehat{y}_{j}+1)^{4}}\leq 1-\nu \tag{3.32}\] _for some \(\nu>0\), depending only on \(y_{1}\)._ The proof of the lemma is deferred to the end of the section. To characterize the existence of roots of \(h\) we introduce the following notions. If \(A\) is a rank one matrix with real entries, then \(\widehat{A}=A\) and \(\widehat{A}\) is a rank one matrix as well.
We will denote its non-zero eigenvalue by \(\mu\) and the corresponding normalized eigenvector by \(w\). If, on the other hand, \(A\) is a rank one matrix with \(A\in\mathbb{C}^{l\times l}\setminus\mathbb{R}^{l\times l}\), then \(\widehat{A}\) is a rank two matrix by Lemma A.1 in the appendix. We denote its non-zero eigenvalues by \(\mu_{\pm}\) and the corresponding normalized eigenvectors by \(w_{\pm}\). **Lemma 3.8**.: _Let \(m_{+}^{*}\) be defined as in (3.13). The function \(h\) has no root in \((m_{+}^{*},0)\) if and only if \(A\) is a negative semi-definite rank one matrix, \(b\in\operatorname{Image}(\widehat{A})\) and either_ 1. \(A\in\mathbb{R}^{l\times l}\) _and_ \(\|b\|\leq 4\|A\|\)_;_ 2. _or_ \(A\in\mathbb{C}^{l\times l}\setminus\mathbb{R}^{l\times l}\) _and_ \[\frac{|\langle b,w_{+}\rangle|^{2}}{r_{+}^{3}}+\frac{|\langle b,w_{-}\rangle|^{2}}{r_{-}^{3}}\leq(4\|A\|)^{2},\] (3.33) _with_ \(r_{\pm}:=-\|A\|^{-1}\mu_{\pm}\)_._ _If \(h\) has no root in \((m_{+}^{*},0)\), then we have \(m_{+}^{*}=-\infty\) and the asymptotic behaviour of \(h\) for \(m\to-\infty\) is given by_ \[h(m)\sim\begin{cases}\frac{1}{(-m)^{3}}&\text{if }A\in\mathbb{R}^{l\times l}\text{ and }\|b\|<4\|A\|\\ \frac{1}{(-m)^{5}}&\text{if }A\in\mathbb{R}^{l\times l}\text{ and }\|b\|=4\|A\|\\ \frac{1}{(-m)^{3}}&\text{if }A\in\mathbb{C}^{l\times l}\setminus\mathbb{R}^{l\times l}\text{ and }\frac{|\langle b,w_{+}\rangle|^{2}}{|r_{+}|^{3}}+\frac{|\langle b,w_{-}\rangle|^{2}}{|r_{-}|^{3}}<(4\|A\|)^{2}\\ \frac{1}{(-m)^{4}}&\text{if }A\in\mathbb{C}^{l\times l}\setminus\mathbb{R}^{l\times l}\text{ and }\frac{|\langle b,w_{+}\rangle|^{2}}{|r_{+}|^{3}}+\frac{|\langle b,w_{-}\rangle|^{2}}{|r_{-}|^{3}}=(4\|A\|)^{2}.\end{cases} \tag{3.34}\] The proof of the lemma is also deferred to the end of the section. Note that \(h\) can have at most one root on \((m_{+}^{*},0)\). To see this, recall that \(h(m)\geq 0\) implies \(h^{\prime}(m)>0\). Therefore, if \(m_{+}\) is a root of \(h\) on \((m_{+}^{*},0)\), we know that \(h\) is strictly monotonically increasing on \((m_{+},0)\). Thus, any root of \(h\) on \((m_{+}^{*},0)\) is also the largest one on the interval; therefore it must be the unique root. Now let \(q\) be a non-reducible polynomial. If \(\operatorname{rank}A\geq 2\), then \(h\) has a root in \((-\infty,0)\) by Lemma 3.8. If \(\operatorname{rank}A=1\), we can express \(A\) as \(A=\alpha ww^{*}\) for some \(\alpha\in\mathbb{R}\setminus\{0\}\) and normalized \(w\in\mathbb{C}^{l}\). The vector \(b\) cannot be of the form \(b=\alpha_{1}\operatorname{Re}w+\alpha_{2}\operatorname{Im}w\) for any \(\alpha_{1},\alpha_{2}\in\mathbb{R}\) as \(q\) would be reducible otherwise. Thus \(b\notin\operatorname{Image}\widehat{A}\) and again \(h\) has a root in \((-\infty,0)\) by Lemma 3.8, completing the proof of Statement 1 of Lemma 3.5. Next, consider a shifted reducible polynomial \(q=q_{r}\) for \(q_{r}\) as in (2.10). If we write \(q\) in terms of (1.1) we have \(A=vv^{*}\geq 0\). Applying Lemma 3.8 we obtain that \(h\) has a root in \((-\infty,0)\) and thereby prove Statement 1 of Lemma 3.6. Finally, consider \(q=-q_{r}\) for \(q_{r}\) as in (2.10). Then the coefficients \(A\) and \(b\) associated with \(q\) are given by \(A=-vv^{*}\) and \(b=2\xi\operatorname{Re}v\). The matrix \(A\) is a negative semi-definite rank one matrix. 
For \(\mu=\langle\operatorname{Re}v,\operatorname{Im}v\rangle\neq 0\) the non-zero eigenvalues of \(\widehat{A}=-\frac{1}{2}(vv^{*}+\bar{v}v^{t})=-(\operatorname{Re}v)(\operatorname{Re}v)^{t}-(\operatorname{Im}v)(\operatorname{Im}v)^{t}\) are given by \[\mu_{\pm}=\sigma^{2}+\mu a_{\pm}, \tag{3.35}\] where \(\sigma=\|\operatorname{Re}v\|\) and \(a_{\pm}\) was defined in (2.12). The corresponding normalized eigenvectors are \[w_{\pm}=\frac{1}{\sigma^{2}+a_{\pm}(1-\sigma^{2})+2a_{\pm}\mu}\left(\operatorname{Re}v+a_{\pm}\operatorname{Im}v\right). \tag{3.36}\] For \(\mu=0\) the eigenvector-eigenvalue pairs are given by \[(w_{+},\mu_{+})=\left(\frac{\operatorname{Im}v}{1-\sigma^{2}},1-\sigma^{2}\right),\quad(w_{-},\mu_{-})=\left(\frac{\operatorname{Re}v}{\sigma^{2}},\sigma^{2}\right). \tag{3.37}\] Now recall that \(h_{q}(m)=h_{-q}(-m)=h_{q_{r}}(-m)\) and Statement 2 of Lemma 3.6 now follows from applying Lemma 3.8 to \(h_{q}\). It remains to prove the relations between \(m_{\pm}\) and \(\tau_{\pm}\) stated in (3.15), (3.19) as well as (3.20). Let \(q\) be either a non-reducible polynomial or a reducible polynomial of the form (2.10). For \(\widetilde{m}\in(m_{+}^{*},0)\) define \[z(\widetilde{m}):=-\frac{1}{\widetilde{m}}-\gamma(\widetilde{m}). \tag{3.38}\] The function \(z\) is real analytic on \((m_{+}^{*},0)\) with derivative \(z^{\prime}(\widetilde{m})=h(\widetilde{m})\). By Lemma 3.5, Part 1 and Lemma 3.6, Part 1, the function \(h\) has a unique root \(m_{+}\) in \((m_{+}^{*},0)\) and \(h(\widetilde{m})>0\) for all \(\widetilde{m}\in(m_{+},0)\). Therefore, the function \(z\) is invertible on \((m_{+},0)\) and its inverse function \(\widetilde{m}:(z_{+},\infty)\to(m_{+},0)\), with \(z_{+}:=z(m_{+})\), is also real analytic and monotonically increasing. Since \(z^{\prime}(m_{+})=h(m_{+})=0\), the function \(\widetilde{m}\) cannot be analytically extended to any neighbourhood of \(z_{+}\). By (3.38), the function \(\widetilde{m}\) satisfies (2.4) and has the asymptotic behaviour \(\widetilde{m}=-z^{-1}+\mathcal{O}(z^{-2})\) for \(z\to\infty\). At the same time, \(m\), the function uniquely defined by Proposition 2.3, is a Stieltjes transform of a probability measure with support \([\tau_{-},\tau_{+}]\). As such, it can be analytically extended to \(\mathbb{C}\setminus[\tau_{-},\tau_{+}]\) but not to a neighbourhood of \(\tau_{+}\) and the extension is real-valued on \(\mathbb{R}\setminus[\tau_{-},\tau_{+}]\). As an extension it also satisfies (2.4) on \(\mathbb{C}\setminus[\tau_{-},\tau_{+}]\) and has the asymptotic behaviour \(m=-z^{-1}+\mathcal{O}(z^{-2})\). In particular, the restriction of the extension to \(\mathbb{R}\setminus[\tau_{-},\tau_{+}]\), called \(m_{\mathbb{R}}\), is a real analytic function that also satisfies (2.4), has the asymptotic behaviour \(m_{\mathbb{R}}=-z^{-1}+\mathcal{O}(z^{-2})\) and cannot be analytically extended to a neighbourhood of \(\tau_{+}\). As satisfying (2.4) and having the asymptotic behaviour \(m=-z^{-1}+\mathcal{O}(z^{-2})\) uniquely define an analytic function on a neighbourhood of \(\infty\), we have \(\widetilde{m}=m_{\mathbb{R}}\) on \((C,\infty)\) for some \(C>0\). If \(\tau_{+}>z_{+}\), then \(\widetilde{m}\) would be an analytic continuation of \(m_{\mathbb{R}}\) to some neighbourhood of \(\tau_{+}\), which is a contradiction; the case \(\tau_{+}<z_{+}\) is excluded analogously. Therefore we must have \(z_{+}=\tau_{+}.\) In particular, we have \[\tau_{+}=z(m_{+})=-\frac{1}{m_{+}}-\gamma(m_{+}). 
\tag{3.39}\] By an analogous argument to the left of the spectrum we find \[\tau_{-}=z(m_{-})=-\frac{1}{m_{-}}-\gamma(m_{-}) \tag{3.40}\] if \(m_{-}\) exists. Now, let \(q=q_{r}\) be a shifted reducible polynomial of the form (2.10) such that \(h\) has no root on \((0,\infty)\). Then the function \(z(\widetilde{m})\) defined on \((0,\infty)\) by (3.38) is a monotonically increasing analytic function on the entirety of its domain. It is therefore invertible and its inverse function, \(\widetilde{m}\), is analytic on \((-\infty,z_{\infty})\) with \(z_{\infty}:=\lim_{x\to\infty}z(x)\) but cannot be analytically extended to a neighbourhood of \(z_{\infty}\). Along the lines of the above argument, \(m_{\mathbb{R}}\) would be an analytic continuation of \(\widetilde{m}\) in a neighbourhood of \(z_{\infty}\) if \(\tau_{-}>z_{\infty}\) and \(\widetilde{m}\) would be an analytic continuation of \(m_{\mathbb{R}}\) in a neighbourhood of \(\tau_{-}\) if \(z_{\infty}>\tau_{-}\). Thus we have \(\tau_{-}=z_{\infty}\) and in particular \[\tau_{-}=\lim_{x\to\infty}-\frac{1}{x}-\gamma(x). \tag{3.41}\] For reducible \(q_{r}\) as in (2.10), we have \(A=vv^{*}\), \(b=-\xi(v+\bar{v})\) and \(c=\xi^{2}\). Taking the limit (3.41) with these parameters we obtain \(\tau_{-}=0\). Proof of Lemma 3.7.: Let \(y\in(-1,\infty)\) and \(\widehat{y}\in(-\frac{1}{2},\infty)\). We define \[h_{1}(y)=\frac{y}{y+1}\quad\text{and}\quad h_{2}(\widehat{y})=\frac{1}{2\widehat{y}+1}. \tag{3.42}\] Using this notation (3.31) can be expressed as \[g(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c}):=\sum_{i=1}^{n}h_{1}(y_{i})^{2}+\sum_{j=1}^{k}c_{j}^{2}h_{2}^{3}(\widehat{y}_{j})\leq 1 \tag{3.43}\] and (3.32) as \[f(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c}):=\sum_{i=1}^{n}h_{1}(y_{i})^{3}+3\sum_{j=1}^{k}c_{j}^{2}\widehat{y}_{j}h_{2}^{4}(\widehat{y}_{j})\leq 1-\nu. \tag{3.44}\] To prove the lemma it is then sufficient to show that \(g\leq 1\) implies \(f\leq 1-\nu\). First assume \(g=0\). Since all summands of \(g\) are non-negative, it follows that \(y_{i}=0\) for all \(i\in\llbracket n\rrbracket\) and \(c_{j}=0\) for all \(j\in\llbracket k\rrbracket\) and so \(f\) vanishes as well. Thus we assume \(g\neq 0\) from now on and we will prove that \(g\leq 1\) implies \(f<g\). We will prove the cases \(\widehat{y}_{1}\leq\frac{1}{2}\), \(\frac{1}{2}<\widehat{y}_{1}<1\) and \(\widehat{y}_{1}\geq 1\) separately and start with \(\widehat{y}_{1}\leq\frac{1}{2}\). Then \(\widehat{y}_{j}\leq\frac{1}{2}\) for all \(j\). Note that \(\widehat{y}_{j}h_{2}^{4}(\widehat{y}_{j})\leq\widehat{y}_{1}h_{2}(\widehat{y}_{1})h_{2}^{3}(\widehat{y}_{j})\) and \(h_{1}^{3}(y_{i})\leq h_{1}(y_{1})h_{1}^{2}(y_{i})\) hold for all \(i\) and \(j\) since \(\widehat{y}\mapsto\widehat{y}h_{2}(\widehat{y})\) and \(h_{1}\) are monotonically increasing and the \(\widehat{y}_{j}\) and \(y_{i}\) are sorted in non-increasing order. Thus \[\begin{split} f(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})&\leq h_{1}(y_{1})\sum_{i=1}^{n}h_{1}^{2}(y_{i})+3\widehat{y}_{1}h_{2}(\widehat{y}_{1})\sum_{j=1}^{k}c_{j}^{2}h_{2}^{3}(\widehat{y}_{j})\\ &\leq h_{1}(y_{1})\sum_{i=1}^{n}h_{1}^{2}(y_{i})+\frac{3}{4}\sum_{j=1}^{k}c_{j}^{2}h_{2}^{3}(\widehat{y}_{j})\leq\max\left\{\frac{y_{1}}{1+y_{1}},\frac{3}{4}\right\}g(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c}).\end{split} \tag{3.45}\] The second inequality holds since \(3\widehat{y}_{1}h_{2}(\widehat{y}_{1})\leq\frac{3}{4}\) due to \(\widehat{y}_{1}\leq\frac{1}{2}\) and the third one because all summands in both sums are non-negative. 
The relation \(f\leq 1-\nu\) for \(g\leq 1\) then follows in this regime. For \(\frac{1}{2}<\widehat{y}_{1}<1\) we estimate every summand in \(f\) but the \(y_{1}\) term by the corresponding term in \(g\) and we find \[f(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})\leq g(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})-h_{1}(y_{1})^{2}+h_{1}(y_{1})^{3}. \tag{3.46}\] This upper bound is valid since \(\widehat{y}_{j}\leq\widehat{y}_{1}<1\) for all \(j.\) Thus for \(g\leq 1\) we find \[f(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})\leq g(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})-\left(\frac{y_{1}}{y_{1}+1}\right)^{2}\frac{1}{1+y_{1}}<1-\frac{2}{27}, \tag{3.47}\] where in the last step we used \(y_{1}\geq\widehat{y}_{1}>\frac{1}{2}.\) Now let \(\widehat{y}_{1}\geq 1\). Let \(k_{0}\) be the largest integer such that \(\widehat{y}_{k_{0}}\geq 1.\) Then \[f_{0}(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c}):=\sum_{i=2}^{n}h_{1}^{3}(y_{i})+\sum_{j=k_{0}+1}^{k}3c_{j}^{2}\widehat{y}_{j}h_{2}^{4}(\widehat{y}_{j})\leq\sum_{i=2}^{n}h_{1}^{2}(y_{i})+\sum_{j=k_{0}+1}^{k}c_{j}^{2}h_{2}^{3}(\widehat{y}_{j})=:g_{0}(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c}) \tag{3.48}\] follows by again estimating each term individually (either of the sums might be empty). Note that \(g_{0}\geq 0\). Therefore we only need to prove that \[g_{1}(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c}):=(g-g_{0})(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})=h_{1}^{2}(y_{1})+\sum_{j=1}^{k_{0}}c_{j}^{2}h_{2}^{3}(\widehat{y}_{j})\leq 1 \tag{3.49}\] implies \[f_{1}(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c}):=(f-f_{0})(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})=h_{1}^{3}(y_{1})+\sum_{j=1}^{k_{0}}3c_{j}^{2}\widehat{y}_{j}h_{2}^{4}(\widehat{y}_{j})\leq g_{1}(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})-\nu \tag{3.50}\] for some \(\nu>0\), only depending on \(y_{1}\), to conclude the proof of the lemma. For \(\widehat{y}\in[1,\infty)\) the inequality \(h_{2}^{3}(\widehat{y})\leq 3\widehat{y}h_{2}^{4}(\widehat{y})\) is satisfied and consequently \[\sum_{j=1}^{k_{0}}h_{2}^{3}(\widehat{y}_{j})\leq\sum_{j=1}^{k_{0}}3\widehat{y}_{j}h_{2}^{4}(\widehat{y}_{j}) \tag{3.51}\] holds. Therefore for any fixed \((\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})\) and \(r\in\mathbb{R}_{+}\) the difference \((g_{1}-f_{1})(\mathbf{y},\widehat{\mathbf{y}},r\mathbf{c})\) decreases monotonically in \(r\). Hence we only need to prove \((g_{1}-f_{1})(\mathbf{y},\widehat{\mathbf{y}},r^{*}\mathbf{c})\geq\nu\) for \[r^{*}:=\sup\{r\geq 0|\,g_{1}(\mathbf{y},\widehat{\mathbf{y}},r\mathbf{c})\leq 1\}. \tag{3.52}\] On the other hand \(g_{1}(\mathbf{y},\widehat{\mathbf{y}},r\mathbf{c})\) increases monotonically in \(r\) and therefore \(g_{1}(\mathbf{y},\widehat{\mathbf{y}},r^{*}\mathbf{c})=1\). It is thus sufficient to prove that \(g_{1}=1\) implies \(f_{1}\leq 1-\nu\). Hence we assume \(g_{1}=1\). 
Then \[\sum_{j=1}^{k_{0}}c_{j}^{2}h_{2}^{3}(\widehat{y}_{j})=1-h_{1}^{2}(y_{1}) \tag{3.53}\] and \[\begin{split} f_{1}(\mathbf{y},\widehat{\mathbf{y}},\mathbf{c})&=h_{1}^{3}(y_{1})+\sum_{j=1}^{k_{0}}3c_{j}^{2}\widehat{y}_{j}h_{2}^{4}(\widehat{y}_{j})\leq h_{1}^{3}(y_{1})+\max_{j\in[k_{0}]}\{3\widehat{y}_{j}h_{2}(\widehat{y}_{j})\}\sum_{j=1}^{k_{0}}c_{j}^{2}h_{2}^{3}(\widehat{y}_{j})\\ &\leq h_{1}^{3}(y_{1})+3\widehat{y}_{1}h_{2}(\widehat{y}_{1})(1-h_{1}^{2}(y_{1}))\leq h_{1}^{3}(y_{1})+3y_{1}h_{2}(y_{1})(1-h_{1}^{2}(y_{1}))\\ &=\frac{1}{(1+y_{1})^{3}}(y_{1}^{3}+3y_{1}^{2}+3y_{1})=1-\frac{1}{(1+y_{1})^{3}}.\end{split} \tag{3.54}\] The inequalities hold because \(yh_{2}(y)\) increases monotonically in \(y\) and \(y_{1}\geq\widehat{y}_{1}\geq\widehat{y}_{j}\) for all \(j\). Combining this with (3.45) and (3.47) concludes the proof of the lemma. 
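As a quick Monte Carlo spot-check of Lemma 3.7 (a minimal sketch, not a substitute for the proof; the sampling ranges below are arbitrary choices of ours):

```python
# Minimal Monte Carlo spot-check of Lemma 3.7: random samples satisfying
# (3.31) and the lemma's assumptions are tested against (3.32).
import numpy as np

rng = np.random.default_rng(0)
h1 = lambda y: y / (y + 1.0)
h2 = lambda y: 1.0 / (2.0 * y + 1.0)

for _ in range(10**5):
    n = int(rng.integers(1, 5))
    k = int(rng.integers(1, n + 1))              # n >= k
    y = np.sort(rng.uniform(-0.99, 3.0, n))[::-1]
    yh = np.sort(rng.uniform(-0.49, 3.0, k))[::-1]
    if y[0] < yh[0]:
        continue                                 # enforce assumption (i)
    c2 = rng.uniform(0.0, 1.0, k)                # the weights c_j^2
    g = np.sum(h1(y)**2) + np.sum(c2 * h2(yh)**3)
    if g <= 1.0:
        f = np.sum(h1(y)**3) + 3.0 * np.sum(c2 * yh * h2(yh)**4)
        assert f < 1.0, (y, yh, c2)              # (3.32) with some nu > 0
print("no counterexample found")
```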
Proof of Lemma 3.8.: Note that \(h(m)>0\) for \(m\) close to \(0\), since the \(m^{-2}\) term in (3.10) dominates \(h\) in this regime. First, we consider the situation where \(h\) has a pole in \((-\infty,0)\). In other words, we have \(m_{+}^{*}>-\infty\), where \(m_{+}^{*}\) was defined in (3.13). All poles of \(h\) on \((-\infty,0)\) are either of order two or of order three, so the pole at \(m_{+}^{*}\) will be as well. If the pole is of order two, then the behaviour around \(m_{+}^{*}\) is given by \(\lim_{m\to m_{+}^{*}}h(m)=-\infty\) and if it is of order three \(h\) behaves like \(\lim_{m\searrow m_{+}^{*}}h(m)=-\infty\) and \(\lim_{m\nearrow m_{+}^{*}}h(m)=\infty\). Since \(h\) is continuous outside of its poles, the existence of a pole thus implies the existence of a root in \((m_{+}^{*},0)\). Now let \(h\) have no poles in \((-\infty,0)\), i.e. \(m_{+}^{*}=-\infty.\) Then \(h\) is analytic on \((-\infty,0)\) and \(h\) has a root in \((-\infty,0)\) if and only if \(h(m)\) is negative for some \(m\in(-\infty,0)\). We separate multiple cases. **Case 1** (\(\operatorname{rank}(A)\geq 2\)).: _Either \(h\) has a pole in \((-\infty,0)\) or \(b^{t}(1+2m\widehat{A})^{-3}b\geq 0\) for all \(m\in(-\infty,0),\) thus_ \[h(m)\leq\frac{1}{m^{2}}-\operatorname{Tr}A^{2}(1+mA)^{-2}=-\frac{\operatorname{rank}(A)-1}{m^{2}}+\mathcal{O}(m^{-3})<0 \tag{3.55}\] _for sufficiently large \(-m\) and \(h\) has a root either way._ **Case 2** (\(\operatorname{rank}(A)=1\) and \(A\geq 0\)).: _The matrices \(A\) and \(\widehat{A}\) have a positive eigenvalue, therefore \(h\) will have a pole in \((-\infty,0)\) and thus also a root._ **Case 3** (\(\operatorname{rank}(A)=1\), \(A\leq 0\) and \(b\notin\operatorname{Image}\widehat{A}\)).: _In this case, \(h\) has no pole on \((-\infty,0)\) and there is a vector \(w_{0}\) in the kernel of \(\widehat{A}\) such that \(\langle w_{0},b\rangle\neq 0.\) From (3.10) we can estimate \(h\) by_ \[h(m)\leq\frac{1}{m^{2}}-|\langle w_{0},b\rangle|^{2}<0, \tag{3.56}\] _where the last inequality holds for \(-m\) sufficiently large, so \(h\) has a root as well._ **Case 4** (\(\operatorname{rank}(A)=1\), \(A\leq 0\), \(b\in\operatorname{Image}\widehat{A}\) and \(A\in\mathbb{R}^{l\times l}\)).: _Since \(A\) is real-symmetric, we have \(A=\widehat{A}\), so \(\widehat{A}\) is also a rank one matrix and \(h\) has no poles in \((-\infty,0)\). Either \(b=0\) or \(b\) is an eigenvector of \(\widehat{A}\) with eigenvalue \(\mu=-\|A\|\) since \(A=\widehat{A}\), \(\operatorname{rank}A=1\) and \(A\leq 0.\) In both cases \(h\) becomes_ \[h(m)=\frac{1}{m^{2}}\left(1-\frac{(\mu m)^{2}}{(1+\mu m)^{2}}\right)-\|b\|^{2}\frac{1}{(1+2\mu m)^{3}}. \tag{3.57}\] _To find the asymptotics for \(m\to-\infty\) consider \(0<\zeta<1\), with \(\zeta=(\mu m)^{-1}\). We find_ \[h(m) =\|A\|^{2}\zeta^{2}\left(1-\frac{1}{(\zeta+1)^{2}}\right)-\frac{\|b\|^{2}}{8}\frac{\zeta^{3}}{(\frac{1}{2}\zeta+1)^{3}} \tag{3.58}\] \[=\zeta^{3}\left(\|A\|^{2}\sum_{k=0}^{\infty}(k+2)(-\zeta)^{k}-\frac{\|b\|^{2}}{8}\sum_{k=0}^{\infty}\frac{(k+1)(k+2)}{2}(-\frac{1}{2}\zeta)^{k}\right)\] \[=\left(2\|A\|^{2}-\frac{\|b\|^{2}}{8}\right)\zeta^{3}+3\left(\frac{\|b\|^{2}}{16}-\|A\|^{2}\right)\zeta^{4}+\left(4\|A\|^{2}-\frac{3\|b\|^{2}}{16}\right)\zeta^{5}+\mathcal{O}(\zeta^{6}).\] _Therefore \(h\) has a root if \(\|b\|>4\|A\|\). If \(\|b\|\leq 4\|A\|\) then the leading order coefficient of the expansion is positive. Since \(h(m)\geq 0\) implies \(h^{\prime}(m)>0\), it follows that \(h(m)>0\) for all \(m\in(-\infty,0)\). Therefore there is no root and the first two cases in (3.34) follow._ **Case 5** (\(\operatorname{rank}(A)=1\), \(A\leq 0\), \(b\in\operatorname{Image}\widehat{A}\) and \(A\in\mathbb{C}^{l\times l}\setminus\mathbb{R}^{l\times l}\)).: _Since \(A\) is not real-symmetric, we have \(\operatorname{rank}\widehat{A}=2\) by Lemma A.1 and \(h\) has no poles in \((-\infty,0)\). Since \(b\in\operatorname{Image}\widehat{A}\), the function \(h\) can be expressed as_ \[h(m)=\frac{1}{m^{2}}\left(1-\frac{(\mu m)^{2}}{(1+\mu m)^{2}}\right)-|\langle b,w_{+}\rangle|^{2}\frac{1}{(1+2\mu_{+}m)^{3}}-|\langle b,w_{-}\rangle|^{2}\frac{1}{(1+2\mu_{-}m)^{3}} \tag{3.59}\] _To find the asymptotics for \(m\to-\infty\) consider \(0<\zeta<1\), with \(\zeta=(-\|A\|m)^{-1}\). We find_ \[h(m)= \|A\|^{2}\zeta^{2}\left(1-\frac{1}{(\zeta+1)^{2}}\right)-\frac{|\langle b,w_{+}\rangle|^{2}}{8r_{+}^{3}}\frac{\zeta^{3}}{(\frac{1}{2r_{+}}\zeta+1)^{3}}-\frac{|\langle b,w_{-}\rangle|^{2}}{8r_{-}^{3}}\frac{\zeta^{3}}{(\frac{1}{2r_{-}}\zeta+1)^{3}} \tag{3.60}\] \[= \left(2\|A\|^{2}-\frac{|\langle b,w_{+}\rangle|^{2}}{8r_{+}^{3}}-\frac{|\langle b,w_{-}\rangle|^{2}}{8r_{-}^{3}}\right)\zeta^{3}\] \[\qquad\qquad\qquad+3\left(\frac{|\langle b,w_{+}\rangle|^{2}}{16r_{+}^{4}}+\frac{|\langle b,w_{-}\rangle|^{2}}{16r_{-}^{4}}-\|A\|^{2}\right)\zeta^{4}+\mathcal{O}(\zeta^{5}).\] _By an analogous consideration to the above case, \(h\) has a root if and only if_ \[\frac{|\langle b,w_{+}\rangle|^{2}}{r_{+}^{3}}+\frac{|\langle b,w_{-}\rangle|^{2}}{r_{-}^{3}}>(4\|A\|)^{2} \tag{3.61}\] _and the remaining two cases in (3.34) follow._ 
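The expansions in Cases 4 and 5 can be verified symbolically; the following minimal sketch reproduces the coefficients of (3.58), treating \(a=\|A\|\) and \(b=\|b\|\) as positive symbols, and confirms that at \(\|b\|=4\|A\|\) only the \(\zeta^{5}\) term survives, which corresponds to the \((-m)^{-5}\) case in (3.34).

```python
# Minimal symbolic check of the expansion (3.58) in Case 4, with a = ||A||
# and b = ||b|| treated as positive symbols.
import sympy as sp

zeta, a, b = sp.symbols('zeta a b', positive=True)
h = a**2 * zeta**2 * (1 - 1/(zeta + 1)**2) - (b**2/8) * zeta**3/(zeta/2 + 1)**3
ser = sp.series(h, zeta, 0, 6).removeO().expand()
for k in (3, 4, 5):
    print(k, sp.simplify(ser.coeff(zeta, k)))
# zeta^3: 2a^2 - b^2/8,  zeta^4: 3b^2/16 - 3a^2,  zeta^5: 4a^2 - 3b^2/16.
# At b = 4a the first two coefficients vanish and the zeta^5 coefficient
# equals a^2 > 0, giving the (-m)^{-5} behaviour.
```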
## 4 Linearization

Throughout the remainder of the paper, we suppress the index of the identity matrix if it is in dimension \(l+1\) or in \((l+1)N\), i.e. \(I:=I_{l+1}\in\mathbb{C}^{(l+1)\times(l+1)}\) and \(\mathbf{I}:=\mathbf{I}_{(l+1)N}\in\mathbb{C}^{(l+1)N\times(l+1)N}\). In all other cases, we still write out the dimension in the index. To prove that \(\mathbf{g}\) converges towards \(m\) in the sense laid out in Theorem 2.9, we use the linearization method, which we briefly introduce here. For more details on the construction of linearizations in the context of random matrices, see e.g. [21, 30, 31]. First, assume that \(A\) is invertible. We define the linearization \(\mathbf{L}\) of \(q\) as \[\mathbf{L}=K_{0}+\sum_{j=1}^{l}K_{j}\otimes\mathbf{X}_{j}\in\mathbb{C}^{(l+1)N\times(l+1)N}, \tag{4.1}\] where \[K_{0}=\begin{pmatrix}c&0\\ 0&-A^{-1}\end{pmatrix}\in\mathbb{C}^{(l+1)\times(l+1)}\quad\text{and}\quad K_{j}=\begin{pmatrix}b_{j}&e_{j}^{t}\\ e_{j}&0\end{pmatrix}\in\mathbb{C}^{(l+1)\times(l+1)} \tag{4.2}\] for \(j\in[\![l]\!]\) and \(e_{j}\) being the \(j^{\text{th}}\) standard Euclidean basis vector. Here we made use of our convention to identify any matrix \(R\in\mathbb{C}^{k\times n}\), \(k,n\in\mathbb{N}\), with \(R\otimes\mathbf{I}_{N}\in\mathbb{C}^{kN\times nN}\), introduced in Section 1. Let \(J\in\mathbb{C}^{(l+1)\times(l+1)}\) be the orthogonal projection onto the first entry. For \(\delta\in[0,1]\) and \(z=E+\mathrm{i}\eta\in\mathbb{H}\) we define \[\mathbf{G}_{\delta}=(\mathbf{L}-zJ-\mathrm{i}\eta\delta(\mathbf{I}-J))^{-1}\in\mathbb{C}^{(l+1)N\times(l+1)N}. \tag{4.3}\] The matrix \(\mathbf{G}_{\delta}\) is a generalized resolvent; using the Schur complement formula we obtain \[\mathbf{G}_{\delta}=\begin{pmatrix}\mathbf{g}_{\delta}&\mathbf{g}_{\delta}\mathbf{X}^{t}A_{\delta}\\ A_{\delta}\mathbf{X}\mathbf{g}_{\delta}&-A_{\delta}+A_{\delta}\mathbf{X}\mathbf{g}_{\delta}\mathbf{X}^{t}A_{\delta}\end{pmatrix}, \tag{4.4}\] where \[A_{\delta}=A(I_{l}+\mathrm{i}\delta\eta A)^{-1}\in\mathbb{C}^{l\times l}\quad\text{and}\quad\mathbf{g}_{\delta}=\left(\mathbf{X}^{t}A(I_{l}+\mathrm{i}\delta\eta A)^{-1}\mathbf{X}+b^{t}\mathbf{X}+c-z\right)^{-1}\in\mathbb{C}^{N\times N}. \tag{4.5}\] Note that \(I_{l}+\mathrm{i}\delta\eta A\) is invertible for all \(\eta>0\) and \(\delta\in[0,1]\) as \(A\) is Hermitian. In particular we find \[(\mathbf{G}_{0})_{11}=\mathbf{g}\in\mathbb{C}^{N\times N}. \tag{4.6}\] This justifies why \(\mathbf{L}\) is called the linearization of \(q\). The \(\delta\neq 0\) case adds an additional regularization, which we use to prove the local law, Proposition 4.3, for \(\delta=0\). \(\mathbf{G}_{\delta}\) satisfies the equation \[\mathbf{I}+(zJ+\mathrm{i}\eta\delta(\mathbf{I}-J)-K_{0}+\mathcal{S}[\mathbf{G}_{\delta}])\mathbf{G}_{\delta}=\mathbf{D}, \tag{4.7}\] where \[\mathcal{S}[\mathbf{R}]=\mathbb{E}[(\mathbf{L}-\mathbb{E}[\mathbf{L}])\mathbf{R}(\mathbf{L}-\mathbb{E}[\mathbf{L}])] \tag{4.8}\] is called the self-energy operator and \(\mathbf{D}\) is the error term. It is defined by \[\mathbf{D}:=\mathcal{S}[\mathbf{G}_{\delta}]\mathbf{G}_{\delta}+(\mathbf{L}-K_{0})\mathbf{G}_{\delta}. \tag{4.9}\] We split the self-energy term into \(\widetilde{\Gamma}\) and \(\mathcal{S}_{\circ}\), i.e. \(\mathcal{S}=\widetilde{\Gamma}+\mathcal{S}_{\circ}\). The first part, \(\widetilde{\Gamma}\), depends only on the averaged trace of its argument, i.e. for all \(R\in\mathbb{C}^{(l+1)N\times(l+1)N}\) we have \[\widetilde{\Gamma}[\mathbf{R}]=\Gamma[\underline{\mathbf{R}}]. \tag{4.10}\] The blockwise trace, \(\underline{\mathbf{R}}\), was introduced in (1.7) and \(\Gamma:\mathbb{C}^{(l+1)\times(l+1)}\to\mathbb{C}^{(l+1)\times(l+1)}\) is given by \[\Gamma\left[\begin{pmatrix}\omega&v^{t}\\ w&T\end{pmatrix}\right]=\begin{pmatrix}\omega\|b\|^{2}+b^{t}(v+w)+\operatorname{Tr}T&\omega b^{t}+w^{t}\\ \omega b+v&\omega I_{l}\end{pmatrix}, \tag{4.11}\] where \(\omega\in\mathbb{C}\), \(v,w\in\mathbb{C}^{l}\) and \(T\in\mathbb{C}^{l\times l}\). In (4.10) we have made use of our convention to identify \(\Gamma[\underline{\mathbf{R}}]\) with \(\Gamma[\underline{\mathbf{R}}]\otimes\mathbf{I}_{N}\). 
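As a sanity check on the construction (4.1)–(4.6), the following minimal numerical sketch verifies \((\mathbf{G}_{0})_{11}=\mathbf{g}\) for a single sample; the concrete choice \(l=2\), \(A=I_{2}\), \(b=0\), \(c=0\) (so \(q(\mathbf{X})=\mathbf{X}_{1}^{2}+\mathbf{X}_{2}^{2}\)) is an arbitrary example of ours.

```python
# Minimal numerical sketch (l = 2, A = I_2, b = 0, c = 0): verifies (4.6),
# i.e. that the (1,1) block of the generalized resolvent G_0 of the
# linearization equals the resolvent g of q(X), cf. (4.5) with delta = 0.
import numpy as np

N, l = 50, 2
rng = np.random.default_rng(0)
X = []
for _ in range(l):
    H = rng.standard_normal((N, N))
    X.append((H + H.T) / np.sqrt(2 * N))        # real symmetric Wigner matrix

A, b, c = np.eye(l), np.zeros(l), 0.0
z = 1.0 + 0.5j

q = X[0] @ X[0] + X[1] @ X[1] + c * np.eye(N)   # q(X) = X^t A X + b^t X + c
g = np.linalg.inv(q - z * np.eye(N))

K0 = np.zeros((l + 1, l + 1)); K0[0, 0] = c; K0[1:, 1:] = -np.linalg.inv(A)
L = np.kron(K0, np.eye(N))                      # linearization (4.1)-(4.2)
for j in range(l):
    Kj = np.zeros((l + 1, l + 1)); Kj[0, 0] = b[j]
    Kj[0, j + 1] = Kj[j + 1, 0] = 1.0
    L = L + np.kron(Kj, X[j])
J = np.zeros((l + 1, l + 1)); J[0, 0] = 1.0
G0 = np.linalg.inv(L - z * np.kron(J, np.eye(N)))   # (4.3) with delta = 0

print(np.abs(G0[:N, :N] - g).max())             # ~1e-13: (G_0)_{11} = g
```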
The second term \(\mathcal{S}_{\circ}:\mathbb{C}^{(l+1)N\times(l+1)N}\to\mathbb{C}^{(l+1)N\times(l+1)N}\) is given by \[\mathcal{S}_{\circ}\left[\begin{pmatrix}\boldsymbol{\omega}&\mathbf{V}^{t}\\ \mathbf{W}&\mathbf{T}\end{pmatrix}\right]=\frac{1}{N}\mathbb{E}\zeta_{1}^{2}\begin{pmatrix}\|b\|^{2}\boldsymbol{\omega}^{(o)}+b^{t}(\mathbf{V}^{(o)}+\mathbf{W}^{(o)})+\operatorname{Tr}_{\mathrm{b}}\mathbf{T}^{(o)}&\boldsymbol{\omega}^{(o)}b^{t}+\mathbf{W}^{(o)}\\ \boldsymbol{\omega}^{(o)}b+\mathbf{V}^{(o)}&\boldsymbol{\omega}^{(o)}I_{l}\end{pmatrix}, \tag{4.12}\] where \(\boldsymbol{\omega}\in\mathbb{C}^{N\times N}\), \(\mathbf{V}=(\mathbf{V}_{i})_{i\in[\![l]\!]}\in(\mathbb{C}^{N\times N})^{l}\), \(\mathbf{W}=(\mathbf{W}_{i})_{i\in[\![l]\!]}\in(\mathbb{C}^{N\times N})^{l}\) and \(\mathbf{T}\in\mathbb{C}^{lN\times lN}\). For any \(\mathbf{R}=(\mathbf{R}_{ij})_{i\in[\![k]\!],j\in[\![n]\!]}\in\mathbb{C}^{kN\times nN}\), \(n,k\in\mathbb{N}\), we define \(\mathbf{R}^{(o)}\) by \((\mathbf{R}^{(o)})_{ij}:=\mathbf{R}_{ji}^{t}-\operatorname{diag}(\mathbf{R}_{ji})\in\mathbb{C}^{N\times N}\). For \(\mathbf{R}=(\mathbf{R}_{ij})_{i,j\in[\![l]\!]}\in\mathbb{C}^{lN\times lN}\) the block-trace \(\operatorname{Tr}_{\mathrm{b}}\) is defined by \[\operatorname{Tr}_{\mathrm{b}}\mathbf{R}=\sum_{i=1}^{l}\mathbf{R}_{ii}\in\mathbb{C}^{N\times N}. \tag{4.13}\] Equation (4.7) without the error term and \(\mathcal{S}_{\circ}\) is called the Dyson equation (DE) and its solution is denoted by \(M_{\delta}=M_{\delta}(z)\in\mathbb{C}^{(l+1)\times(l+1)}\), i.e. \[I+(zJ+\mathrm{i}\eta\delta(I-J)-K_{0}+\Gamma[M_{\delta}])M_{\delta}=0\in\mathbb{C}^{(l+1)\times(l+1)}. \tag{4.14}\] In the next subsection, we will lay out in what sense \(\mathbf{G}_{\delta}\) is close to \(M_{\delta}\). [21, Lemma 2.6] asserts the existence of a unique analytic solution to (4.14) for \(\delta=0\) and [34, Theorem 2.1] guarantees a unique solution for \(\delta\in(0,1]\). For \(m_{\delta}\in\mathbb{C}\), \(v_{\delta},w_{\delta}\in\mathbb{C}^{l}\) and \(\widehat{M}_{\delta}\in\mathbb{C}^{l\times l}\) we partition \(M_{\delta}\) as \[M_{\delta}:=\begin{pmatrix}m_{\delta}&v_{\delta}^{t}\\ w_{\delta}&\widehat{M}_{\delta}\end{pmatrix}. \tag{4.15}\] Since \(M_{\delta}\) solves (4.14) it is invertible and by the Schur complement formula its inverse is given by \[M_{\delta}^{-1}=\begin{pmatrix}m_{\delta}^{-1}+m_{\delta}^{-2}v_{\delta}^{t}\left(\widehat{M}_{\delta}-w_{\delta}m_{\delta}^{-1}v_{\delta}^{t}\right)^{-1}w_{\delta}&-m_{\delta}^{-1}v_{\delta}^{t}\left(\widehat{M}_{\delta}-w_{\delta}m_{\delta}^{-1}v_{\delta}^{t}\right)^{-1}\\ -m_{\delta}^{-1}\left(\widehat{M}_{\delta}-w_{\delta}m_{\delta}^{-1}v_{\delta}^{t}\right)^{-1}w_{\delta}&\left(\widehat{M}_{\delta}-w_{\delta}m_{\delta}^{-1}v_{\delta}^{t}\right)^{-1}\end{pmatrix}. \tag{4.16}\] At the same time by (4.14) the inverse of \(M_{\delta}\) can be expressed as \[M_{\delta}^{-1}=\begin{pmatrix}-z-m_{\delta}b^{t}b-b^{t}(v_{\delta}+w_{\delta})-\operatorname{Tr}\widehat{M}_{\delta}&-m_{\delta}b^{t}-w_{\delta}^{t}\\ -m_{\delta}b-v_{\delta}&-A^{-1}-(m_{\delta}+\mathrm{i}\eta\delta)I_{l}\end{pmatrix}. \tag{4.17}\] Comparing the two expressions for \(M_{\delta}^{-1}\), we obtain a set of equations for \(m_{\delta},v_{\delta},w_{\delta}\) and \(\widehat{M}_{\delta}\). Solving them we find explicit expressions for \(v_{\delta},w_{\delta}\) and \(\widehat{M}_{\delta}\) in terms of \(m_{\delta}\). 
In other words, \(M_{\delta}=M_{\delta}[m_{\delta}]\) can be expressed purely in terms of its (1,1) entry with \(M_{\delta}[x]\) given by \[M_{\delta}[x]:=\begin{pmatrix}x&-xb^{t}V_{\delta}(x)A_{\delta}\\ -xA_{\delta}V_{\delta}(x)b&-A_{\delta}(I_{l}+xA_{\delta})^{-1}+xA_{\delta}V_{\delta}(x)bb^{t}V_{\delta}(x)A_{\delta}\end{pmatrix} \tag{4.18}\] with \[V_{\delta}(x):=x(I_{l}+2x\widehat{A}_{\delta})^{-1}. \tag{4.19}\] Note that we use square brackets to denote \(M_{\delta}\) as a function of its (1,1) entry. This is done to avoid confusion with \(M_{\delta}(z)\), which denotes \(M_{\delta}\) as a function of the spectral parameter \(z\). The two functions are related by \(M_{\delta}[m_{\delta}(z)]=M_{\delta}(z)\). The entry \(m_{\delta}\) satisfies the equation \[-m_{\delta}^{-1}=z+\gamma_{\delta}(m_{\delta}), \tag{4.20}\] where \[\gamma_{\delta}(x):=-\operatorname{Tr}A_{\delta}(I_{l}+A_{\delta}x)^{-1}+b^{t}(V_{\delta}(x)-V_{\delta}(x)\widehat{A}_{\delta}V_{\delta}(x))b-c \tag{4.21}\] and \(\widehat{A}_{\delta}=\frac{1}{2}(A_{\delta}+A_{\delta}^{t})\). For \(\delta=0\), Equation (4.20) corresponds precisely to (2.4). From here on we no longer assume that \(A\) is invertible; instead we use Equations (4.4), (4.9) and (4.18) as the definitions of \(\mathbf{G}_{\delta}\), \(\mathbf{D}\) and \(M_{\delta}=M_{\delta}[m_{\delta}]\) respectively. Note that this is also well defined for (4.9) as \(K_{0}\) and \(\mathbf{L}\), defined in (4.2) and (4.1) respectively, depend on \(A^{-1}\), but \(\mathbf{L}-K_{0}\) does not. The function \(m_{\delta}\) is uniquely defined by the following lemma. **Lemma 4.1**.: _Let \(\rho\) have a regular edge at \(\tau_{\pm}\) and let \(\delta\in(0,1]\). The function \(m_{\delta}\) is uniquely defined by the following criteria around the edge and away from the spectrum._ 1. _There is a_ \(u>0\) _only depending on the coefficients of_ \(q\) _such that for all_ \(z\in\mathbb{H}\) _with_ \(|z-\tau_{\pm}|<u\) _there is a unique function_ \(m_{\delta}=m_{\delta}(z)\) _that solves (_4.20_) and satisfies_ \[\operatorname{Im}m_{\delta}\sim\begin{cases}\sqrt{\kappa+\eta}&\text{if }E\in\operatorname{supp}(\rho)\\ \frac{\eta}{\sqrt{\kappa+\eta}}&\text{if }E\notin\operatorname{supp}(\rho)\end{cases}\quad\text{as well as}\quad|m_{\delta}-m_{\pm}|^{2}\sim|z-\tau_{\pm}|,\] (4.22) _with_ \(\kappa=|E-\tau_{\pm}|\) _and_ \(m_{\pm}=\lim_{\eta\searrow 0}m(\tau_{\pm}+\mathrm{i}\eta)\)_._ 2. _For all_ \(C>0\) _there is an_ \(\eta_{0}\) _such that for all_ \(z=E+\mathrm{i}\eta\in\mathbb{H}\) _with_ \(C^{-1}<\operatorname{dist}(E,\operatorname{supp}(\rho))<C\) _and_ \(\eta<\eta_{0}\) _there is a unique function_ \(m_{\delta}=m_{\delta}(z)\) _that solves (_4.20_) and satisfies_ \[\operatorname{Im}m_{\delta}\sim_{C}\eta\quad\text{as well as}\quad|m_{\delta}(z)-m(E)|\sim\eta,\] (4.23) _with_ \(m(E)=\lim_{\eta\searrow 0}m(E+\mathrm{i}\eta)\)_._ _In each case \(M_{\delta}=M_{\delta}[m_{\delta}]\) is also the unique solution to (4.14) with \(\operatorname{Im}M_{\delta}>0\) if the coefficient matrix \(A\) is invertible._ The lemma is proven in Appendix A.3. ### Local law for the linearization **Definition 4.2** (Shifted square of a Wigner matrix).: _A polynomial \(q\) as in (1.1) with \(A\in\mathbb{R}^{l\times l}\), \(\operatorname{rank}A=1\) and \(b=0\) is called a **shifted square of a Wigner matrix.**_ **Remark**.: _If a polynomial \(q\) satisfies the above definition, then there is some \(v\in\mathbb{R}^{l}\) with \(v\neq 0\) and_ \[q(\mathbf{X})=\pm(v^{t}\mathbf{X})^{2}+c. 
\tag{4.24}\] _Here, \(\mathbf{W}:=\frac{1}{\|v\|}v^{t}\mathbf{X}\) is a Wigner matrix, normalised such that \(\mathbb{E}|w_{ij}|^{2}=\frac{1}{N}\) and we have_ \[q(\mathbf{X})=\pm\|v\|^{2}\mathbf{W}^{2}+c. \tag{4.25}\] _This justifies the terminology in the above definition._ In case \(q\) is not a shifted square of a Wigner matrix, we will prove that \(M_{0}\) and \(\mathbf{G}_{0}\) are close to each other around any regular edge. As \(m\) and \(\mathbf{g}\) are sub-matrices of \(M_{0}\) and \(\mathbf{G}_{0}\) respectively, this also implies closeness of \(\mathbf{g}\) and \(m\). This result, a local law for the linearization, is presented below in Propositions 4.3 and 4.4. For \(q\) being a shifted square of a Wigner matrix, we will prove the local law, Theorem 2.9 and Proposition 2.10, directly without the use of a linearization. The reason why we prove this case separately is that the stability operator \(\mathscr{L}\), defined below in (4.33), does have an additional unstable direction. **Proposition 4.3** (Edge local law for the linearization).: _Let \(q\) be a polynomial of the form (1.1) that is not a shifted square of a Wigner matrix and let the corresponding \(\rho\) have a regular edge at \(\tau_{0}\). There is a \(\kappa_{0}>0\) depending only on the parameters of \(q\) such that for all \(\varepsilon,\gamma,D>0\), \(\delta\in\{0,1\}\) and \(z=E+\mathrm{i}\eta\in\mathbb{D}_{\gamma}^{\kappa_{0}}\) the isotropic local law_ \[\mathbb{P}\left(|\langle\mathbf{x},(\mathbf{G}_{\delta}-M_{\delta})\mathbf{y}\rangle|>\|\mathbf{x}\|\|\mathbf{y}\|N^{\varepsilon}\left(\frac{\operatorname{Im}m}{\sqrt{N\eta}}+\frac{1}{N\eta}\right)\right)\lesssim_{\varepsilon,\gamma,D}N^{-D} \tag{4.26}\] _holds for all deterministic \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{(l+1)N}\). Moreover, the averaged local law_ \[\mathbb{P}\left(|\langle\mathbf{B}(\mathbf{G}_{\delta}-M_{\delta})\rangle|>\|\mathbf{B}\|\frac{N^{\varepsilon}}{N\eta}\right)\lesssim_{\varepsilon,\gamma,D}N^{-D} \tag{4.27}\] _holds for all deterministic \(\mathbf{B}\in\mathbb{C}^{(l+1)N\times(l+1)N}\). For \(E\notin\operatorname{supp}(\rho)\), an improved averaged local law of the form_ \[\mathbb{P}\left(|\langle\mathbf{B}(\mathbf{G}_{\delta}-M_{\delta})\rangle|>\|\mathbf{B}\|N^{\varepsilon}\left(\frac{1}{N(\kappa+\eta)}+\frac{1}{(N\eta)^{2}\sqrt{\kappa+\eta}}\right)\right)\lesssim_{\varepsilon,\gamma,D}N^{-D}, \tag{4.28}\] _with \(\kappa=|E-\tau_{0}|\), is obtained._ **Proposition 4.4** (Local law for the linearization away from the spectrum).: _Let \(q\) be a polynomial of the form (1.1) that is not a shifted square of a Wigner matrix. For all \(C>0\) there is an \(\eta_{0}>0\), depending only on the coefficients of \(q\), such that an averaged local law of the form_ \[\mathbb{P}\left(|\langle\mathbf{B}(\mathbf{G}_{\delta}-M_{\delta})\rangle|>\|\mathbf{B}\|N^{\varepsilon}\left(\frac{1}{N}+\frac{1}{(N\eta)^{2}}\right)\right)\lesssim_{\varepsilon,\gamma,D,C}N^{-D} \tag{4.29}\] _holds true for all deterministic \(\mathbf{B}\in\mathbb{C}^{(l+1)N\times(l+1)N}\), \(\varepsilon,\gamma,D>0\) and \(z\in\mathbb{G}_{\gamma}^{C,\eta_{0}}\)._ To obtain Theorem 2.9, we only require the \(\delta=0\) case, but to prove it we will require the \(\delta=1\) case; thus we state both cases together in the proposition. The proof of Proposition 4.3 has two major ingredients. For one, we show that the error term \(\mathbf{D}\) in (4.7) is indeed small; this is done in Proposition 4.5, which we import from [22, Theorem 4.1] and adjust to our setting. Additionally, we need to prove that (4.14) is stable under small perturbations. This is done in Proposition 4.6. 
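Before turning to these ingredients, the error scale \((N\eta)^{-1}\) in the averaged law (4.27) can be illustrated numerically. The sketch below reuses the scalar example \(q(\mathbf{X})=\mathbf{X}_{1}^{2}\) with \(\tau_{+}=4\) (a case covered by Lemma 5.1 below); for these parameters (2.4) reduces to the quadratic \(zm^{2}+zm+1=0\), a purely illustrative reduction of ours.

```python
# Minimal numerical sketch (not a proof) of the averaged local law at the
# edge tau_+ = 4 for q(X) = X_1^2: at eta = N^{-2/3} the error |<g - m>| is
# compared with the scale (N*eta)^{-1} = N^{-1/3}.
import numpy as np

for N in (400, 800, 1600):
    rng = np.random.default_rng(N)
    H = rng.standard_normal((N, N))
    W = (H + H.T) / np.sqrt(2 * N)
    evals = np.linalg.eigvalsh(W @ W)
    eta = N ** (-2.0 / 3.0)
    z = 4.0 + 1j * eta
    g_avg = np.mean(1.0 / (evals - z))                     # <g(z)>
    m = max(np.roots([z, z, 1.0]), key=lambda r: r.imag)   # z m^2 + z m + 1 = 0
    print(N, abs(g_avg - m), 1.0 / (N * eta))
```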
**Proposition 4.5**.: _Let \(\varepsilon>0\), \(p\in\mathbb{N}\) and \(\delta\in[0,1]\). Then there is a \(c>0\) such that_ \[\|\mathbf{D}\|_{p}\lesssim_{p,\varepsilon}N^{\varepsilon}\sqrt{\frac{\|\mathbf{G}_{\delta}\mathbf{G}_{\delta}^{*}\|_{p_{0}}}{N}\left(1+\|\mathbf{G}_{\delta}\|_{p_{0}}\right)^{c}\left(1+N^{-\frac{1}{4}}\|\mathbf{G}_{\delta}\|_{p_{0}}\right)^{cp}} \tag{4.30}\] _and_ \[\|\mathbf{D}\|_{p}^{\mathrm{av}}\lesssim_{p,\varepsilon}N^{\varepsilon}\frac{\|\mathbf{G}_{\delta}^{*}\mathbf{G}_{\delta}\|_{p_{0}}}{N}\left(1+\|\mathbf{G}_{\delta}\|_{p_{0}}\right)^{c}\left(1+N^{-\frac{1}{4}}\|\mathbf{G}_{\delta}\|_{p_{0}}\right)^{cp} \tag{4.31}\] _where we defined \(p_{0}=c\frac{p^{4}}{\varepsilon}\)._ Proof.: First, consider the case of \(A\) being invertible. Then \(\mathbf{G}_{\delta}=(\mathbf{L}-zJ-\mathrm{i}\eta\delta(\mathbf{I}-J))^{-1}\) and our proof follows the proof of [22, Theorem 4.1] line by line with the exception that the Ward identities, (51a) and (51b), do not apply for \(\delta=0\). Thus, we cannot replace the \(\mathbf{G}_{\delta}^{*}\mathbf{G}_{\delta}\) terms by \(\eta^{-1}\operatorname{Im}\mathbf{G}_{\delta}\) and we are instead left with the upper bounds (4.30) and (4.31). Now consider \(A\) being non-invertible. Then \(A+\varepsilon\) is invertible for all \(\varepsilon\in(0,u)\) for some \(u\sim 1\). We denote the \(\mathbf{G}_{\delta}\) associated with \(A+\varepsilon\) by \(\mathbf{G}_{\delta}^{\varepsilon}\) and we have \(\lim_{\varepsilon\to 0}\mathbf{G}_{\delta}^{\varepsilon}=\mathbf{G}_{\delta}\). We thus only need to prove that the constants in (4.30) and (4.31) are uniform in \(\varepsilon\) to obtain the proposition in this case. This is non-trivial as [22, Theorem 4.1] states as a condition that \(\mathbb{E}[\mathbf{L}]\) is bounded and this is clearly not the case in the \(\varepsilon\to 0\) limit. The assumption is only used, however, to ensure that \(\mathbf{G}_{\delta}^{*}\mathbf{G}_{\delta}\) satisfies the lower bound \(\|\mathbf{G}_{\delta}^{*}\mathbf{G}_{\delta}\|_{p_{0}}\gtrsim 1\). In our case, this follows instead from \(\|\mathbf{G}_{\delta}^{*}\mathbf{G}_{\delta}\|_{p_{0}}\geq\|\mathbf{g}_{\delta}^{*}\mathbf{g}_{\delta}\|_{p_{0}}\gtrsim_{p_{0}}\|\mathbf{g}^{*}\mathbf{g}\|_{p_{0}}\). Since \(\mathbf{g}\) is the resolvent of \(q(\mathbf{X})\) we have \(\|\mathbf{g}^{*}\mathbf{g}\|_{p_{0}}\geq\mathbb{E}[\|q(\mathbf{X})\|^{-1}]\gtrsim 1\) uniformly in \(z\) for bounded \(z\). Here, the last inequality holds true since \(q(\mathbf{X})\) satisfies the inequality \[\|q(\mathbf{X})\|\leq\sum_{i,j=1}^{l}|A_{ij}|\|\mathbf{X}_{i}\|\|\mathbf{X}_{j}\|+\sum_{i=1}^{l}|b_{i}|\|\mathbf{X}_{i}\|+|c| \tag{4.32}\] and \(\mathbb{E}[\|\mathbf{X}_{i}\|]\lesssim 1\) for all \(i\in[\![l]\!]\) since the \(\mathbf{X}_{i}\) are Wigner matrices. Therefore the proposition also holds for non-invertible \(A\). The following result concerns the stability of the Dyson equation. The stability operator \(\mathscr{L}:\mathbb{C}^{(l+1)N\times(l+1)N}\to\mathbb{C}^{(l+1)N\times(l+1)N}\) is given by \[\mathscr{L}[\mathbf{R}]=\mathbf{R}-M_{\delta}\mathcal{S}[\mathbf{R}]M_{\delta} \tag{4.33}\] and we will prove **Proposition 4.6** (Control of \(\mathscr{L}\)).: _Let \(q\) be a polynomial of the form (1.1) that is not a shifted square of a Wigner matrix and let the corresponding \(\rho\) have a regular edge at \(\tau_{0}\). 
There exists a \(u\sim 1\) such that for all \(z=E+\mathrm{i}\eta\in\mathbb{H}\) with \(|z-\tau_{0}|<u\) and \(\delta\in[0,1]\) there exists an eigenvalue \(\beta\) with corresponding left and right eigenvectors \(L,B\in\mathbb{C}^{(l+1)\times(l+1)}\) of \(\mathscr{L}\) such that_ \[\begin{split}\|\mathscr{L}^{-1}\|_{\mathrm{sp}}\sim(\kappa+\eta)^{-\frac{1}{2}},&\quad\|(\mathbb{1}-\mathscr{P})\mathscr{L}^{-1}\|_{\mathrm{sp}}\lesssim 1,\quad|\beta|\sim(\kappa+\eta)^{\frac{1}{2}},\\ &|\langle L,B\rangle|\sim 1,&\quad\|L\|+\|B\|\sim 1,\quad|\langle L,M_{\delta}\mathcal{S}[B]B\rangle|\sim 1,\end{split} \tag{4.34}\] _with \(\mathbb{1}\) being the identity operator, \(\mathscr{P}\) being the spectral projection onto \(B\), i.e._ \[\mathscr{P}=(\langle L\otimes\mathbf{I}_{N},B\otimes\mathbf{I}_{N}\rangle)^{-1}\langle L\otimes\mathbf{I}_{N},\cdot\rangle(B\otimes\mathbf{I}_{N}) \tag{4.35}\] _and \(\kappa=|E-\tau_{0}|\)._ _Furthermore, for any \(C>0\) and \(E=\operatorname{Re}z\) with \(C^{-1}<\operatorname{dist}(E,\operatorname{supp}(\rho))<C\) there is an \(\eta_{0}>0\) such that we have_ \[\|\mathscr{L}^{-1}\|_{\operatorname{sp}}\sim_{C}1 \tag{4.36}\] _uniformly for all \(\eta\leq\eta_{0}\)._ The proof will be given in Section 4.2. **Remark**.: \(B\) _and \(L\) being right and left eigenvectors of \(\mathscr{L}\) with eigenvalue \(\beta\) is understood in the sense of_ \[\mathscr{L}[B]=\beta B\quad\text{and}\quad\mathscr{L}^{*}[L]=\bar{\beta}L. \tag{4.37}\] _Here, we used the notation \(R=R\otimes\mathbf{I}_{N}\in\mathbb{C}^{(l+1)N\times(l+1)N}\) introduced in (1.3). The adjoint is defined with respect to the scalar product \(\langle\mathbf{R},\mathbf{T}\rangle=\langle\mathbf{R}^{*}\mathbf{T}\rangle\)._ Corollary 3.3, as well as Propositions 4.5 and 4.6, are the main ingredients of Proposition 4.3 and are in fact sufficient for \(\delta=1.\) For \(\delta=0\), however, extra care is needed as the Ward identity for resolvents \(\mathbf{G}\), \[\mathbf{G}\mathbf{G}^{*}=\frac{\operatorname{Im}\mathbf{G}}{\eta}, \tag{4.38}\] does not translate to generalized resolvents. Instead, we will estimate \(\mathbf{G}_{0}\mathbf{G}_{0}^{*}\) by \(\mathbf{G}_{1}\mathbf{G}_{1}^{*}\), which allows us to obtain the local law for \(\delta=0\) from the \(\delta=1\) case. The proof of Proposition 4.3 will be given in Section 5 and follows the general strategy from [1], modified to accommodate for the lack of a Ward identity. ### Proof of Proposition 4.6 Proof of Proposition 4.6.: We split the stability operator into \(\mathscr{L}=\mathscr{L}^{(0)}+\mathscr{L}^{(1)}\) with \[\mathscr{L}^{(0)}[\mathbf{R}]:=\mathscr{L}[\underline{\mathbf{R}}]\quad\text{and}\quad\mathscr{L}^{(1)}[\mathbf{R}]:=\mathscr{L}[\mathbf{R}-\underline{\mathbf{R}}]. \tag{4.39}\] By (4.11) and (4.12), we have \(\widetilde{\Gamma}[\mathbf{R}-\underline{\mathbf{R}}]=\mathcal{S}_{\circ}[\underline{\mathbf{R}}]=0.\) Thus we have \[\mathscr{L}^{(0)}[\mathbf{R}]=\mathcal{L}[\underline{\mathbf{R}}]\otimes\mathbf{I}_{N}\quad\text{and}\quad\mathscr{L}^{(1)}[\mathbf{R}]=\mathbf{R}-\underline{\mathbf{R}}-M_{\delta}\mathcal{S}_{\circ}[\mathbf{R}]M_{\delta} \tag{4.40}\] with \(\mathcal{L}:\mathbb{C}^{(l+1)\times(l+1)}\to\mathbb{C}^{(l+1)\times(l+1)}\) defined as \[\mathcal{L}[R]:=R-M_{\delta}\Gamma[R]M_{\delta}. 
\tag{4.41}\] The image of \(\mathscr{L}^{(0)}\) is given by \[\{\mathbf{R}\in\mathbb{C}^{(l+1)N\times(l+1)N}:\,\underline{\mathbf{R}}=\mathbf{R}\}=:\mathcal{U} \tag{4.42}\] and its kernel is given by \(\mathcal{U}^{\perp}\), the orthogonal complement of \(\mathcal{U}.\) At the same time the image of \(\mathscr{L}^{(1)}\) is contained in \(\mathcal{U}^{\perp}\) and \(\mathcal{U}\) is contained in the kernel of \(\mathscr{L}^{(1)}\). That is, \(\mathscr{L}\) decomposes into \(\mathscr{L}^{(0)}\) acting on \(\mathcal{U}\) and \(\mathscr{L}^{(1)}\) acting on its orthogonal complement. The behaviour of \(\mathcal{L}\) is summarized in the following lemma. **Lemma 4.7**.: _Let \(q\) be a polynomial of the form (1.1) that is not a shifted square of a Wigner matrix and let the corresponding \(\rho\) have a regular edge at \(\tau_{0}\). There exists a \(u\sim 1\) such that for all \(z=E+\mathrm{i}\eta\in\mathbb{H}\) with \(|z-\tau_{0}|<u\) and \(\delta\in[0,1]\) there exists an eigenvalue \(\beta\) with corresponding normalized left and right eigenvectors \(L\) and \(B\) of \(\mathcal{L}\) such that_ \[\|\mathcal{L}^{-1}\|_{\operatorname{sp}}\sim(\kappa+\eta)^{-\frac{1}{2}}, \quad\|(\mathbb{1}-\mathcal{P})\mathcal{L}^{-1}\|_{\operatorname{sp}}\lesssim 1,\quad|\beta|\sim(\kappa+\eta)^{\frac{1}{2}}, \tag{4.43}\] \[|\langle L,B\rangle|\sim 1, |\langle L,M_{\delta}\Gamma[B]B\rangle|\sim 1,\] _with \(\mathcal{P}\) being the spectral projection onto \(B\), i.e. \(\mathcal{P}=(\langle L,B\rangle)^{-1}\langle L,\cdot\rangle B\) and \(\kappa=|E-\tau_{0}|\)._ _Furthermore, for any \(C>0\) and \(E=\operatorname{Re}z\) with \(C^{-1}\leq\operatorname{dist}(E,\operatorname{supp}(\rho))\leq C\) there is an \(\eta_{0}>0\) such that we have_ \[\|\mathcal{L}^{-1}\|_{\operatorname{sp}}\sim_{C}1 \tag{4.44}\] _uniformly for all \(\eta\leq\eta_{0}\)._ The proof of Lemma 4.7 is deferred to the end of the section. For \(\mathcal{S}_{\circ}\) we find \[\|\mathcal{S}_{\circ}[\mathbf{R}]\|_{\mathrm{hs}}\lesssim\frac{1}{N}\|\mathbf{R}\|_{\mathrm{hs}} \tag{4.45}\] for all \(\mathbf{R}\in\mathbb{C}^{(l+1)N\times(l+1)N}\). Thus \(\mathcal{S}_{\circ}\) is bounded by \[\|\mathcal{S}_{\circ}\|_{\mathrm{sp}}\lesssim\frac{1}{N}. \tag{4.46}\] By Corollary 3.3, Lemma 4.1 and (4.18) we have \(\|M_{\delta}\|\lesssim 1\) for all \(z\) such that \(|z-\tau_{0}|\leq u\) and some \(u>0\). Combined with (4.46) it follows that there is a \(C>0\) such that \[\|\mathscr{L}^{(1)}\|_{\mathrm{sp}}\leq 1+CN^{-1}\quad\text{and}\quad\operatorname{Spec}\left(\mathscr{L}^{(1)}|_{\mathcal{U}^{\perp}}\right)\subset B_{CN^{-1}}(1), \tag{4.47}\] where \(B_{\varepsilon}(x)\) denotes the \(\varepsilon\) neighbourhood of \(x\). Thus for sufficiently large \(N\) the smallest eigenvalue of \(\mathscr{L}\) equals that of \(\mathcal{L}\) and the corresponding left and right eigenvectors of \(\mathscr{L}\) are given by \(L\otimes\mathbf{I}_{N}\) and \(B\otimes\mathbf{I}_{N}\). The norm of the inverse of \(\mathscr{L}\) is bounded by \[\|\mathscr{L}^{-1}\|_{\mathrm{sp}}\leq\max\{1+CN^{-1},\|\mathcal{L}^{-1}\|_{\mathrm{sp}}\}\quad\text{and}\quad\|(\mathbb{1}-\mathscr{P})\mathscr{L}^{-1}\|_{\mathrm{sp}}\leq\max\{1+CN^{-1},\|(\mathbb{1}-\mathcal{P})\mathcal{L}^{-1}\|_{\mathrm{sp}}\}, \tag{4.48}\] completing the proof of Proposition 4.6. Proof of Lemma 4.7.: First, we prove that \(\mathcal{L}\) has exactly one vanishing eigenvalue at \(\tau_{0}\), and from there on we conclude the proof with the help of a perturbative argument. We define \[\mathcal{C}_{J}[R]=JRJ, \tag{4.49}\] i.e. 
\(\mathcal{C}_{J}\) is the projection onto the \((1,1)\) entry and in particular \(\mathcal{C}_{J}[M_{0}]=mJ\). We also set \(\mathcal{C}_{J}^{\perp}R:=R-\mathcal{C}_{J}R\) and \(\widetilde{\mathbb{C}}^{(l+1)\times(l+1)}:=\mathrm{Image}\,\mathcal{C}_{J}^{\perp}\). For any \(R\in\mathbb{C}^{(l+1)\times(l+1)}\) we denote \(r:=R_{11}\) and \(\widetilde{R}:=\mathcal{C}_{J}^{\perp}[R]\), i.e. \(R=rJ+\widetilde{R}\). Let \(T_{z}\) be the matrix \[T_{z}=\begin{pmatrix}1&0\\ 0&zA\end{pmatrix}\in\mathbb{C}^{(l+1)\times(l+1)} \tag{4.50}\] as well as \[F:\begin{cases}\mathbb{C}\times\widetilde{\mathbb{C}}^{(l+1)\times(l+1)}\times\mathbb{H}&\to\mathbb{C}^{(l+1)\times(l+1)}\\ (r,\widetilde{R},z)&\mapsto rJ+\widetilde{R}+T_{z}(z+\Gamma[rJ+\widetilde{R}]T_{z})^{-1}.\end{cases} \tag{4.51}\] Then \(F(m,\widetilde{M},z)=0\) for \(M=M_{0}[m]\) defined in (4.18). For \(\delta=0\), the stability operator \(\mathcal{L}\) is the derivative of \(F\) in the sense that \[\mathcal{L}[R]=D_{R}F(m,\widetilde{M},z), \tag{4.52}\] where \(D_{R}F=\frac{\mathrm{d}}{\mathrm{d}\varepsilon}F(m+\varepsilon r,\widetilde{M}+\varepsilon\widetilde{R},z)|_{\varepsilon=0}\) is the directional derivative of \(F\) in the direction \(R\). We first consider the case when \(A\) is invertible. The case of non-invertible \(A\) will be treated afterwards. We define \(\mathcal{B}:=\mathcal{C}_{M^{-1}}\mathcal{L}\) for \(z\in\mathbb{R}\setminus\mathrm{supp}(\rho)\), where \(m(z)\) is defined as the unique analytic continuation to \(\mathbb{C}\setminus\mathrm{supp}(\rho)\). Since \(F(m,\widetilde{M},z)=0\) we have \[\mathcal{B}[R]=D_{R}[\mathcal{C}_{M^{-1}}F]=D_{rJ}\mathcal{C}_{J}[\mathcal{C}_{M^{-1}}F]+D_{\widetilde{R}}\mathcal{C}_{J}[\mathcal{C}_{M^{-1}}F]+D_{rJ}\mathcal{C}_{J}^{\perp}[\mathcal{C}_{M^{-1}}F]+D_{\widetilde{R}}\mathcal{C}_{J}^{\perp}[\mathcal{C}_{M^{-1}}F], \tag{4.53}\] where we used the linearity of the derivatives as well as the linearity of \(\mathcal{B}\) in the second equality and we omitted the arguments of \(F\). The above equation decomposes \(\mathcal{B}\) into a two-by-two block operator with diagonal blocks \(\mathcal{B}_{11}[rJ]:=D_{rJ}\mathcal{C}_{J}[\mathcal{C}_{M^{-1}}F]\), \(\mathcal{B}_{22}[\widetilde{R}]:=D_{\widetilde{R}}\mathcal{C}_{J}^{\perp}[\mathcal{C}_{M^{-1}}F]\) and off-diagonal blocks \(\mathcal{B}_{12}[\widetilde{R}]:=D_{\widetilde{R}}\mathcal{C}_{J}[\mathcal{C}_{M^{-1}}F]\), \(\mathcal{B}_{21}[rJ]:=D_{rJ}\mathcal{C}_{J}^{\perp}[\mathcal{C}_{M^{-1}}F]\). A tedious but straightforward calculation shows that \(\mathcal{B}_{22}\) is invertible on the image of \(\mathcal{C}_{J}^{\perp}\) and its inverse is given by \[(\mathcal{B}_{22})^{-1}\left[\begin{pmatrix}0&r_{12}^{t}\\ r_{21}&\widehat{R}\end{pmatrix}\right]=\begin{pmatrix}0&h_{12}^{t}\\ h_{21}&\widehat{H}\end{pmatrix} \tag{4.54}\] with \[h_{12}= -A^{t}V_{0}(mA(r_{21}-m^{-1}\widehat{R}w_{0})+(1+mA)(r_{12}-m^{-1}\widehat{R}^{t}v_{0}))\] \[h_{21}= -AV_{0}(mA^{t}(r_{12}-m^{-1}\widehat{R}^{t}v_{0})+(1+mA^{t})(r_{21}-m^{-1}\widehat{R}w_{0}))\] \[\widehat{H}= \frac{A}{1+mA}\widehat{R}\frac{A}{1+mA}-AV_{0}(mA^{t}(r_{12}-m^{-1}\widehat{R}^{t}v_{0})+(1+mA^{t})(r_{21}-m^{-1}\widehat{R}w_{0}))m^{-1}v_{0}^{t}\] \[\qquad-w_{0}m^{-1}((r_{21}^{t}-m^{-1}w_{0}^{t}\widehat{R})mA^{t}+(r_{12}^{t}-m^{-1}v_{0}^{t}\widehat{R})(1+mA^{t}))V_{0}A, \tag{4.55}\] where \(V_{0}=V_{0}(m)\) was introduced in (4.19) and \(v_{0}\) and \(w_{0}\) were defined in (4.15). Their explicit form in terms of \(m=m_{0}\) is given in (4.18). 
From \(F=0\), we also have \(\mathcal{C}_{J}^{\perp}\mathcal{C}_{M^{-1}}F=0\) and both \(\widetilde{M}=\widetilde{M}_{0}\) and \(z\) are uniquely defined by \(m\) (see (4.18) and (2.4)). Therefore the total derivative of \(\mathcal{C}_{J}^{\perp}\mathcal{C}_{M^{-1}}F\) with respect to \(m\) is well defined and vanishes as well, i.e. \[0=\frac{\mathrm{d}}{\mathrm{d}m}\mathcal{C}_{J}^{\perp}\mathcal{C}_{M^{-1}}F=\mathcal{C}_{J}^{\perp}D_{J}\mathcal{C}_{M^{-1}}F+\mathcal{C}_{J}^{\perp}D_{\widetilde{M}^{\prime}}\mathcal{C}_{M^{-1}}F+z^{\prime}(m)\frac{\partial}{\partial z}\mathcal{C}_{J}^{\perp}\mathcal{C}_{M^{-1}}F=\mathcal{B}_{21}[J]+\mathcal{B}_{22}[\widetilde{M}^{\prime}] \tag{4.56}\] where \(\widetilde{M}^{\prime}:=\frac{\partial\widetilde{M}}{\partial m}\) and \(z^{\prime}\) denote the derivatives of \(\mathcal{C}_{J}^{\perp}[M[m]]\) and \(z\) with respect to \(m\). In the last step we also used \(\frac{\partial}{\partial z}\mathcal{C}_{J}^{\perp}\mathcal{C}_{M^{-1}}F=0\), which follows from (4.14). Since \(\mathcal{B}_{22}\) is invertible on its image, (4.56) is equivalent to \[(\mathcal{B}_{22})^{-1}\mathcal{B}_{21}[J]=-\widetilde{M}^{\prime}. \tag{4.57}\] Therefore the Schur complement of \(\mathcal{B}_{22}\) is given by \[(\mathcal{B}_{11}-\mathcal{B}_{12}(\mathcal{B}_{22})^{-1}\mathcal{B}_{21})[J]=\frac{\partial}{\partial m}\mathcal{C}_{J}[\mathcal{C}_{M^{-1}}F]+D_{\widetilde{M}^{\prime}}\mathcal{C}_{J}[\mathcal{C}_{M^{-1}}F]. \tag{4.58}\] In other words, the Schur complement of \(\mathcal{B}_{22}\) is the total derivative of \(\mathcal{C}_{J}[\mathcal{C}_{M^{-1}}F]\) with respect to \(m\) for fixed \(z\). Calculating it, we find \[(\mathcal{B}_{11}-\mathcal{B}_{12}(\mathcal{B}_{22})^{-1}\mathcal{B}_{21})[J]=\left(\frac{1}{m^{2}}-\gamma^{\prime}(m)\right)J=h(m)J, \tag{4.59}\] where \(h\) was introduced in Definition 3.4. By Lemma 3.5, Lemma 3.6 and Lemma A.3 we have \(h(m)\neq 0\) for all \(z\in\mathbb{R}\setminus\mathrm{supp}(\rho).\) Therefore, the Schur complement of \(\mathcal{B}_{22}\) is invertible on the image of \(\mathcal{C}_{J}\) for all \(z\in\mathbb{R}\setminus\mathrm{supp}(\rho)\). Since both \(\mathcal{B}_{22}\) and its Schur complement are invertible on their respective images for all \(z\in\mathbb{R}\setminus\mathrm{supp}(\rho)\), the operator \(\mathcal{B}\) is also invertible for all \(z\in\mathbb{R}\setminus\mathrm{supp}(\rho)\), with its inverse given by the Schur complement formula \[\begin{split}\mathcal{B}^{-1}[R]&=\left(\mathbb{1}-(\mathcal{B}_{22})^{-1}\mathcal{B}_{21}\right)\left(\mathcal{B}_{11}-\mathcal{B}_{12}(\mathcal{B}_{22})^{-1}\mathcal{B}_{21}\right)^{-1}\left[rJ-\mathcal{B}_{12}(\mathcal{B}_{22})^{-1}[\widetilde{R}]\right]+(\mathcal{B}_{22})^{-1}[\widetilde{R}]\\ &=h(m)^{-1}\left(r-\left\langle J,\mathcal{B}_{12}(\mathcal{B}_{22})^{-1}[\widetilde{R}]\right\rangle\right)M^{\prime}+(\mathcal{B}_{22})^{-1}[\widetilde{R}],\end{split} \tag{4.60}\] where \(M^{\prime}=J+\widetilde{M}^{\prime}\) is the derivative of \(M\) with respect to \(m\) and the inverse of the Schur complement acts on \(\mathrm{span}(J)\). Therefore, \(\mathcal{L}\) is also invertible with inverse \(\mathcal{L}^{-1}=\mathcal{B}^{-1}\mathcal{C}_{M^{-1}}\) for \(z\in\mathbb{R}\setminus\mathrm{supp}(\rho)\). Now let \(A\) be non-invertible. Then, \(A+\varepsilon\) is invertible for all \(\varepsilon\in(0,u]\) for some \(u>0\). We use an \(\varepsilon\) superscript to denote quantities with \(A\) being replaced by \(A^{\varepsilon}:=A+\varepsilon\), e.g. 
\(\mathcal{L}^{\varepsilon}\), \((\mathcal{B}^{-1}\mathcal{C}_{M^{-1}})^{\varepsilon}\), etc. We have \(\mathcal{L}^{\varepsilon}(\mathcal{B}^{-1}\mathcal{C}_{M^{-1}})^{\varepsilon}=\mathbb{1}\) for all \(\varepsilon\in(0,u]\) and both the limits \(\lim_{\varepsilon\searrow 0}\mathcal{L}^{\varepsilon}\) and \(\lim_{\varepsilon\searrow 0}(\mathcal{B}^{-1}\mathcal{C}_{M^{-1}})^{\varepsilon}\) exist. Indeed, \(\mathcal{L}^{\varepsilon}\) is the derivative of \(F^{\varepsilon}\), which is smooth at \(\varepsilon=0\), proving the existence of \(\lim_{\varepsilon\searrow 0}\mathcal{L}^{\varepsilon}\). Furthermore we explicitly calculate the expression for \((\mathcal{B}^{-1}\mathcal{C}_{M^{-1}})^{\varepsilon}\) in terms of \(A^{\varepsilon}\) and \(m^{\varepsilon}\) by using (4.17), (4.54), (4.55) and (4.60). In the resulting expression, \((A^{\varepsilon})^{-1}\), appearing in (4.17), cancels and therefore the limit \(\lim_{\varepsilon\searrow 0}(\mathcal{B}^{-1}\mathcal{C}_{M^{-1}})^{\varepsilon}\) exists. We leave the details to the reader. Therefore, \(\mathcal{L}\) is also invertible for \(z\in\mathbb{R}\setminus\mathrm{supp}(\rho)\) and we have \(\mathcal{L}^{-1}=\lim_{\varepsilon\searrow 0}(\mathcal{B}^{-1}\mathcal{C}_{M^{-1}})^{\varepsilon}\). For any \(C>0\) the norm of the operator \(\mathcal{L}^{-1}\) is bounded on the compact subset \(C^{-1}\leq\mathrm{dist}(z,\mathrm{supp}(\rho))\leq C\) of \(\mathbb{R}\setminus\mathrm{supp}(\rho)\), i.e. \(\|\mathcal{L}^{-1}\|_{\mathrm{sp}}\sim_{C}1\). By continuity, there is some \(\eta_{0}>0\), depending on \(C\), such that \(\|\mathcal{L}^{-1}\|_{\mathrm{sp}}\sim_{C}1\) still holds for \(z=E+\mathrm{i}\eta\in\mathbb{H}\) with \(C^{-1}\leq\mathrm{dist}(E,\mathrm{supp}(\rho))\leq C\) and \(\eta\leq\eta_{0}\) and we have therefore shown (4.44). Let \(\mathcal{Q}_{M^{\prime}}\) be the projection onto the orthogonal complement of \(M^{\prime}\). By (4.54), (4.55) and (4.60) we have \[\|\mathcal{Q}_{M^{\prime}}\mathcal{L}^{-1}\|_{\mathrm{sp}}\lesssim\left\|\frac{1}{1+Am}\right\|+\|V_{0}\|\lesssim\operatorname{dist}\left(-1,\operatorname{Spec}(mA)\right)^{-1}+\operatorname{dist}\left(-\frac{1}{2},\operatorname{Spec}(m\widehat{A})\right)^{-1} \tag{4.61}\] uniformly in \(z\in\mathbb{R}\setminus\operatorname{supp}(\rho)\). At \(z=\tau_{0}\) we have \(h(m)=0\) and thus \[\left\|\frac{(mA)^{2}}{(1+mA)^{2}}\right\|\leq\operatorname{Tr}\left(\frac{(mA)^{2}}{(1+mA)^{2}}\right)+m^{2}b^{t}\left(\frac{V_{0}}{m}\right)^{3}b=1. \tag{4.62}\] In the above equation, we have equality if and only if \(\operatorname{rank}A=1\) and \(b=0\). Therefore, we have \(\operatorname{Spec}(mA)\subset[-\frac{1}{2},\infty)\) and \(-\frac{1}{2}\in\operatorname{Spec}(mA)\) if and only if \(\operatorname{rank}A=1\) and \(b=0\). By Lemma A.1, we have \(\min\operatorname{Spec}(m\widehat{A})=\min\operatorname{Spec}(mA)\) if and only if \(mA\in\mathbb{R}^{l\times l}\). Therefore we have \(-\frac{1}{2}\in\operatorname{Spec}(m\widehat{A})\) if and only if \(\operatorname{rank}A=1\), \(A\in\mathbb{R}^{l\times l}\) and \(b=0\). That is, \(-\frac{1}{2}\in\operatorname{Spec}(m\widehat{A})\) at \(\tau_{0}\) if and only if \(q\) is a shifted square of a Wigner matrix. If \(q\) is not a shifted square of a Wigner matrix, then (4.61) is uniformly bounded in some neighbourhood of \(\tau_{0}.\) Thus \(\mathcal{L}\) can have at most one vanishing eigenvalue at \(\tau_{0}\). Indeed let \[H(m,z):=F(m,\widetilde{M}[m],z)=M[m]+T_{z}(z+\Gamma[M[m]]T_{z})^{-1},\] where \(M[m]\) is given by the right hand side of (4.18). 
The spectral parameter \(z\) is also uniquely defined by \(m\) (see (2.4)), i.e. \(H(m,z(m))=0\) with \[z(m):=-\frac{1}{m}-\gamma(m).\] Taking the derivative with respect to \(m\) we find \[0=\frac{\mathrm{d}}{\mathrm{d}m}H(m,z(m))=\partial_{m}H+\frac{\mathrm{d}z}{\mathrm{d}m}\partial_{z}H. \tag{4.63}\] By Corollary 3.3 the derivative \(\frac{\mathrm{d}z}{\mathrm{d}m}\) vanishes at the edge. Therefore at \(z=\tau_{0}\) the above equation simplifies to \[\partial_{m}H=0. \tag{4.64}\] Calculating this we obtain \[0=M^{\prime}-M\Gamma[M^{\prime}]M=\mathcal{L}[M^{\prime}], \tag{4.65}\] with \(M^{\prime}=M^{\prime}[m]\) the derivative of \(M\) with respect to \(m\), where we used \(H=0\). Therefore \(B=M^{\prime}[m]\) is the critical right eigenvector at \(\tau_{0}\). Next, we obtain the critical left eigenvector \(L\). The adjoint of the stability operator \(\mathcal{L}\) is \[\mathcal{L}^{*}[R]=R-\Gamma[M^{*}RM^{*}]. \tag{4.66}\] At any regular edge, we have \(\operatorname{Im}M=0\), i.e. \(M=M^{*}.\) Therefore \[0=\Gamma[\mathcal{L}[B]]=\Gamma[B]-\Gamma[\mathcal{C}_{M}[\Gamma[B]]]=\mathcal{L}^{*}[\Gamma[B]] \tag{4.67}\] at \(z=\tau_{0}\). Thus the critical left eigenvector is given by \[L=\Gamma[B]. \tag{4.68}\] Note that \(L\neq 0\) since \(L_{ii}=B_{11}=1\) for all \(1<i\leq l+1\) by the expression for \(\Gamma\) in (4.11). As \(L\) and \(B\) belong to the same non-degenerate eigenvalue they cannot be orthogonal and since \(\mathcal{L}\) does not depend on \(N\) they satisfy \[|\langle L,B\rangle|\sim 1 \tag{4.69}\] at \(\tau_{0}\). Equation (4.69) is also satisfied in a \(u\) neighbourhood of \(\tau_{0}\), with \(u\sim 1\), since \(L\) and \(B\) vary continuously in \(z\). To obtain \(|\langle L,M\Gamma[B]B\rangle|\) we calculate the second total derivative \[0 =\frac{\mathrm{d}^{2}}{\mathrm{d}m^{2}}H(m,z(m)) \tag{4.70}\] \[=\partial_{m}^{2}H(m,z(m))+\frac{\mathrm{d}^{2}z}{\mathrm{d}m^{2}}\partial_{z}H(m,z(m))\] \[\quad+\frac{\mathrm{d}z}{\mathrm{d}m}\left(\frac{\mathrm{d}z}{\mathrm{d}m}\partial_{z}^{2}H(m,z(m))+2\partial_{m}\partial_{z}H(m,z(m))\right).\] At the edge, \(\frac{\mathrm{d}z}{\mathrm{d}m}\) once again vanishes, whereas \(\frac{\mathrm{d}^{2}z}{\mathrm{d}m^{2}}=c_{0}\neq 0\) does not, by Corollary 3.3 and Lemma 4.1. Thus calculating the derivatives we arrive at \[0 =-c_{0}MJM+M^{\prime\prime}-M\Gamma[M^{\prime\prime}]M-2M\Gamma[M^{\prime}]M\Gamma[M^{\prime}]M \tag{4.71}\] \[=-c_{0}MJM+\mathcal{L}[M^{\prime\prime}]-2M\Gamma[M^{\prime}]M^{\prime},\] where we used \(H=0\) multiple times and (4.65) in the last step. Next, we solve for the last term, use \(M^{\prime}=B\) and take an inner product with \(L\): \[\langle L,M\Gamma[B]B\rangle =\frac{1}{2}\left(\langle L,\mathcal{L}[M^{\prime\prime}]\rangle-c_{0}\langle L,MJM\rangle\right) \tag{4.72}\] \[=\frac{1}{2}\left(\langle\mathcal{L}^{*}[L],M^{\prime\prime}\rangle-c_{0}\langle MLM,J\rangle\right)=-\frac{c_{0}}{2}B_{11}\neq 0.\] In the last equality, \(MLM=M\Gamma[B]M=B\) was used. By continuity the relation \(|\langle L,M\Gamma[B]B\rangle|\sim 1\) also holds in a \(u\) neighbourhood of \(\tau_{0}\) with \(u\sim 1\). At \(\tau_{0}\), the operator \(\mathcal{L}\) is independent of \(\delta\) and therefore its vanishing eigenvectors \(L\) and \(B\) are as well. Consequently, (4.69) and (4.72) also hold for all \(\delta\in[0,1]\) and, as both expressions are also continuous in \(z\) for all \(\delta\), they also hold in some order one neighbourhood of \(\tau_{0}\). 
Next we study how \(\mathcal{L}\) varies in \(z\) around \(\tau_{0}\) for arbitrary \(\delta\in[0,1]\) and we make the dependence explicit by writing \(\mathcal{L}=\mathcal{L}^{z},\,M_{\delta}=M_{\delta}^{z},\) etc. and define \(\mathcal{E}^{z}:=\mathcal{L}^{z}-\mathcal{L}^{\tau_{0}}\). As the eigenvalues of \(\mathcal{L}^{z}\) depend continuously on \(z\), there is a \(u>0\) such that \(\mathcal{L}^{z}\) has an isolated small eigenvalue \(\beta^{z}\) for all \(|z-\tau_{0}|<u\). By perturbation theory \(\beta^{z}\) is given by

\[\beta^{z}=(\langle L^{\tau_{0}},B^{\tau_{0}}\rangle)^{-1}\langle L^{\tau_{0}},\mathcal{E}^{z}[B^{\tau_{0}}]\rangle+\mathcal{O}(\|\mathcal{E}^{z}\|_{\mathrm{sp}}^{2}). \tag{4.73}\]

To estimate the right hand side, we first evaluate \(\mathcal{E}^{z}[B^{\tau_{0}}]\) and find

\[\mathcal{E}^{z}[B^{\tau_{0}}]=\mathcal{L}^{z}[B^{\tau_{0}}]-\mathcal{L}^{\tau_{0}}[B^{\tau_{0}}]=\mathcal{C}_{M_{\delta}^{\tau_{0}}}[L^{\tau_{0}}]-\mathcal{C}_{M_{\delta}^{z}}[L^{\tau_{0}}]=M_{\delta}^{\tau_{0}}L^{\tau_{0}}(M_{\delta}^{\tau_{0}}-M_{\delta}^{z})+(M_{\delta}^{\tau_{0}}-M_{\delta}^{z})L^{\tau_{0}}M_{\delta}^{z}. \tag{4.74}\]

We take the scalar product with \(L^{\tau_{0}}\) and use the cyclic invariance of the trace to obtain

\[|\langle L^{\tau_{0}},\mathcal{E}^{z}[B^{\tau_{0}}]\rangle| =|\langle L^{\tau_{0}}(M_{\delta}^{\tau_{0}}+M_{\delta}^{z})L^{\tau_{0}}(M_{\delta}^{\tau_{0}}-M_{\delta}^{z})\rangle| \tag{4.75}\]
\[=2|\langle L^{\tau_{0}}M_{\delta}^{\tau_{0}}L^{\tau_{0}}(M_{\delta}^{\tau_{0}}-M_{\delta}^{z})\rangle|+\mathcal{O}(|m_{\delta}^{z}-m_{\delta}^{\tau_{0}}|^{2})\]
\[=2|\langle L^{\tau_{0}}M_{\delta}^{\tau_{0}}L^{\tau_{0}}(M_{\delta}^{\tau_{0}})^{\prime}\rangle(m_{\delta}^{\tau_{0}}-m_{\delta}^{z})|+\mathcal{O}(|m_{\delta}^{z}-m_{\delta}^{\tau_{0}}|^{2}+\eta\delta)\]
\[=2|\langle L^{\tau_{0}},M_{\delta}^{\tau_{0}}\Gamma[B^{\tau_{0}}]B^{\tau_{0}}\rangle||m_{\delta}^{\tau_{0}}-m_{\delta}^{z}|+\mathcal{O}(|m_{\delta}^{z}-m_{\delta}^{\tau_{0}}|^{2}+\eta\delta)\]
\[\sim|m_{\delta}^{\tau_{0}}-m_{\delta}^{z}|\sim\sqrt{\kappa+\eta}.\]

Here, the third equality follows from the fact that \(M_{\delta}[m_{\delta}]=M_{0}[m_{\delta}]+\mathcal{O}(\eta\delta)\) and \(M_{0}\) is analytic in \(m_{\delta}\) at \(m_{\delta}^{\tau_{0}}=m^{\tau_{0}}\). In the fourth line we used (4.68) and \(M^{\prime}=B\), and in the fifth line that \(|\langle L^{\tau_{0}},M_{\delta}^{\tau_{0}}\Gamma[B^{\tau_{0}}]B^{\tau_{0}}\rangle|\) is non-vanishing. In the last relation, we used Corollary 3.3 and Lemma 4.1. Taking the absolute value in (4.73), the asymptotic behaviour of \(|\beta^{z}|\) follows from (4.75) and (4.69), i.e.

\[|\beta^{z}|\sim\sqrt{\kappa+\eta}+\mathcal{O}(\|\mathcal{E}^{z}\|_{\mathrm{sp}}^{2})\sim\sqrt{\kappa+\eta}. \tag{4.76}\]

In the last step, we used that

\[\|\mathcal{E}^{z}\|_{\mathrm{sp}}=\mathcal{O}(|m_{\delta}^{z}-m_{\delta}^{\tau_{0}}|). \tag{4.77}\]

## 5 Proof of the local law

In this section, we prove Theorem 2.9 and Proposition 2.10. For shifted squares of Wigner matrices (see Definition 4.2), we provide a direct proof below in Lemma 5.1. The Stieltjes transform \(m\) and the resolvent \(\mathbf{g}\) are submatrices of \(M_{0}\) and \(\mathbf{G}_{0}\), introduced in (4.18) and (4.4), respectively.
For polynomials that are not shifted squares of Wigner matrices, the local laws are therefore a direct consequence of Proposition 4.3 and Proposition 4.4, the local laws for the linearization around regular edges and away from the spectrum, and we will spend most of the section proving them. The main steps of our proof for the edge local law, Proposition 4.3, follow the general strategy of [1, Proposition 3.3]. Here, we briefly describe the main ideas of the proof. Throughout this section we will use the notation \(\mathbf{\Delta}_{\delta}:=\mathbf{G}_{\delta}-M_{\delta}\). First, we establish a global law away from the spectrum, stated below in Proposition 5.7. Then, we use the global law as a starting point for a bootstrapping process. The bootstrapping proposition, Proposition 5.8, establishes a local law iteratively on scales ever closer to the optimal scale, \(\eta\sim N^{-1+\varepsilon}\). Lemmata 5.4, 5.5 and 5.6 are auxiliary results used in the bootstrapping process. Lemma 5.5 establishes a bound for \(\mathbf{\Delta}_{\delta}\) in terms of the error \(\mathbf{D}\) and \(\Theta_{\delta}\), the projection of \(\mathbf{\Delta}_{\delta}\) onto its unstable direction, as well as an approximate quadratic equation for \(\Theta_{\delta}\). Lemma 5.6 transforms the quadratic bound on \(\Theta_{\delta}\) into a linear bound. The most crucial difference between our proof and that of [1, Proposition 3.3] is addressed in Lemma 5.4. It provides a naive upper bound for \(\mathbf{G}_{0}^{*}\mathbf{G}_{0}\) and allows us to estimate \(\mathbf{G}_{0}^{*}\mathbf{G}_{0}\) in terms of \(\mathbf{G}_{1}^{*}\mathbf{G}_{1}\). Unlike \(\mathbf{G}_{0}\), the matrix \(\mathbf{G}_{1}\) is a resolvent and thus satisfies \(\mathbf{G}_{1}^{*}\mathbf{G}_{1}=\eta^{-1}\operatorname{Im}\mathbf{G}_{1}\). As \(\mathbf{D}\) is in turn bounded by \(\mathbf{G}_{\delta}^{*}\mathbf{G}_{\delta}\) this step is necessary to obtain the correct upper bound for \(\mathbf{G}_{0}-M_{0}\). The proof of the local law away from the spectrum, Proposition 4.4, makes use of a similar but simpler strategy since \(\mathcal{L}\) does not have an unstable direction away from \(\operatorname{supp}\rho\).

**Lemma 5.1**.: _Let \(q\) be a shifted square of a Wigner matrix as introduced in Definition 4.2 and let \(\rho\) have a regular edge at \(\tau_{0}\)._

1. _There is a_ \(\kappa_{0}>0\)_, depending only on the coefficients of_ \(q\)_, such that for all_ \(\varepsilon,\gamma,D>0\) _and_ \(z\in\mathbb{D}_{\gamma}^{\kappa_{0}}\) _the isotropic local law (_2.16_) holds for all deterministic_ \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{N}\) _and the averaged local law (_2.17_) holds for all deterministic_ \(\mathbf{B}\in\mathbb{C}^{N\times N}\)_. If additionally_ \(E\notin\operatorname{supp}(\rho)\)_, we also have the improved local law (_2.18_)._

2. _For all_ \(C>0\) _there is an_ \(\eta_{0}>0\)_, depending only on the coefficients of_ \(q\)_, such that the averaged local law (_2.19_) holds true for all deterministic_ \(\mathbf{B}\in\mathbb{C}^{N\times N}\)_,_ \(\gamma,\varepsilon,D>0\) _and_ \(z\in\mathbb{C}_{\gamma}^{C,\eta_{0}}\)_._

Proof.: By assumption, \(q\) is a shifted square of a Wigner matrix. Thus it is of the form

\[q(\mathbf{X})=q_{a,c}(\mathbf{W}):=a\mathbf{W}^{2}+c, \tag{5.1}\]

with \(a,c\in\mathbb{R}\), \(a\neq 0\) and \(\mathbf{W}\) being a Wigner matrix. By \(\mathbf{g}_{a,c}\) we denote the resolvent of (5.1) and by \(m_{a,c}\) the solution of (2.4) for (5.1) with explicitly stated dependence on \(a,c\).
We have

\[\mathbf{g}_{a,c}(z)=\frac{1}{a\mathbf{W}^{2}+c-z}=\frac{1}{a}\mathbf{g}_{1,0}\left(\frac{z-c}{a}\right) \tag{5.2}\]

and

\[m_{a,c}(z)=\frac{1}{a}m_{1,0}\left(\frac{z-c}{a}\right). \tag{5.3}\]

Therefore \(q_{a,c}\) has a regular edge at \(a\tau_{0}+c\) if and only if \(q_{1,0}\) has a regular edge at \(\tau_{0}\), and Theorem 2.9 for general \(a,c\) follows from the special case \(a=1\), \(c=0\). Thus let w.l.o.g. \(a=1\), \(c=0\) and we drop the subscripts again from \(m\) and \(\mathbf{g}\). For \(\zeta\in\mathbb{H}\) let \(\mathbf{g}_{\mathbf{W}}:=\frac{1}{\mathbf{W}-\zeta}\) be the resolvent of \(\mathbf{W}\) at spectral parameter \(\zeta\). We have

\[\mathbf{g}(z)=\frac{1}{2\sqrt{z}}\left(\mathbf{g}_{\mathbf{W}}(\sqrt{z})+\mathbf{g}_{-\mathbf{W}}(\sqrt{z})\right), \tag{5.4}\]

where again the square root function is chosen such that the positive real axis is mapped to itself and with a branch cut along the negative real axis. Let \(\rho_{\text{sc}}(x):=\frac{1}{2\pi}\sqrt{(4-x^{2})_{+}}\) with \((y)_{+}:=\max\{y,0\}\) be the semi-circle density and \(m_{\text{sc}}\) its Stieltjes transform. By explicitly solving (2.4) for \(q=\mathbf{W}^{2}\) in terms of \(m_{\text{sc}}\) we find

\[m(z)=\frac{1}{2\sqrt{z}}\left(-\sqrt{z}+\sqrt{z-4}\right)=\frac{1}{\sqrt{z}}m_{\text{sc}}(\sqrt{z}) \tag{5.5}\]

for all \(z\in\mathbb{H}\). The semi-circle density \(\rho_{\text{sc}}\) has regular edges at \(\pm 2\), therefore \(\rho\) has exactly one regular edge at \(\tau_{0}=4\). Its left edge at zero is a hard edge, where the density has a singularity. Combining (5.4) and (5.5) we find

\[\mathbf{g}(z)-m(z)=\frac{1}{2\sqrt{z}}\left((\mathbf{g}_{\mathbf{W}}(\sqrt{z})-m_{\text{sc}}(\sqrt{z}))+(\mathbf{g}_{-\mathbf{W}}(\sqrt{z})-m_{\text{sc}}(\sqrt{z}))\right). \tag{5.6}\]

Note that \(-\mathbf{W}\) is also a Wigner matrix. Therefore, Statement 1 of Lemma 5.1 around \(\tau_{0}=4a+c\) follows from [1, Theorem 2.6], and Statement 2 of Lemma 5.1 (the analogue of Proposition 2.10) follows from [22, Theorem 2.1]. From now on we assume that \(q\) is not a shifted square of a Wigner matrix. To prove Proposition 4.3 and Proposition 4.4, we will require several intermediate results, which we state below. Throughout the remainder of the section, we assume w.l.o.g. that there is some \(\alpha>0\) such that

\[\|\mathbf{X}_{i}\|\lesssim N^{\alpha}. \tag{5.7}\]

The assumption can be removed by a standard argument using Chebyshev's inequality and our moment assumption (2.1). We also introduce stochastic domination, a commonly used notation for stating high probability bounds in a way well adapted to our needs. It was first introduced in a slightly different form in [19]; for the form stated here see e.g. [20].

**Definition 5.2** (Stochastic domination).: _Let \(X=(X^{(N)})_{N\in\mathbb{N}}\) and \(Y=(Y^{(N)})_{N\in\mathbb{N}}\) be families of non-negative random variables. We say \(X\) is stochastically dominated by \(Y\) if for all (small) \(\varepsilon>0\) and (large) \(D>0\)_

\[\mathbb{P}\left(X^{(N)}\geq N^{\varepsilon}Y^{(N)}\right)\lesssim_{D,\varepsilon}N^{-D}. \tag{5.8}\]

_We denote this relation by \(X\prec Y\). If the constant in the definition depends on any other parameters \(\alpha\), we write \(X\prec_{\alpha}Y\)._

Furthermore, we introduce the norm \(\|\cdot\|_{*}:=\|\cdot\|_{*}^{K,\mathbf{x},\mathbf{y}}\) for deterministic \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{kN}\) and \(k,K\in\mathbb{N}\). Our definition follows that of [1] and goes back to [22]; we refer to those two works for details.
First, we define the set

\[I_{0}:=\{\mathbf{x},\mathbf{y}\}\cup\{\mathbf{e}_{a},L_{a\cdot}^{*}:\,a\in[\![kN]\!]\}, \tag{5.9}\]

where \(\mathbf{e}_{a}\) is the \(a^{\text{th}}\) standard base vector. Replacing a scalar index by a dot denotes the vector that runs over the entire range of the index, e.g. \(\mathbf{R}_{a\cdot}:=(\mathbf{R}_{ab})_{b\in[\![kN]\!]}\) is the \(a^{\text{th}}\) row vector of a matrix \(\mathbf{R}\). For \(j\in\mathbb{N}\) we define the set \(I_{j}\) recursively by

\[I_{j+1}:=I_{j}\cup\{M_{\delta}\mathbf{u}:\,\mathbf{u}\in I_{j}\}\cup\{\kappa_{c}((M_{\delta}\mathbf{u})a,b\cdot),\kappa_{d}((M_{\delta}\mathbf{u})a,b\cdot):\,\mathbf{u}\in I_{j},a,b\in[\![kN]\!]\}, \tag{5.10}\]

where \(\kappa(ab,cd)\) denotes the cumulant of the \((a,b)\) entry and the \((c,d)\) entry of \(\sum_{j=1}^{l}K_{j}\otimes\mathbf{X}_{j}\) and \(\kappa_{c}\) and \(\kappa_{d}\) denote the decomposition of \(\kappa\) into its direct and its cross contribution according to the Hermitian symmetry. In (5.10) we also use the shorthand notation \(\kappa(\mathbf{x}b,cd):=\sum_{a}x_{a}\kappa(ab,cd)\). Then \(\|\cdot\|_{*}\) is defined as

\[\|\mathbf{R}\|_{*}:=\sum_{0\leq j\leq K}N^{-\frac{j}{2K}}\|\mathbf{R}\|_{I_{j}}+N^{-\frac{1}{2}}\max_{\mathbf{u}\in I_{K}}\frac{\|\mathbf{R}\mathbf{u}\|}{\|\mathbf{u}\|},\quad\|\mathbf{R}\|_{I_{j}}:=\max_{\mathbf{u},\mathbf{v}\in I_{j}}\frac{|\langle\mathbf{v},\mathbf{R}\mathbf{u}\rangle|}{\|\mathbf{u}\|\|\mathbf{v}\|}. \tag{5.11}\]

The notion of stochastic domination is closely related to bounds in p-norms as can be seen from the following lemma.

**Lemma 5.3** ([22, Lemma 5.4]).: _Let \(\mathbf{R}\in\mathbb{C}^{kN\times kN}\), \(k\in\mathbb{N}\), be a random matrix and \(\Phi\) be a stochastic control parameter. Then the following holds true_

1. _If_ \(\Phi\gtrsim N^{-C}\) _and_ \(\|\mathbf{R}\|\lesssim N^{C}\) _for some_ \(C>0\) _and_ \(|\langle\mathbf{x},\mathbf{R}\mathbf{y}\rangle|\prec\Phi\|\mathbf{x}\|\|\mathbf{y}\|\) _for all_ \(\mathbf{x},\mathbf{y}\)_, then_ \(\|\mathbf{R}\|_{p}\lesssim_{\varepsilon,p}N^{\varepsilon}\Phi\) _for all_ \(\varepsilon>0\)_,_ \(p\in\mathbb{N}\)_._

2. _If_ \(\|\mathbf{R}\|_{p}\lesssim_{\varepsilon,p}N^{\varepsilon}\Phi\) _for all_ \(\varepsilon>0\)_,_ \(p\in\mathbb{N}\) _then_ \(\|\mathbf{R}\|_{*}^{K,\mathbf{x},\mathbf{y}}\prec\Phi\) _for any fixed_ \(K\in\mathbb{N}\) _and_ \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{N}\)_._

**Lemma 5.4** (A priori bound).: \(\mathbf{G}_{0}\) _satisfies the a priori bound_

\[\mathbf{G}_{0}^{*}\mathbf{G}_{0}\lesssim\frac{1}{\eta^{2}}(1+\|\mathbf{X}\|^{4}) \tag{5.12}\]

_uniformly in \(z\in\mathbb{H}\) with \(\eta\lesssim 1\)._

Proof.: We recall that \(\mathbf{G}_{0}\) is given by

\[\mathbf{G}_{0}=\begin{pmatrix}\mathbf{g}&\mathbf{g}\mathbf{X}^{t}A\\ A\mathbf{X}\mathbf{g}&-A+A\mathbf{X}\mathbf{g}\mathbf{X}^{t}A\end{pmatrix}. \tag{5.13}\]

We estimate \(\mathbf{G}_{0}\) blockwise to obtain

\[\mathbf{G}_{0}^{*}\mathbf{G}_{0}\leq\|\mathbf{G}_{0}\|^{2}\lesssim\|\mathbf{g}\|^{2}(1+\|\mathbf{X}\|^{4})\leq\frac{1}{\eta^{2}}(1+\|\mathbf{X}\|^{4}). \tag{5.14}\]

The last inequality holds because \(\mathbf{g}\) is a resolvent.
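The blockwise estimate in the proof can be probed numerically. The sketch below specializes the block formula (5.13) to a single real symmetric matrix (\(l=1\) with a scalar coefficient \(a\)); this specialization, together with all variable names and parameter values, is our own illustrative assumption and not part of the argument.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 300
a = 1.0                                   # illustrative scalar coefficient A = a
X = rng.standard_normal((N, N)) / np.sqrt(N)
X = (X + X.T) / np.sqrt(2)                # real symmetric Wigner matrix

z = 2.0 + 0.05j                           # spectral parameter E + i*eta
eta = z.imag
g = np.linalg.inv(a * X @ X - z * np.eye(N))   # resolvent of q(X) = a X^2

# generalized resolvent assembled blockwise as in (5.13), specialized to l = 1
G0 = np.block([[g,           a * (g @ X)],
               [a * (X @ g), -a * np.eye(N) + a * a * (X @ g @ X)]])

lhs = np.linalg.norm(G0, 2) ** 2                 # largest eigenvalue of G0* G0
rhs = (1 + np.linalg.norm(X, 2) ** 4) / eta**2   # right hand side of (5.12)
print(f"||G0||^2 = {lhs:.3e},  (1+||X||^4)/eta^2 = {rhs:.3e}")
```

The first printed value stays below a constant multiple of the second as \(N\) and \(\eta\) vary, in line with the one-sided bound (5.12).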
The lemma is used in the last step of the following estimate on \(\mathbf{G}_{0}^{*}\mathbf{G}_{0}\),

\[\mathbf{G}_{0}^{*}\mathbf{G}_{0} \leq 2(\mathbf{G}_{1}^{*}\mathbf{G}_{1}+(\mathbf{G}_{0}-\mathbf{G}_{1})^{*}(\mathbf{G}_{0}-\mathbf{G}_{1})) \tag{5.15}\]
\[=2(\mathbf{G}_{1}^{*}\mathbf{G}_{1}+\eta^{2}\mathbf{G}_{1}^{*}(I_{l+1}-J)\mathbf{G}_{0}^{*}\mathbf{G}_{0}(I_{l+1}-J)\mathbf{G}_{1})\lesssim(1+\|\mathbf{X}\|^{4})\mathbf{G}_{1}^{*}\mathbf{G}_{1}.\]

In the first inequality, we applied Lemma A.2 from the appendix with \(R=\mathbf{G}_{1}\) and \(T=\mathbf{G}_{0}-\mathbf{G}_{1}\). In the second step, we used

\[\mathbf{G}_{0}-\mathbf{G}_{1}=\mathbf{G}_{0}(\mathbf{G}_{1}^{-1}-\mathbf{G}_{0}^{-1})\mathbf{G}_{1}=-\mathrm{i}\eta\mathbf{G}_{0}(I_{l+1}-J)\mathbf{G}_{1}. \tag{5.16}\]

Taking the p-norm on both sides of (5.15) and applying Holder's inequality we obtain

\[\|\mathbf{G}_{0}^{*}\mathbf{G}_{0}\|_{p}\lesssim_{p}(1+\|\|\mathbf{X}\|^{4}\|_{2p})\|\mathbf{G}_{1}^{*}\mathbf{G}_{1}\|_{2p}\lesssim_{p}\|\mathbf{G}_{1}^{*}\mathbf{G}_{1}\|_{2p}=\frac{\|\operatorname{Im}\mathbf{G}_{1}\|_{2p}}{\eta} \tag{5.17}\]

for all \(p\in\mathbb{N}\). Here we use \(\|Y\|_{p}:=(\mathbb{E}[|Y|^{p}])^{1/p}\) for scalar random variables \(Y\). In the second to last step, we used \(\|\mathbf{X}\|_{p}\lesssim_{p}1\), which follows from Lemma 5.3 since \(\|\mathbf{X}\|\prec 1\) and \(\|\mathbf{X}\|\leq N^{C}\) by assumption (5.7). We define the sets

\[\mathbb{D}^{\kappa_{0}}:=\{z\in\mathbb{H}:\,|E-\tau_{0}|\leq\kappa_{0}\},\quad\mathbb{C}^{C,\eta_{0}}:=\{z\in\mathbb{H}:\,C^{-1}<\operatorname{dist}(E,\operatorname{supp}(\rho))<C,\eta\leq\eta_{0}\} \tag{5.18}\]

and the random variable

\[\Theta_{\delta}:=\frac{\langle L\otimes\mathbf{I}_{N},\mathbf{\Delta}_{\delta}\rangle}{\langle L\otimes\mathbf{I}_{N},B\otimes\mathbf{I}_{N}\rangle}. \tag{5.19}\]

With \(\Theta_{\delta}\) we can write the projection of \(\mathbf{\Delta}_{\delta}\) onto the critical direction (see (4.35)) as \(\mathcal{P}[\mathbf{\Delta}_{\delta}]=\Theta_{\delta}B\). Furthermore, let \(\chi(A)\) denote the indicator function of the event \(A\). We import the following lemmata from [1, Proposition 3.3] and [1, Lemma 3.9].

**Lemma 5.5** ([1, Proposition 3.3]).: _Let \(\delta\in\{0,1\}\) and \(\|\cdot\|_{*}=\|\cdot\|_{*}^{K,\mathbf{x},\mathbf{y}}\). There is a \(\kappa_{0}\sim 1\) and deterministic matrices \(\mathbf{R}_{i}\) with \(\|\mathbf{R}_{i}\|\lesssim 1\) for \(i=1,2\) such that_

\[\mathbf{\Delta}_{\delta}\chi(\|\mathbf{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})=(\Theta_{\delta}B-\mathcal{L}^{-1}[(\mathbb{1}-\mathcal{P})[M_{\delta}\mathbf{D}]]+\mathcal{E})\chi(\|\mathbf{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}}), \tag{5.20}\]

_with an error function \(\mathcal{E}\) of size_

\[\|\mathcal{E}\|_{*}=\mathcal{O}\left(N^{\frac{3}{K}}(|\Theta_{\delta}|^{2}+\|\mathbf{D}\|_{*}^{2})\right) \tag{5.21}\]

_and \(\Theta_{\delta}\) satisfying the approximate quadratic equation_

\[\left(\xi_{1}\Theta_{\delta}+\xi_{2}\Theta_{\delta}^{2}\right)\chi(\|\mathbf{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})=\mathcal{O}\left(N^{\frac{2}{K}}\|\mathbf{D}\|_{*}^{2}+|\langle\mathbf{R}_{1},\mathbf{D}\rangle|+|\langle\mathbf{R}_{2},\mathbf{D}\rangle|\right) \tag{5.22}\]

_with \(\xi_{1}\sim\sqrt{\kappa+\eta}\), \(\xi_{2}\sim 1\) uniformly in \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{(l+1)N}\) and \(z\in\mathbb{D}^{\kappa_{0}}\)._

Proof.: The proof of [1, Proposition 3.3] uses [1, Proposition 3.1] as an input.
After replacing [1, Proposition 3.1] by our analogous Proposition 4.6 the proof follows theirs line by line.

**Lemma 5.6** ([1, Lemma 3.9]).: _Let \(d=d(\eta)\) be a monotonically decreasing function in \(\eta\geq N^{-1}\) and assume \(0\leq d\lesssim N^{-\varepsilon}\) for some \(\varepsilon>0\). Suppose there are \(\kappa_{0},\gamma>0\) such that_

\[|\xi_{1}\Theta_{\delta}+\xi_{2}\Theta_{\delta}^{2}|\lesssim d\text{ for all }z\in\mathbb{D}_{\gamma}^{\kappa_{0}}\quad\text{and}\quad|\Theta_{\delta}|\lesssim\min\left\{\frac{d}{\sqrt{\kappa+\eta}},\sqrt{d}\right\}\text{ for some }z_{0}\in\mathbb{D}_{\gamma}^{\kappa_{0}}. \tag{5.23}\]

_Then also \(|\Theta_{\delta}|\lesssim\min\{d/\sqrt{\kappa+\eta},\sqrt{d}\}\) for all \(z^{\prime}\in\mathbb{D}_{\gamma}^{\kappa_{0}}\) with \(\operatorname{Re}z^{\prime}=\operatorname{Re}z_{0}\) and \(\operatorname{Im}z^{\prime}\leq\operatorname{Im}z_{0}\)._

**Proposition 5.7** (Global Law).: _For all \(C>0\) and some \(\kappa_{0}>0\) with \(\kappa_{0}\sim 1\) there is an \(\eta_{0}>0\) such that for all \(\delta\in[0,1]\), \(z\in\mathbb{D}^{\kappa_{0}}\cup\mathbb{C}^{C,\eta_{0}}\) and \(\varepsilon,D>0\) the isotropic global law,_

\[\mathbb{P}\left(|\langle\mathbf{x},\boldsymbol{\Delta}_{\delta}\mathbf{y}\rangle|>\|\mathbf{x}\|\|\mathbf{y}\|\frac{N^{\varepsilon}}{\sqrt{N}}\right)\lesssim_{\varepsilon,D,\eta,C}N^{-D} \tag{5.24}\]

_holds for all deterministic \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{(l+1)N}\). Additionally, we have a global averaged law,_

\[\mathbb{P}\left(|\langle\mathbf{B}\boldsymbol{\Delta}_{\delta}\rangle|>\|\mathbf{B}\|\frac{N^{\varepsilon}}{N}\right)\lesssim_{\varepsilon,D,\eta,C}N^{-D}, \tag{5.25}\]

_for all deterministic \(\mathbf{B}\in\mathbb{C}^{(l+1)N\times(l+1)N}\)._

Proof.: First, consider \(\delta=1\) and \(A\) being invertible. Then for any \(z\in\mathbb{H}\) the matrix \(\mathbf{G}_{1}\) is a resolvent of the Hermitian random matrix \(\mathbf{L}-EJ\) with spectral parameter \(\mathrm{i}\eta\). In this regime the global law is covered by [22, Theorem 2.1]. Now consider \(A\) to be non-invertible. Then there is a \(u>0\) s.t. \(A+\varepsilon\) is invertible for all \(0<\varepsilon<u\) and a global law for \(A+\varepsilon\) also follows from [22, Theorem 2.1]. For \(C>0\) fix a \(z\in\mathbb{D}^{\kappa_{0}}\cup\mathbb{C}^{C,\eta_{0}}\) with \(\kappa_{0},\eta_{0}\) sufficiently small for Proposition 4.6 to be applicable. \(\mathcal{L}\) is continuous in \(\varepsilon\in[0,u]\) for sufficiently small \(u\sim 1\) and Proposition 4.6 holds true uniformly for sufficiently small \(\varepsilon\). As \(M_{1}\) and \(\mathbf{G}_{1}\) are also continuous in \(\varepsilon\in[0,u]\) for some \(u\sim 1\), the proposition follows from a stochastic continuity argument. For \(\delta\in[0,1)\) and arbitrary \(A\) the global law follows from another stochastic continuity argument as \(M_{\delta}\), \(\mathbf{G}_{\delta}\) and \(\mathcal{L}\) are also continuous in \(\delta\).
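To see what the global law asserts concretely, one can test it in the explicitly solvable case \(q(\mathbf{X})=\mathbf{W}^{2}\) of Lemma 5.1, where \(m\) is given by (5.5). The following sketch is our own illustration; the spectral parameter and the test vectors are chosen arbitrarily. It checks that the isotropic error \(\langle\mathbf{x},(\mathbf{g}-m)\mathbf{y}\rangle\) decays at the \(N^{-1/2}\) rate appearing in (5.24):

```python
import numpy as np

rng = np.random.default_rng(1)

def m_w2(z):
    # Stieltjes transform of the limiting density of W^2, cf. (5.5):
    # m(z) = m_sc(sqrt(z)) / sqrt(z), with the principal square root
    s = np.sqrt(z)
    return (-s + np.sqrt(z - 4)) / (2 * s)

z = 5.0 + 0.5j                 # dist(E, supp(rho)) ~ 1, since supp(rho) = [0, 4]
for N in (100, 400, 1600):
    W = rng.standard_normal((N, N)) / np.sqrt(N)
    W = (W + W.T) / np.sqrt(2)                  # Wigner matrix
    g = np.linalg.inv(W @ W - z * np.eye(N))    # resolvent of W^2
    x = rng.standard_normal(N); x /= np.linalg.norm(x)
    y = rng.standard_normal(N); y /= np.linalg.norm(y)
    err = abs(x @ g @ y - m_w2(z) * (x @ y))
    print(N, f"error = {err:.3e}", f"sqrt(N)*error = {np.sqrt(N) * err:.3f}")
```

The rescaled error \(\sqrt{N}\cdot\mathrm{err}\) fluctuates around a constant as \(N\) grows, consistent with the \(N^{\varepsilon}/\sqrt{N}\) threshold in (5.24).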
Before starting our bootstrapping argument, we introduce the following notions for isotropic and averaged stochastic dominance for random matrices \(\mathbf{R}\) and deterministic control parameters \(\Lambda\),

\[|\mathbf{R}|\prec\Lambda\text{ in }\mathbb{D}\Leftrightarrow\|\mathbf{R}\|_{*}^{K,\mathbf{x},\mathbf{y}}\prec\Lambda\text{ uniformly in }\mathbf{x},\mathbf{y}\text{ and }z\in\mathbb{D} \tag{5.26}\]
\[|\mathbf{R}|_{\text{av}}\prec\Lambda\text{ in }\mathbb{D}\Leftrightarrow\frac{|\langle\mathbf{B}\mathbf{R}\rangle|}{\|\mathbf{B}\|}\prec\Lambda\text{ uniformly in }\mathbf{B}\neq 0\text{ and }z\in\mathbb{D}.\]

**Proposition 5.8** (Bootstrapping).: _Assume the following:_

1. _The isotropic and averaged local laws,_

\[|\boldsymbol{\Delta}_{\delta}|\prec N^{\frac{2}{K}}\left(\sqrt{\frac{\operatorname{Im}m}{N\eta}}+\frac{N^{\frac{2}{K}}}{N\eta}\right),\quad|\boldsymbol{\Delta}_{\delta}|_{\text{av}}\prec\begin{cases}\frac{N^{\frac{2}{K}}}{N\eta},&E\in\mathrm{supp}\rho,\\ \frac{N^{\frac{2}{K}}}{N(\eta+\kappa)}+\frac{N^{\frac{4}{K}}}{N^{2}\eta^{2}\sqrt{\eta+\kappa}},&E\notin\mathrm{supp}\rho,\end{cases}\] (5.27)

_hold on_ \(z=E+\mathrm{i}\eta\in\mathbb{D}_{\gamma_{0}}^{\kappa_{0}}\) _for some_ \(\gamma_{0},\kappa_{0},K\) _and_ \(\delta\in\{0,1\}\)_._

2. _The isotropic and averaged local laws,_

\[|\boldsymbol{\Delta}_{\delta}|\prec\frac{N^{\frac{4}{K}}}{N\eta}+\frac{N^{\frac{2}{K}}}{\sqrt{N}},\quad|\boldsymbol{\Delta}_{\delta}|_{\text{av}}\prec\frac{N^{\frac{4}{K}}}{(N\eta)^{2}}+\frac{1}{N},\] (5.28)

_hold on_ \(z=E+\mathrm{i}\eta\in\mathbb{C}_{\gamma_{0}}^{C,\eta_{0}}\) _for some_ \(\gamma_{0},C,\eta_{0},K\) _and_ \(\delta\in\{0,1\}\)_._

_Then, for all \(\gamma>0\) with \(\gamma\geq\frac{100}{K}\) there is a \(\gamma_{s}>0\) independent of \(\gamma_{0}\) such that the local laws (5.27) also hold on \(\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\) and the local laws (5.28) also hold on \(\mathbb{C}_{\gamma_{1}}^{C,\eta_{0}}\), where \(\gamma_{1}=\max\{\gamma,\gamma_{0}-\gamma_{s}\}\)._

Proof.: We first prove the local law (5.27) for \(z\in\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\) and then comment on the modifications necessary to prove (5.28) for \(z\in\mathbb{C}_{\gamma_{1}}^{C,\eta_{0}}\). We begin by proving that \(\eta\mapsto\eta\|\mathbf{G}_{\delta}\|_{p}\) is monotonically non-decreasing in \(\eta\). For \(\delta=1\) the proof is given in the proof of [22, Proposition 5.5]. For \(\delta=0\) the argument is modified as follows. Let \(\varepsilon>0\). For fixed \(E\) we define \(f(\eta):=\eta\|\mathbf{G}_{0}(E+\mathrm{i}\eta)\|_{p}\).
It satisfies

\[\liminf_{\varepsilon\to 0}\frac{f(\eta+\varepsilon)-f(\eta)}{\varepsilon} =\liminf_{\varepsilon\to 0}\|\mathbf{G}_{0}(E+\mathrm{i}(\eta+\varepsilon))\|_{p}+\frac{\eta(\|\mathbf{G}_{0}(E+\mathrm{i}(\eta+\varepsilon))\|_{p}-\|\mathbf{G}_{0}(E+\mathrm{i}\eta)\|_{p})}{\varepsilon} \tag{5.29}\]
\[\geq\|\mathbf{G}_{0}(E+\mathrm{i}\eta)\|_{p}-\lim_{\varepsilon\to 0}\eta\left\|\frac{\mathbf{G}_{0}(E+\mathrm{i}(\eta+\varepsilon))-\mathbf{G}_{0}(E+\mathrm{i}\eta)}{\varepsilon}\right\|_{p}\]
\[=\|\mathbf{G}_{0}(E+\mathrm{i}\eta)\|_{p}-\eta\|\mathbf{G}_{0}(E+\mathrm{i}\eta)J\mathbf{G}_{0}(E+\mathrm{i}\eta)\|_{p}.\]

To obtain a bound for the last term we estimate

\[\eta|\langle\mathbf{x},\mathbf{G}_{0}J\mathbf{G}_{0}\mathbf{y}\rangle|\leq\frac{\eta}{2}(\langle\mathbf{x},\mathbf{G}_{0}J\mathbf{G}_{0}^{*}\mathbf{x}\rangle+\langle\mathbf{y},\mathbf{G}_{0}^{*}J\mathbf{G}_{0}\mathbf{y}\rangle)=\frac{1}{2}(\langle\mathbf{x},\mathrm{Im}\,\mathbf{G}_{0}\mathbf{x}\rangle+\langle\mathbf{y},\mathrm{Im}\,\mathbf{G}_{0}\mathbf{y}\rangle) \tag{5.30}\]

for \(\mathbf{x},\mathbf{y}\in\mathbb{C}^{(l+1)N}\). In the last step, we used the Ward identity for generalized resolvents,

\[\mathbf{G}_{0}J\mathbf{G}_{0}^{*}=\mathbf{G}_{0}^{*}J\mathbf{G}_{0}=\frac{\mathrm{Im}\,\mathbf{G}_{0}}{\eta}. \tag{5.31}\]

Thus we find

\[\eta\|\mathbf{G}_{0}J\mathbf{G}_{0}\|_{p}\leq\sup_{\|\mathbf{x}\|,\|\mathbf{y}\|=1}\left(\mathbb{E}\left(\frac{1}{2}(\langle\mathbf{x},\mathrm{Im}\,\mathbf{G}_{0}\mathbf{x}\rangle+\langle\mathbf{y},\mathrm{Im}\,\mathbf{G}_{0}\mathbf{y}\rangle)\right)^{p}\right)^{\frac{1}{p}}\leq\|\mathbf{G}_{0}\|_{p}, \tag{5.32}\]

where we used

\[|\langle\mathbf{x},\mathrm{Im}\,\mathbf{R}\mathbf{x}\rangle|\leq|\langle\mathbf{x},\mathbf{R}\mathbf{x}\rangle| \tag{5.33}\]

in the last step. Therefore

\[\liminf_{\varepsilon\to 0}\frac{f(\eta+\varepsilon)-f(\eta)}{\varepsilon}\geq 0 \tag{5.34}\]

and the claim follows. For \(\delta=0,1\), assume the local law (5.27) on \(\mathbb{D}_{\gamma_{0}}^{\kappa_{0}}\). Then \(\|\mathbf{G}_{\delta}\|_{p}\sim_{p}1\) on \(\mathbb{D}_{\gamma_{0}}^{\kappa_{0}}\) and by monotonicity of \(\eta\|\mathbf{G}_{\delta}\|_{p}\) we find

\[\|\mathbf{G}_{\delta}\|_{p}\lesssim_{p}N^{\gamma_{s}} \tag{5.35}\]

on \(\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\). We choose \(\gamma_{s}<\frac{1}{4}\). Then Proposition 4.5 yields

\[\|\mathbf{D}\|_{p}\lesssim_{p,\varepsilon}N^{\varepsilon+c\gamma_{s}}\sqrt{\frac{\|\mathbf{G}_{\delta}\mathbf{G}_{\delta}^{*}\|_{q}}{N}}\quad\text{and}\quad\|\mathbf{D}\|_{p}^{\mathrm{av}}\lesssim_{p,\varepsilon}N^{\varepsilon+c\gamma_{s}}\frac{\|\mathbf{G}_{\delta}\mathbf{G}_{\delta}^{*}\|_{q}}{N}. \tag{5.36}\]

For \(\delta=0\) we estimate the quadratic term by making use of (5.17) and then use the Ward identity on \(\mathbf{G}_{1}\mathbf{G}_{1}^{*}\) for both \(\delta=0,1\) to obtain

\[\|\mathbf{D}\|_{p}\lesssim_{p,\varepsilon}N^{\varepsilon+c\gamma_{s}}\sqrt{\frac{\|\mathrm{Im}\,\mathbf{G}_{1}\|_{2q}}{\eta N}}\lesssim_{p}\frac{N^{c^{\prime}\gamma_{s}}}{\sqrt{\eta N}}\quad\text{and}\quad\|\mathbf{D}\|_{p}^{\mathrm{av}}\lesssim_{p,\varepsilon}N^{\varepsilon+c\gamma_{s}}\frac{\|\mathrm{Im}\,\mathbf{G}_{1}\|_{2q}}{\eta N}\lesssim_{p}\frac{N^{c^{\prime}\gamma_{s}}}{\eta N} \tag{5.37}\]

on \(\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\) for sufficiently small \(\varepsilon>0\) and some modified constant \(c^{\prime}>0\). Note that we are allowed to exchange the \(q\) norm in the bound for \(\delta=1\) by a \(2q\) norm as the norm is increasing in \(q\).
After using Lemma 5.3 to turn the p-norm bound in the first equation of (5.37) into a bound on the *-norm, we get from Lemma 5.5

\[|\xi_{1}\Theta_{\delta}+\xi_{2}\Theta_{\delta}^{2}|\chi(\|\boldsymbol{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})\prec\frac{N^{\frac{2}{K}+c^{\prime}\gamma_{s}}}{\eta N} \tag{5.38}\]

on \(\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\). The left hand side is also Lipschitz continuous with Lipschitz constant \(\prec\eta^{-2}\leq N^{2}\). Thus the inequality

\[|\xi_{1}\Theta_{\delta}+\xi_{2}\Theta_{\delta}^{2}|\chi(\|\boldsymbol{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})\leq N^{-\frac{10}{K}} \tag{5.39}\]

holds with very high probability on all of \(\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\) for sufficiently large \(K\) and sufficiently small \(\gamma_{s}\). By our assumption the local law, (5.27), holds on \(\mathbb{D}_{\gamma_{0}}^{\kappa_{0}}\). In conjunction with another stochastic continuity argument, we also get the bound

\[|\Theta_{\delta}|\leq\min\left\{\frac{N^{-\frac{10}{K}}}{\sqrt{\kappa+\eta}},N^{-\frac{5}{K}}\right\} \tag{5.40}\]

on all of \(\mathbb{D}_{\gamma_{0}}^{\kappa_{0}}\) with very high probability. Therefore Lemma 5.6 can be applied with \(d=N^{-10/K}\) and we obtain

\[|\Theta_{\delta}|\chi(\|\boldsymbol{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})\prec N^{-\frac{5}{K}}. \tag{5.41}\]

Next, we turn this into a bound on \(\boldsymbol{\Delta}_{\delta}\). To do so we again use the *-norm bound obtained from applying Lemma 5.3 to (5.37) to find \(\|\mathbf{D}\|_{*}\prec N^{-\frac{7}{K}}\) for sufficiently small \(\gamma_{s}\) and sufficiently large \(K\). Using both bounds on (5.20) we get

\[\|\boldsymbol{\Delta}_{\delta}\|_{*}\chi(\|\boldsymbol{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})\lesssim\left(|\Theta_{\delta}|+N^{\frac{2}{K}}\|\mathbf{D}\|_{*}\right)\chi(\|\boldsymbol{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})\prec N^{-\frac{5}{K}}. \tag{5.42}\]

We have thus found a "forbidden" area for \(\|\mathbf{\Delta}_{\delta}\|_{*}\). With the aid of a standard stochastic continuity argument, we remove the indicator function from the bounds (5.41) and (5.42) to get to the rough bounds

\[\|\mathbf{\Delta}_{\delta}\|_{*}\prec N^{-\frac{5}{K}}\quad\text{and}\quad|\Theta_{\delta}|\prec N^{-\frac{5}{K}}. \tag{5.43}\]

Since \(\mathbf{x}\) and \(\mathbf{y}\) were arbitrary, the first bound becomes \(|\mathbf{\Delta}_{\delta}|\prec N^{-5/K}\). Now, assume that \(\max\{|\mathbf{\Delta}_{0}|,|\mathbf{\Delta}_{1}|\}\prec\Lambda\) and \(\max\{|\Theta_{0}|,|\Theta_{1}|\}\prec\theta\) for some deterministic \(\theta\leq\Lambda\leq N^{-3/K}\). Then we know from Lemma 5.5 and Proposition 4.5 as well as (5.17) that

\[\max_{i}\{|\mathbf{\Delta}_{i}|\}\prec\theta+N^{\frac{2}{K}}\sqrt{\frac{\operatorname{Im}m+\Lambda}{N\eta}}\quad\text{and}\quad\max_{i}\{|\xi_{1}\Theta_{i}+\xi_{2}\Theta_{i}^{2}|\}\prec N^{\frac{2}{K}}\frac{\operatorname{Im}m+\Lambda}{N\eta} \tag{5.44}\]

hold, where the right hand sides are again deterministic. Here, we also used \(\operatorname{Im}m\sim\langle\operatorname{Im}M_{1}\rangle\) by Corollary 3.3, Lemma 4.1 as well as (4.18).
The first bound in (5.44) is self-improving, and applying it iteratively yields

\[\max_{i}\{|\mathbf{\Delta}_{i}|\}\prec\theta+N^{\frac{2}{K}}\left(\frac{N^{\frac{2}{K}}}{N\eta}+\sqrt{\frac{\operatorname{Im}m+\theta}{N\eta}}\right) \tag{5.45}\]

and therefore the second bound in (5.44) improves to

\[\max_{i}\{|\xi_{1}\Theta_{i}+\xi_{2}\Theta_{i}^{2}|\}\prec N^{\frac{2}{K}}\frac{\operatorname{Im}m+\theta}{N\eta}+N^{\frac{4}{K}}\frac{1}{(N\eta)^{2}}. \tag{5.46}\]

Now we separately treat \(\operatorname{Re}z\in\operatorname{supp}(\rho)\) and \(\operatorname{Re}z\notin\operatorname{supp}(\rho)\) and we start with the former. Then by Corollary 3.3 we know that \(\operatorname{Im}m\sim\sqrt{\kappa+\eta}\). For fixed \(\theta\) we apply Lemma 5.6 with

\[d=N^{\frac{2}{K}}\frac{\sqrt{\kappa+\eta}+\theta}{N\eta}+N^{\frac{4}{K}}\frac{1}{(N\eta)^{2}} \tag{5.47}\]

to obtain

\[\max_{i}\{|\Theta_{i}|\}\prec\min\left\{\frac{d}{\sqrt{\kappa+\eta}},\sqrt{d}\right\}. \tag{5.48}\]

This is also a self-improving bound and iterating it gives

\[\max_{i}\{|\Theta_{i}|\}\prec N^{\frac{2}{K}}\frac{1}{N\eta},\quad\text{hence}\quad\max_{i}\{|\Delta_{i}|\}\prec N^{\frac{2}{K}}\left(\sqrt{\frac{\operatorname{Im}m}{N\eta}}+\frac{N^{\frac{2}{K}}}{N\eta}\right). \tag{5.49}\]

For \(\operatorname{Re}z\notin\operatorname{supp}(\rho)\) we have \(\operatorname{Im}m\sim\frac{\eta}{\sqrt{\kappa+\eta}}\), again by Corollary 3.3. Analogously to the case \(\operatorname{Re}z\in\operatorname{supp}(\rho)\) we obtain

\[\max_{i}\{|\Theta_{i}|\}\prec N^{\frac{2}{K}}\frac{1}{N(\eta+\kappa)}+N^{\frac{4}{K}}\frac{1}{(N\eta)^{2}\sqrt{\kappa+\eta}}. \tag{5.50}\]

Finally we use (5.20), (4.31) and the bounds on \(\max_{i}\{|\Theta_{i}|\}\) to arrive at the averaged bounds

\[\max_{i}\{|\mathbf{\Delta}_{i}|_{\mathrm{av}}\}\prec N^{\frac{2}{K}}\begin{cases}\frac{1}{N\eta}&\text{if }\operatorname{Re}z\in\operatorname{supp}\rho,\\ \frac{1}{N(\eta+\kappa)}+\frac{N^{\frac{2}{K}}}{N^{2}\eta^{2}(\eta+\kappa)^{\frac{1}{2}}}&\text{if }\operatorname{Re}z\notin\operatorname{supp}\rho.\end{cases} \tag{5.51}\]

This concludes the proof of (5.27) on \(\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\). Now, assume (5.28) on \(\mathbb{C}_{\gamma_{0}}^{C,\eta_{0}}\). From (4.33) we have

\[\mathcal{L}[\mathbf{\Delta}_{\delta}]=-M_{\delta}\mathbf{D}+M_{\delta}\mathcal{S}[\mathbf{\Delta}_{\delta}]\mathbf{\Delta}_{\delta}. \tag{5.52}\]

We apply \(\mathcal{L}^{-1}\) to the equation and take its *-norm for some deterministic \(\mathbf{x}\), \(\mathbf{y}\) to find

\[\|\mathbf{\Delta}_{\delta}\|_{*} \leq\|\mathcal{L}^{-1}[M_{\delta}\mathbf{D}]\|_{*}+\|\mathcal{L}^{-1}[M_{\delta}\mathcal{S}[\mathbf{\Delta}_{\delta}]\mathbf{\Delta}_{\delta}]\|_{*} \tag{5.53}\]
\[\leq\|\mathcal{L}^{-1}\|_{*\to*}\left(\|M_{\delta}\mathbf{D}\|_{*}+\|M_{\delta}\mathcal{S}[\mathbf{\Delta}_{\delta}]\mathbf{\Delta}_{\delta}\|_{*}\right)\]
\[\lesssim_{C}N^{\frac{2}{K}}\|\mathbf{D}\|_{*}+N^{\frac{2}{K}}\|\mathbf{\Delta}_{\delta}\|_{*}^{2}.\]

Here, \(\|\mathcal{L}^{-1}\|_{*\to*}\) denotes the operator norm of \(\mathcal{L}^{-1}\) with respect to the *-norm. In the last estimate, we have used \(\|\mathcal{L}^{-1}\|_{*\to*}\lesssim\|\mathcal{L}^{-1}\|_{\mathrm{sp}}\lesssim_{C}1\) by [22, Equation (70c)] and Proposition 4.6 as well as \(\|M_{\delta}\mathbf{R}\|_{*}\lesssim N^{2/K}\|\mathbf{R}\|_{*}\) and \(\|M_{\delta}\mathcal{S}[\mathbf{R}]\mathbf{R}\|_{*}\lesssim N^{2/K}\|\mathbf{R}\|_{*}^{2}\) by [22, Equation (70a) and (70b)].
Equation (5.42) also holds on \(\mathbb{C}_{\gamma_{1}}^{C,\eta_{0}}\) by the same argument as in the case \(z\in\mathbb{D}_{\gamma_{1}}^{\kappa_{0}}\), and for sufficiently large \(K\) and sufficiently small \(\gamma_{s}\) we have in particular \(\|\mathbf{D}\|_{*}\prec N^{-\frac{7}{K}}\) for \(\delta\in\{0,1\}\). We use this bound together with (5.53) to estimate

\[\|\mathbf{\Delta}_{\delta}\|_{*}\chi(\|\mathbf{\Delta}_{\delta}\|_{*}\leq N^{-\frac{3}{K}})\prec_{C}N^{-\frac{5}{K}}. \tag{5.54}\]

By a stochastic continuity argument, we establish the rough bound

\[\|\mathbf{\Delta}_{\delta}\|_{*}\prec_{C}N^{-\frac{5}{K}}\quad\text{and therefore}\quad|\mathbf{\Delta}_{\delta}|\prec_{C}N^{-\frac{5}{K}} \tag{5.55}\]

since \(\mathbf{x}\) and \(\mathbf{y}\) were arbitrary. Now, assume \(\max_{i}\{|\mathbf{\Delta}_{i}|\}\prec_{C}\Lambda\) for some deterministic \(\Lambda\leq N^{-3/K}\). Then we have another deterministic bound in the form of

\[\max_{i}\{|\mathbf{\Delta}_{i}|\}\prec_{C}N^{\frac{2}{K}}\sqrt{\frac{\operatorname{Im}m+\Lambda}{N\eta}}. \tag{5.56}\]

Iterating this self-improving bound and using \(\operatorname{Im}m\sim_{C}\eta\) from Lemma A.3 we find

\[\max_{i}\{|\mathbf{\Delta}_{i}|\}\prec_{C}\frac{N^{\frac{4}{K}}}{N\eta}+N^{\frac{2}{K}}\frac{1}{\sqrt{N}}. \tag{5.57}\]

Finally from (5.57), (5.52) and (4.31) we obtain the averaged bound

\[\max_{i}\{|\mathbf{\Delta}_{i}|_{\operatorname{av}}\}\prec_{C}\frac{N^{\frac{4}{K}}}{(N\eta)^{2}}+\frac{1}{N}. \tag{5.58}\]

Therefore we have shown that (5.28) holds on \(\mathbb{C}_{\gamma_{1}}^{C,\eta_{0}}\).

Proof of Propositions 4.3 and 4.4.: For all \(K>0\) Equation (5.27) holds on \(\mathbb{D}_{1}^{\kappa_{0}}\) for some \(\kappa_{0}\) and (5.28) on \(\mathbb{C}_{1}^{C,\eta_{0}}\) for all \(C>0\) and some \(\eta_{0}\) depending on \(C\) due to the global law Proposition 5.7. The local laws, Proposition 4.3 and Proposition 4.4, follow immediately from applying Proposition 5.8 finitely many times in the respective domains. This concludes the proof of Theorem 2.9 and Proposition 2.10 and we are left with proving their Corollaries, 2.11 and 2.12.

Proof of Corollary 2.11.: Let \(\lambda_{i}\), \(i\in[\![N]\!]\), denote the eigenvalues of \(q(\mathbf{X})\) with eigenvectors \(\mathbf{v}_{i}\). Let \(\mathbf{x}\in\mathbb{C}^{N}\) be a deterministic and normalized vector and let \(\mathbf{v}\) be an eigenvector of \(q(\mathbf{X})\) with eigenvalue \(\lambda\) such that \(|\lambda-\tau_{0}|<\kappa_{0}\) for \(\kappa_{0}\) from Theorem 2.9. Evaluating \(\mathbf{g}\) at the spectral parameter \(\lambda+\mathrm{i}\eta\), we have with very high probability that

\[1\gtrsim\operatorname{Im}\langle\mathbf{x},\mathbf{g}(\lambda+\mathrm{i}\eta)\mathbf{x}\rangle=\sum_{i=1}^{N}\frac{\eta}{\eta^{2}+(\lambda-\lambda_{i})^{2}}|\langle\mathbf{x},\mathbf{v}_{i}\rangle|^{2}\geq\frac{|\langle\mathbf{x},\mathbf{v}\rangle|^{2}}{\eta} \tag{5.59}\]

for all \(\eta\geq N^{-1+\varepsilon}\) and all \(\varepsilon>0\). As the deterministic vector \(\mathbf{x}\) was arbitrary, the eigenvalue delocalization, (2.20), follows.

Proof of Corollary 2.12.: Let \(q\) have a regular edge at \(\tau_{0}\). W.l.o.g. we can assume that \(\tau_{0}=\tau_{+}\) is a right edge. Otherwise, consider the right edge of \(-q\).
From (2.18) and Proposition 2.10 it follows that

\[\mathbb{P}\left(|\langle\mathbf{B}(\mathbf{g}-m)\rangle|>\|\mathbf{B}\|N^{\varepsilon}\left(\frac{1}{N(\kappa+\eta)}+\frac{1}{(N\eta)^{2}\sqrt{\kappa+\eta}}\right)\right)\lesssim_{\varepsilon,\gamma,D,C}N^{-D} \tag{5.60}\]

for all deterministic \(\mathbf{B}\in\mathbb{C}^{N\times N}\) and \(z\in\mathbb{D}_{\gamma}^{\kappa_{0}}\cup\mathbb{C}_{\gamma}^{C,\eta_{0}}\) with \(E\notin\operatorname{supp}(\rho)\). In particular, this holds true for \(C\geq\kappa_{0}^{-1}\). Following [24, Chapter 11.1], this implies for all \(\varepsilon>0\) that with very high probability there are no eigenvalues \(\lambda\) such that \(\lambda\notin\operatorname{supp}(\rho)\) and \(N^{-2/3+\varepsilon}\leq\lambda-\tau_{+}\leq C\), i.e.

\[\mathbb{P}\left(\exists\lambda\in\operatorname{Spec}(q(\mathbf{X}))\,:\,N^{-\frac{2}{3}+\varepsilon}\leq\lambda-\tau_{+}<C\right)\lesssim_{\varepsilon,D,C}N^{-D}. \tag{5.61}\]

Additionally, it follows from the trivial bound (4.32) that there is a \(K>0\) such that

\[\mathbb{P}\left(\|q(\mathbf{X})\|\geq K\right)\lesssim_{D}N^{-D}. \tag{5.62}\]

Combining (5.61) for some \(C>0\) such that \(C+\tau_{0}\geq K\) and (5.62) we have

\[\mathbb{P}\left(\exists\lambda\in\operatorname{Spec}(q(\mathbf{X}))\,:\,\lambda-\tau_{+}\geq N^{-\frac{2}{3}+\varepsilon}\right)\lesssim_{\varepsilon,D}N^{-D}. \tag{5.63}\]

By a standard argument, (5.63) in conjunction with the averaged local law, (2.17), implies eigenvalue rigidity around the edge as in Corollary 2.12 (see e.g. [24, Chapter 11.2-11.4]).

## Appendix A Appendix

### The entrywise real part of a Hermitian matrix.

Let \(n\in\mathbb{N}\) and \(H\in\mathbb{C}^{n\times n}\) be a Hermitian matrix and \(\widehat{H}=\frac{1}{2}(H+H^{t})\) be its entrywise real part. The following lemma summarizes how the two matrices are related.

**Lemma A.1**.: _Let \(h_{i}\) and \(\widehat{h}_{i}\) be the eigenvalues of \(H\) and \(\widehat{H}\) respectively, both arranged in non-increasing order. The following holds true_

1. \(h_{1}\geq\widehat{h}_{1}\) _and_ \(h_{n}\leq\widehat{h}_{n}\)_._

2. \(\|H\|\geq\|\widehat{H}\|\)_._

3. _If_ \(H\geq 0\)_, then_ \(\widehat{H}\geq 0\) _and conversely_ \(H\leq 0\) _implies_ \(\widehat{H}\leq 0\)_._

4. _Let additionally_ \(\operatorname{rank}H=1\)_. Then we have_ \(\operatorname{rank}\widehat{H}=1\) _if and only if_ \(H\in\mathbb{R}^{n\times n}\)_. Otherwise we have_ \(\operatorname{rank}\widehat{H}=2\)_._

Proof.: Let \(\widetilde{H}=\frac{1}{2\mathrm{i}}(H-H^{t})\) be the entrywise imaginary part of \(H\), i.e. \(H=\widehat{H}+\mathrm{i}\widetilde{H}\). First, we will prove \(h_{1}\geq\widehat{h}_{1}\). Note that for all \(a\in\mathbb{R}\) the norms of \(H+a\) and \(\widehat{H}+a\) are given by \(\|H+a\|=\max\{h_{1}+a,-h_{n}-a\}\) and \(\|\widehat{H}+a\|=\max\{\widehat{h}_{1}+a,-\widehat{h}_{n}-a\}\). Thus for sufficiently large \(a\in\mathbb{R}\) (depending on \(H\)) we have \(\|H+a\|=h_{1}+a\) and \(\|\widehat{H}+a\|=\widehat{h}_{1}+a\). For such an \(a\) let \(v_{1}\in\mathbb{R}^{n}\) be a normalized eigenvector of \(\widehat{H}+a\) corresponding to the eigenvalue \(\widehat{h}_{1}+a\) (it can be chosen purely real since \(\widehat{H}\) is a real symmetric matrix). Then

\[h_{1}+a=\|H+a\|\geq\|(H+a)v_{1}\|=\sqrt{\|(\widehat{H}+a)v_{1}\|^{2}+\|\widetilde{H}v_{1}\|^{2}}\geq\|(\widehat{H}+a)v_{1}\|=\widehat{h}_{1}+a.\] (A.1)

Here the second equality holds since \((\widehat{H}+a)v_{1}\) is a real vector while \(\mathrm{i}\widetilde{H}v_{1}\) is purely imaginary. The claim \(h_{1}\geq\widehat{h}_{1}\) follows.
Choosing \(a\) sufficiently small, we find \(h_{n}\leq\widehat{h}_{n}\) by a similar argument. Since \(\|H\|=\max\{h_{1},-h_{n}\}\) and \(\|\widehat{H}\|=\max\{\widehat{h}_{1},-\widehat{h}_{n}\}\), the inequality \(\|H\|\geq\|\widehat{H}\|\) follows. Next, let \(H\geq 0\). Then \(h_{n}\geq 0\) and thus \(\widehat{h}_{n}\geq h_{n}\geq 0\). The inequality \(\widehat{H}\geq 0\) follows. Similarly, \(H\leq 0\) implies \(\widehat{H}\leq 0\). Now let \(\operatorname{rank}H=1\). If \(H\in\mathbb{R}^{n\times n}\), then \(\operatorname{rank}\widehat{H}=1\) is clear. Let on the other hand \(H\in\mathbb{C}^{n\times n}\setminus\mathbb{R}^{n\times n}\). Then \(H=\alpha vv^{*}\) for some \(\alpha\in\mathbb{R}\setminus\{0\}\) and normalized \(v\in\mathbb{C}^{n}\) with \(v\notin e^{\mathrm{i}\varphi}\mathbb{R}^{n}\) for all \(\varphi\in\mathbb{R}\). Then \(\widehat{H}=\frac{\alpha}{2}(vv^{*}+\bar{v}\bar{v}^{*})\) and since \(v\) and \(\bar{v}\) are only linearly dependent if \(v\in e^{\mathrm{i}\varphi}\mathbb{R}^{n}\) for some \(\varphi\in\mathbb{R}\), the claim \(\operatorname{rank}\widehat{H}=2\) follows.

### Matrix inequality

Here we provide a proof for the matrix inequality used in (5.15).

**Lemma A.2**.: _Let \(R,T\in\mathbb{C}^{n\times n}\) be arbitrary matrices. Then the following inequality holds:_

\[(R+T)^{*}(R+T)\leq 2(R^{*}R+T^{*}T).\] (A.2)

Proof.: We first note that

\[(R+T)^{*}(R+T)=R^{*}R+T^{*}T+R^{*}T+T^{*}R\] (A.3)

and it is thus sufficient to bound the last two terms. Let \(v\in\mathbb{C}^{n}\) be arbitrary. Since \(R^{*}T+T^{*}R\) is Hermitian, \(\langle v,(R^{*}T+T^{*}R)v\rangle\in\mathbb{R}\) and we estimate

\[\langle v,(R^{*}T+T^{*}R)v\rangle=2\operatorname{Re}(\langle Rv,Tv\rangle)\leq 2\|Rv\|\|Tv\|\leq\|Rv\|^{2}+\|Tv\|^{2}=\langle v,(R^{*}R+T^{*}T)v\rangle,\] (A.4)

where we used the Cauchy-Schwarz inequality in the second step. As \(v\) was arbitrary, \(R^{*}T+T^{*}R\leq R^{*}R+T^{*}T\) follows and we conclude the lemma.

### Some properties of the solution of the Dyson Equation

Proof of Proposition 2.3.: There can at most be one analytic function in the upper half-plane that satisfies (2.4) and \(\lim_{z\to\infty}zm(z)=-1\) since (2.4) is stable at infinity. Thus we are left with proving existence. First, consider the case of \(A\) being invertible. Let \(\{s_{1},\ldots,s_{l}\}\) be a family of free semi-circular variables in a \(C^{*}\) probability space \((\mathscr{S},\tau)\) (see [21, Appendix B]) and let \(s:=(s_{i})_{i\in[\![l]\!]}\). Define

\[\mathbf{L}_{\mathrm{sc}}=K_{0}\otimes\mathbb{1}+\sum_{j=1}^{l}K_{j}\otimes s_{j}.\] (A.5)

Following the proof of [21, Lemma 2.6], the matrix-valued function \(M_{0}:\mathbb{H}\to\mathbb{C}^{(l+1)\times(l+1)}\), given by

\[M_{0}(z)=(\mathrm{id}\otimes\tau)(\mathbf{L}_{\mathrm{sc}}-zJ\otimes\mathbb{1})^{-1},\] (A.6)

is a solution to (4.14) for \(\delta=0\). Thus its (1,1) entry satisfies (4.20) for \(\delta=0\) and therefore (2.4). Using the Schur complement formula, we find

\[(M_{0}(z))_{11}=\tau(q(s)-z\otimes\mathbb{1})^{-1}.\] (A.7)

The polynomial \(q(s)\), defined in (1.1), is self-adjoint, thus \((M_{0})_{11}\) is analytic on \(\mathbb{H}\), has positive imaginary part and \(\lim_{z\to\infty}z(M_{0}(z))_{11}=-1\). Therefore \(m:=(M_{0})_{11}\) is the unique function that satisfies all conditions of Proposition 2.3. Now, let \(A\) be non-invertible.
Then there is a \(u\sim 1\) such that \(A^{\varepsilon}:=A+\varepsilon\) is invertible for all \(0<\varepsilon<u\). Let \(q^{\varepsilon}\) and \(\gamma^{\varepsilon}\) be the objects given by replacing \(A\) with \(A+\varepsilon\) in the definitions of \(q\) in (1.1) and \(\gamma\) in (2.5), respectively. By the above argument, the function

\[m^{\varepsilon}(z):=\tau(q^{\varepsilon}(s)-z\otimes\mathbb{1})^{-1}\] (A.8)

is a solution to the equation

\[-\frac{1}{m^{\varepsilon}}=z+\gamma^{\varepsilon}(m^{\varepsilon}).\] (A.9)

For any fixed \(z\in\mathbb{H}\), both \(m^{\varepsilon}\) and \(\gamma^{\varepsilon}\) are continuous in \(\varepsilon\) at \(\varepsilon=0\). Thus \(m:=m^{0}\) solves (2.4) and \(m\) is analytic on \(\mathbb{H}\), has positive imaginary part and satisfies \(\lim_{z\to\infty}zm(z)=-1\). Therefore it is the unique function that satisfies all conditions of Proposition 2.3.

The function \(m\) is a Stieltjes transform of a real-valued probability measure \(\rho\) with compact support. As such it can be analytically extended to \(\mathbb{C}\setminus\mathrm{supp}(\rho)\). The following lemma summarizes the properties of the extension, also called \(m\), on and above the real axis outside of the spectrum.

**Lemma A.3**.: _For the analytical extension of \(m\) to \(\mathbb{C}\setminus\mathrm{supp}(\rho)\) we have the following._

1. _The function_ \(m\) _is real-valued on_ \(\mathbb{R}\setminus\mathrm{supp}(\rho)\) _and_ \(m^{\prime}(E)>0\) _for all_ \(E\in\mathbb{R}\setminus\mathrm{supp}(\rho)\)_._

2. _For all_ \(C>0\) _there is an_ \(\eta_{0}>0\) _such that for all_ \(z=E+\mathrm{i}\eta\) _with_ \(C^{-1}\leq\mathrm{dist}(E,\mathrm{supp}(\rho))\leq C\) _and_ \(0<\eta\leq\eta_{0}\) _we have_ \(\mathrm{Im}\,m\sim_{C}\eta\)_._

Proof.: The fact that \(m(E)\in\mathbb{R}\) for all \(E\in\mathbb{R}\setminus\mathrm{supp}(\rho)\) follows immediately from (2.6), and taking the derivative of (2.6) we find

\[m^{\prime}(E)=\int_{\mathbb{R}}\frac{\rho(\mathrm{d}x)}{(x-E)^{2}}>0.\] (A.10)

Using \(m(E)\in\mathbb{R}\) we have for \(z\in\mathbb{H}\)

\[\mathrm{Im}\,m(z)=\mathrm{Im}(m(z)-m(E))=\eta m^{\prime}(E)+\mathcal{O}(\eta^{2}).\] (A.11)

By (A.10) and continuity of the derivative, we have that \(m^{\prime}(E)\) is bounded from above and bounded away from \(0\) on all compact subsets of \(\mathbb{R}\setminus\mathrm{supp}(\rho)\). Therefore we have \(m^{\prime}(E)\sim_{C}1\) for all \(C>0\) and \(E\) such that \(C^{-1}\leq\mathrm{dist}(E,\mathrm{supp}(\rho))\leq C\).

Proof of Lemma 4.1.: The function \(m\) is a Stieltjes transform and as such analytically extendable to \(\mathbb{C}\setminus\operatorname{supp}(\rho)\). Since, by assumption, \(\rho\) has a regular edge at \(\tau_{\pm}\) it can also be continuously extended to \(\tau_{\pm}\). For \(x\in(\mathbb{R}\setminus\operatorname{supp}(\rho))\cup\{\tau_{\pm}\}\) we denote this extension by \(m(x)\). By continuity it satisfies

\[-\frac{1}{m(x)}=x+\gamma(m(x)).\] (A.12)

Consider (4.20) at \(z\in\mathbb{H}\) as a perturbation to (A.12).
More precisely, consider the following equation in the unknown function \(\widetilde{m}_{\delta}(z)\),

\[\frac{1}{m(x)}-\frac{1}{\widetilde{m}_{\delta}(z)}=\gamma_{\delta}(\widetilde{m}_{\delta}(z))-\gamma_{0}(m(x))+(z-x),\] (A.13)

where we have used \(\gamma_{0}=\gamma\). Define by \(\dot{\gamma}_{\delta}(y):=\partial_{t}\gamma_{\frac{t}{\mathrm{i}\eta}}(y)|_{t=\mathrm{i}\delta\eta}\) the partial derivative with respect to \(\mathrm{i}\delta\eta\) and by \(\gamma_{\delta}^{\prime}(y):=\partial_{y}\gamma_{\delta}(y)\) the derivative with respect to its argument. Expanding (A.13) in \(\widetilde{m}_{\delta}(z)-m(x)\) and \(\mathrm{i}\delta\eta\) we find

\[\begin{split}&\left(\frac{1}{m(x)^{2}}-\gamma_{0}^{\prime}(m(x))\right)(\widetilde{m}_{\delta}(z)-m(x))-\left(\frac{1}{m(x)^{3}}+\frac{1}{2}\gamma_{0}^{\prime\prime}(m(x))\right)(\widetilde{m}_{\delta}(z)-m(x))^{2}\\ &\qquad\qquad=\mathrm{i}\dot{\gamma}_{0}(m(x))\delta\eta+(z-x)+\mathcal{O}(|\widetilde{m}_{\delta}(z)-m(x)|^{3}+|\widetilde{m}_{\delta}(z)-m(x)|\eta+\eta^{2}).\end{split}\] (A.14)

We identify \(\frac{1}{m^{2}}-\gamma_{0}^{\prime}(m)=h(m)\) and \(-\frac{2}{m^{3}}-\gamma_{0}^{\prime\prime}(m)=h^{\prime}(m)\) with \(h\) defined in (3.10). Now let \(x=\tau_{\pm}\) and note that \(m(\tau_{\pm})=m_{\pm}\). By Lemma 3.5 as well as Lemma 3.6 we have

\[h(m_{\pm})=0,\quad|h^{\prime}(m_{\pm})|\sim 1\quad\text{and}\quad\pm h^{\prime}(m_{\pm})>0.\] (A.15)

Furthermore one finds \(\dot{\gamma}_{0}(y)>0\) for all \(y\in\mathbb{R}\setminus\mathscr{S}(\gamma)\). Thus (A.14) constitutes an approximate quadratic equation in \(\widetilde{m}_{\delta}(z)-m_{\pm}\) and there are \(u,u_{1}\sim 1\) such that for all \(z\in\mathbb{H}\) with \(0<|z-\tau_{\pm}|\leq u\), there are exactly two solutions \(\widetilde{m}_{\delta}(z)\) with \(|\widetilde{m}_{\delta}(z)-m_{\pm}|\leq u_{1}\). One of them has positive imaginary part and one of them has negative imaginary part. Combining (A.14) and (A.15), we find that the unique solution with positive imaginary part satisfies both parts of (4.22) and we denote it by \(m_{\delta}\). Now let \(C>0\), \(x=E=\operatorname{Re}z\) and \(C^{-1}<\operatorname{dist}(E,\operatorname{supp}(\rho))<C\). Then we have

\[h(m(E))=(m^{\prime}(E))^{-1}\sim_{C}1\] (A.16)

by Lemma A.3 and the fact that the set of \(E\) with \(C^{-1}<\operatorname{dist}(E,\operatorname{supp}(\rho))<C\) is contained in a compact subset of \(\mathbb{R}\setminus\operatorname{supp}(\rho)\). Therefore (A.14) is an approximate linear equation in \(\widetilde{m}_{\delta}(z)-m(E)\). Hence there are \(u,u_{1}\sim 1\) such that for all \(0<|z-E|=\eta\leq u\) there is a unique solution \(\widetilde{m}_{\delta}(z)\) with \(|\widetilde{m}_{\delta}(z)-m(E)|\leq u_{1}\) to (A.14). We denote this solution by \(m_{\delta}\) and combining (A.15) with (A.16) we find that it satisfies both parts of (4.23). From now on let \(A\) be invertible and

\[M_{\delta}(z)=(\mathrm{id}\otimes\tau)(\mathbf{L}_{\mathrm{sc}}-(zJ+\mathrm{i}\delta\eta(I-J))\otimes\mathbb{1})^{-1},\] (A.17)

where \(\mathbf{L}_{\mathrm{sc}}\) was defined in (A.5). For all \(\eta>0\), the imaginary part of \(zJ+\mathrm{i}\delta\eta(I-J)\) is positive. Therefore, by [31, Lemma 5.4], the function \(M_{\delta}\) is the unique solution to (4.14) with \(\operatorname{Im}M_{\delta}(z)>0\). In particular, this implies that its (1,1) entry satisfies (4.20).
By the Schur complement formula, it is given by

\[(M_{\delta}(z))_{11}=\tau(q_{\delta}(s)-z\otimes\mathbb{1})^{-1},\] (A.18)

where \(q_{\delta}\) denotes the polynomial \(q\), defined in (1.1), with \(A\) replaced by \(A_{\delta}\). For \(E\notin\operatorname{supp}(\rho)\) consider

\[\begin{split}(M_{\delta})_{11}-m&=\tau((q_{\delta}(s)-z\otimes\mathbb{1})^{-1}-(q_{0}(s)-z\otimes\mathbb{1})^{-1})\\ &=\tau((q_{\delta}(s)-z\otimes\mathbb{1})^{-1}(q_{0}(s)-q_{\delta}(s))(q_{0}(s)-z\otimes\mathbb{1})^{-1}).\end{split}\] (A.19)

The middle factor satisfies

\[\|q_{0}(s)-q_{\delta}(s)\|=\left\|\sum_{i,j=1}^{l}(A-A_{\delta})s_{i}s_{j}\right\|\lesssim\eta\delta.\] (A.20)

Since \(E\notin\operatorname{supp}(\rho)\), the term \(q_{0}(s)-E\) is invertible and by continuity in \(\eta\) we have

\[\|(q_{0}(s)-z\otimes\mathbb{1})^{-1}\|\lesssim_{E}1\] (A.21)

uniformly in \(\eta\) for \(0<\eta\leq 1\). In particular, for \(\eta\) sufficiently small (depending on \(E\)), we have

\[\|(q_{0}(s)-z\otimes\mathbb{1})^{-1}(q_{0}(s)-q_{\delta}(s))\|\leq\frac{1}{2}.\] (A.22)

Therefore the first factor in (A.19) factorizes further into

\[(q_{\delta}(s)-z\otimes\mathbb{1})^{-1}=(1\otimes\mathbb{1}+(q_{0}(s)-z\otimes\mathbb{1})^{-1}(q_{0}(s)-q_{\delta}(s)))^{-1}(q_{0}(s)-z\otimes\mathbb{1})^{-1}\] (A.23)

and

\[\|(1\otimes\mathbb{1}+(q_{0}(s)-z\otimes\mathbb{1})^{-1}(q_{0}(s)-q_{\delta}(s)))^{-1}\|\leq 2.\] (A.24)

Combining (A.19) with (A.20), (A.21), (A.23) and (A.24), we find

\[|(M_{\delta})_{11}-m|\lesssim_{E}\delta\eta.\] (A.25)

As shown in the first part of the proof, there are \(u,u_{1}>0\) such that (A.13) has a unique solution \(m_{\delta}\) with positive imaginary part and \(|m_{\delta}(z)-m_{\pm}|\leq u_{1}\) for all \(z\in\mathbb{H}\) such that \(|z-\tau_{\pm}|\leq u\). For such \(u,u_{1}\) we choose a \(u_{2}\) with \(0<u_{2}<u\) such that \(|m(z)-m_{\pm}|\leq\frac{u_{1}}{2}\) for all \(z\in\mathbb{H}\) with \(|z-\tau_{\pm}|\leq u_{2}\) (such a \(u_{2}\) exists by Corollary 3.3). Fix \(E\notin\operatorname{supp}(\rho)\) with \(|E-\tau_{\pm}|<u_{2}\). We choose \(\eta=\operatorname{Im}z\) sufficiently small such that \(|z-\tau_{\pm}|<u_{2}\) and \(|(M_{\delta})_{11}-m|\leq\frac{u_{1}}{2}\) (which is possible by (A.25)). Thus there is a \(z\in\mathbb{H}\) with \(|z-\tau_{\pm}|\leq u\) such that

\[|(M_{\delta})_{11}(z)-m_{\pm}|\leq|(M_{\delta})_{11}(z)-m(z)|+|m(z)-m_{\pm}|\leq u_{1}.\] (A.26)

Since \(m_{\delta}\) is the unique solution to (4.20) with positive imaginary part that satisfies this condition, we must have \((M_{\delta})_{11}(z)=m_{\delta}(z)\) for some \(z\in\mathbb{H}\) with \(|z-\tau_{\pm}|\leq u\). By continuity of both \(m_{\delta}\) and \((M_{\delta})_{11}\) we must have \((M_{\delta})_{11}(z)=m_{\delta}(z)\) for all \(z\in\mathbb{H}\) with \(|z-\tau_{\pm}|\leq u\). For \(C>0\) and \(E\) with \(C^{-1}<\operatorname{dist}(E,\operatorname{supp}(\rho))<C\) we find similarly from Lemma A.3 and (A.25) that for all \(u_{1}>0\) there is an \(\eta>0\) such that

\[|(M_{\delta})_{11}(z)-m(E)|\leq|(M_{\delta})_{11}(z)-m(z)|+|m(z)-m(E)|\leq u_{1}.\] (A.27)

Thus \((M_{\delta})_{11}(z)\) must be the unique solution to (4.20) that satisfies (4.23), which concludes the proof of the lemma.
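As a closing aside, the purely deterministic matrix statements of this appendix are easy to sanity-check numerically. The sketch below is our own illustration; it verifies items 1 and 2 of Lemma A.1 and the operator inequality (A.2) of Lemma A.2 on random Hermitian instances.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
# random Hermitian H and its entrywise real part H_hat = (H + H^t)/2
H = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
H = (H + H.conj().T) / 2
H_hat = ((H + H.T) / 2).real                  # real symmetric

h = np.sort(np.linalg.eigvalsh(H))[::-1]      # eigenvalues, non-increasing
h_hat = np.sort(np.linalg.eigvalsh(H_hat))[::-1]
assert h[0] >= h_hat[0] - 1e-12 and h[-1] <= h_hat[-1] + 1e-12   # Lemma A.1, item 1
assert np.linalg.norm(H, 2) >= np.linalg.norm(H_hat, 2) - 1e-12  # Lemma A.1, item 2

# Lemma A.2: 2(R*R + T*T) - (R+T)*(R+T) is positive semidefinite
R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
S = 2 * (R.conj().T @ R + T.conj().T @ T) - (R + T).conj().T @ (R + T)
assert np.linalg.eigvalsh(S).min() >= -1e-10
print("all checks passed")
```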
2309.16451
Towards Novel Class Discovery: A Study in Novel Skin Lesions Clustering
Existing deep learning models have achieved promising performance in recognizing skin diseases from dermoscopic images. However, these models can only recognize samples from predefined categories; when they are deployed in the clinic, data from new unknown categories are constantly emerging. Therefore, it is crucial to automatically discover and identify new semantic categories from new data. In this paper, we propose a new novel class discovery framework for automatically discovering new semantic classes from dermoscopy image datasets based on the knowledge of known classes. Specifically, we first use contrastive learning to learn a robust and unbiased feature representation based on all data from known and unknown categories. We then propose an uncertainty-aware multi-view cross pseudo-supervision strategy, which is trained jointly on all categories of data using pseudo labels generated by a self-labeling strategy. Finally, we further refine the pseudo labels by aggregating neighborhood information through local sample similarity to improve the clustering performance of the model for unknown categories. We conducted extensive experiments on the dermatology dataset ISIC 2019, and the experimental results show that our approach can effectively leverage knowledge from known categories to discover new semantic categories. We also validated the effectiveness of the different modules through extensive ablation experiments. Our code will be released soon.
Wei Feng, Lie Ju, Lin Wang, Kaimin Song, Zongyuan Ge
2023-09-28T13:59:29Z
http://arxiv.org/abs/2309.16451v1
# Towards Novel Class Discovery: A Study in Novel Skin Lesions Clustering

###### Abstract

Existing deep learning models have achieved promising performance in recognizing skin diseases from dermoscopic images. However, these models can only recognize samples from predefined categories; when they are deployed in the clinic, data from new unknown categories are constantly emerging. Therefore, it is crucial to automatically discover and identify new semantic categories from new data. In this paper, we propose a new novel class discovery framework for automatically discovering new semantic classes from dermoscopy image datasets based on the knowledge of known classes. Specifically, we first use contrastive learning to learn a robust and unbiased feature representation based on all data from known and unknown categories. We then propose an uncertainty-aware multi-view cross pseudo-supervision strategy, which is trained jointly on all categories of data using pseudo labels generated by a self-labeling strategy. Finally, we further refine the pseudo labels by aggregating neighborhood information through local sample similarity to improve the clustering performance of the model for unknown categories. We conducted extensive experiments on the dermatology dataset ISIC 2019, and the experimental results show that our approach can effectively leverage knowledge from known categories to discover new semantic categories. We also validated the effectiveness of the different modules through extensive ablation experiments. Our code will be released soon.

Keywords: Novel Class Discovery, Skin Lesion Recognition, Deep Learning.

## 1 Introduction

Automatic identification of lesions from dermoscopic images is of great importance for the diagnosis of skin cancer [22, 16]. Currently, deep learning models, especially those based on deep convolutional neural networks, have achieved remarkable success in this task [22, 17, 18]. However, this comes at the cost of a large amount of labeled data that needs to be collected for each class. To alleviate the labeling burden, semi-supervised learning has been proposed to exploit a large amount of unlabeled data to improve performance in the case of limited labeled data [15, 19, 10]. However, it still requires a small amount of labeled data for each class, which is often impossible in real practice. For example, there are more than 2000 named dermatological diseases today, of which more than 200 are common, and new dermatological diseases are still emerging, making it impractical to annotate data from scratch for each new disease category [20]. However, since there is a correlation between new and known diseases, a priori knowledge from known diseases is expected to help automatically identify new diseases [9]. One approach to address the above problem is novel class discovery (NCD) [9, 24, 7], which aims to transfer knowledge from known classes to discover new semantic classes. Most NCD methods follow a two-stage scheme: 1) a stage of fully supervised training on known category data and 2) a stage of clustering on unknown categories [9, 24, 7]. For example, Han et al. [9] further introduced self-supervised learning in the first stage to learn general feature representations. They also used ranking statistics to compute pairwise similarity for clustering. Zhong et al. [24] proposed OpenMix based on the mixup strategy [21] to further exploit the information from known classes to improve the performance of unsupervised clustering. Fini et al.
[7] proposed UNO, which unifies multiple objective functions into a holistic framework to achieve better interaction of information between known and unknown classes. Zhong et al. [23] used neighborhood information in the embedding space to learn more discriminative representations. However, most of these methods require the construction of a pairwise similarity prediction task to perform clustering based on pairwise-similarity pseudo labels between samples. In this process, the generated pseudo labels are usually noisy, which may affect the clustering process and cause error accumulation. In addition, they only consider the global alignment of samples to the category center, ignoring the local inter-sample alignment, thus leading to poor clustering performance.

In this paper, we propose a new novel class discovery framework to automatically discover novel disease categories. Specifically, we first use contrastive learning to pretrain the model based on all data from known and unknown categories to learn a robust and general semantic feature representation. Then, we propose an uncertainty-aware multi-view cross-pseudo-supervision strategy to perform clustering. It first uses a self-labeling strategy to generate pseudo labels for unknown categories, which can be treated homogeneously with ground-truth labels. The cross-pseudo-supervision strategy is then used to force the model to maintain consistent prediction outputs for different views of unlabeled images. In addition, we propose to use prediction uncertainty to adaptively adjust the contribution of the pseudo labels to mitigate the effects of noisy pseudo labels. Finally, to encourage local neighborhood alignment and further refine the pseudo labels, we propose a local information aggregation module to aggregate the information of the neighborhood samples to boost the clustering performance. We conducted extensive experiments on the dermoscopy dataset ISIC 2019, and the experimental results show that our method outperforms other state-of-the-art comparison algorithms by a large margin. In addition, we also validated the effectiveness of different components through extensive ablation experiments.

## 2 Methodology

Given an unlabeled dataset \(\{x_{i}^{u}\}_{i=1}^{N^{u}}\) with \(N^{u}\) images, where \(x_{i}^{u}\) is the \(i\)th unlabeled image, our goal is to automatically cluster the unlabeled data into \(C^{u}\) clusters. In addition, we also have access to a labeled dataset \(\{x_{i}^{l},y_{i}^{l}\}_{i=1}^{N^{l}}\) with \(N^{l}\) images, where \(x_{i}^{l}\) is the \(i\)th labeled image and \(y_{i}^{l}\in\mathcal{Y}=\{1,\ldots,C^{l}\}\) is its corresponding label. In the novel class discovery task, the known and unknown classes are disjoint, i.e., \(C^{l}\cap C^{u}=\varnothing\). However, the known and unknown classes are similar, and we aim to use the knowledge of the known classes to help the clustering of the unknown classes. The overall framework of our proposed novel class discovery algorithm is shown in Fig. 1. Specifically, we first learn general and robust feature representations through contrastive learning. Then, the uncertainty-aware multi-view cross-pseudo-supervision strategy is used for joint training on all category data. Finally, the local information aggregation module benefits the NCD by aggregating the useful information of the neighborhood samples.

Figure 1: The overall framework of our proposed novel class discovery algorithm.
#### 2.1 Contrastive Learning

To achieve a robust feature representation for the NCD task, we first use noise contrastive learning [8] to pretrain the feature extractor network, which effectively avoids the model over-fitting to known categories. Specifically, we use \(x_{i}\) and \(x_{i}^{\prime}\) to represent different augmented versions of the same image in a mini-batch. The unsupervised contrastive loss can be formulated as:

\[L_{i}^{ucl}=-\log\frac{\exp\left(z_{i}\cdot z_{i}^{\prime}/\tau\right)}{\sum_{n}\mathbb{1}_{\left[n\neq i\right]}\exp\left(z_{i}\cdot z_{n}/\tau\right)} \tag{1}\]

where \(z_{i}=E(x_{i})\) is the deep feature representation of the image \(x_{i}\), \(E\) is the feature extractor network, \(\tau\) is the temperature value, and \(\mathbb{1}\) is the indicator function. In addition, to help the feature extractor learn semantically meaningful feature representations, we introduce supervised contrastive learning [12] for the labeled known-category data, which can be denoted as:

\[L_{i}^{scl}=-\frac{1}{\left|N(i)\right|}\sum_{q\in N(i)}\log\frac{\exp\left(z_{i}\cdot z_{q}/\tau\right)}{\sum_{n}\mathbb{1}_{\left[n\neq i\right]}\exp\left(z_{i}\cdot z_{n}/\tau\right)} \tag{2}\]

where \(N(i)\) represents the set of samples with the same label as \(x_{i}\) in a mini-batch and \(\left|N(i)\right|\) represents the number of such samples. The overall contrastive loss can be expressed as \(L_{cl}=(1-\mu)\sum_{i\in B}L_{i}^{ucl}+\mu\sum_{i\in B_{l}}L_{i}^{scl}\), where \(\mu\) denotes the balance coefficient and \(B_{l}\) is the labeled subset of the mini-batch.
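For concreteness, the two contrastive objectives in Eqs. (1)-(2) can be sketched in a few lines of PyTorch, as below. This is a minimal illustration under simplifying assumptions (in particular, how negatives are drawn from the mini-batch), not the authors' released implementation.

```
import torch
import torch.nn.functional as F

def unsupervised_contrastive_loss(z, z_prime, tau=0.5):
    # Eq. (1), simplified: the positive for sample i is its other augmented
    # view; the remaining samples' views in the mini-batch act as negatives.
    z, z_prime = F.normalize(z, dim=1), F.normalize(z_prime, dim=1)
    logits = z @ z_prime.t() / tau                      # (B, B) similarities
    targets = torch.arange(z.size(0), device=z.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)

def supervised_contrastive_loss(z, labels, tau=0.5):
    # Eq. (2): every same-label sample in the batch is a positive for anchor i.
    z = F.normalize(z, dim=1)
    sim = z @ z.t() / tau
    not_self = ~torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    pos = (labels[:, None] == labels[None, :]) & not_self
    log_prob = sim - torch.logsumexp(
        sim.masked_fill(~not_self, float("-inf")), dim=1, keepdim=True)
    per_anchor = -(log_prob * pos).sum(1) / pos.sum(1).clamp(min=1)
    return per_anchor[pos.any(1)].mean()  # anchors with at least one positive

# Combined objective: L_cl = (1 - mu) * L_ucl + mu * L_scl, with mu = 0.5.
```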
#### 2.2 Uncertainty-aware multi-view cross-pseudo-supervision

We now describe how to train uniformly on known and unknown categories using the uncertainty-aware multi-view cross-pseudo-supervision strategy. Specifically, we construct two parallel classification models \(M_{1}\) and \(M_{2}\), each composed of a feature extractor and two category classification heads, initialized with different parameters. For an original image \(x_{i}\), we generate two augmented versions, \(x_{i}^{v1}\) and \(x_{i}^{v2}\). We then feed these two augmented images into \(M_{1}\) and \(M_{2}\) to obtain the predictions for \(x_{i}^{v1}\) and \(x_{i}^{v2}\):

\[p_{i,1}^{v1}=M_{1}(x_{i}^{v1}),p_{i,1}^{v2}=M_{1}(x_{i}^{v2}),p_{i,2}^{v1}=M_{2}(x_{i}^{v1}),p_{i,2}^{v2}=M_{2}(x_{i}^{v2}). \tag{3}\]

The prediction outputs are obtained by concatenating the outputs of the two classification heads and passing them through a softmax layer [7]. Then, we can compute the ensemble predicted outputs of \(M_{1}\) and \(M_{2}\): \(p_{i}^{M_{1}}=\left(p_{i,1}^{v1}+p_{i,1}^{v2}\right)/2\), \(p_{i}^{M_{2}}=\left(p_{i,2}^{v1}+p_{i,2}^{v2}\right)/2\). Next, we need to obtain training targets for all data. For an input image \(x_{i}\), if \(x_{i}\) is from a known category, we construct the training target as a one-hot vector, where the first \(C^{l}\) elements encode the ground-truth label and the last \(C^{u}\) elements are 0. If \(x_{i}\) is from an unknown category, we set the first \(C^{l}\) elements to 0 and use pseudo labels for the remaining \(C^{u}\) elements. We follow the self-labeling method in [1, 3] to generate pseudo labels. Specifically, the parameters in the unknown-category classification head can be viewed as prototypes of each category, and our training goal is to distribute a set of samples uniformly to each prototype while maximizing the similarity between samples and prototypes [1]. Let \(\mathbf{P}=\left[p_{1}^{u};\ldots;p_{B_{u}}^{u}\right]\in\mathbb{R}^{B_{u}\times C^{u}}\) denote the ensemble predictions for the unknown-category data in a mini-batch, where \(B_{u}\) represents the number of samples. Here we only consider the output of the unknown-categories head, since the samples come from unknown categories [7]. We obtain the pseudo labels by optimizing the following objective:

\[\max_{\mathbf{Y}\in\mathcal{S}}\operatorname{tr}\left(\mathbf{Y}\mathbf{P}^{\top}\right)+\delta H(\mathbf{Y}) \tag{4}\]

where \(\mathbf{Y}=\left[y_{1}^{u};\ldots;y_{B_{u}}^{u}\right]\in\mathbb{R}^{B_{u}\times C^{u}}\) assigns the \(B_{u}\) unknown-category samples to the \(C^{u}\) category prototypes uniformly, i.e., each category prototype is selected \(B_{u}/C^{u}\) times on average. \(\mathcal{S}\) is the search space, \(H\) is the entropy function used to control the smoothness of \(\mathbf{Y}\), and \(\delta\) is a hyperparameter. The solution to this objective can be calculated by the Sinkhorn-Knopp algorithm [6]. After generating the pseudo labels, we can combine them with the ground-truth labels of known categories as training targets for uniform training. To mitigate the effect of noisy pseudo labels, we propose to use prediction uncertainty [14] to adaptively adjust the weights of the pseudo labels. Specifically, we first compute the variance of the models' predicted outputs for the different augmented images via the KL-divergence:

\[V_{1}=E\left[p_{i,1}^{v1}\log\left(\frac{p_{i,1}^{v1}}{p_{i,1}^{v2}}\right)\right],V_{2}=E\left[p_{i,2}^{v1}\log\left(\frac{p_{i,2}^{v1}}{p_{i,2}^{v2}}\right)\right], \tag{5}\]

where \(E\) represents the expected value. If the variance of a model's predictions for different augmented images is large, the pseudo label may be of low quality, and vice versa. Then, based on the prediction variances of the two models, the multi-view cross-pseudo-supervision loss can be formulated as:

\[L_{cps}=E\left[e^{-V_{1}}L_{ce}\left(p^{M_{2}},y^{v1}\right)+V_{1}\right]+E\left[e^{-V_{2}}L_{ce}\left(p^{M_{1}},y^{v2}\right)+V_{2}\right] \tag{6}\]

where \(L_{ce}\) denotes the cross-entropy loss, and \(y^{v1}\) and \(y^{v2}\) are the training targets.

#### 2.3 Local information aggregation

After the cross-pseudo-supervision training described above, we are able to assign the instances to their corresponding clustering centers. However, this ignores the alignment between local neighborhood samples, i.e., the samples are susceptible to interference from irrelevant semantic factors such as background and color. Here, we propose a local information aggregation module to enhance the alignment of local samples. Specifically, as shown in Fig. 1, we maintain a first-in-first-out memory bank \(\mathcal{M}=\left\{z_{k}^{m},y_{k}^{m}\right\}_{k=1}^{N^{m}}\) during the training process, which contains the features of the \(N^{m}\) most recent samples and their pseudo labels. For each sample in the current batch, we compute the similarity between its features and the features of each sample in the memory bank:

\[d_{k}=\frac{\exp{(z\cdot z_{k}^{m})}}{\sum_{k=1}^{N^{m}}\exp{(z\cdot z_{k}^{m})}}. \tag{7}\]

Then, based on this feature similarity, we obtain the final pseudo labels as \(y^{u}=\rho y^{u}+(1-\rho)\sum_{k=1}^{N^{m}}d_{k}y_{k}^{m}\), where \(\rho\) is the balance coefficient. By aggregating the information of the neighborhood samples, we ensure consistency between local samples, which further improves the clustering performance.
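As a concrete sketch of how Eqs. (4)-(6) can be realised, the snippet below implements the entropic assignment of Eq. (4) with a standard Sinkhorn-Knopp iteration [6] and the uncertainty-gated loss of Eq. (6). The normalisation schedule and the numerical-stability constant are assumptions of this sketch rather than verified details of the paper's code.

```
import torch

@torch.no_grad()
def sinkhorn_pseudo_labels(P, delta=0.05, n_iters=3):
    # Approximate solution of Eq. (4): spread B_u samples uniformly over the
    # C_u prototypes while staying close to the predictions P of shape (B_u, C_u).
    Q = torch.exp(P / delta).t()                    # (C_u, B_u)
    Q = Q / Q.sum()
    C_u, B_u = Q.shape
    for _ in range(n_iters):
        Q = Q / (Q.sum(dim=1, keepdim=True) * C_u)  # equal prototype usage
        Q = Q / (Q.sum(dim=0, keepdim=True) * B_u)  # unit mass per sample
    return (Q * B_u).t()                            # each row: soft pseudo label

def uncertainty_weighted_cps(p1_v1, p1_v2, p2_v1, p2_v2, y_v1, y_v2, eps=1e-8):
    # Eqs. (5)-(6): the KL variance between a model's two views gates the
    # cross-entropy against the other model's ensemble prediction.
    kl = lambda p, q: (p * ((p + eps) / (q + eps)).log()).sum(dim=1)
    ce = lambda p, y: -(y * (p + eps).log()).sum(dim=1)
    V1, V2 = kl(p1_v1, p1_v2), kl(p2_v1, p2_v2)
    p_M1, p_M2 = (p1_v1 + p1_v2) / 2, (p2_v1 + p2_v2) / 2
    return (torch.exp(-V1) * ce(p_M2, y_v1) + V1).mean() \
         + (torch.exp(-V2) * ce(p_M1, y_v2) + V2).mean()
```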
## 3 Experiments

**Dataset.** To validate the effectiveness of the proposed algorithm, we conduct experiments on the widely used public dermoscopy challenge dataset ISIC 2019 [4, 5]. The dataset contains a total of 25,331 dermoscopic images from eight categories: Melanoma (MEL), Melanocytic Nevus (NV), Basal Cell Carcinoma (BCC), Actinic Keratosis (AK), Benign Keratosis (BKL), Dermatofibroma (DF), Vascular Lesion (VASC), and Squamous Cell Carcinoma (SCC). Since the dataset suffers from severe category imbalance, we randomly sampled 500 samples from the major categories (MEL, NV, BCC, BKL) to maintain category balance. Then, we construct an NCD task where we treat 50% of the categories (AK, MEL, NV, BCC) as known categories and the remaining 50% of the categories (BKL, SCC, DF, VASC) as unknown categories. We also swap the known and unknown categories to form a second NCD task. For Tasks 1 and 2, we report the average performance over 5 runs.

#### 3.1 Implementation details.

We used ResNet-18 [11] as the backbone of the classification model. The known-category classification head is an _l2_-normalized linear classifier with \(C^{l}\) output units. The unknown-category classification head consists of a projection layer with 128 output units, followed by an _l2_-normalized linear classifier with \(C^{u}\) output units. In the first contrastive learning pre-training step, we used the SGD optimizer to train the model for 200 epochs and gradually decayed the learning rate, starting from 0.1 and dividing it by 5 at epochs 60, 120, and 180. \(\mu\) is set to 0.5 and \(\tau\) is set to 0.5. In the joint training phase, we fix the parameters of the previously trained feature extractor and only fine-tune the parameters of the classification heads. We use the SGD optimizer to train the model for 200 epochs with linear warm-up and cosine annealing (\(lr_{\mathrm{base}}=0.1\), \(lr_{\mathrm{min}}=0.001\)), and the weight decay is set to \(1.5\times 10^{-4}\). For data augmentation, we use random horizontal/vertical flipping, color jitter, and Gaussian blurring, following [7]. For the pseudo labels, we use the Sinkhorn-Knopp algorithm with hyperparameters inherited from [7]: \(\delta=0.05\) and 3 iterations. We use a memory bank \(\mathcal{M}\) of size 100, and the hyperparameter \(\rho\) is set to 0.6. The batch size in all experiments is 512. In the inference phase, we only use the output of the unknown-category classification head of \(M_{1}\) [9]. Following [9, 23, 24], we report the clustering performance on the unlabeled unknown-category dataset. We assume that the number of unknown categories is known; it can also be obtained by the category-number estimation method proposed in [9]. Following [2, 9], we use the average clustering accuracy (ACC), normalized mutual information (NMI), and adjusted Rand index (ARI) to evaluate the clustering performance of different algorithms. Specifically, we first match the clustering assignment to the ground-truth labels with the Hungarian algorithm [13]. After the optimal assignment is determined, we then compute each metric. We implement all algorithms based on the PyTorch framework and conduct experiments on 8 RTX 3090 GPUs.
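For reference, ACC with Hungarian matching can be computed as in the short sketch below, using SciPy's `linear_sum_assignment` as the matching step; this is a standard implementation of the metric, not code from the paper.

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(y_true, y_pred):
    # Build the cluster-vs-class co-occurrence matrix, find the optimal
    # one-to-one assignment (Hungarian algorithm [13]), then score accuracy.
    n = max(y_true.max(), y_pred.max()) + 1
    cost = np.zeros((n, n), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cost[p, t] += 1
    rows, cols = linear_sum_assignment(-cost)   # maximise matched counts
    return cost[rows, cols].sum() / len(y_true)
```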
#### 3.2 Comparison with state-of-the-art methods.

We compare our algorithm with several state-of-the-art NCD methods, including RankStats [9], RankStats+ (RankStats with incremental learning) [9], OpenMix [24], NCL [23], and UNO [7]. We also compare with a benchmark method (Baseline), which first trains a model using known-category data and then performs clustering on unknown-category data. Table 1 shows the clustering performance of each comparison algorithm on the different NCD tasks. It can be seen that the clustering performance of the benchmark method is poor, which indicates that a model pre-trained using only the known-category data does not provide a good clustering of the unknown categories. Moreover, the state-of-the-art NCD methods improve the clustering performance, which demonstrates the effectiveness of the currently popular two-stage solution. However, our method outperforms them, mainly because they need to generate pairwise-similarity pseudo labels from features obtained by self-supervised learning while ignoring the effect of noisy pseudo labels. Compared with the best comparison algorithm, UNO, our method yields a 5.23% ACC improvement, 3.56% NMI improvement, and 2.55% ARI improvement on Task 1, and a 3.24% ACC improvement, 1.34% NMI improvement, and 2.37% ARI improvement on Task 2, which shows that our method is able to provide more reliable pseudo labels for NCD.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Task1} & \multicolumn{3}{c}{Task2} \\ \cline{2-7} & ACC & NMI & ARI & ACC & NMI & ARI \\ \hline Baseline & 0.4685 & 0.2107 & 0.1457 & 0.3899 & 0.0851 & 0.0522 \\ RankStats [9] & 0.5652 & 0.2571 & 0.2203 & 0.4284 & 0.1164 & 0.1023 \\ RankStats+ [9] & 0.5845 & 0.2633 & 0.2374 & 0.4362 & 0.1382 & 0.1184 \\ OpenMix [24] & 0.6083 & 0.2863 & 0.2512 & 0.4684 & 0.1519 & 0.1488 \\ NCL [23] & 0.5941 & 0.2802 & 0.2475 & 0.4762 & 0.1635 & 0.1573 \\ UNO [7] & 0.6131 & 0.3016 & 0.2763 & 0.4947 & 0.1692 & 0.1796 \\ Ours & **0.6654** & **0.3372** & **0.3018** & **0.5271** & **0.1826** & **0.2033** \\ \hline \hline \end{tabular} \end{table} Table 1: Clustering performance of different comparison algorithms on different tasks.

#### 3.3 Ablation study of each key component.

We performed ablation experiments to verify the effectiveness of each component. As shown in Table 2, CL is contrastive learning, UMCPS is uncertainty-aware multi-view cross-pseudo-supervision, and LIA is the local information aggregation module. It can be observed that CL brings a significant performance gain, which indicates that contrastive learning helps to learn a general and robust feature representation for NCD. In addition, UMCPS also improves the clustering performance of the model, which indicates that unified training helps the interaction of category information. LIA further improves the clustering performance, which indicates that local information aggregation helps to provide better pseudo labels. Finally, our algorithm incorporates every component to achieve the best performance.

#### 3.4 Ablation study of contrastive learning.

We further examined the effectiveness of each component in contrastive learning. Recall that the contrastive learning strategy includes supervised contrastive learning (SCL) for the labeled known-category data and unsupervised contrastive learning (UCL) for all data. As shown in Table 3, it can be observed that both components improve the clustering performance of the model, which indicates that SCL helps the model learn semantically meaningful feature representations, while UCL makes the model learn robust, unbiased feature representations and avoids over-fitting to known categories.
#### 3.5 Uncertainty-aware multi-view cross-pseudo-supervision.

We also examine the effectiveness of uncertainty-aware multi-view cross-pseudo-supervision. We compare it with 1) w/o CPS, which does not use cross-pseudo-supervision, and 2) CPS, which uses cross-pseudo-supervision but does not use uncertainty to control the contribution of the pseudo labels. As shown in Table 3, CPS outperforms w/o CPS, which indicates that CPS encourages the model to maintain consistent predictions for different augmented versions of the input images and enhances the generalization performance of the model. UMCPS achieves the best clustering performance, which shows its ability to use uncertainty to alleviate the effect of noisy pseudo labels and avoid error accumulation.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multicolumn{3}{c}{Method} & \multicolumn{3}{c}{Task1} & \multicolumn{3}{c}{Task2} \\ \cline{4-9} CL & UMCPS & LIA & ACC & NMI & ARI & ACC & NMI & ARI \\ \hline ✗ & ✗ & ✗ & 0.4685 & 0.2107 & 0.1457 & 0.3899 & 0.0851 & 0.0522 \\ ✓ & & & 0.5898 & 0.2701 & 0.2375 & 0.4402 & 0.1465 & 0.1322 \\ ✓ & ✓ & & 0.6471 & 0.3183 & 0.2821 & 0.5012 & 0.1732 & 0.1851 \\ ✓ & & ✓ & 0.6255 & 0.3122 & 0.2799 & 0.4893 & 0.1688 & 0.1781 \\ ✓ & ✓ & ✓ & **0.6654** & **0.3372** & **0.3018** & **0.5271** & **0.1826** & **0.2033** \\ \hline \hline \end{tabular} \end{table} Table 2: Ablation study of each key component.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{3}{c}{Task1} & \multicolumn{3}{c}{Task2} \\ \cline{2-7} & ACC & NMI & ARI & ACC & NMI & ARI \\ \hline Baseline & 0.4685 & 0.2107 & 0.1457 & 0.3899 & 0.0851 & 0.0522 \\ SCL & 0.5381 & 0.2362 & 0.1988 & 0.4092 & 0.1121 & 0.1003 \\ UCL & 0.5492 & 0.2482 & 0.2151 & 0.4291 & 0.1173 & 0.1174 \\ SCL+UCL & **0.5898** & **0.2701** & **0.2375** & **0.4402** & **0.1465** & **0.1322** \\ \hline w/o CPS & 0.6021 & 0.2877 & 0.2688 & 0.4828 & 0.1672 & 0.1629 \\ CPS & 0.6426 & 0.3201 & 0.2917 & 0.5082 & 0.1703 & 0.1902 \\ UMCPS & **0.6654** & **0.3372** & **0.3018** & **0.5271** & **0.1826** & **0.2033** \\ \hline \hline \end{tabular} \end{table} Table 3: Ablation study of contrastive learning and uncertainty-aware multi-view cross-pseudo-supervision.

## 4 Conclusion

In this paper, we propose a novel class discovery framework for discovering new dermatological classes. Our approach consists of three key designs. First, contrastive learning is used to learn a robust feature representation. Second, an uncertainty-aware multi-view cross-pseudo-supervision strategy is trained uniformly on data from all categories, while prediction uncertainty is used to alleviate the effect of noisy pseudo labels. Finally, a local information aggregation module further refines the pseudo labels by aggregating neighborhood information to improve the clustering performance. Extensive experimental results validate the effectiveness of our approach. Future work will apply this framework to other medical image analysis tasks.
2309.16337
Logarithm-transform aided Gaussian Sampling for Few-Shot Learning
Few-shot image classification has recently witnessed the rise of representation learning being utilised for models to adapt to new classes using only a few training examples. Therefore, the properties of the representations, such as their underlying probability distributions, assume vital importance. Representations sampled from Gaussian distributions have been used in recent works [19] to train classifiers for few-shot classification. These methods rely on transforming the distributions of experimental data to approximate Gaussian distributions for their functioning. In this paper, I propose a novel Gaussian transform that outperforms existing methods at transforming experimental data into Gaussian-like distributions. I then utilise this novel transformation for few-shot image classification and show significant gains in performance, while sampling less data.
Vaibhav Ganatra
2023-09-28T10:50:32Z
http://arxiv.org/abs/2309.16337v1
# Logarithm-transform aided Gaussian Sampling for Few-Shot Learning

###### Abstract

Few-shot image classification has recently witnessed the rise of representation learning being utilised for models to adapt to new classes using only a few training examples. Therefore, the properties of the representations, such as their underlying probability distributions, assume vital importance. Representations sampled from Gaussian distributions have been used in recent works [19] to train classifiers for few-shot classification. These methods rely on transforming the distributions of experimental data to approximate Gaussian distributions for their functioning. In this paper, I propose a novel Gaussian transform that outperforms existing methods at transforming experimental data into Gaussian-like distributions. I then utilise this novel transformation for few-shot image classification and show significant gains in performance, while sampling less data.

## 1 Introduction

Learning from limited data, and adapting neural networks to unforeseen tasks, has attracted significant attention in recent years. This is essential since the development of large datasets for supervised training incurs significant costs in terms of finances and human effort. Considerable progress has been made in machine learning in the limited-data regime. After the development of sophisticated optimization-based meta-learning techniques such as Model-Agnostic Meta-Learning [5], and metric-based meta-learning techniques such as ProtoNets and MatchingNets [13, 17], there has been a recent shift towards representation learning for few-shot learning [14, 3, 15, 9]. For example, Tian _et al._ [14] leverage self-supervision and regularization for learning meaningful representations, and state that "a good embedding is all you need" for few-shot image classification. They propose a meta-free model as a new state of the art for few-shot image classification. Therefore, studying the properties (such as the underlying probability distributions) of the learned representations is a useful pursuit in decoding their usefulness in few-shot learning. Gaussian representations play an essential role in few-shot learning. For example, instead of using point prototypes for few-shot image classification, Lin _et al._ [8] propose modelling prototypes as multi-dimensional Gaussian distributions, and rectify these prototypes using mutual information maximization. Yang _et al._ [19] propose a distribution calibration mechanism to calibrate the representations of few-shot data and train classifiers by sampling Gaussian data around the calibrated representations. They make use of Tukey's Ladder of Powers [1] to transform the data so that the underlying distribution of the transformed data is approximately Gaussian. In this paper, I propose a novel "data-to-Gaussian" transform that "Gaussianizes" data (Gaussianization of data, or inducing "normality" in data, refers to transforming the data so that its underlying distribution is approximately normal). I demonstrate the utility of the proposed transformation method by transforming a variety of data distributions (including data from experimental datasets) and show, through qualitative and quantitative evaluation, that the transform produces data following a distribution that is closer to a Gaussian distribution.
Finally, I replace the distribution calibration mechanism proposed by Yang _et al._ [19] with the proposed transformation method and show small yet consistent improvements in the classification accuracies, while sampling a significantly smaller amount of data. Therefore, in this paper, I make the following contributions:

* I propose a novel transformation (called the Log-Tukey transformation) to induce "Gaussianization" within experimental data.
* I replace the distribution calibration mechanism proposed by Yang _et al._ [19] with the novel transform and devise an alternative algorithm for few-shot learning.
* I perform experiments on commonly known benchmark datasets, and show significant performance gains while sampling 5x fewer datapoints.

The rest of the paper is organized as follows: Sec. 2 details the related work, both in terms of making data more Gaussian and representation learning for few-shot learning. Sec. 3 explains the Log-Tukey transform for Gaussianization of data. In Sec. 4, I first briefly discuss the distribution calibration method proposed by Yang _et al._ [19], since the proposed work is heavily based on it; next, I detail how I incorporate the Log-Tukey transform into the algorithmic setup proposed by them. Sec. 5 shows experimental results of how the method proposed in Sec. 4 outperforms the baseline, and Sec. 6 concludes the paper.

## 2 Related Work

**Making data Gaussian-like** - A significant amount of prior work exists on transforming experimental data such that it follows Gaussian-like distributions, because of the usefulness of Gaussian data. Tukey's Ladder of Powers [1] is one of the most prominent methods available for this task. Other transforms, such as log transforms and inverse transforms, are also used for this purpose. In addition to these, the Box-Cox transform [2] is another useful method that is widely used for transforming data and making it more Gaussian-like, but it can only be used for positive values. The Yeo-Johnson transform [20] is a modification of the Box-Cox transform and can also be used for negative values.

**Representation Learning for Few-Shot Learning** - Tian _et al._ [14] debunk the need for complicated meta-learning methods for few-shot learning, and emphasize the utility of representations learned through a proxy task such as image classification. Mangla _et al._ [10] make use of self-supervision and regularization to learn meaningful structures in the representations of data. Luo _et al._ [9] approach few-shot learning by learning global representations, whereas Tokmakov _et al._ [15] learn compositional representations for few-shot learning.

## 3 Log-Tukey Transform

As stated earlier, "Gaussianization" of data plays an essential role in few-shot learning. Hence, multiple techniques exist to transform data such that it follows an approximately Gaussian distribution. Tukey's Ladder of Powers transform [1], as shown in Eq. 1, is by far one of the most popular methods used to Gaussianize data.

\[\hat{x}=\begin{cases}x^{\lambda},&\text{if }\lambda\neq 0\\ \log(x),&\text{if }\lambda=0\end{cases} \tag{1}\]

Commonly, the value of \(\lambda\) used is 0.5. However, only using exponents of the data is not immune to data skew and does not ensure maximum "normality"/"Gaussianization" in the data. Fig. 1 shows the probability distribution function of a data sample drawn from an Exponential(0.5) distribution after transforming it using Tukey's Square Root transform, together with that of the corresponding Gaussian distribution.
The peaks of the Tukey-transformed data distribution and the corresponding Gaussian distribution are misaligned (the peak of the Tukey-transformed data is to the left of the peak of the Gaussian distribution). This happens because the exponential distribution is positively skewed and the Tukey transform is unable to sufficiently shrink the large values; consequently, the distribution of the transformed data is still positively skewed (Gaussian distributions do not have any skew). This is a common problem with the method; therefore, maximum "normality" is not ensured in the transformed data. A transform that is used to Gaussianize data must significantly reduce or remove the skew in a skewed distribution. Skew (and long tails) in the data is often overcome by using the logarithm function [7]. Consequently, in this paper, I introduce the Log-Tukey transform, which makes use of the logarithm along with Tukey's Square Root transform, as shown in Eq. 2.

\[\hat{x}=\log(\sqrt{x}+\epsilon+1) \tag{2}\]

where \(\epsilon\) is a small value to prevent the transformation from zeroing out the input. In the experiments, a value of \(10^{-4}\) is used for \(\epsilon\). The \(+1\) is added to ensure that the resulting values after the transform are positive. The Log-Tukey transform removes the data skew and shrinks the data in a way that the underlying distribution is much closer to a Gaussian distribution with the same mean and variance, as shown in Fig. 2 (the peaks are horizontally aligned, and the edges lie very close).

Figure 1: KDE plot (probability distribution function) of Tukey-transformed data sampled from an Exponential(0.5) distribution. The "Corresponding normal distribution" shown in the figure is a Gaussian distribution with the same mean and variance as the Tukey-transformed data. Due to skew in the data, the transformed data is somewhat different from the corresponding normal distribution.

A quantitative comparison of the various "Gaussianization" methods is carried out, where the "closeness" of the distributions is estimated in terms of the Wasserstein distance [16], which can be interpreted as the minimum cost of transforming one of the distributions into the other. Here, I also consider the Box-Cox [2] and Yeo-Johnson [20] transforms, which are commonly used for Gaussianization of data. As is evident from Table 1, the Log-Tukey transformation has the lowest Wasserstein distance in all cases, thereby showing the enhanced ability of the transformation in inducing "normality" within the data (the table also includes data from the Iris dataset [6]). In deep learning, this ability is crucial in places where we do not know the ground-truth distributions of data/weights, but assume that the underlying distribution is Gaussian. An example of this is shown in Sec. 4, where we sample data for few-shot learning in the representation space, assuming it follows a multivariate Gaussian distribution.
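The transform and the Wasserstein-distance check behind Table 1 are straightforward to reproduce in outline. The snippet below is a minimal NumPy/SciPy sketch: the sample size, random seed, and the matched-moment Gaussian reference are assumptions for illustration.

```
import numpy as np
from scipy.stats import wasserstein_distance

def log_tukey(x, eps=1e-4):
    # Eq. (2): log(sqrt(x) + eps + 1); assumes non-negative inputs.
    return np.log(np.sqrt(x) + eps + 1.0)

rng = np.random.default_rng(0)
x = rng.exponential(scale=0.5, size=100_000)            # Exponential(0.5), mean 0.5
for name, t in [("Tukey sqrt", np.sqrt(x)), ("Log-Tukey", log_tukey(x))]:
    gauss = rng.normal(t.mean(), t.std(), size=t.size)  # same mean and variance
    print(f"{name}: W = {wasserstein_distance(t, gauss):.4f}")
```

With the Log-Tukey transform, the distance to the matched-moment Gaussian should come out noticeably smaller than for the square-root transform alone, mirroring the trend reported in Table 1.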
## 4 Gaussian Sampling for Few-Shot Classification

In this section, I make use of the Log-Tukey transform for few-shot image classification.

### Problem Setting

The typical few-shot classification problem formulation is adopted: we have a labelled dataset \(D=\{(x^{i},y^{i})|1\leq i\leq T\}\), where \(x^{i}\) is a data sample, \(y^{i}\) is the corresponding label, and \(T\) denotes the size of the dataset. Each datapoint in the dataset is labelled as one of \(|C|\) classes, where \(C\) denotes the set of all classes. \(C\) is partitioned into base classes \(C_{b}\) and novel classes \(C_{n}\), such that \(C_{b}\cap C_{n}=\varnothing\) and \(C_{b}\cup C_{n}=C\). The few-shot classification model is trained on the base classes, and the goal is to train the model in such a manner that it is able to adapt well to the novel classes, using only a few examples. Typically, abundant data is available for the base classes but only a few samples are available for the novel classes. The generalization ability of the model is evaluated in terms of accuracy on N-way-K-shot tasks [17], where each task consists of \(N\) novel classes sampled from \(C_{n}\) and \(K\) labeled samples from each of the \(N\) classes. This few-shot labeled set available for model adaptation is called the support set. The performance of the model is evaluated on the query set \(Q\), which consists of \(q\) test cases from each of the \(N\) classes: \(Q=\{(x^{i},y^{i})\}_{i=N\cdot K+1}^{N\cdot K+N\cdot q}\). Therefore, the average accuracy of the model on multiple N-way-K-shot tasks is used as an estimate of the model performance.

### Distribution Calibration [19]

In accordance with the growing interest in using effective representations for few-shot learning, Yang _et al._ [19] propose a distribution calibration mechanism which uses statistics of the well-separated base classes to calibrate the representations of novel classes, and makes them separable in a few-shot setting, without over-fitting. They utilize the model proposed by Mangla _et al._ [10], trained on the base classes, as a feature extractor \(F\). After training, they record the classwise statistics - means \(\mu_{base}=\{\mu_{i}|1\leq i\leq|C_{b}|\}\) and covariances \(\Sigma_{base}=\{\Sigma_{i}|1\leq i\leq|C_{b}|\}\) - for the base classes. For each few-shot task, they calibrate the Tukey-transformed representation (obtained by applying \(F\) to the input image, followed by transformation using Eq. 1) of each image, using the base-class statistics \(\mu_{base}\) and \(\Sigma_{base}\). Finally, they sample data around the calibrated representations,
and train a linear classifier on data sampled around all points in the support set. They show that logistic regression/SVM classifiers trained on the calibrated representations and sampled data outperform sophisticated optimization-, metric-, and generation-based meta-learning methods. One point that must be noted is that they sample data around each point in the support set; therefore, if \(p\) points are sampled for each image in an N-way-K-shot task, a total of \(N\times K\times p\) points are sampled. Algorithm 1 outlines the distribution calibration mechanism proposed by Yang _et al._ [19].

```
Require: Support features \(S=\{(x,y)\}_{i=1}^{N\times K}\)
Require: Base class statistics \(\mu_{base}\), \(\Sigma_{base}\)
1: Transform \(S\) with the Tukey transform (Eq. 1)
2: for \((x_{i},y_{i})\in S\) do
3:   Obtain calibrated mean \(x_{i}^{\prime}\) and covariance \(\Sigma_{i}^{\prime}\) for \(x_{i}\) using the method proposed by Yang et al. [19]
4:   Sample multivariate Gaussian data using \(x_{i}^{\prime}\) and \(\Sigma_{i}^{\prime}\), and label the samples as \(y_{i}\)
5: end for
6: Train a linear classifier on the sampled + support set features
```
**Algorithm 1** Algorithm for training a Few-Shot classifier using Distribution Calibration

### Gaussian Sampling

I build on the work of Yang _et al._ [19], since they utilize Tukey's Square Root transform. The experimental setting is a 5-way-5-shot image classification task. As is shown in Sec. 3, the Tukey transform alone is not optimal in inducing "normality" into the data. The sampled Gaussian data, therefore, is not as close as possible to the ground-truth representations of the novel classes, and there is room for further correction. Hence, I replace the Tukey-transform and distribution calibration steps with the Log-Tukey transformation on the representations of the novel classes, in an attempt to make them more "Gaussian", following which classwise means and covariances are calculated using the images from the support set. Next, data is sampled for each class around the mean and a linear classifier is trained on the sampled data. Therefore, if \(p\) points are sampled around each mean, a total of \(N\times p\) datapoints are sampled, \(p\) datapoints for each of the \(N\) classes. In an N-way-K-shot setting, this is \(K\) times fewer than for the distribution calibration proposed by Yang _et al._ [19]. Since a 5-shot setting has been adopted, 5x less data is sampled. Thus, applying the Log-Tukey transform ensures that the data is more "Gaussian", so sampling Gaussian data around the class mean generates more accurate representations of the novel class. This "Gaussian Sampling" of data from each class aids few-shot learning and delivers a small yet consistent improvement in performance (as shown in Sec. 5.3), while sampling 5x fewer datapoints, thereby decreasing computation. Algorithm 2 shows the overall algorithm for training a few-shot classifier using Gaussian Sampling.

```
Require: Support features \(S=\{(x,y)\}_{i=1}^{N\times K}\)
1: Transform \(S\) with the Log-Tukey transform (Eq. 2)
2: Calculate classwise means \(\mu\) and covariances \(\Sigma\) from the transformed features
3: Sample multivariate Gaussian data using the \(\mu\)'s and \(\Sigma\)'s for each class, and label the samples with the corresponding class label
4: Train a linear classifier on the sampled + support set features
```
**Algorithm 2** Algorithm for training a Few-Shot classifier using Gaussian Sampling
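A compact sketch of Algorithm 2 is given below, with scikit-learn's logistic regression as the linear classifier. The number of sampled points `p` (here 150, so that \(N\times p=750\) matches the totals reported in Sec. 5), the random seed, and the small ridge term added to the covariance for numerical stability are illustrative assumptions, not details fixed by the paper.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaussian_sampling_classifier(feats, labels, p=150, ridge=0.1, seed=0):
    # feats: (N*K, D) Log-Tukey-transformed support features; labels: (N*K,).
    rng = np.random.default_rng(seed)
    xs, ys = [feats], [labels]
    for c in np.unique(labels):
        fc = feats[labels == c]
        mu = fc.mean(axis=0)
        # Ridge term keeps the K-shot covariance estimate well-conditioned.
        cov = np.cov(fc, rowvar=False) + ridge * np.eye(fc.shape[1])
        xs.append(rng.multivariate_normal(mu, cov, size=p))  # p samples per class
        ys.append(np.full(p, c))
    X, y = np.vstack(xs), np.concatenate(ys)  # N*p sampled points + support set
    return LogisticRegression(max_iter=1000).fit(X, y)
```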
## 5 Experiments

### Datasets

I validate the Gaussian Sampling method on the miniImageNet [11] and CUB [18] datasets. A variety of classes, including various animals and objects, can be found in the miniImageNet dataset, while CUB is a more fine-grained dataset that includes various species of birds. Datasets with different levels of granularity may have different distributions in their feature space; I show the validity of the sampling mechanism on both datasets.

**miniImageNet** is derived from the ILSVRC-12 dataset [12]. It has 100 diverse classes with 600 images per class, each of size 84 x 84 x 3. The data split used in the following experiments is as proposed by Ravi _et al._ [11], with 64 base classes, 16 validation classes, and 20 novel classes.

**CUB** is a fine-grained few-shot classification benchmark. It has a total of 11,788 images, each of size 84 x 84 x 3, of 200 different classes of birds. The dataset is split into 100 base classes, 50 validation classes, and 50 novel classes, following Chen _et al._ [4].

### Implementation and Metrics

I follow the implementation provided by Yang _et al._ [19]. I use the method proposed by Mangla _et al._ [10] as the feature extractor, trained on the base classes, and evaluate its performance on the novel classes. I adopt the 5-way-5-shot setting for the experiments. The values of the hyper-parameters are as per the implementation provided by Yang _et al._ [19]. I evaluate the performance in terms of classification accuracy on the query set. This is evaluated over a total of 500 tasks, in 5 runs of 100 tasks each. Since the distribution calibration proposed by Yang _et al._ [19] already surpasses the performance of other existing methods, and since the proposed method is heavily based on it, I only compare the proposed method against theirs [19].1 I show the improvement in performance across multiple runs for both the miniImageNet and CUB datasets.

Footnote 1: The code for the experiments is publicly available at [https://github.com/ganatra-v/gaussian-sampling-fsl](https://github.com/ganatra-v/gaussian-sampling-fsl)

### Results

Table 2 shows the accuracies of the distribution calibration mechanism [19] and the Gaussian Sampling (ours) mechanism for the miniImageNet dataset. In all trials, the average accuracy across tasks is better with Gaussian Sampling than without. Similar results are observed for the CUB dataset, as seen in Table 3. From the tables, it is clear that with the addition of a single log function, the accuracy can improve by ~0.5%. This is a significant gain while using 5x less sampled data. All the results shown have been obtained after sampling 750 datapoints in total for the Gaussian Sampling method, and 750 x 5 = 3750 datapoints for the distribution calibration method. I also examine the effect of Gaussian Sampling as the number of datapoints sampled per point in the support set varies. Fig. 3 shows the variation in average task accuracy for the miniImageNet dataset. Although the difference in accuracy is small, Gaussian Sampling consistently outperforms distribution calibration [19]. Fig.
4 shows a similar trend in the variation of average accuracy for the CUB dataset with different amounts of sampled data.

## 6 Conclusion

In this paper, I propose a new method to induce "normality" within experimental data, called the Log-Tukey transform. By transforming data sampled from various distributions, I show the effectiveness of this transform in making data more Gaussian-like as compared to existing methods. Further, I employ this transform to sample Gaussian representations in a few-shot learning method, and show significant incremental gains while reducing the amount of computation. I also demonstrate the generality and utility of the method by conducting experiments on datasets of varied granularity, and with different amounts of sampled data. A gain of ~0.5% in accuracy from the addition of a single logarithm function seems like a good bargain. A possible direction for future work would be to examine the effectiveness of the Log-Tukey transform in other scenarios where Gaussian priors and sampling multivariate Gaussians from experimental data are involved!

\begin{table} \begin{tabular}{|l|r|r|r|} \hline Trial & Without GS (\%) & With GS (\%) & Difference (\%) \\ \hline 1 & 82.733 & 83.053 & 0.320 \\ \hline 2 & 83.053 & 84.067 & 1.014 \\ \hline 3 & 84.067 & 84.177 & 0.11 \\ \hline 4 & 83.987 & 84.24 & 0.253 \\ \hline 5 & 84.187 & 84.973 & 0.786 \\ \hline Avg & 83.605 & 84.102 & 0.497 \\ \hline \end{tabular} \end{table} Table 2: Accuracy on the query set of the miniImageNet dataset. Each trial consists of 100 tasks, run with and without Gaussian Sampling (GS).

Figure 4: Variation in accuracy with the number of datapoints sampled per point in the support set, with/without Gaussian Sampling, for the CUB dataset. The number of datapoints sampled for the plot without Gaussian Sampling is 5x the value on the x-axis.

Figure 3: Variation in accuracy with the number of datapoints sampled per point in the support set, with/without Gaussian Sampling, for the miniImageNet dataset. The number of datapoints sampled for the plot without Gaussian Sampling is 5x the value on the x-axis.

\begin{table} \begin{tabular}{|l|r|r|r|} \hline Trial & Without GS (\%) & With GS (\%) & Difference (\%) \\ \hline 1 & 90.747 & 91.093 & 0.346 \\ \hline 2 & 91.747 & 92.4 & 0.653 \\ \hline 3 & 91.747 & 92.213 & 0.466 \\ \hline 4 & 90.307 & 90.88 & 0.573 \\ \hline 5 & 90.307 & 90.813 & 0.506 \\ \hline Avg & 90.9707 & 91.48 & 0.509 \\ \hline \end{tabular} \end{table} Table 3: Accuracy on the query set of the CUB dataset. Each trial consists of 100 tasks, run with and without Gaussian Sampling (GS).
2307.16843
Identification of Driving Heterogeneity using Action-chains
Current approaches to identifying driving heterogeneity face challenges in capturing the diversity of driving characteristics and understanding the fundamental patterns from a driving behaviour mechanism standpoint. This study introduces a comprehensive framework for identifying driving heterogeneity from an Action-chain perspective. First, a rule-based segmentation technique that considers the physical meanings of driving behaviour is proposed. Next, an Action phase Library including descriptions of various driving behaviour patterns is created based on the segmentation findings. The Action-chain concept is then introduced by implementing Action phase transition probability, followed by a method for evaluating driving heterogeneity. Employing real-world datasets for evaluation, our approach effectively identifies driving heterogeneity for both individual drivers and traffic flow while providing clear interpretations. These insights can aid the development of accurate driving behaviour theory and traffic flow models, ultimately benefiting traffic performance and potentially leading to improved road capacity and safety.
Xue Yao, Simeon C. Calvert, Serge P. Hoogendoorn
2023-07-31T17:04:39Z
http://arxiv.org/abs/2307.16843v1
# Identification of Driving Heterogeneity using Action-chains

###### Abstract

Current approaches to identifying driving heterogeneity face challenges in capturing the diversity of driving characteristics and understanding the fundamental patterns from a driving behaviour mechanism standpoint. This study introduces a comprehensive framework for identifying driving heterogeneity from an _Action-chain_ perspective. First, a rule-based segmentation technique that considers the physical meanings of driving behaviour is proposed. Next, an _Action phase_ Library including descriptions of various driving behaviour patterns is created based on the segmentation findings. The _Action-chain_ concept is then introduced by implementing _Action phase_ transition probability, followed by a method for evaluating driving heterogeneity. Employing real-world datasets for evaluation, our approach effectively identifies driving heterogeneity for both individual drivers and traffic flow while providing clear interpretations. These insights can aid the development of accurate driving behaviour theory and traffic flow models, ultimately benefiting traffic performance and potentially leading to improved road capacity and safety.

## I Introduction

Driving behaviour plays a pivotal role in determining vehicle motion, substantially affecting traffic flow, fuel consumption, and emissions. It is widely acknowledged that driving heterogeneity, defined as the difference between the driving behaviours of driver/vehicle combinations under comparable conditions [1], does exist. Research has shown that this heterogeneity contributes to increased traffic accidents and congestion [2]. Additionally, in mixed automated-human traffic, accurate descriptions and predictions of human-driven vehicle (HDV) behaviour are crucial for the decision-making and control of connected and automated vehicles (CAVs). These facts underline the necessity of a better understanding and identification of the heterogeneity in human driving. It is well established that driving heterogeneity encompasses both intra-driver heterogeneity, i.e., variability independent of the driver, and inter-driver heterogeneity, which involves differences in driving behaviour among drivers [1, 3]. However, directly measuring or detecting driving heterogeneity is challenging due to its reliance on human cognitive and physiological processes. With the increasing availability of naturalistic driving data, various efforts have been made to comprehensively and quantitatively analyse driving heterogeneity. The identification of driving heterogeneity from observed driving behaviour is typically approached in two ways [4]: 1) employing techniques to characterise driving behaviour by inferring driving profiles from distinct driving events, and 2) analysing driving behaviour without explicitly creating driving behaviour profiles. The former approach addresses the identification of driving heterogeneity as a classification or clustering problem, resulting in categorical output with discrete scales or numerical output with continuous scores. For example, Hoogendoorn et al. [5] developed a method to categorise driver states into low, medium, and high workload categories. Alternatively, clustering techniques have been employed to define a few driving style groups such as aggressive, normal, and mild [2]. However, due to the stochastic and uncertain nature of driving behaviour, these limited groups are insufficient for capturing the diverse characteristics of driving behaviour.
Additionally, the criteria used to define these groups are somewhat ambiguous and subjective, posing challenges in effectively eliminating individual biases. In contrast to employing subjectively defined classes, some research has focused on identifying driving heterogeneity by presenting a driving-style space containing a vast array of categories without explicitly establishing driving behaviour profiles. For example, Qi et al. distinguished driving styles based on a space that included over 20 different types [6]. Another study converted car-following sequences into a comprehensive array of primitive driving patterns, and the distributions of these patterns were then utilised to analyse individual driving styles [7]. This approach allows for the recognition of a greater degree of variability in driving behaviour by encompassing various driving characteristics. However, this broader categorisation of driving heterogeneity may lead to reduced clarity about the fundamental driving behaviours and a limited understanding of driving heterogeneity. Consequently, further research in this area is necessary to address these challenges.

To bridge these research gaps, a novel framework is proposed to identify heterogeneity in longitudinal driving behaviour from an _Action-chain_ perspective. An _Action-chain_ is defined as a series of _Action phases_ over time. The contributions of this research are two-fold: i) a rule-based segmentation technique is presented to divide driving trajectories, considering the clear physical meanings of driving behaviour; ii) the concepts of _Action phase_ and _Action-chain_ are introduced for the first time to interpret driving behaviours, based on which a method for evaluating driving heterogeneity is proposed. The effectiveness of the framework was evaluated using real-world datasets, and the results demonstrate that the proposed methods can effectively identify driving heterogeneity at both the individual-driver and traffic-flow levels, providing clear interpretations. This approach offers valuable insights into understanding driving behaviour by uncovering underlying heterogeneity, which supports the development of accurate and robust driving behaviour and traffic flow models.

## II Framework description

### _Defining Action Phase and Action-chain_

The concept of "action points", which refers to specific moments of change in acceleration during driving [8], serves as the foundation for introducing the concept of the _action trend_ in this study. While action points capture acceleration or deceleration, they do not fully capture the complexity of driving behaviours. To overcome this limitation, we further propose the concept of the _Action phase_, which expands the scope by incorporating additional variables to provide more comprehensive information about driving behaviour. By examining a univariate trajectory of driving behaviour, illustrated by the example of velocity (\(v\)) in Figure 1, distinct states can clearly be observed. Some trajectories exhibit upward trends, others display downward trends, while some maintain a relatively stable range of fluctuations, which can be considered a keeping trend. We refer to these periods of differing tendencies in driving behaviour as _action trends_, which are delimited by turning points. Specifically, _action trends_ are classified as "Increasing (I)", "Decreasing (D)", or "Stable (S)". To further refine the "Stable" trend, it is categorised as "Stable at a high value (H)" or "Stable at a low value (L)".
Thus, the _action trend_ space can be represented as \(S=\{I,D,H,L\}\), and the driving trajectory shown in Figure 1 can be expressed as \(S_{v}=\{D,L,I,D,L\}\). It is worth noting that while driving behaviour variables often exhibit synchronisation, our definition of _action trends_ allows for variations in the temporal changes of different variables. For example, when the velocity state is "Increasing", the acceleration state can be "Increasing", "Decreasing", "Stable", or a combination of these. Consequently, the definition of _action trends_ can be extended to other driving behaviour variables, such as acceleration and space headway. Thereafter, the concept of the _Action phase_ is proposed by encompassing multiple variables, and each _Action phase_ label consists of multiple _action trend_ names. These _action trend_ names are estimated using uniform criteria derived from the group level of drivers in a certain traffic flow. To account for the inherent sequential nature of driving behaviour, it is essential to consider the temporal dependencies between _Action phases_. Hence, the concept of an _Action-chain_ is introduced to represent a sequence of _Action phases_ and their relations. The behaviour of a vehicle over time may consist of one or more _Action-chains_, each corresponding to different responses to the environment. With the _Action-chain_ structure, driving behaviour over time can be characterised, which provides valuable insights into the underlying patterns and heterogeneity of driving behaviour.

### _Introducing the Novel Framework_

The proposed framework for identifying driving heterogeneity aims to estimate frame-wise driving trajectories and identify driving heterogeneity within specific traffic flow conditions. The entire procedure is illustrated in Figure 2, consisting of five main steps: Data Preparation, Trajectory Segmentation, Action phase Extraction, Action-chain Establishment, and Heterogeneity Evaluation. The extraction of _Action phases_ and the establishment of _Action-chains_ involve the preceding steps called Driving Behaviour Interpretation and Action-chain Implementation, respectively. Data plays a crucial role in the identification of heterogeneity and serves as a fundamental aspect of the analytical process. After data tracing and preprocessing, the time-series driving behaviour data are used as input for the segmentation algorithm (**Algorithm 1**). The data are represented as \(x_{1},x_{2},...,x_{t}\), where \(x_{t}\) denotes the driving behaviour variable feature at the \(t\)-th frame. The segmentation algorithm (**Algorithm 1**) outputs \(l_{n}^{m}\), which represents the _action trend_ name of variable \(m\) recognised for the \(n\)-th segment, where \(n=1,2,...,N\). Based on the segmentation results, the driving behaviour of individual drivers can be visualised using driving behaviour maps, which highlight the unique characteristics of each driver. In the driving behaviour map, at the \(t\)-th frame, the state of driving behaviour is denoted as \(S_{t}=\{l^{1},l^{2},...,l^{m}\}\). Subsequently, **Algorithm 2** is designed to detect driving behaviour segments in which all variables have a single _action trend_. The output, denoted as an _Action phase_ and represented as \(S_{n^{\prime}}=\{l^{1},l^{2},...,l^{m}\}\), signifies the _Action phase_ of the \(n^{\prime}\)-th segment, where \(n^{\prime}\in\{1,2,...,N^{\prime}\}\) and \(N^{\prime}\) denotes the total number of _Action phases_ for an individual driver.
All the output _Action phases_ form the _Action phase_ Library under a specific traffic flow. The actual size of this Library is generally smaller than the theoretical value \(m^{4}\), due to the nonexistence of certain state combinations in the real world, in accordance with fundamental driving behaviour theories. The length of the _Action phase_ at the \(n^{\prime}\)-th segment, referred to as the time label, is denoted as \(\mathcal{T}_{n^{\prime}}\). Considering the time-series nature of driving behaviour, an _Action phase_ transition probability algorithm (**Algorithm 3**) is implemented to capture the temporal dependencies between _Action phases_. An _Action phase_ and the next _Action phase_ obtained through the maximum transition probability constitute an _Action-chain_, representing the most probable driving behaviour adopted by drivers. The _Action-chain_ serves as a description of homogeneous driving behaviour and is used to distinguish heterogeneity in driving behaviour. Drivers who deviate more from the _Action-chains_ are considered to exhibit greater heterogeneity (this evaluation is conducted by **Algorithm 4**).

## III Method implementation

### _Rule-based Segmentation_

Traditional classification algorithms, such as the K-nearest neighbour method, support vector machines, and convolutional neural networks, have been commonly used for classifying driving styles or recognising driving patterns [9]. However, the segments obtained using these algorithms often lack clear interpretability in terms of physical characteristics. In contrast, rule-based segmentation is a relatively simple and interpretable method for dividing driving behaviour trajectories into meaningful segments. Therefore, we propose a rule-based method, referred to as **Algorithm 1** within the framework, to segment driving behaviour trajectories. Let \(V=\{v_{1},v_{2},...,v_{m}\}\) be a set of driving behaviour variables, such as velocity, acceleration, distance, etc., and let \(P=\{(x_{1},y_{1}),...,(x_{n},y_{n})\}\) represent the set of turning points of a single variable, which are calculated using calculus, specifically the first and second derivatives. **Algorithm 1** consists of the following steps:

1. Data preparation: Load the turning points of the selected variable. Calculate the variable changes \(\Delta y\) and time intervals \(\Delta x\) between neighbouring turning points.
2. Threshold setting: Define threshold values \(\theta_{1},\theta_{2}\) to differentiate between segments with state Increasing (I), Decreasing (D), or Stable (S). Set \(\gamma\) to determine whether a segment is too short and should be merged with its neighbouring segments.
3. Initial categorisation: If \(\Delta y>\theta_{1}\), i.e., the variable increases to a non-negligible extent, label the segment as I. If \(\Delta y<\theta_{2}\), i.e., the variable decreases to a non-negligible extent, label it as D. When \(\theta_{2}<\Delta y<\theta_{1}\), the variable stays within a small range of changes and the segment is labelled as S.
4. Merging: For each segment labelled as S, if the time interval \(\Delta x_{n}<\gamma\), while \(\Delta x_{n-1}>\gamma\) and \(\Delta x_{n+1}>\gamma\), merge the segment with its neighbouring segment \(n+1\).
5. "Stable" refinement: For the updated S segments, calculate the mean value of the variable for each segment. Update the label S to stable at a High (H) or Low (L) value based on the threshold \(\delta\).

By implementing the above **Algorithm 1**, each variable in \(V\) is assigned _action trend_ labels with clear physical meanings. This rule-based method allows for effective segmentation of driving behaviour trajectories based on single variables. Subsequently, **Algorithm 2** is proposed to extract _Action phases_ with two simple steps: 1) segmenting the trajectory using the turning points of all considered variables, and 2) removing segments shorter than the threshold of drivers' reaction time \(\tau\).
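To make the procedure concrete, a hedged Python sketch of **Algorithm 1** for a single variable is given below. The turning points are assumed to be precomputed, the merging of short Stable segments is simplified to dropping them (which joins their neighbours), and the segment mean is approximated from the turning-point values; these are assumptions of the sketch, not of the paper.

```
import numpy as np

def label_action_trends(y, x, theta1, theta2, delta, gamma):
    # y, x: variable values and frame indices at consecutive turning points.
    segments = []
    for k in range(len(y) - 1):
        dy, dx = y[k + 1] - y[k], x[k + 1] - x[k]
        if dy > theta1:
            trend = "I"                      # Increasing
        elif dy < theta2:
            trend = "D"                      # Decreasing
        else:
            trend = "S"                      # Stable, refined below
        segments.append((trend, dx, (y[k] + y[k + 1]) / 2))
    # Step 4 (simplified): drop too-short Stable segments, merging neighbours.
    segments = [s for s in segments if not (s[0] == "S" and s[1] < gamma)]
    # Step 5: split Stable into High/Low by the mean-value threshold delta.
    return [("H" if mean > delta else "L") if trend == "S" else trend
            for trend, _, mean in segments]

# e.g. for velocity, with the thresholds later listed in Table I:
# label_action_trends(v_turn, t_turn, theta1=2, theta2=-2, delta=20, gamma=30)
```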
"Stable" refinement: For the updated S segments, calculate the mean value of the variable for each segment. Update the labels S as stable in High (H) or Low (L) values based on the threshold \(\delta\). By implementing the above **Algorithm 1**, each variable in \(V\) is assigned _action trend_ labels with clear physical meanings. This rule-based method allows for effective segmentation of driving behaviour trajectories based on single variables. Subsequently, **Algorithm 2** is proposed to extract _Action phases_ with simple steps including 1) segmenting the trajectory using turning points of all considered variables, and 2) removing segments shorter than the threshold of drivers' reaction time \(\tau\). ### _Time-series Action Phase Probability Modeling_ The length of an _Action phase_ can vary and is denoted by the labels "Long (lg)" or "Short (st)" according to a threshold \(\eta\). Consequently, the time label space for _Action phases_ is represented as \(\mathcal{T}=\{lg,st\}\). Subsequently, _Action phase_ can be further described by a two-dimensional label space \(S^{\prime}=\{S,\mathcal{T}\}\), which is taken as input for the Action phase transition probability model (**Algorithm 3**). Let \(S^{\prime}_{n^{\prime}}\) and \(S^{\prime}_{n^{\prime}+1}\) represent two adjacent _Action phases_, where \(n^{\prime}\in\{1,2,\ldots,N^{\prime}-1\}\). The transition probability between them can be mathematically represented as a function \(\mathcal{R}\left(S^{\prime}_{n^{\prime}},S^{\prime}_{n^{\prime}+1}\right)\), which captures the underlying characteristics or patterns between \(S^{\prime}_{n^{\prime}}\) and \(S^{\prime}_{n^{\prime}+1}\). This transition probability function provides insight into the relationship and Fig. 2: A novel framework of identifying driving heterogeneity dynamics between consecutive _Action phases_ in the time-series analysis. ### _Coupled Markov Chain Theory_ Two main approaches are commonly used to implement the transition of driving behaviour segments. The first approach utilises Markov models, including Markov Chains and Hidden Markov Models (HMM) [7], which are easily interpretable and capable of capturing underlying structures. However, when dealing with a large number of hidden layers, HMM may become computationally inefficient and less accurate due to increased complexity. The second approach involves deep learning models such as Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) Networks [7]. These models can address the complexity limitation of HMM and capture complex relationships between _Action phases_. Nevertheless, they typically require a large amount of training data and are computationally expensive due to their gating mechanisms. In our case, the Markov Chain method is adopted to implement the Action Probability (**Algorithm 3**). The concept of a coupled chain refers to the collective behaviour of two independent systems, each following the principles of a classical Markov chain [10]. Let's consider two one-dimensional Markov chains (\(X_{i}\)) and (\(Y_{j}\)) that operate on the state space \(\{S_{1},S_{2},...,S_{n}\}\), with positive transition probabilities defined as \[\Pr(X_{i+1}=S_{k},Y_{j+1}=S_{f}|X_{i}=S_{l},Y_{j}=S_{m})=p_{lm,kf} \tag{1}\] here, the \((X_{i})\) chain describes the _Action phase_ state \(S^{\prime}\) and the \((Y_{i})\) chain describes the time label \(\mathcal{T}\). 
### _Time-series Action Phase Probability Modeling_

The length of an _Action phase_ can vary and is denoted by the labels "Long (lg)" or "Short (st)" according to a threshold \(\eta\). Consequently, the time label space for _Action phases_ is represented as \(\mathcal{T}=\{lg,st\}\). Subsequently, an _Action phase_ can be further described by a two-dimensional label space \(S^{\prime}=\{S,\mathcal{T}\}\), which is taken as input for the _Action phase_ transition probability model (**Algorithm 3**). Let \(S^{\prime}_{n^{\prime}}\) and \(S^{\prime}_{n^{\prime}+1}\) represent two adjacent _Action phases_, where \(n^{\prime}\in\{1,2,\ldots,N^{\prime}-1\}\). The transition probability between them can be mathematically represented as a function \(\mathcal{R}\left(S^{\prime}_{n^{\prime}},S^{\prime}_{n^{\prime}+1}\right)\), which captures the underlying characteristics or patterns between \(S^{\prime}_{n^{\prime}}\) and \(S^{\prime}_{n^{\prime}+1}\). This transition probability function provides insight into the relationship and dynamics between consecutive _Action phases_ in the time-series analysis.

Fig. 2: A novel framework of identifying driving heterogeneity

### _Coupled Markov Chain Theory_

Two main approaches are commonly used to model the transitions between driving behaviour segments. The first approach utilises Markov models, including Markov Chains and Hidden Markov Models (HMM) [7], which are easily interpretable and capable of capturing underlying structures. However, when dealing with a large number of hidden layers, HMM may become computationally inefficient and less accurate due to increased complexity. The second approach involves deep learning models such as Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), and Gated Recurrent Unit (GRU) networks [7]. These models can address the complexity limitation of HMM and capture complex relationships between _Action phases_. Nevertheless, they typically require a large amount of training data and are computationally expensive due to their gating mechanisms. In our case, the Markov Chain method is adopted to implement the _Action phase_ transition probability model (**Algorithm 3**). The concept of a coupled chain refers to the collective behaviour of two independent systems, each following the principles of a classical Markov chain [10]. Let us consider two one-dimensional Markov chains (\(X_{i}\)) and (\(Y_{j}\)) that operate on the state space \(\{S_{1},S_{2},...,S_{n}\}\), with positive transition probabilities defined as \[\Pr(X_{i+1}=S_{k},Y_{j+1}=S_{f}|X_{i}=S_{l},Y_{j}=S_{m})=p_{lm,kf} \tag{1}\] Here, the \((X_{i})\) chain describes the _Action phase_ state \(S^{\prime}\) and the \((Y_{j})\) chain describes the time label \(\mathcal{T}\). Then the coupled transition probability \(p_{lm,kf}\) on the state space \(\{S_{1},S_{2},...,S_{n}\}\times\{S_{1},S_{2},...,S_{n}\}\) is given by \[p_{lm,kf}=p_{lk}\cdot p_{mf} \tag{2}\] Two coupled one-dimensional Markov chains can be utilised to construct a two-dimensional spatial stochastic process on a lattice represented by (\(Z_{i,j}\)). The lattice consists of a two-dimensional domain of cells, as depicted in Figure 3. The deep blue cells represent known boundary cells, the light blue cells indicate known cells within the domain (past observations), and the white cells represent unknown cells. The future state used to determine the state of cell (\(i,j\)) is cell (\(N_{x},j\)), where each cell is identified by its row number \(i\) and column number \(j\). Then the conditional probabilities can be expressed as follows [11]: \[p_{lk}^{h}=\Pr(X_{i+1}=S_{k}|X_{i}=S_{l}) \tag{3}\] \[p_{mk}^{v}=\Pr(Y_{j+1}=S_{k}|Y_{j}=S_{m}) \tag{4}\] The stochastic process \((Z_{ij})\) is obtained by coupling the Markov chains \((X_{i})\) and \((Y_{j})\) while ensuring that these chains transition to the same states. Therefore, we have: \[\begin{split}&\Pr(Z_{i,j}=S_{k}|Z_{i-1,j}=S_{l},Z_{i,j-1}=S_{m})\\ &=C\,\Pr(X_{i}=S_{k}|X_{i-1}=S_{l})\,\Pr(Y_{j}=S_{k}|Y_{j-1}=S_{m})\end{split} \tag{5}\] where \(C\) is a normalising constant that arises from restricting transitions in the (\(X_{i}\)) and (\(Y_{j}\)) chains to the same states. It is calculated as: \[C=\left(\sum_{f=1}^{n}p_{lf}^{h}\cdot p_{mf}^{v}\right)^{-1} \tag{6}\] By combining Equation 5 and Equation 6, the required probability can be expressed as: \[\begin{split} p_{lm,k}&:=\Pr(Z_{i,j}=S_{k}|Z_{i-1,j}=S_{l},Z_{i,j-1}=S_{m})\\ &=\frac{p_{lk}^{h}\cdot p_{mk}^{v}}{\sum_{f}p_{lf}^{h}\cdot p_{mf}^{v}},\,k=1,...,n\end{split} \tag{7}\]
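A minimal numerical sketch of the coupled transition probability in Equations (6)-(7) follows; the function name and the toy matrices are illustrative assumptions.

```python
import numpy as np

def coupled_transition(p_h, p_v, l, m):
    """Equation (7): couple the horizontal and vertical chains by multiplying
    their transition probabilities and renormalising (the constant C of Eq. (6))
    so that both chains move to the same state k."""
    joint = p_h[l, :] * p_v[m, :]      # numerator p^h_{lk} * p^v_{mk} for all k
    return joint / joint.sum()

# toy example with n = 2 states
p_h = np.array([[0.7, 0.3], [0.4, 0.6]])   # row-stochastic transition matrices
p_v = np.array([[0.9, 0.1], [0.2, 0.8]])
print(coupled_transition(p_h, p_v, l=0, m=1))   # distribution over the next state k
```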
## IV Data-based evaluation

### _Data Preprocessing_

In this study, the NGSIM highway dataset, which includes data from I-80 and US-101, was utilised to investigate the heterogeneity of longitudinal driving behaviour based on our proposed framework. A comprehensive preprocessing of the dataset, involving filtering and extraction, was conducted as described by Sun et al. [2]. In particular, drivers with trajectories lasting at least 50 seconds were selected to ensure an adequate amount of data for analysing longitudinal driving behaviour [12]. The final extracted dataset consisted of 123 drivers from the I-80 dataset and 848 drivers from the US-101 dataset. The driving behaviour variables considered in this study were velocity (\(v\)), acceleration (\(a\)), distance (\(d\)) between the preceding and following vehicles, and their speed difference (\(\Delta v\)). The threshold values used in **Algorithm 1** were determined based on empirical knowledge from the literature [13], as summarised in Table I.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \(\Delta y\)/(unit) & \(\theta_{1}\) & \(\theta_{2}\) & \(\delta\) & \(\gamma\) & \(\tau\) & \(\eta\) \\ \hline \(v/(m/s)\) & 2 & -2 & 20 & 30 & 10 & 50 \\ \hline \(a/(m/s^{2})\) & 0.25 & -0.25 & 0.25 & 30 & 10 & 50 \\ \hline \(d/(m)\) & 1 & -1 & 1 & 30 & 10 & 50 \\ \hline \(\Delta v/(m/s)\) & 2 & -2 & 2 & 30 & 10 & 50 \\ \hline \end{tabular} \end{table} TABLE I: Parameter settings of **Algorithm 1**

Fig. 3: Conditional Markov chain on the states of the future

### _Visualisation and Analysis of Action Phase_

The _action trend_ labels for the four driving behaviour variables are obtained using **Algorithm 1**. These results are then visualised, generating unique driving behaviour maps for each driver, as exemplified in Figure 4. In the figure, various colours represent different driving behaviour variables, with velocity, acceleration, distance, and speed difference represented in that order. The varying intensity of the same colour indicates different _action trend_ names, including Increasing, stable as High, stable as Low, and Decreasing. In Figure 4(a), the dominant _action trend_ for acceleration is "L", although instances of "I" and "D" can also be observed. The distance remains relatively stable without frequent _action trend_ changes. When comparing the driving behaviour maps of the four drivers shown, driver ID1264 from the I-80 dataset exhibits the fewest _action trend_ changes across the four variables. Conversely, drivers ID3 and ID1035 from the US-101 dataset demonstrate a higher frequency of such changes. The driving behaviour map offers an intuitive approach to interpreting driving behaviour by visualising its changes over time. It is also important to note that this visualisation method relies on observation and should be complemented with further quantitative evaluation, which is carried out in the subsequent steps. The _Action phase_ Library for a specific traffic flow was constructed using **Algorithm 2**. The resulting Library consists of 142 _Action phases_ for the I-80 dataset and 228 _Action phases_ for the US-101 dataset. Table II presents the top 10 _Action phases_ along with their corresponding frequencies. Notably, both traffic flows exhibit a significant overlap in their high-frequency _Action phases_, and the top three _Action phases_ are identical for both datasets. These top _Action phases_ include "((L, L, H, H), st)", "((L, L, H, H), lg)", and "((L, L, L, H), st)", which indicate common driving behaviour across the datasets. The reason behind this is that the two datasets were collected during evening and morning rush hours, respectively. In these periods, the density of traffic flow is significantly high, with most vehicles exhibiting car-following behaviour under conditions close to congestion. Due to this, there is limited variability in driving behaviours, resulting in a scarcity of "I" and "D" and a high frequency of "Stable (H and L)". The high-density traffic flows also explain why "L" occurs most frequently.

### _Analysis of Action-chain_

The transition probabilities from one _Action phase_ to another within the _Action phase_ Library were computed using **Algorithm 3**. Some _Action phases_ either have no transitions or exhibit very low transition probabilities. Conversely, other _Action phases_ tend to be transitioned to by a greater number of _Action phases_. The results adhere to the fundamental principles of driving behaviour. For example, the _Action phase_ "((L, L, L, H), st)" from the US-101 dataset demonstrates higher probabilities of being transitioned to. This can be attributed to the fact that the driving data were collected during the morning peak hour when there is typically high traffic flow density, leading drivers to adopt more consistent driving behaviours with lower values. Overall, each _Action phase_ was found to have a following _Action phase_ with the highest transition probability, resulting in the formation of an _Action-chain_, as illustrated in the examples provided in Table III.
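The construction of _Action-chains_ from empirical transition frequencies can be sketched as follows; the string encoding of _Action phases_ and the function name are illustrative assumptions.

```python
from collections import Counter, defaultdict

def action_chains(phase_sequence):
    """Estimate empirical Action-phase transition probabilities and return, for
    each phase, the successor with the highest probability (the Action-chain)."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(phase_sequence, phase_sequence[1:]):
        counts[cur][nxt] += 1
    chains = {}
    for cur, successors in counts.items():
        nxt, c = successors.most_common(1)[0]
        chains[cur] = (nxt, c / sum(successors.values()))
    return chains

# toy usage with Action phases encoded as ((v, a, d, dv), time-label) strings
seq = ["((D,I,I,H),st)", "((L,I,I,H),st)", "((L,L,H,H),st)",
       "((D,I,I,H),st)", "((L,I,I,H),st)"]
print(action_chains(seq))
```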
In the I-80 dataset, for instance, the _Action phase_ "((D, I, I, H), st)" has a probability of 0.68 of transitioning to "((L, I, I, H), st)", which is higher than for any other _Action phase_.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multicolumn{2}{|c|}{**I-80**} & \multicolumn{2}{|c|}{**US-101**} \\ \hline _Action phase_ & **Frequency** & _Action phase_ & **Frequency** \\ \hline ((L,L,H,H), st) & 415 & ((L,L,H,H), st) & 2703 \\ \hline ((L,L,H,H), lg) & 219 & ((L,L,H,H), lg) & 1661 \\ \hline ((L,L,L,H), st) & 156 & ((L,L,L,H), st) & 965 \\ \hline ((L,L,H,H), st) & 68 & ((L,L,L,H), lg) & 672 \\ \hline ((L,L,H,H), lg) & 65 & ((D,L,H,H), st) & 651 \\ \hline ((D,L,H,H), st) & 54 & ((L,L,L,L), st) & 480 \\ \hline ((L,L,L,L), st) & 41 & ((I,L,H,H), st) & 469 \\ \hline ((L,L,H,D), st) & 39 & ((D,L,H,H), lg) & 419 \\ \hline ((D,L,H,I), st) & 38 & ((D,L,H,I), st) & 412 \\ \hline ((D,L,H), st) & 30 & ((L,L,H,I), st) & 349 \\ \hline \end{tabular} \end{table} TABLE II: Statistics of _Action phases_ (Top 10)

\begin{table} \begin{tabular}{|c|c|c|c|} \hline **Dataset** & _Action phase_ & _Action phase to_ & **JTP** \\ \hline \multirow{4}{*}{I-80} & ((D, D, I, I), st) & ((D, L, L, I), st) & 0.68 \\ \cline{2-4} & ((D, I, I, H), st) & ((L, I, I, H), st) & 0.68 \\ \cline{2-4} & ((D, I, D, I), lg) & ((L, L, D, H), st) & 0.67 \\ \cline{2-4} & ... & ... & ... \\ \cline{2-4} & ((D, D, H, D), lg) & ((L, L, H, st) & 0.52 \\ \hline \multirow{4}{*}{US-101} & ((D, D, D, I), st) & ((L, L, L, L), st) & 0.64 \\ \cline{2-4} & ((D, H, L, H), st) & ((L, L, L, H), st) & 0.64 \\ \cline{2-4} & ((I, L, L, D), st) & ((L, L, L, H), st) & 0.58 \\ \cline{2-4} & ... & ... & ... \\ \cline{2-4} & ((I, L, D, I), st) & ((L, L, D, L), st) & 0.32 \\ \hline \end{tabular} \end{table} TABLE III: _Action-chain_ composed by the highest joint transition probability (JTP)

Fig. 4: Visualisation of _actions_: the driving behaviour map

### _Evaluation of Driving Heterogeneity_

In this study, we assume that the maximum transition probability represents the generally adopted _Action phase_ of drivers in a specific traffic flow, indicating the average level of driving behaviour. However, in the real world, drivers often deviate from this general level due to heterogeneity in their driving behaviours. To quantify this, we define heterogeneity as the deviation between the actual _Action phase_ transitions and the _Action-chain_. The Mean Squared Error (MSE) is a commonly used method for measuring the average squared difference between two sets of data, and it serves as the metric to quantify driving heterogeneity in this context; see Equation 8. \[DH=\frac{1}{N^{\prime}}\times\sum_{n^{\prime}=1}^{N^{\prime}}(P^{\prime}_{n^{\prime}}-P_{n^{\prime}})^{2} \tag{8}\] where \(N^{\prime}\) is the total number of _Action phases_, and \(P^{\prime}_{n^{\prime}}\) and \(P_{n^{\prime}}\) represent the transition probability of the actual _Action phase_ and the maximum transition probability, respectively. A higher value indicates a greater driving heterogeneity.

### _Numerical Results and Discussions_

The heterogeneity of individual drivers in a specific traffic flow was calculated and subjected to further statistical analysis using the normal distribution. The \(3\sigma\) rule of thumb is commonly employed in data analysis to identify potential outliers or unusual behaviour. By applying the \(3\sigma\) principle, drivers with atypical driving behaviour were identified, as summarised in Table IV.
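A compact sketch of this evaluation step, under the assumption that per-driver transition probabilities have already been collected, might look as follows; the function names are illustrative.

```python
import numpy as np

def driving_heterogeneity(p_actual, p_max):
    """Equation (8): MSE between a driver's actual Action-phase transition
    probabilities and the maximum (Action-chain) transition probabilities."""
    p_actual, p_max = np.asarray(p_actual), np.asarray(p_max)
    return np.mean((p_actual - p_max) ** 2)

def flag_atypical(dh_values, k=3.0):
    """Flag drivers whose heterogeneity falls outside mean +/- k*sigma
    (the 3-sigma rule of thumb)."""
    dh = np.asarray(dh_values)
    return np.where(np.abs(dh - dh.mean()) > k * dh.std())[0]
```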
The drivers flagged in this way may act as factors contributing to increased traffic flow heterogeneity and may negatively affect traffic performance. It is also noteworthy that the degree of driving heterogeneity in the two traffic flows exhibits the same variance (see Figure 5); however, the drivers on US-101 (with \(\mu=0.08\)) display overall lower levels of heterogeneity compared to those on I-80 (with \(\mu=0.10\)).

## V Conclusion

In this study, a novel framework was proposed to address driving heterogeneity in a comprehensive manner. By introducing the concepts of _Action phase_ and _Action-chain_, along with specialised algorithms, the framework effectively quantified and explained driving heterogeneity at both the individual driver and traffic flow levels. Real-world datasets were used for evaluation, validating the framework's ability to offer clear interpretations. Although the framework contributes novel insights into the heterogeneity of driving behaviour, further validation and justification of the methods employed in each step are still required, which is the focus of our ongoing research.
2309.08109
CAT: a conditional association test for microbiome data using a leave-out approach
In microbiome analysis, researchers often seek to identify taxonomic features associated with an outcome of interest. However, microbiome features are intercorrelated and linked by phylogenetic relationships, making it challenging to assess the association between an individual feature and an outcome. Researchers have developed global tests for the association of microbiome profiles with outcomes using beta diversity metrics which offer robustness to extreme values and can incorporate information on the phylogenetic tree structure. Despite the popularity of global association testing, most existing methods for follow-up testing of individual features only consider the marginal effect and do not provide relevant information for the design of microbiome interventions. This paper proposes a novel conditional association test, CAT, which can account for other features and phylogenetic relatedness when testing the association between a feature and an outcome. CAT adopts a leave-out method, measuring the importance of a feature in predicting the outcome by removing that feature from the data and quantifying how much the association with the outcome is weakened through the change in the coefficient of determination. By leveraging global tests including PERMANOVA and MiRKAT-based methods, CAT allows association testing for continuous, binary, categorical, count, survival, and correlated outcomes. Our simulation and real data application results illustrate the potential of CAT to inform the design of microbiome interventions aimed at improving clinical outcomes.
Yushu Shi, Liangliang Zhang, Kim-Anh Do, Robert R. Jenq, Christine B. Peterson
2023-09-15T02:19:19Z
http://arxiv.org/abs/2309.08109v1
# CAT: a conditional association test for microbiome data using a leave-out approach ###### Abstract **Motivation:** In microbiome analysis, researchers often seek to identify taxonomic features associated with an outcome of interest. However, microbiome features are intercorrelated and linked by phylogenetic relationships, making it challenging to assess the association between an individual feature and an outcome. Researchers have developed global tests for the association of microbiome profiles with outcomes using beta diversity metrics. These methods are popular since microbiome-specific metrics offer robustness to extreme values and can incorporate information on the phylogenetic tree structure. Despite the popularity of global association testing, most existing methods for follow-up testing of individual features only consider the marginal effect and do not provide relevant information for the design of microbiome interventions. **Results:** This paper proposes a novel conditional association test, **CAT**, which can account for other features and phylogenetic relatedness when testing the association between a feature and an outcome. **CAT** adopts a leave-out method, measuring the importance of a feature in predicting the outcome by removing that feature from the data and quantifying how much the association with the outcome is weakened through the change in the coefficient of determination \(R^{2}\). By leveraging global tests including PERMANOVA and MiRKAT-based methods, **CAT** allows association testing for continuous, binary, categorical, count, survival, and correlated outcomes. We demonstrate through simulation studies that **CAT** can provide a direct quantification of feature importance that is distinct from that of marginal association tests. We illustrate **CAT** with applications to two real-world studies on the microbiome in melanoma patients: one examining the role of the microbiome in shaping immunotherapy response, and one investigating the association between the microbiome and survival outcomes. Our results illustrate the potential of **CAT** to inform the design of microbiome interventions aimed at improving clinical outcomes. **Availability:** Our method has been implemented in the R package CAT, which is publicly available at [https://github.com/Yushushi/CAT](https://github.com/Yushushi/CAT). **Contact:** [email protected]

## 1 Introduction

The development of next-generation sequencing techniques has enabled high-resolution profiling of the human microbiome. The challenges of analyzing microbiome data include its high dimensionality and the structural relatedness of the observed features. As a starting point in assessing the link between the microbiome and an outcome, researchers often test the global association of a phenotype with the microbiome as a whole. Global association tests addressing this question typically employ microbiome-specific metrics (also called beta diversity measures) that are more robust to extreme values than classical Euclidean distances. Popular choices include Bray-Curtis dissimilarity (Bray and Curtis, 1957), which is designed for count data, and weighted and unweighted UniFrac (Lozupone and Knight, 2005; Lozupone _et al._, 2007), which incorporate information on the location of features in a phylogenetic tree.
For microbiome datasets with clearly identified sample groups, a popular global association test is PERMANOVA, which utilizes permutation testing to obtain a \(p\)-value for the null hypothesis that there is no difference in the location of the centroids across groups (Anderson, 2001). PERMANOVA has been widely used in microbial analysis, as it is simple to apply and only requires observations of the outcome variable and the pairwise distances or dissimilarities between samples. Moreover, recent adaptations of PERMANOVA can handle nested designs and correlated outcomes (Oksanen _et al._, 2022). Another popular global association test for microbiome data is MiRKAT, which is based on kernel machine regression (Zhao _et al._, 2015). One advantage of this method is that it can incorporate multiple candidate distance metrics to maximize power for a particular data set. In addition to continuous and binary response variables, MiRKAT has been extended to handle survival outcomes (Plantinga _et al._, 2017) and correlated or dependent samples (Zhan _et al._, 2018; Koh _et al._, 2019). These global association methods have been widely applied in microbiome studies. However, they cannot provide inference on specific taxa. Thus, a practical question in following up on a significant global association test result is to identify specific microbiome features that drive the global testing result. However, existing methods for testing the effect of individual taxa often ignore the presence of related features and focus only on the marginal effect of the individual feature. In particular, popular differential abundance methods such as ALDEx2 (Fernandes _et al._, 2014), DESeq2 (Love _et al._, 2014), and ANCOM-BC (Lin and Peddada, 2020) all adopt a marginal testing framework. An inherent problem in marginal testing of microbiome data is nested discoveries, where hits are linked within a taxonomic or phylogenetic hierarchy. Taxonomic trees reflect the traditional labeling and organization of microorganisms into groupings such as family or genus, while phylogenetic trees reflect evolutionary history, with branch points corresponding to events that gave rise to differences in the genomic sequences. Both types of trees play a key role in understanding microbiome data. Taxonomic labels serve as a basis for interpretation since they are standardized across studies. Phylogenetic trees are useful in analysis, as they encode rich information on sequence similarity, which drives phenotypic and functional similarity. The relatedness among features can make it challenging to pinpoint which taxa play a critical role in influencing outcomes. For example, when a genus is found to be significant, the corresponding higher taxonomic units to which it belongs, such as family and order, also tend to be significant. However, the precise taxonomic level most relevant to the outcome is difficult to establish. A conditional test can provide direct quantification of the importance of a specific feature in contributing information not captured by other features in the data set, addressing a question with potentially greater biological importance than a marginal test. Notably, the challenge of correlated predictors exists in many high-dimensional datasets, yet is particularly prominent in microbiome data. In other settings, researchers have proposed rigorous definitions of feature-outcome independence. 
Here, we adopt Candes _et al._ (2018)'s definition, where a feature is said to be "null" if and only if the outcome is independent of it conditionally on all other variables. In this paper, we present a novel conditional test, **CAT**, that provides a natural next step to follow up on a significant global association test result. **CAT** achieves the goal of assessing the importance of individual taxa while accounting for phylogenetic structure and other features in the data set. The remainder of the paper is organized as follows: Section 2 describes the proposed **CAT** method in detail. Section 3 demonstrates its performance using simulated data, and Section 4 illustrates the method through applications to real datasets with binary and survival outcomes. Section 5 concludes the paper with a discussion.

## 2 Approach

We begin with an illustration highlighting the motivation behind our approach. In the left panel of Figure 1, we show a PCoA plot based on weighted UniFrac distance that depicts variation in microbiome composition between melanoma patients that responded to immunotherapy vs. those that did not. The 95% confidence regions for each group indicate there may be global differences in the microbiome profiles between the groups. In the right panel, we artificially removed counts belonging to the family Ruminococcaceae; the two clusters of patients become less separated in the PCoA plot, with a reduced distance between centroids. Removing Ruminococcaceae from the data weakened the global association between the microbiome and response, suggesting that Ruminococcaceae may play an important role in driving the global association results.

Figure 1: PCoA plots illustrating the global variation in the microbiome for melanoma patients who responded vs. did not respond to immunotherapy (Gopalakrishnan _et al._, 2018). We depict both the original data (left) and the modified data after removing counts belonging to the family Ruminococcaceae. The ellipses represent the 95% confidence regions.

In the remainder of this section, we describe our proposal for a formal testing procedure aimed at quantitatively describing this phenomenon. In our proposed approach, we start with the finest resolution features, corresponding to the leaf nodes in the taxonomic tree. For microbiome data derived from profiling of the 16S rRNA gene, these features are typically defined as Amplicon Sequence Variants (ASVs) or operational taxonomic units (OTUs). Our method may be applied as well to features derived from whole metagenome sequencing (WGS). By comparing the representative sequence for each feature against an established reference library, the feature can be assigned a taxonomic classification. Taxonomic levels from broad to specific follow the sequence kingdom, phylum, class, order, family, genus, and species. Based on the taxonomic assignments, one can draw a taxonomic tree reflecting the relatedness of all the features in the data set. Most existing methods for identifying individual feature associations from WGS or 16S data focus on marginal associations and do not quantify how much individual features contribute to the results from global association testing. To fill this gap, we propose **CAT**, which tests the association between specific features and outcomes while conditioning on the tree structure and the abundance of other features in the tree. The **CAT** method is rooted in the coefficient of determination \(R^{2}\) for global microbiome association tests.
**CAT** estimates the change in \(R^{2}\) for a global test using the original dataset vs. a modified data set with the taxon of interest removed, and obtains a \(p\)-value through a bootstrap procedure, which entails sampling from the data with replacement (Efron and Tibshirani, 1994). One widely used global association test is the nonparametric PERMANOVA method (Anderson, 2001, 2017). We begin by briefly reviewing this test, which serves as a starting point for our proposed method. Consider the \(n\times n\) matrix of pairwise distances between \(n\) observations \(\mathbf{D}=[d_{ij}]\), where \(d_{ij}\) represents the distance between observation \(i\) and observation \(j.\) We transform \(\mathbf{D}\) to a new matrix \(\mathbf{A}=[a_{ij}]=[-\frac{1}{2}d_{ij}^{2}]\), and center \(\mathbf{A}\) to get Gower's centered matrix \[\mathbf{G}=\Big{(}\mathbf{I}-\frac{\mathbf{11^{\prime}}}{n}\Big{)}\mathbf{A}\Big{(}\mathbf{I}-\frac{\mathbf{11^{\prime}}}{n}\Big{)},\] where \(\mathbf{I}\) represents the \(n\times n\) identity matrix, and \(\mathbf{11^{\prime}}\) represents an \(n\times n\) matrix of all 1s. With an \(n\times g\) design matrix \(\mathbf{X}\) providing information on \(g\) covariates, we can compute the hat matrix \(\mathbf{H}=\mathbf{X}(\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\). From the hat matrix, we can further calculate the total sum-of-squares (\(SS_{T}\)), the among-group sum-of-squares (\(SS_{A}\)), and the residual sum-of-squares (\(SS_{R}\)) as in MANOVA: \[SS_{T}=\mathrm{tr}(\mathbf{G}),\quad SS_{A}=\mathrm{tr}(\mathbf{HG}),\text{ and }SS_{R}=\mathrm{tr}\big{[}(\mathbf{I}-\mathbf{H})\mathbf{G}\big{]}.\] Just as in MANOVA, the coefficient of determination \(R^{2}\) can be calculated as the ratio of the sum of squares between groups (\(SS_{A}\)) to the sum of squares total (\(SS_{T}\)). It provides an indication of the strength of the relationship between the outcome variable and the microbiome profiles, with a value closer to 1 indicating a stronger relationship. We now describe how to apply **CAT** to test the conditional association between the outcome of interest and a specific taxon. Let \(X\) denote the outcome vector for \(n\) observations. Let \(\mathbf{Z}\) represent the \(n\times m\) matrix with the observed counts for the finest-resolution microbiome features, which correspond to the leaf nodes in a taxonomy tree \(\mathcal{T}\) with \(m\) leaves. We denote the set of leaf nodes for the full tree \(\mathcal{L}(\mathcal{T})=\{1,\ldots,m\}\). For any internal node in the tree \(t\), we let \(\mathcal{L}(t)\) denote the leaf nodes corresponding to its descendants. Given these definitions, we lay out the steps of the **CAT** procedure as follows: 1. Calculate the \(n\times n\) sample pairwise distance matrix \(\mathbf{D}\) for the original data matrix \(\mathbf{Z}\). 2. Perform PERMANOVA using \(\mathbf{D}\) and \(X,\) and obtain a coefficient of determination \(R^{2}\) for the outcome of interest. 3. For the specific taxon \(t\) being tested by **CAT**, generate a new data matrix \(\mathbf{Z}^{*}\) by converting all the elements of \(\mathcal{L}(t)\) to have 0 counts. 4. Calculate a new pairwise distance matrix \(\mathbf{D}^{*}\) using the modified data matrix \(\mathbf{Z}^{*}\). 5. Perform PERMANOVA using \(\mathbf{D}^{*}\) and get a new coefficient of determination \(R^{2*}\) for the outcome of interest.
6. Perform bootstrap sampling, selecting the matching sample from the pairwise distance matrix \(\mathbf{D}\), the modified distance matrix \(\mathbf{D}^{*}\), and the outcome \(X\) for \(B\) bootstrap samples. For each sample, compute the coefficients of determination for the original and modified distances, \(R^{2}_{(1)},R^{2}_{(2)},\ldots,R^{2}_{(B)}\) and \(R^{2*}_{(1)},R^{2*}_{(2)},\ldots,R^{2*}_{(B)}\). 7. Compute the differences between the original \(R^{2}\) and the leave-taxon-out \(R^{2*}\) for all \(B\) bootstrap samples. The estimated \(p\)-value is the proportion of the \(R^{2}\) differences that are less than zero: \[\hat{p}=\frac{1}{B}\sum_{i=1}^{B}I(R^{2}_{(i)}-R^{2*}_{(i)}<0).\] Figure 2 provides a toy example to illustrate how **CAT** converts the leaf counts under a specific taxon \(t\) to \(0\) in Step 3 of the procedure. Suppose that taxon \(t\) is strongly associated with the outcome of interest and that this association is not captured by other features in the tree. In that case, the removal of the counts descending from \(t\) will decrease the \(R^{2}\) in the PERMANOVA test. In contrast, removing a non-discriminating taxon would minimally affect the \(R^{2}\) of the PERMANOVA test.

Figure 2: A schematic plot showing how to convert the descending counts to 0 for a particular taxon. Taxa in the red rectangles are children of the taxon of interest.

### Conditional testing for MiRKAT

The MiRKAT method (Zhao _et al._, 2015), which has been extended to handle survival outcomes (Plantinga _et al._, 2017) and correlated or dependent samples (Zhan _et al._, 2018; Koh _et al._, 2019), offers a powerful global association test based on the kernel regression framework. For the simplest situation where the outcome of interest \(Y\) is continuous, the model can be expressed as \[y_{i}=\beta_{0}+\mathbf{\beta}^{\prime}\mathbf{x}_{i}+f(\mathbf{z}_{i})+\varepsilon_{i},\quad i=1,2,\ldots,n,\] where \(y_{i}\) is the outcome for the \(i\)th subject, \(\beta_{0}\) is the intercept term, \(\mathbf{\beta}\) is a vector of regression coefficients, and \(\mathbf{x}_{i}\) is a vector of covariates unrelated to the microbiome. The microbiome information for the \(i\)th sample is characterized by \(\mathbf{z}_{i}\), and \(f(\mathbf{z}_{i})\) is the output from a reproducing kernel Hilbert space \(\mathcal{H}_{k}\). The microbiome association test is equivalent to testing \(f(\mathbf{z})=0\). Given the pairwise distances, the kernel matrix between observations is taken as \(\mathbf{K}=-\frac{1}{2}\Big{(}\mathbf{I}-\frac{\mathbf{1}\mathbf{1}^{\prime}}{n}\Big{)}\mathbf{D}^{2}\Big{(}\mathbf{I}-\frac{\mathbf{1}\mathbf{1}^{\prime}}{n}\Big{)}\), where \(\mathbf{D}^{2}=[d_{ij}^{2}]\), with a correction when necessary to ensure that the matrix is positive semi-definite. Zhan (2019) showed that the squared MiRKAT statistic is proportional to the \(R^{2}\) statistic, or coefficient of determination, up to a constant factor. In this setting, \(R^{2}\) characterizes the fraction of variability in outcome similarity explained by microbiome similarity. This permits the use of **CAT** for testing conditional association for a particular taxon. To implement this approach, the MiRKAT procedure can replace PERMANOVA in Steps 2, 5, and 6 of the **CAT** procedure. When multiple distance metrics are used, users can take the maximum of the \(R^{2}\) from different metrics. More broadly, any valid global testing method can be used in these steps.
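To make the procedure concrete, here is a minimal Python sketch of Steps 1-7 for a two-group outcome. It is an illustration under stated assumptions, not the authors' R package: distance computation (e.g., weighted UniFrac) is taken as given, the design matrix contains only an intercept and a 0/1 group indicator, and the function names are hypothetical.

```python
import numpy as np

def permanova_r2(D, groups):
    """R^2 = tr(HG)/tr(G) from a pairwise distance matrix D and a 0/1 group
    vector, using the Gower-centred matrix G and hat matrix H defined above."""
    n = len(groups)
    J = np.eye(n) - np.ones((n, n)) / n
    G = J @ (-0.5 * D**2) @ J
    X = np.column_stack([np.ones(n), groups])
    H = X @ np.linalg.pinv(X.T @ X) @ X.T
    return np.trace(H @ G) / np.trace(G)

def cat_pvalue(D, D_star, groups, B=1000, seed=0):
    """Steps 6-7: bootstrap the difference between the original R^2 and the
    leave-taxon-out R^2*; the p-value is the fraction of negative differences."""
    rng = np.random.default_rng(seed)
    n, neg = len(groups), 0
    for _ in range(B):
        idx = rng.choice(n, size=n, replace=True)   # matched resampling
        sub = np.ix_(idx, idx)
        neg += permanova_r2(D[sub], groups[idx]) < permanova_r2(D_star[sub], groups[idx])
    return neg / B
```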
## 3 Simulation study

In this section, we illustrate the utility of **CAT** as a follow-up to global testing and compare results from **CAT** with those from existing marginal testing approaches on simulated data. We develop realistic simulation scenarios by starting from a real microbiome data set, which will be examined more closely in Section 4. We adopt a "spike-in" method to control the taxa driving the cross-group differences.

### Data generation

To construct our simulation, we first obtained the 16S sequencing data from Gopalakrishnan _et al._ (2018), which examined the association between the gut microbiome and response to immunotherapy in melanoma patients. This is the same data set illustrated in Figure 1. The original data included 43 patients; 30 responded to therapy, while the rest were non-responders. The sequencing depths per sample in this study had a mean of 48,765. To quantify features from the raw sequencing data, we applied the UNOISE2 function (Edgar, 2016) to the 16S rRNA gene sequences, identifying 1,455 ASVs. Given the sequence for each ASV, we then applied the FastTree algorithm (Price _et al._, 2009) to build a phylogenetic tree. In our simulation set-up, we assume there are two groups (group 0 and group 1), each with 31 observations. We use the mean sequencing depth of the melanoma data as the number of sequences for each sample. The steps for generating the simulated data are: 1. For each group, set the expected abundance of each microbiome feature to that of the marginal distribution of the melanoma dataset. 2. Generate the number of sequences for each ASV from a Dirichlet multinomial distribution with the sum of the parameters for the Dirichlet distribution set to 62. (In the melanoma dataset, if we assume the data are from two Dirichlet multinomial distributions, the sums of the parameters are estimated to be 70 for the responder group and 54 for the non-responder group.) 3. For group 1, add a random number generated from a Poisson distribution with parameter \(\lambda\) to the number of sequences for the ASVs belonging to the feature being "spiked-in". Varying the parameter \(\lambda\) affects the signal strength; we test the performance of our method with \(\lambda\) set to 5, 10, 30, 50, and 70. The simulation study has two scenarios corresponding to two differential features: family Porphyromonadaceae, which accounts for 32 ASVs and 3.5% of the sequences in the melanoma dataset; and family Lachnospiraceae (346 ASVs, 14.0% of sequences). We chose moderately abundant families to best illustrate differential performance across methods. The range of family abundances varies widely and is highly skewed, with Bacteroidaceae comprising the highest proportion (41.98%) and Aeromonadaceae the lowest (0.00005%), resulting in a median abundance of 0.01%.
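The data-generating steps above can be sketched as follows; the per-ASV application of the Poisson spike-in and all names are assumptions made for illustration.

```python
import numpy as np

def simulate_group(alpha, depth, n_samples, spike_idx=None, lam=0.0, seed=1):
    """Draw ASV counts from a Dirichlet-multinomial whose Dirichlet parameters
    alpha sum to 62, then add Poisson(lam) 'spiked-in' sequences to the ASVs in
    spike_idx (used for group 1 only)."""
    rng = np.random.default_rng(seed)
    counts = np.empty((n_samples, len(alpha)), dtype=int)
    for i in range(n_samples):
        p = rng.dirichlet(alpha)               # sample-specific ASV proportions
        counts[i] = rng.multinomial(depth, p)  # depth ~ mean of 48,765 reads
        if spike_idx is not None and lam > 0:
            counts[i, spike_idx] += rng.poisson(lam, size=len(spike_idx))
    return counts
```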
### Methods compared

In applying **CAT**, we set the number of bootstrap samples to 1000. To fully leverage the phylogenetic information, we use the weighted UniFrac distance, which accounts for ASV abundance as well as the topology and branch lengths of the phylogenetic tree. The phylogenetic tree used for calculating the weighted UniFrac distances is that of Gopalakrishnan _et al._ (2018)'s dataset. To illustrate the utility of **CAT** as a follow-up to global hypothesis testing, we show the distribution of \(R^{2}\) values in the real and modified data sets as side-by-side boxplots. Although **CAT** is unique in its focus on conditional association testing, we also provide results from the following marginal testing methods: a basic Mann-Whitney test, bias-corrected ANCOM (ANCOM-BC, Lin and Peddada, 2020), and DESeq2 (Love _et al._, 2014). Both the original and adjusted \(p\)-values for the DESeq2 method were computed. However, these methods have a different null hypothesis than **CAT**, so they cannot be considered direct competitors.

### Results

Here, we report findings from the **CAT** test paired with PERMANOVA using the weighted UniFrac metric for \(\lambda=70\) for Porphyromonadaceae and \(\lambda=10\) for Lachnospiraceae across 200 simulated datasets. We chose to focus on these settings as all methods achieved high power to detect the spiked-in feature, but had differential results regarding the significance of its parent and child nodes. The results from **CAT** for other \(\lambda\) values can be found in the Supplemental Material. Omitting sequences from the spiked-in feature or its parent node results in a sharp reduction in \(R^{2}\) values for both synthetic data sets (Figure 3 at left). The effect of omitting sequences from the child nodes is more nuanced; in the first data set, the child nodes of Porphyromonadaceae are responsible for explaining some portion of the \(R^{2}\) value, while the child nodes of Lachnospiraceae may contribute less independent information. The percent of \(p\)-values less than 0.05 for the **CAT** test along with the results from existing marginal tests are shown at right in Figure 3. In addition to the family being "spiked-in", i.e., the feature with an abundance difference constructed between the groups under the simulation design, we also offer the hypothesis testing results for the order above and the genera below. The proposed **CAT** method can correctly reject the null hypothesis for the family directly manipulated and the order above most of the time. In some cases, the \(p\)-values obtained from **CAT** are congruent with those obtained from marginal tests. However, for some hypotheses the conditional testing approach of **CAT** is relatively more conservative than marginal tests. In particular, the genus _Butyrivibrio_, a child of the spiked-in feature, is consistently found to be significant by ANCOM-BC and DESeq2, while the **CAT** results indicate that this feature is not responsible for driving the global association results. This behavior of **CAT** reflects the nature of the conditional test; it will be less likely to reject the null hypothesis when other features have already explained the cross-group differences. In contrast, other tests estimate the marginal effect, which may overemphasize the importance of lower-level taxa.

Figure 3: Boxplots of the \(R^{2}\) values (left) and barplots of the percentage of \(p\)-values less than 0.05 from **CAT**, Mann-Whitney, ANCOM-BC, and DESeq2 (right) over 200 simulated datasets. The first panel in each subplot in the right column represents results from the manipulated family, followed by the order above and the child genera below.

## 4 Application to real data

We now discuss the use of **CAT** to analyze two real data sets examining the role of the microbiome in shaping melanoma patient outcomes. First, we consider the data set from Gopalakrishnan _et al._ (2018), which dealt with the association of the microbiome with immunotherapy response.
In Section 4.2, to illustrate the use of **CAT** for time-to-event outcomes, we apply **CAT** with MiRKAT-S to the study of Spencer _et al._ (2021), which characterized the role of the microbiome in shaping progression-free survival.

### _Binary response_

In our first case study, we apply the **CAT** method to the melanoma dataset described in Gopalakrishnan _et al._ (2018). In this data set, there are global differences in microbiome composition between patients that responded to immunotherapy vs. those that did not; the \(p\)-value from the PERMANOVA test using weighted UniFrac for responder vs. nonresponder is less than 0.001. To identify specific differentially abundant features, Gopalakrishnan _et al._ (2018) relied on LEfSe (Segata _et al._, 2011), the first step of which is to screen features with a Mann-Whitney test. We applied **CAT** to ascertain whether the hits they identified using LEfSe remained significant under a conditional test. Table 1 shows the Mann-Whitney \(p\)-value as well as the \(R^{2}\) difference and \(p\)-value obtained using **CAT**; significant hits from **CAT** include the phylum Firmicutes, its subordinate class Clostridia, the order Clostridiales under Clostridia, the family Ruminococcaceae within Clostridiales, and the genus _Ruminococcus_ under Ruminococcaceae. Additionally, the species _R. bromii_ beneath _Ruminococcus_ is deemed significant. Within the same family, Ruminococcaceae, the genus _Faecalibacterium_ and the species underneath it, _F. prausnitzii_, are also significant. However, some hits that were found to be significant using LEfSe, including the genus _Gardnerella_ and the species _B. stercoris_, lose significance; this suggests that these microbiome features might not be good candidates for a microbiome intervention. The difference in \(R^{2}\) values across bootstrap iterations is illustrated in Figure 4. This figure further suggests that intervening on higher-level taxa, such as Ruminococcaceae, might be expected to exert a larger influence than designing an intervention focused on species-level hits. Overall, **CAT** provides many results that are congruent with the original paper, yet provides novel insights into the conditional association.

Figure 4: Application of **CAT** to Gopalakrishnan _et al._ (2018)'s dataset described in Section 4.1

\begin{table} \begin{tabular}{l|l|r|r|r} \hline Level & Taxon & MW & \(R^{2}\) & **CAT** \\ & & \(p\)-value & difference & \(p\)-value \\ \hline Phylum & Bacteroidetes & \(<\mathbf{0.01}\) & 0.0502 & 0.072 \\ Phylum & Firmicutes & \(<\mathbf{0.01}\) & 0.0337 & \(\mathbf{0.035}\) \\ Class & Bacteroidia & \(<\mathbf{0.01}\) & 0.0503 & 0.072 \\ Class & Clostridia & \(<\mathbf{0.01}\) & 0.0304 & \(\mathbf{0.050}\) \\ Class & Mollicutes & \(\mathbf{0.01}\) & 0.0001 & 0.084 \\ Order & Bacteroidales & \(<\mathbf{0.01}\) & 0.0503 & 0.072 \\ Order & Clostridiales & \(<\mathbf{0.01}\) & 0.0303 & \(\mathbf{0.050}\) \\ Family & Micrococcaceae & \(\mathbf{0.01}\) & \(<0.0001\) & 0.071 \\ Family & Ruminococcaceae & \(\mathbf{0.03}\) & 0.0365 & \(<\mathbf{0.001}\) \\ Genus & _Faecalibacterium_ & \(\mathbf{0.01}\) & 0.0151 & \(\mathbf{0.005}\) \\ Genus & _Gardnerella_ & \(\mathbf{0.03}\) & \(<0.0001\) & 0.879 \\ Genus & _Peptoniphilus_ & 0.12 & \(<0.0001\) & 0.148 \\ Genus & _Phascolarctobacterium_ & \(\mathbf{0.01}\) & 0.0010 & \(\mathbf{0.029}\) \\ Genus & _Rothia_ & \(\mathbf{0.01}\) & \(<0.0001\) & 0.071 \\ Genus & _Ruminococcus_ & \(\mathbf{0.03}\) & 0.0096 & \(<\mathbf{0.001}\) \\ Species & _B. stercoris_ & \(\mathbf{0.03}\) & \(<0.0001\) & 0.866 \\ Species & _F. prausnitzii_ & \(\mathbf{0.01}\) & 0.0151 & \(\mathbf{0.005}\) \\ Species & _M. hungatei_ & 0.18 & \(<0.0001\) & \(\mathbf{0.038}\) \\ Species & _R. bromii_ & 0.08 & 0.0043 & \(\mathbf{0.002}\) \\ \hline \end{tabular} \end{table} Table 1: Level in the taxonomic tree, taxon, Mann-Whitney \(p\)-value, \(R^{2}\) difference before and after removing the candidate taxon, and \(p\)-value from **CAT** when applied to features identified by LEfSe in Gopalakrishnan _et al._ (2018).
### Survival outcomes

To demonstrate the application of **CAT** with the MiRKAT-S method for survival outcomes, we employed it to analyze the dataset from Spencer _et al._ (2021). This dataset included 163 subjects undergoing systemic therapy for melanoma that were profiled using 16S rRNA sequencing and followed for progression-free survival (PFS). Among these subjects, 86 progression events were observed, with a median PFS of 1.8 years. The microbiome profiling data included 3306 ASVs, corresponding to 346 unique taxa at the genus level or higher. In our case study, we focused on taxa found to be associated with treatment response by Spencer _et al._ (2021), including the phylum Firmicutes, class Clostridia, order Oscillospirales, family Ruminococcaceae, and genera _Faecalibacterium_ and _Ruminococcus_. We also tested the genera _Bifidobacterium_ and _Lactobacillus_, as these are popular in commercially available supplements and were tested as probiotic interventions as part of a pre-clinical experiment in the same study (Spencer _et al._, 2021). To apply **CAT**, we used Bray-Curtis and Jaccard distances with the MiRKAT-S method. In addition, we ran a univariate Cox model for each candidate feature for comparison. Table 2 displays the outcomes of our **CAT** method. Remarkably, **CAT** identifies features closer to the leaf nodes of the taxonomic tree as non-significant, as such features may not add explanatory information for the variability of the outcome. However, **CAT** finds statistical significance for higher-level taxa, encompassing the order Oscillospirales, the class Clostridia above it, and the phylum Firmicutes above the class. In contrast, when using Cox models for marginal tests, the finer-resolution taxonomic units are deemed significant.

## 5 Concluding remarks

To date, most existing methods for microbiome association testing have focused on either global or marginal testing. Here, we adopt a conditional testing framework, proposing the **CAT** method as a conditional association test using a leave-out approach. The leave-out idea is one of the most fundamental ideas in statistical testing, whose applications range from likelihood ratio testing to type III sum-of-squares analysis. **CAT** combines this classic idea with the flexibility of using various metrics and testing approaches designed for microbiome data. It is worth mentioning that though microbiome data motivated us to propose **CAT**, the method is applicable to a wide range of situations where non-Euclidean pairwise distances are used. In this paper, we only illustrate how to test the conditional association for one taxon; testing the effect of several taxa as a unit is also possible by changing leave-one-out to leave-multiple-out in Step 3 of the procedure. Although our simulation results show that **CAT** may identify fewer features than marginal tests, particularly at lower levels in the tree, we do not directly address multiple testing in this paper.
Commonly used multiplicity adjustments, such as the Bonferroni procedure or the Benjamini-Hochberg procedure, can be applied to \(p\)-values generated by **CAT**. However, the null hypotheses under testing in microbiome data sets are not independent. False discovery rate control in correlated conditional tests, particularly in the presence of a phylogenetic tree structure, is an area that we plan to address in the future. The results from the **CAT** method have clear real-world relevance. There is a growing effort to develop interventions aimed at reshaping the microbiome by administering "rationally designed" mixtures of bacterial strains (van der Lelie _et al._, 2021), which have been selected to confer potential benefit to the patient. Our method can identify features that are potentially influential conditional on the presence of other bacteria. This will help clinicians identify intervention targets more efficiently.

\begin{table} \begin{tabular}{l|l|r|r|r} \hline \hline Level & Taxon & Cox & \(R^{2}\) & **CAT** \\ & & \(p\)-value & difference & \(p\)-value \\ \hline Phylum & Firmicutes & 0.54 & 0.0011 & **0.0018** \\ Class & Clostridia & 0.46 & 0.0010 & **0.0033** \\ Order & Oscillospirales & 0.18 & 0.0007 & **0.0364** \\ Family & Ruminococcaceae & **0.04** & 0.0006 & 0.1526 \\ Genus & _Faecalibacterium_ & **0.02** & 0.0005 & 0.3135 \\ Genus & _Ruminococcus_ & 0.31 & \(<0.0001\) & 0.1526 \\ Genus & _Bifidobacterium_ & **0.02** & \(<0.0001\) & 0.3422 \\ Genus & _Lactobacillus_ & 0.58 & \(<0.0001\) & 0.2979 \\ \hline \hline \end{tabular} \end{table} Table 2: Level in the taxonomic tree, taxon, univariate Cox model \(p\)-value, \(R^{2}\) difference before and after removing the candidate taxon, and \(p\)-value from **CAT** when applied to features of interest from Spencer _et al._ (2021).

## Funding

KAD is partially supported by NIH P30CA016672, SPORE P50CA140388, CCTS TR000371 and CPRIT RP160693. CBP is partially supported by NIH R01 HL158796 and NIH/NCI CCSG P30CA016672. RRJ is partially supported by NIH R01 HL124112 and NIH R01 HL158796. YS is partially supported by NSF DMS 2310955.
2309.14520
New narrow resonance in the $e^+ e^- \to φη$ data by Belle collaboration
Fitting the recent $e^+e^-\to\phi\,\eta$ data by the Belle collaboration with a theoretical formula reveals, besides the dominant $\phi(1680)$ resonance, two narrow resonances: the expected $\phi(2170)$ resonance and an unexpected resonance with a mass of about 1851 MeV. Close proximity to the $X(1835)$ resonance suggests that the new resonance may be interpreted as the $p \bar p$ baryonium in an excited state. Follow-up analysis found the same resonance also in $e^+e^-\to\omega\,\eta$ data by the CMD-3 experiment.
Peter Lichard
2023-09-25T20:28:13Z
http://arxiv.org/abs/2309.14520v2
# New narrow resonance in the \(e^{+}e^{-}\to\phi\,\eta\) data by Belle collaboration ###### Abstract Fitting the recent \(e^{+}e^{-}\to\phi\,\eta\) data by the Belle collaboration with a theoretical formula reveals, besides the dominant \(\phi(1680)\) resonance, two narrow resonances: the expected \(\phi(2170)\) resonance and an unexpected resonance with a mass of about 1851 MeV. Close proximity to the \(X(1835)\) resonance suggests that the new resonance may be interpreted as the \(p\bar{p}\) baryonium in an excited state. Follow-up analysis found the same resonance also in \(e^{+}e^{-}\to\omega\,\eta\) data by the CMD-3 experiment. Recently, a study of the \(e^{+}e^{-}\to\eta\,\phi\) process with the Belle detector at the KEKB asymmetric-energy \(e^{+}e^{-}\) collider has been published [1]. The experimenters employed the Initial State Radiation method and covered the \(e^{+}e^{-}\) invariant energy range from 1.56 to 3.96 GeV in 120 bins. The published values of the \(e^{+}e^{-}\to\eta\,\phi\) cross section are accompanied by statistical and systematic errors. As the members of the Belle Collaboration stated in the Introduction, one of the experiment's goals was to study the properties of the \(\phi(2170)\) resonance. This resonance was discovered in 2006 by the BABAR Collaboration at the Stanford Linear Accelerator Center in the \(e^{+}e^{-}\to\phi\,f_{0}(980)\) reaction [2] and later confirmed by several experiments in various processes. Of those, we list two that confirmed the \(\phi(2170)\) resonance in the \(e^{+}e^{-}\) annihilation into the \(\eta(547)\phi(1020)\) system: BABAR [3] and the BESIII experiment [4] at the Beijing Electron Positron Collider. When analyzing their cross-section data, the Belle collaboration first fit them by assuming one resonance. They got the parameters of the dominant \(\phi(1680)\) resonance correctly, see Table 1 in [1], even if the quality of the fit was not excellent [\(\chi^{2}/{\rm NDF}=85/60\), which translates to a Confidence Level (CL) of 2%]. Then, they used a phenomenological fitting procedure tailored for two resonances to search for signs of the \(\phi(2170)\) resonance. Again, the parameters of the dominant \(\phi(1680)\) resonance were varied, whereas those of the other resonance were fixed at the values obtained for \(\phi(2170)\) by the BESIII Collaboration [4]. No significant \(\phi(2170)\) signal was found. To investigate the reason for this conundrum, we decided to perform our own analysis of the Belle [1] cross-section data based on a theoretical formula capable of handling, in principle, any number of resonances. For the description of the electron-positron annihilation into the vector meson \(\phi\) and pseudoscalar meson \(\eta\), we use a Vector Meson Dominance (VMD) model based on the Feynman diagram depicted in Fig. 1 and the interaction Lagrangian \[{\cal L}_{V\phi\eta}(x)=\frac{g_{V\phi\eta}}{m_{V}}\epsilon_{\mu\nu\rho\sigma} \partial^{\mu}V^{\nu}(x)\,\partial^{\rho}\phi^{\sigma}(x)\,\eta(x)\,,\] where particle symbols denote the corresponding quantum fields. The \(\gamma V\) junction is parametrized as \(eM_{V}^{2}/g_{V}\) in analogy with the \(\gamma\rho^{0}\) junction \(eM_{\rho^{0}}^{2}/g_{\rho}\). Further, we define the dimensionless quantity \(r=g_{V\phi\eta}/g_{V}\).

Figure 1: Feynman diagram defining our VMD model
When we consider several intermediate vector mesons \(V_{i}\), the \(e^{+}e^{-}\to\phi\,\eta\) cross section comes out as \[\sigma=\frac{\pi\alpha^{2}}{6}\frac{\lambda^{3/2}(s)(s+2z)}{s^{3}\sqrt{s(s-4z)}}\left|\sum_{i=1}^{n}\frac{r_{i}M_{i}e^{{\rm i}\delta_{i}}}{s-M_{i}^{2}+{\rm i}M_{i}\Gamma_{i}}\right|^{2}\,, \tag{1}\] where \(x=m_{\phi}^{2}\), \(y=m_{\eta}^{2}\), \(z=m_{e}^{2}\), \(\lambda(s)=s^{2}+x^{2}+y^{2}-2sx-2sy-2xy\), and \(\delta_{1}=0\). Performing the fits with one or two resonances, we got similar results to those of the Belle Collaboration. The one-resonance fit yielded a resonance with parameters close to those listed by the Particle Data Group [5] for the \(\phi(1680)\) [6]. When doing the fit with two resonances, we did not fix the mass and width of one of them to the expected \(\phi(2170)\) values, which the Belle collaboration did. Even thus, we got a clear signal of only one resonance, namely \(\phi(1680)\). The parameters of the second one do not correspond to any conceivable resonance. They reflect the effort of the minimization program [7] to bring the theoretical curve closer to the data in the broad region around 1920 MeV. However, when we allowed three resonances, the situation drastically changed. The quality of the fit increased to CL=90.4%, and two narrow resonances appeared accompanying the dominant \(\phi(1680)\) resonance; see Fig. 2 and Table 1. The one with the higher mass lies in the region where the \(\phi(2170)\) resonance is expected. Its mass is higher than the PDG average but agrees with the three BESIII measurements [8]. Here, \(\phi(2170)\) manifests itself as a sudden drop of the excitation curve, not as a peak as in some other experiments. The width we found is smaller than the PDG average. However, it agrees with those obtained in several experiments listed in [5]. The statistical significance of the newly found resonance with a mass of \((1850.7\pm 5.3)\) MeV and width of \((25\pm 35)\) MeV is low. There is a possibility that it is not a true resonance but a mere product of statistical fluctuation in data. To investigate this issue, we use the following "look everywhere" method: the \(\chi^{2}\) minimization procedure is repeated many times with starting values of resonances 1 and 3 kept at values from Table 1. The starting value of \(M_{2}\) is randomly generated in the interval (1600, 2900) MeV, and that of \(\Gamma_{2}\) in the interval (10, 40) MeV. The other starting values are chosen at \(r_{2}=0\) and \(\delta_{2}=\pi\). After the minimization procedure, the observed new "resonances" were grouped into clusters with masses within a narrow interval (we chose a width of 12 MeV). After repeating the procedure a thousand times, we identify 20 clusters (some with only a few entries), of which the most populated are shown in Table 2. Judging from the number of entries in clusters and the mean values of \(\chi^{2}\), the behavior of the excitation curve around 1851 MeV satisfies the resonance requirement better than other parts of the spectrum outside the two established resonances. Also, the extremely narrow widths of the other "resonances" shown in Table 2 indicate that they are products of statistical fluctuations. All this makes resonance 2 in the rightmost column of Table 1 the only plausible candidate for the true resonance accompanying \(\phi(1680)\) and \(\phi(2170)\) in the Belle data. Of course, the statistical fluctuation origin of a resonance there cannot be completely ruled out.

\begin{table} \begin{tabular}{c c c c} \hline \hline \(\overline{M}\) (MeV) & \(\overline{\Gamma}\) (MeV) & \(\overline{\chi^{2}}\) & n \\ \hline 1850.8 & 21.7 & 47.2 & 247 \\ 2734.5 & 0.9 & 54.3 & 117 \\ 2529.4 & 1.9 & 55.2 & 106 \\ 2396.2 & 5.7 & 58.4 & 77 \\ \hline \hline \end{tabular} \end{table} Table 2: Mean mass, width, and \(\chi^{2}\) together with the number of resonances in the most populated clusters after a thousand randomly generated searches.

\begin{table} \begin{tabular}{l c c c} \hline \hline & 1 resonance & 2 resonances & 3 resonances \\ \hline \(r_{1}\) & 0.3761(94) & 0.291(29) & 0.360(14) \\ \(M_{1}\) (MeV) & 1650.5\(\pm\)4.1 & 1661.8\(\pm\)6.0 & 1656.8\(\pm\)4.9 \\ \(\Gamma_{1}\) (MeV) & 158.7\(\pm\)5.3 & 125\(\pm\)12 & 150.8\(\pm\)7.0 \\ \(\Sigma_{1}\) & 40 \(\sigma\) & 10 \(\sigma\) & 25 \(\sigma\) \\ \(r_{2}\) & & 0.050(32) & 0.0077(43) \\ \(M_{2}\) (MeV) & & 1921\(\pm\)86 & 1850.7\(\pm\)5.3 \\ \(\Gamma_{2}\) (MeV) & & 290\(\pm\)230 & 25\(\pm\)35 \\ \(\delta_{2}\) & & 0.8\(\pm\)1.2 & 5.59(44) \\ \(\Sigma_{2}\) & & 1.5 \(\sigma\) & 1.7 \(\sigma\) \\ \(r_{3}\) & & & 0.0044(22) \\ \(M_{3}\) (MeV) & & & 2215.7\(\pm\)8.3 \\ \(\Gamma_{3}\) (MeV) & & & 35\(\pm\)23 \\ \(\delta_{3}\) & & & 2.59(39) \\ \(\Sigma_{3}\) & & & 2.0 \(\sigma\) \\ \hline \(\chi^{2}\)/NDF & 83.6/69 & 58.5/65 & 47.1/61 \\ CL (\%) & 11.1 & 70.2 & 90.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of the fits to the Belle data [1] up to 3 GeV based on Eq. (1). The statistical significance of the \(i\)th resonance is denoted as \(\Sigma_{i}\).

Figure 2: The excitation curve obtained by the Belle collaboration [1] and our fit to it using formula (1) with three resonances. Only statistical errors of data are shown and were used in fitting. The parameters of the fit are provided in Table 1.
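As a concrete illustration, the following Python sketch evaluates the fitting model of Eq. (1) at the three-resonance best-fit central values of Table 1 and defines the \(\chi^{2}\) one would minimize (e.g., with scipy.optimize). It is a sketch, not the analysis code: results are in natural units (GeV\(^{-2}\); the conversion to nb is omitted) and the function names are illustrative.

```python
import numpy as np

ALPHA = 1 / 137.036
X, Y, Z = 1.019461**2, 0.547862**2, 0.000511**2   # m_phi^2, m_eta^2, m_e^2 (GeV^2)

def cross_section(s, params):
    """Equation (1): kinematic factor times a coherent sum of three
    Breit-Wigner amplitudes; params = (r1, M1, G1, r2, M2, G2, d2, r3, M3, G3, d3),
    with delta_1 fixed to 0 and masses/widths in GeV."""
    r = [params[0], params[3], params[7]]
    M = [params[1], params[4], params[8]]
    G = [params[2], params[5], params[9]]
    d = [0.0, params[6], params[10]]
    lam = s**2 + X**2 + Y**2 - 2*s*X - 2*s*Y - 2*X*Y
    kin = (np.pi * ALPHA**2 / 6) * lam**1.5 * (s + 2*Z) / (s**3 * np.sqrt(s*(s - 4*Z)))
    amp = sum(ri * Mi * np.exp(1j * di) / (s - Mi**2 + 1j * Mi * Gi)
              for ri, Mi, Gi, di in zip(r, M, G, d))
    return kin * np.abs(amp)**2

def chi2(params, s_data, sigma_data, err_data):
    """Fit objective over the 11 free parameters (data arrays from [1])."""
    return np.sum(((cross_section(s_data, params) - sigma_data) / err_data)**2)

# evaluate the model at the Table 1 (3-resonance) central values
best = [0.360, 1.6568, 0.1508, 0.0077, 1.8507, 0.025, 5.59, 0.0044, 2.2157, 0.035, 2.59]
print(cross_section(np.linspace(1.6, 3.0, 5)**2, best))
```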
When we accept the possibility that the new resonance is a real effect, we should think about its origin. The mass of (1850.7\(\pm\)5.3) MeV and the width of (25\(\pm\)35) MeV point toward the \(X(1835)\) resonance. However, the quantum numbers \(J^{PC}=0^{-+}\) of the latter prevent it from being produced in the direct channel of the \(e^{+}e^{-}\) annihilation, which requires \(J^{PC}=1^{--}\). In the listing dealing with the \(X(1835)\), the PDG [5] mentions the possibility that this object is a superposition of two states differing in widths. More specifically, the results of the BESIII experiment [9] "suggest the existence of either a broad state around 1.85 GeV/\(c^{2}\) with strong coupling to the \(p\bar{p}\) final states or a narrow state just below the \(p\bar{p}\) mass threshold". The latter's existence was proposed long ago [10] as a \(p\bar{p}\) state bound by strong interactions (hereafter, we call it protonium). The idea was later elaborated in several papers. The quantum numbers of the \(X(1835)\) suggest that its protonium component is in the \(L=S=J=0\) state. Because of the large binding energy (BE) [11], we may expect protonium's excited states to exist below the \(2m_{p}\) threshold.

\begin{table} \begin{tabular}{c c c c} \hline \hline \(\overline{M}\) (MeV) & \(\overline{\Gamma}\) (MeV) & \(\overline{\chi^{2}}\) & n \\ \hline 1850.8 & 21.7 & 47.2 & 247 \\ 2734.5 & 0.9 & 54.3 & 117 \\ 2529.4 & 1.9 & 55.2 & 106 \\ 2396.2 & 5.7 & 58.4 & 77 \\ \hline \hline \end{tabular} \end{table} Table 2: Mean mass, width, and \(\chi^{2}\), together with the number of resonances in the most populated clusters after a thousand randomly generated searches.

\begin{table} \begin{tabular}{l c c c} \hline \hline & 1 resonance & 2 resonances & 3 resonances \\ \hline \(r_{1}\) & 0.3761(94) & 0.291(29) & 0.360(14) \\ \(M_{1}\) (MeV) & 1650.5\(\pm\)4.1 & 1661.8\(\pm\)6.0 & 1656.8\(\pm\)4.9 \\ \(\Gamma_{1}\) (MeV) & 158.7\(\pm\)5.3 & 125\(\pm\)12 & 150.8\(\pm\)7.0 \\ \(\Sigma_{1}\) & 40 \(\sigma\) & 10 \(\sigma\) & 25 \(\sigma\) \\ \(r_{2}\) & & 0.050(32) & 0.0077(43) \\ \(M_{2}\) (MeV) & & 1921\(\pm\)86 & 1850.7\(\pm\)5.3 \\ \(\Gamma_{2}\) (MeV) & & 290\(\pm\)230 & 25\(\pm\)35 \\ \(\delta_{2}\) & & 0.8\(\pm\)1.2 & 5.59(44) \\ \(\Sigma_{2}\) & & 1.5 \(\sigma\) & 1.7 \(\sigma\) \\ \(r_{3}\) & & & 0.0044(22) \\ \(M_{3}\) (MeV) & & & 2215.7\(\pm\)8.3 \\ \(\Gamma_{3}\) (MeV) & & & 35\(\pm\)23 \\ \(\delta_{3}\) & & & 2.59(39) \\ \(\Sigma_{3}\) & & & 2.0 \(\sigma\) \\ \hline \(\chi^{2}\)/NDF & 83.6/69 & 58.5/65 & 47.1/61 \\ CL (\%) & 11.1 & 70.2 & 90.4 \\ \hline \hline \end{tabular} \end{table} Table 1: Parameters of the fits to the Belle data [1] up to 3 GeV based on Eq. (1). The statistical significance of the \(i\)th resonance is denoted as \(\Sigma_{i}\).

Figure 2: The excitation curve obtained by the Belle collaboration [1] and our fit to it using formula (1) with three resonances. Only statistical errors of the data are shown and were used in the fitting. The parameters of the fit are provided in Table 1.

Those of them with quantum numbers \(L=0\), \(S\!=\!J\!=\!1\) or \(L\!=\!2,S\!=\!J\!=\!1\) provide protonium with \(J^{PC}=1^{--}\), which can appear in the intermediate state of the \(e^{+}e^{-}\) annihilation. Guided by this, we suggest that the new narrow resonance in the Belle data [1] is an excited state of the protonium. The situation is similar to the strongly bound kaoniums (\(K^{+}K^{-}\)) and (\(K^{0}\bar{K^{0}}\)), the ground states of which cannot be produced in the direct channel of \(e^{+}e^{-}\) annihilation.
Their excited states with \(L=1\) were detected as subthreshold poles in the \(e^{+}e^{-}\to K^{+}K^{-}\) and \(e^{+}e^{-}\to K^{0}_{S}K^{0}_{L}\) processes, respectively [12]. The BE of the excited protonium, calculated from its mass in Table 1, comes out as \(2m_{p}-M_{2}\approx 1876.5~\mathrm{MeV}-1850.7~\mathrm{MeV}\approx 26\) MeV. For comparison, let us recall that the BE of the excited kaoniums was estimated at 10 MeV [12]. Salnikov and Milstein [13] have recently predicted a bound state of \(\Lambda_{c}\) and its antiparticle with a BE of 38 MeV. We have started scanning other sets of \(e^{+}e^{-}\) annihilation data. Up to now, we have found an indication of the excited protonium in the \(e^{+}e^{-}\to\omega\,\eta\) data of the CMD-3 experiment [14] at the Budker Institute of Nuclear Physics in Novosibirsk; see Figure 3 and Table 3. The excited-protonium mass and width (1847\(\pm\)16 MeV, 52\(\pm\)31 MeV) agree with those from the Belle data [1] (1850.7\(\pm\)5.3 MeV, 25\(\pm\)35 MeV). Unfortunately, its statistical significance is only \(0.7\,\sigma\). **To conclude:** In this work, we indicated the possible existence of a resonance with mass and width resembling those of the X(1835) resonance but with different quantum numbers \(J^{PC}=1^{--}\). It may be interpreted as an excited state of the protonium, a strongly bound \(p\bar{p}\) system, widely considered one of the two components of the X(1835) resonance. Unfortunately, the low statistical significance does not allow us to claim evidence (3\(\sigma\)) for the new resonance. Additional confirmation is needed, by analyzing existing data or by a new measurement. ###### Acknowledgements. I thank Filip Blaschke, Josef Juran, and Santu Mondal for the useful discussions.
2309.11470
Model-free tracking control of complex dynamical trajectories with machine learning
Nonlinear tracking control enabling a dynamical system to track a desired trajectory is fundamental to robotics, serving a wide range of civil and defense applications. In control engineering, designing tracking control requires complete knowledge of the system model and equations. We develop a model-free, machine-learning framework to control a two-arm robotic manipulator using only partially observed states, where the controller is realized by reservoir computing. Stochastic input is exploited for training, which consists of the observed partial state vector as the first and its immediate future as the second component so that the neural machine regards the latter as the future state of the former. In the testing (deployment) phase, the immediate-future component is replaced by the desired observational vector from the reference trajectory. We demonstrate the effectiveness of the control framework using a variety of periodic and chaotic signals, and establish its robustness against measurement noise, disturbances, and uncertainties.
Zheng-Meng Zhai, Mohammadamin Moradi, Ling-Wei Kong, Bryan Glaz, Mulugeta Haile, Ying-Cheng Lai
2023-09-20T17:10:10Z
http://arxiv.org/abs/2309.11470v1
# Model-free tracking control of complex dynamical trajectories with machine learning ###### Abstract Nonlinear tracking control enabling a dynamical system to track a desired trajectory is fundamental to robotics, serving a wide range of civil and defense applications. In control engineering, designing tracking control requires complete knowledge of the system model and equations. We develop a model-free, machine-learning framework to control a two-arm robotic manipulator using only partially observed states, where the controller is realized by reservoir computing. Stochastic input is exploited for training, which consists of the observed partial state vector as the first and its immediate future as the second component so that the neural machine regards the latter as the future state of the former. In the testing (deployment) phase, the immediate-future component is replaced by the desired observational vector from the reference trajectory. We demonstrate the effectiveness of the control framework using a variety of periodic and chaotic signals, and establish its robustness against measurement noise, disturbances, and uncertainties. ## I Introduction The traditional field of controlling chaotic dynamical systems mostly deals with the problem of utilizing small perturbations to transform a chaotic trajectory into a desired periodic one [1]. The basic principle is that the dynamically invariant set that generates chaotic motions contains an infinite number of unstable periodic orbits. For any desired system performance, it is often possible to find an unstable periodic orbit whose motion would produce the required behavior. The problem then becomes one of stabilizing the system's state-space or phase-space trajectory around the desired unstable periodic orbit, which can be achieved through linear control in the vicinity of the orbit, thereby requiring only small control perturbations. The control actions can be calculated from the locations and the eigenvalues of the target orbit, which are often experimentally accessible through a measured time series, without the need to know the actual system equations [1; 2; 3; 4]. Controlling chaos can thus be done in a model-free, entirely data-driven manner, and the control is most effective when the chaotic behavior is generated by a low-dimensional invariant set, e.g., one with one unstable dimension or one positive Lyapunov exponent. However, controlling high-dimensional dynamical systems and complex nonlinear dynamical networks remains an active area of research [5; 6; 7]. The goal of tracking control is to design a control law to enable the output of a dynamical system (or a process) to track a given reference signal. For linear feedback systems, tracking control can be mathematically designed with a rigorous guarantee of stability [8]. However, nonlinear tracking control is more challenging, especially when the goal is to make a system track a complex signal. In robotics, for instance, a problem is to design control actions to make the tip of a robotic arm, or the end effector, follow a complicated or chaotic trajectory. In control engineering, designing tracking control typically requires complete knowledge of the system model and equations. Existing methods for this include feedback linearization [9], back-stepping control [10], Lyapunov redesign [11], and sliding mode control [12].
These classic nonlinear control methods may face significant challenges when dealing with high-dimensional states, strong nonlinearity, or time delays [13; 14], especially when the system model is inaccurate or unavailable. Developing model-free and purely data-driven nonlinear control methods is thus at the forefront of research. In principle, data-driven control has the advantage that the controller is able to adjust in real time to new dynamics under uncertain conditions, but existing controllers are often not sufficiently fast "learners" to accommodate quick changes in the system dynamics or control objectives [15]. In this regard, tracking a complex or chaotic trajectory requires that the controller be a "fast responder," as the target state can change rapidly. At present, developing model-free and fully data-driven control for fast tracking of arbitrary trajectories, whether simple or complex (ordered or chaotic), remains a challenging problem. This paper aims to address this challenge by leveraging recent advances in machine learning. Recent years have witnessed a rapid expansion of machine learning with transformative impacts across science and engineering. This progress has been fueled by the availability of vast quantities of data in many fields as well as by commercial success in technology and marketing [15]. In general, machine learning is designed to generate models of a system from data. Machine-learning control is of particular relevance to our work, where a machine-learning algorithm is applied to control a complex system and generate an effective control law that maps the desired system output to the input. More specifically, for complex control problems where an accurate model of the system is not available, machine learning can leverage experience and data to generate an effective controller. Earlier works on machine-learning control concentrated on discrete-time systems, but the past few years have seen growing efforts in incorporating machine learning into control theory for continuous-time systems in various applications [16; 17; 18; 19]. There are four types of problems associated with machine-learning control: control parameter identification, regression-based control design of the first kind, regression-based control design of the second kind, and reinforcement learning. For control parameter identification, the structure of the control law is given but the parameters are unknown, an example of which is developing genetic algorithms for optimizing the coefficients of a classical controller [e.g., PID (proportional-integral-derivative) control or discrete-time optimal control [20; 21]]. For regression-based control design of the first kind, the task is to use machine learning to generate an approximate nonlinear mapping from sensor signals to actuation commands, an example of which is neural-network enabled computation of sensor feedback from a known full state feedback [22]. For regression-based control design of the second kind, machine learning is exploited to identify arbitrary nonlinear control laws that minimize the cost function of the system. In this case, it is not necessary to know the model, the control law structure, or the optimizing actuation command, and optimization is solely based on the measured control performance (cost function), for which genetic programming represents an effective regression technique [23; 24].
For reinforcement learning, the control law can be continually updated over measured performance changes based on rewards [25; 26; 27; 28; 29; 30; 31; 32]. It should be noted that, historically, reinforcement-learning control has not always been model free. For instance, an early work [33] proposed a model-based learning method for nonlinear control, where the basic idea is to decompose a complex task into multiple domains in space and time based on the predictability of the dynamics of the environment. A framework was developed [34; 35] to determine both the feedback and feed-forward components of the control input simultaneously, enabling reinforcement learning to solve the tracking problem without requiring complete knowledge of the system dynamics and leading to the on- and off-policy algorithms [36]. Since our aim is to achieve tracking control of complex and chaotic trajectories, a natural choice of the machine-learning framework is reservoir computing [37; 38; 39], which has been demonstrated to be powerful for model-free prediction of nonlinear and chaotic systems [40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53]. The core of reservoir computing is a recurrent neural network (RNN) with low training cost, where regularized linear regression is sufficient for training. Reservoir computing, shortly after its invention, was exploited to control dynamical systems [54], where an inverse model was trained to map the present state and the desired state of the system to the control signal (action). Subsequently, the trained reservoir computer was exploited as a model-free nonlinear feedback controller [55] as well as for detecting unstable periodic orbits and stabilizing the system about a desired orbit [56]. Reservoir computing and its variant echo state Gaussian process [57] were also used in model predictive control of unknown nonlinear dynamical systems [58; 59], serving as replacements for the traditional recurrent neural-network models at low computational cost. More recently, deep reservoir networks were proposed for controlling chaotic systems [60]. In this paper, we tackle the challenge of model-free and data-driven nonlinear tracking of various reference trajectories, including complex chaotic trajectories, with an emphasis on their potential applications in robotics. In particular, we examine the case of a two-arm robotic manipulator with the control objective of tracking arbitrary trajectories while using only partially observed states, denoted as vector \(\mathbf{y}(t)\). Our control framework has the following three features: (1) requirement of only partial state observation for both training and testing, (2) a machine-learning training scheme that involves the observed vectors at two consecutive time steps: \(\mathbf{y}(t)\) and \(\mathbf{y}(t+dt)\), and (3) use of a stochastic signal as the input control signal for training. With respect to feature (1), it may be speculated that the classical Takens delay-coordinate embedding methodology could be used to construct the full phase space from partial observation. However, in this case, the reconstructed state is equivalent to the original system only in a topological sense: there is no exact state correspondence between the reconstructed and the original dynamical systems. For reservoir-computing based prediction and control tasks, such an exact correspondence is required. To our knowledge, achieving tracking control based on partial state observation is novel.
In terms of features (2) and (3), we note a previous work [55] on machine-learning stabilization of linear and low-dimensional nonlinear dynamical systems, where the phase-space region in which control is realized is localized. This was effectively an online learning approach. In general, online learning algorithms suffer from difficulties such as instability, the modeling complexity required for nonlinear control, and computational inefficiency. For example, it is difficult for online learning to capture intricate nonlinear dynamics, causing instability during control. Trajectory divergence is another common problem associated with online learning control, where sudden and extreme changes in the state can occur. In fact, as the dimension and complexity of the system to be controlled increase, online learning algorithms tend to fail. In contrast, offline learning is computationally extremely efficient and allows for more comprehensive and complex model training with minimum risk of trajectory divergence through repeated training. Our tracking framework entails following a dynamic and time-varying (even chaotic) trajectory in the whole phase space, where the offline controller can not only respond to disturbances and system variations but also adjust the control inputs to make the system output follow a continuously changing reference signal. As we will demonstrate, our control scheme brings these features together to enable continuous tracking of arbitrary complex trajectories. ## Results A more detailed explanation of the three features and their combination to solve the complex trajectory-tracking problem is as follows. First, existing works on reservoir-computing based controllers relied on full state measurements [54, 55, 56, 58, 59, 60], but our controller requires measuring only a partial set of the state variables. Second, as shown in Fig. 1(a), during the training phase, the input to the machine-learning controller consists of two components: the observation vectors at two consecutive time steps, \(\mathbf{y}(t)\) and \(\mathbf{y}(t+dt)\). That is, at any time step \(t\), the second vector is the state of the observation vector in the immediate future. This input configuration offers several advantages, which become evident in the testing phase, as shown in Fig. 1(b). After the machine-learning controller has been trained, the testing input consists of the observation vector \(\mathbf{y}(t)\) and the desired observation vector \(\mathbf{y}_{\mathrm{d}}(t)\), calculated from the reference trajectory to be tracked. The idea is that, during testing or deployment, the immediate future state of the observation is manipulated to match the desired vector from the trajectory. This way, the output control signal from the machine-learning controller will make the end effector of the robotic manipulator trace out the desired reference trajectory precisely. The third feature is the choice of the control signal for training. Taking advantage of the fundamental randomness underlying any chaotic trajectory, we conduct the training via a completely stochastic control input, as shown in Fig. 1(c), where the reference trajectory generated by such a control signal through the underlying dynamical process is a random walk. Compared with a deterministic chaotic trajectory with short-term predictability, the random-walk trajectory is more complex, as its movements are completely unpredictable.
As a result, the machine-learning controller trained with a stochastic signal will possess a level of complexity sufficient for controlling or overpowering any deterministic chaotic trajectory. In general, our machine-learning controller so trained is able to learn a mapping between the state error and a suitable control signal for any reference trajectory. In the testing phase, given the current and desired states, the machine-learning controller generates the control signal that enables the robotic manipulator to track any desired complex reference trajectory, as illustrated in Fig. 1(d). We demonstrate the working and power of our machine-learning tracking control using a variety of periodic and chaotic trajectories, and establish its robustness against measurement noise, disturbances, and uncertainties. While our primary machine-learning scheme is reservoir computing, we also test the architecture of feed-forward neural networks and demonstrate that it works as an effective tracking controller, albeit with higher computational time complexity. Overall, our work provides a powerful model-free, data-driven control framework that relies only on partial state observation and can successfully track complex or chaotic trajectories. ### Principle of machine-learning based control An overview of the working principle of our machine-learning based tracking control is as follows. Consider a dynamical process to be controlled, e.g., a two-arm robotic system, as indicated in the green box on the left in Fig. 2. The objective of control is to make the end effector, which is located at the tip of the outer arm, track a complex trajectory. Let \(\mathbf{x}\in\mathbb{R}^{D}\) represent the full, \(D\)-dimensional state space of the process. An observer has access to part of the full state space and produces a \(D^{\prime}\)-dimensional measurement vector \(\mathbf{y}\), where \(D^{\prime}<D\). A properly selected and trained machine-learning scheme takes \(\mathbf{y}\) as its input and generates a low-dimensional control signal \(\mathbf{u}(t)\in\mathbb{R}^{D^{\prime\prime}}\) (e.g., two respective torques applied to the two arms), where \(D^{\prime\prime}\leq D^{\prime}\), to achieve the control objective. The workings of our control scheme can be understood in terms of the following three essential components: (1) a mathematical description of the dynamical process and the observables (**Methods**), (2) a physical description of how to obtain the control signals from the observables (known as inverse dynamics - **Methods**) and (3) the machine-learning scheme (Supplementary Note 1). The state variable of the two-joint robot-arm system is eight-dimensional: \(\mathbf{x}\equiv[C_{x},C_{y},q_{1},q_{2},\dot{q}_{1},\dot{q}_{2},\ddot{q}_{1}, \ddot{q}_{2}]^{T}\), where \(C_{x}\) and \(C_{y}\) are the Cartesian coordinates of the end effector, and \(q_{i}\), \(\dot{q}_{i}\) and \(\ddot{q}_{i}\) are the angular position, angular velocity and angular acceleration of arm \(i\) (\(i=1,2\)). The measurement vector is four-dimensional: \(\mathbf{y}\equiv[C_{x},C_{y},\dot{q}_{1},\dot{q}_{2}]^{T}\). A remarkable feature of our framework is that a purely stochastic signal can be conveniently used for training. As illustrated in Fig. 1(c), the torques \(\tau_{1}(t)\) and \(\tau_{2}(t)\) applied to the two arms, respectively, are taken to be stochastic signals from a uniform distribution, which produce a random-walk type of trajectory of the end effector.
The control input for training is \(\mathbf{u}(t)=[\tau_{1}(t),\tau_{2}(t)]^{T}\), as shown in Fig. 3(a). To ensure a continuous control input, we use a Gaussian filter to smooth the noisy input data. With the control signal, the forward model Eq. (13) (in **Methods**) produces the state vector \(\mathbf{x}(t)\) and the observer generates the vector \(\mathbf{y}(t)\). The observed vector \(\mathbf{y}(t)\) and its time-advanced version \(\mathbf{y}(t+dt)\) constitute the input to the reservoir computing machine that generates a control signal \(\mathbf{O}(t)\) as the output, leading to the error signal \(\mathbf{e}(t)=\mathbf{O}(t)-\mathbf{u}(t)\) as the loss function for training the neural network. A well-trained reservoir can then be tested or deployed to generate any desired control signal, as illustrated in Fig. 3(b). In particular, during the testing phase, the input to the reservoir computer consists of the observed vector \(\mathbf{y}(t)\) and the desired vector \(\mathbf{y}_{\mathrm{d}}(t)\) characterized by the two Cartesian coordinates of the reference trajectory of the end effector and the resulting angular velocities of the two arms. Note that, given an arbitrary reference trajectory \(\{C_{x}(t),C_{y}(t)\}\), the two angular velocities can be calculated (extrapolated) from Eqs. (8) and (9) (in **Methods**). The output of the reservoir computing machine is the two required torques \(\tau_{1}(t)\) and \(\tau_{2}(t)\) that drive the two-arm system so that the end effector traces out the desired reference trajectory. Training. The detailed structure of the data and the dynamical variables associated with the training process is described as follows. The training phase is divided into a number of uncorrelated episodes, each of length \(T_{\mathrm{ep}}\), which defines the resetting time. At the start of each episode, the state variables, including \([\dot{q}_{1},\dot{q}_{2},\ddot{q}_{1},\ddot{q}_{2}]\), along with the controller state are reset. The initial angular positions \(q_{1}\) and \(q_{2}\) are randomly chosen in their defined ranges, respectively. For each episode, the process's control input is stochastic for a time duration of \(T_{\mathrm{ep}}\), generating a torque matrix of dimension \(2\times T_{\mathrm{ep}}\), as illustrated in Fig. 4. For the same time duration, the state \(\mathbf{x}\) of the dynamical process and the observed state \(\mathbf{y}\) can be expressed as an \(8\times T_{\mathrm{ep}}\) and a \(4\times T_{\mathrm{ep}}\) matrix, respectively. At each time step \(t\), the input to the reservoir computing machine, the concatenation of \(\mathbf{y}(t)\) and \(\mathbf{y}(t+dt)\), is an \(8\times 1\) vector. The neural network learns to generate a control input that takes the process's output from \(\mathbf{y}(t)\) to \(\mathbf{y}(t+dt)\) so as to satisfy the tracking goal. The resulting trajectory of the end effector of the process, due to the stochastic input torques, is essentially a random walk. To ensure that the random walk covers as much of the state space as possible, the training length and machine-learning parameters must be appropriately chosen. Testing. In the testing phase, the trained neural network inverts the dynamics of the process. In particular, given the current and desired output, the neural network generates the control signal to drive the system's output from \(\mathbf{y}(t)\) to \(\mathbf{y}(t+dt)\) while minimizing the error between \(\mathbf{y}(t+dt)\) and \(\mathbf{y}_{\mathrm{d}}(t+dt)\).
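To make the training stage concrete, here is a minimal, self-contained Python sketch of a reservoir controller trained on smoothed stochastic torques. The hyperparameter values anticipate those quoted in the next subsection; the array `y_seq` is a synthetic stand-in for the measurements that would be produced by driving the true arm dynamics, Eq. (13), with `u_seq`, and all names are our own illustrative choices rather than the authors' released code.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(seed=0)

# Reservoir hyperparameters (values quoted in the text)
N_r, alpha, rho, gamma_in = 200, 0.84, 0.76, 0.76
beta, p, w_b = 7.5e-4, 0.53, 2.0
dim_y, dim_u = 4, 2

# Sparse random reservoir, rescaled to spectral radius rho
A = (rng.random((N_r, N_r)) < p) * rng.uniform(-1, 1, (N_r, N_r))
A *= rho / np.max(np.abs(np.linalg.eigvals(A)))
W_in = gamma_in * rng.uniform(-1, 1, (N_r, 2 * dim_y))
b = w_b * rng.uniform(-1, 1, N_r)

# Smoothed stochastic training torques (random-walk end-effector motion);
# y_seq below is a synthetic placeholder for the observed partial states.
T_ep = 8000
u_seq = gaussian_filter1d(rng.uniform(-1, 1, (T_ep, dim_u)), sigma=10, axis=0)
y_seq = np.cumsum(0.01 * rng.standard_normal((T_ep, dim_y)), axis=0)

# Drive the reservoir with [y(t); y(t+dt)] and record its states
r, R = np.zeros(N_r), []
for t in range(T_ep - 1):
    inp = np.concatenate([y_seq[t], y_seq[t + 1]])   # 8-dim training input
    r = (1 - alpha) * r + alpha * np.tanh(A @ r + W_in @ inp + b)
    R.append(r.copy())
R, U = np.asarray(R), u_seq[:-1]

# Ridge-regression readout mapping reservoir state to control torque
W_out = np.linalg.solve(R.T @ R + beta * np.eye(N_r), R.T @ U)

# Deployment: replace y(t+dt) by the desired y_d(t) from the reference
# trajectory and apply u(t) = W_out.T @ r(t) to the arm at every step.
```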
We shall demonstrate that our machine-learning controller is capable of tracking arbitrarily complicated trajectories, especially a variety of chaotic trajectories. With a reservoir controller and the inverse model, our tracking-control framework is able to learn the mapping between the current and desired positions of the end effector and deliver a proper control signal, for a given reference trajectory. For demonstration, we use 15 different types of reference trajectories, including those from low- and high-dimensional chaotic systems. (The details of the generation of these reference trajectories are presented in Supplementary Note 2.) Note that the starting position of the end effector is not on the given reference trajectory, requiring a "bridge" to drive the end effector from the starting position to the trajectory (see Supplementary Note 3). Here we also address the issues of the probability of control success and the robustness of our method against measurement noise, disturbance, and parameter uncertainties. ### Examples of tracking control The basic parameter setting of the reservoir controller is as follows. The size of the hidden-layer network is \(N_{r}=200\). The dimensionless time step of the evolution of the dynamical network is \(dt=0.01\). A long training length, \(200,000/dt\), is chosen so as to ensure that the learning experience of the neural network extends through most of the phase space in which the reference trajectory resides. The testing length is \(2,500/dt\), which is sufficient for the controller to track a good number of complete cycles of the reference trajectory. The values of the reservoir hyperparameters obtained through Bayesian optimization are: spectral radius \(\rho=0.76\), input weights factor \(\gamma=0.76\), leakage parameter \(\alpha=0.84\), regularization coefficient \(\beta=7.5\times 10^{-4}\), link probability \(p=0.53\), and the bias \(w_{\mathrm{b}}=2.00\). The training phase is divided into a series of uncorrelated episodes, ensuring that the velocity or acceleration of the robot arms will not become unreasonably large during the random-walk motion of the reference trajectory. Each episode has length \(T_{\mathrm{ep}}=80/dt\). The angular positions \(q_{1}\) and \(q_{2}\) of the two arms are set to random values uniformly distributed in the ranges \([0,2\pi]\) and \([-\pi,\pi]\), respectively. The angular velocities and accelerations \([\dot{q}_{1},\dot{q}_{2},\ddot{q}_{1},\ddot{q}_{2}]\) of the two arms as well as the reservoir state \(\mathbf{r}\) are set to zero initially. From the values of \(q_{1}\) and \(q_{2}\), the coordinates \(C_{x}\) and \(C_{y}\) of the end effector can be obtained from Eq. (7). At the beginning of each episode, since \(q_{1}\) and \(q_{2}\) are random, the end effector will be at a random point inside a circle of radius \(l_{1}+l_{2}=1\) centered at the origin. Figure 5(a) shows the random-walk reference trajectory used in training and examples of the evolution of the dynamical states of the two arms (in two different colors): \(q_{1,2}(t)\), \(\dot{q}_{1,2}(t)\), \(\ddot{q}_{1,2}(t)\), and \(\tau_{1,2}(t)\). To maintain the continuity of the control signal during the training phase, we invoke a Gaussian filter to smooth the noisy signals. Given the control signal \(u(t)=[\tau_{1}(t),\tau_{2}(t)]\) and the state variables \([q_{1,2}(t),\dot{q}_{1,2}(t)]\) at each time step, the angular accelerations \(\ddot{q}_{1,2}(t)\) can be obtained from Eq. (4).
At the next time step, the angular positions and velocities are calculated using \[q_{1,2}(t+dt) =q_{1,2}(t)+\dot{q}_{1,2}(t)\cdot dt, \tag{1}\] \[\dot{q}_{1,2}(t+dt) =\dot{q}_{1,2}(t)+\ddot{q}_{1,2}(t)\cdot dt.\] The purpose of the training is for the reservoir controller to learn the intrinsic mapping from \(\mathbf{y}(t)\) to \(\mathbf{y}(t+dt)\) and to produce an output control signal \(u(t)=[\tau_{1}(t),\tau_{2}(t)]\). In the testing phase, given the current measurement \(\mathbf{y}(t)\) and the desired measurement \(\mathbf{y}_{\mathrm{d}}(t+dt)\), the reservoir controller generates a control signal and feeds it to the process. The tracking error is the difference between \(\mathbf{y}_{\mathrm{d}}(t+dt)\) and \(\mathbf{y}(t+dt)\). Figure 5(b) presents four examples: two chaotic (Lorenz and Mackey-Glass) and two periodic (a circle and a figure eight) reference trajectories, where in each case, the angular positions, velocities, and accelerations of both arms together with the control signal (the two torques) delivered by the reservoir controller are shown. As the reservoir controller has been trained to track a random-walk signal, which is fairly complex and unpredictable, it possesses the ability to accurately track these types of deterministic signals. Our machine-learning controller, by design, is generalizable to arbitrarily complex trajectories. This can be seen as follows. In the training phase, no specific trajectory is used. Rather, training is accomplished by using a stochastic control signal to generate a random-walk type of trajectory that "travels" through the entire state-space domain of interest. The machine-learning controller does not learn any specific trajectory example but a generic map from the observed state at the current time step to the next under a stochastic control signal. The training process determines the parameter values for the controller, which are fixed when it is deployed in the testing phase. The required input for testing is the current observed state \(\mathbf{y}(t)\) and the desired state \(\mathbf{y}_{\mathrm{d}}(t)\) from the reference trajectory. The so-designed machine-learning controller is capable of making the system follow a variety of complex periodic or chaotic trajectories to which the controller is not exposed during training. (Supplementary Notes 2 and 4 present many additional examples.) ### Robustness against disturbance and noise We consider normally distributed stochastic processes of zero mean and standard deviations \(\sigma_{\mathrm{d}}\) and \(\sigma_{\mathrm{m}}\) to simulate disturbance and noise, which are applied to the control signal vector \(\mathbf{u}\) and the process state vector \(\mathbf{x}\), respectively, as shown in Fig. 2. Figures 6(a) and 6(b) show the ensemble-averaged testing RMSE (root mean square error, defined in Supplementary Note 1) versus \(\sigma_{\mathrm{d}}\) and \(\sigma_{\mathrm{m}}\), respectively, for tracking of the chaotic Lorenz reference trajectory, where 50 independent realizations are used to calculate the average errors. In the case of disturbance, near-zero RMSEs are achieved for \(\sigma_{\mathrm{d}}\lesssim 10^{0.5}\), while the noise tolerance is about \(10^{-1}\). Color-coded testing RMSEs in the parameter plane \((\sigma_{\mathrm{d}},\sigma_{\mathrm{m}})\) are shown in Fig. 6(c). These results indicate that, for reasonably weak disturbances and small noise, the tracking performance is robust. (Additional examples are presented in Supplementary Note 4.)
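As a sketch of how the robustness tests inject perturbations, the helper functions below follow Eqs. (10) and (11) of Methods: additive Gaussian disturbance on the control signal and multiplicative Gaussian noise on the measurements, together with the testing RMSE. This is our own minimal illustration, not the authors' test harness.

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb(u, y, sigma_d, sigma_m):
    """Inject perturbations in the spirit of Methods, Eqs. (10)-(11):
    additive disturbance on the control vector u, multiplicative
    measurement noise on the observation vector y (both numpy arrays)."""
    u_tilde = u + sigma_d * rng.standard_normal(u.shape)            # disturbance
    y_tilde = y * (1.0 + sigma_m * rng.standard_normal(y.shape))    # noise
    return u_tilde, y_tilde

def rmse(y_hist, y_ref_hist):
    """Testing RMSE between realized and reference observations."""
    diff = np.asarray(y_hist) - np.asarray(y_ref_hist)
    return np.sqrt(np.mean(diff**2))
```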
### Robustness against parameter uncertainties The reservoir controller is trained for ideal parameters of the dynamical process model. However, in applications, the parameters may differ from their ideal values. For example, the lengths of the two robot arms may deviate from what the algorithm has been trained for. More specifically, we narrow our attention to the uncertainty associated with the arm lengths, as variations in the mass parameters do not noticeably impact the control performance. Figure 7 shows the results from the uncertainty test in tracking a chaotic Lorenz reference trajectory. It can be seen that changes in the length \(l_{1}\) of the primary arm have little effect on the performance. Only when the length \(l_{2}\) of the secondary arm becomes much larger than \(l_{1}\) will the performance begin to deteriorate. The results suggest that our control framework is able to maintain good performance if the process model parameters are within reasonable limits. In fact, when the lengths of the two robot arms are not equal, there are reference trajectories that the end effector cannot physically track. For example, the reachable workspace is an annulus: while the end effector can reach out to the circle of radius \(l_{1}+l_{2}\), for \(l_{2}<l_{1}\) it cannot reach any point inside the circle of radius \(l_{1}-l_{2}\). More results from the parameter-uncertainty test can be found in Supplementary Note 4. The issues of the safe region of initial conditions for control success, tracking-speed tolerance, and robustness against variations in training parameters are addressed in Supplementary Note 5. ## Discussion The two main issues in control are: (1) regulation, which involves designing a controller so that the corresponding closed-loop system converges to a steady state, and (2) tracking - to make the output of the closed-loop system track a given reference trajectory continuously. In both cases, the goal is to achieve optimal performance despite disturbances and initial states [61]. The conventional method for control-system design is the linear quadratic tracker (LQT), whose objective is, e.g., to design an optimal tracking controller by minimizing a predefined performance index. Solutions to LQT in general consist of two components: a feedback term obtained by solving an algebraic Riccati equation and a feed-forward term obtained by solving a non-causal difference equation. These solutions require complete knowledge of the system dynamics and cannot be obtained in real time [62]. Another disadvantage of LQT is that it can be used only for the class of reference trajectories generated by an asymptotically stable command generator, which requires the trajectory to approach zero asymptotically. Furthermore, the LQT solutions are typically non-causal due to the necessity of backward recursion, and the infinite-horizon LQT problem is challenging in control theory [63]. The rapidly growing field of robotics requires the development of real-time, non-LQT solutions for tracking control. We have developed a real-time nonlinear tracking control method based on machine learning and partial state measurements. The benchmark system employed to illustrate the methodology is a two-arm robotic manipulator. The goal is to apply appropriate control signals to make the end effector of the manipulator track any complex trajectory in a 2D plane. We have exploited reservoir computing as the machine-learning controller.
With proper training, the reservoir controller acquires inherent knowledge about the dynamical system generating the reference trajectory. Our inverse controller design method requires the observed state vector and its immediate future as input to the neural network in the training phase. The testing or deployment phase requires a combination of the current and desired output measurements: no future measurements are needed. More specifically, in the training phase, the input to the reservoir neural network consists of two vectors of equal dimension: (a) the observed vector from the robotic manipulator and (b) its immediate-future version. This design enables the controller to naturally associate the second vector with the immediate future state of the first vector in the testing phase and to generate control signals based on this association. After training, the parameters of the machine-learning controller are fixed for testing, which distinguishes our control scheme from online learning. The controller in the testing phase is deployed to track a desired reference trajectory, since the immediate-future vectors \(\mathbf{y}(t+dt)\) are replaced by the states generated from the desired reference trajectory, which are recognized by the machine as the desired immediate future states of the robotic manipulator to be controlled. The control signal generated in this manner compels the manipulator to imitate the dynamical system that generates the reference trajectory, resulting in precise tracking. We also take advantage of stochastic control signals for training the neural network to enable it to gain as much dynamical complexity as possible. We have tested this reservoir-computing based tracking control using a variety of periodic and chaotic reference trajectories. In all the cases, accurate tracking for an arbitrarily long period of time can be achieved. We have also demonstrated the robustness of our control framework against input disturbance, measurement noise, process parameter uncertainties, and variations in the machine-learning parameters. A finding is that selecting the starting end-effector position "wisely" can improve the tracking success rate. In particular, we have introduced the concept of a "safe region" from which the initial position of the end effector should be chosen (Supplementary Note 5). In addition, the effects of the amplitude of the stochastic control signal used in training and of the "speed limit" of the reference trajectory on the tracking success rate have been investigated (Supplementary Note 5). We have also demonstrated that feed-forward neural networks can be used to replace reservoir computing (Supplementary Note 6). The results suggest the practical utility of our machine-learning based tracking controller: it is anticipated to be deployable in real-world applications such as unmanned aerial vehicles, soft robotics, laser cutting, and real-time tracking of high-speed air-launched effects. Finally, we remark that there are traditional methods for tracking control, such as PID, MPC (model predictive control), and \(H_{\infty}\) trackers (see [20; 21] and references therein). In terms of computational complexity, these classical controllers are extremely efficient, while the training of our machine-learning controller with stochastic signals can be quite demanding.
However, there is a fundamental limitation with the classic controllers: such a controller can be effective only when its parameters have been meticulously tuned for a specific reference trajectory. For a different trajectory, a completely different set of parameters is needed. That is, once the parameters of a classic controller are set, in general it cannot be used to track any alternative trajectory. In contrast, our machine-learning controller overcomes this limitation: it possesses the remarkable capability and flexibility to track any given trajectory after a single training session! This distinctive attribute sets our approach apart from conventional methods, so a direct comparison with these methods may not be meaningful. ## Methods ### Dynamics of joint robot arms The dynamics of the system of \(n\)-joint robot arms can be conveniently described by the standard Euler-Lagrange method [64]. Let \(T\) and \(U\) be the kinetic and potential energies of the system, respectively. The equations of motion can be determined from the system Lagrangian \(L=T-U\) as \[\frac{d}{dt}\frac{\partial L}{\partial\dot{\mathbf{q}}}-\frac{\partial L}{ \partial\mathbf{q}}=\boldsymbol{\tau}, \tag{2}\] where \(\mathbf{q}=[q_{1},q_{2},\ldots,q_{n}]^{T}\) and \(\dot{\mathbf{q}}=[\dot{q}_{1},\dot{q}_{2},\ldots,\dot{q}_{n}]^{T}\) are the angular position and angular velocity vectors of the \(n\) arms [with \(()^{T}\) denoting the transpose], and \(\boldsymbol{\tau}=[\tau_{1},\tau_{2},\ldots,\tau_{n}]^{T}\) is the external torque vector, with each component applied to a distinct joint. The nonlinear dynamical equations for the robot-arm system can be expressed as [65; 66] \[\mathcal{M}(\mathbf{q})\ddot{\mathbf{q}}+C(\mathbf{q},\dot{\mathbf{q}})\dot{ \mathbf{q}}+\mathbf{G}(\mathbf{q})+\mathbf{F}(\dot{\mathbf{q}})=\boldsymbol{ \tau}, \tag{3}\] where \(\ddot{\mathbf{q}}=[\ddot{q}_{1},\ddot{q}_{2},\ldots,\ddot{q}_{n}]^{T}\) is the acceleration vector of the \(n\) joints, \(\mathcal{M}(\mathbf{q})\) denotes the inertia matrix, \(C(\mathbf{q},\dot{\mathbf{q}})\dot{\mathbf{q}}\) represents the Coriolis and centrifugal forces, \(\mathbf{G}(\mathbf{q})\) is the gravitational force vector, and \(\mathbf{F}(\dot{\mathbf{q}})\) is the vector of the frictional forces at the \(n\) joints, which depend on the angular velocities. We assume that the movements of the robot arms are confined to the horizontal plane so that the gravitational forces can be disregarded, and we also neglect the frictional forces, so Eq. (3) becomes \[\mathcal{M}(\mathbf{q})\ddot{\mathbf{q}}+C(\mathbf{q},\dot{\mathbf{q}})\dot{ \mathbf{q}}=\boldsymbol{\tau}. \tag{4}\] We focus on the system of two-joint robot arms (\(n=2\)), as shown in Fig. 8, where \(m_{1}\) and \(m_{2}\) are the masses of the two arms, and \(l_{1}\) and \(l_{2}\) are their lengths, respectively. The tip of the second arm is the end effector that traces out a desired trajectory in the plane. The two matrices in Eq.
(4) are \[\mathcal{M}(\mathbf{q}) =\begin{bmatrix}M_{11}&M_{12}\\ M_{21}&M_{22}\end{bmatrix} \tag{5}\] \[\mathcal{C}(\mathbf{q},\dot{\mathbf{q}}) =\begin{bmatrix}-h(\mathbf{q})\dot{q}_{2}&-h(\mathbf{q})(\dot{q}_ {1}+\dot{q}_{2})\\ h(\mathbf{q})\dot{q}_{1}&0\end{bmatrix}, \tag{6}\] where the matrix elements are given by \[M_{11} =m_{1}l_{\mathrm{c}_{1}}^{2}+I_{1}+m_{2}(l_{1}^{2}+l_{\mathrm{c }_{2}}^{2}+2l_{1}l_{\mathrm{c}_{2}}\cos q_{2})+I_{2},\] \[M_{12} =M_{21}=m_{2}l_{1}l_{\mathrm{c}_{2}}\cos q_{2}+m_{2}l_{\mathrm{c }_{2}}^{2}+I_{2},\] \[M_{22} =m_{2}l_{\mathrm{c}_{2}}^{2}+I_{2},\] the function \(h(\mathbf{q})\) is \[h(\mathbf{q})=m_{2}l_{1}l_{\mathrm{c}_{2}}\sin q_{2},\] and \(l_{\mathrm{c}_{1}}=l_{1}/2\), \(l_{\mathrm{c}_{2}}=l_{2}/2\); \(I_{1}\) and \(I_{2}\) are the moments of inertia of the two arms, respectively. Typical parameter values are \(m_{1}=m_{2}=1\), \(l_{1}=l_{2}=0.5\), \(l_{\mathrm{c}_{1}}=l_{\mathrm{c}_{2}}=0.25\), and \(I_{1}=I_{2}=0.03\). The Cartesian coordinates of the end effector are \[C_{x} =l_{1}\cos q_{1}+l_{2}\cos(q_{1}+q_{2}), \tag{7}\] \[C_{y} =l_{1}\sin q_{1}+l_{2}\sin(q_{1}+q_{2}),\] which give the angular positions of the two arms as \[q_{2} =\pm\arccos\frac{C_{x}^{2}+C_{y}^{2}-l_{1}^{2}-l_{2}^{2}}{2l_{1}l _{2}}, \tag{8}\] \[q_{1} =\arctan\frac{C_{y}}{C_{x}}\pm\arctan\frac{l_{2}\sin q_{2}}{l_{1} +l_{2}\cos q_{2}}. \tag{9}\] For any end-effector position, there are two admissible solutions for the angular variables. We select the pair of angles that results in a continuous trajectory. In addition, the end effector may end up in any of the four quadrants, so the range of \(q_{1}\) is \([0,2\pi]\). The range of \(q_{2}\) is \([-\pi,\pi]\), since the second joint can be above or below the first joint. In our simulations, we ensure that the solutions are continuous and thus physically meaningful, as demonstrated in Fig. 8(b). Noise and unpredictable disturbances are constantly present in real-world applications, making it crucial to ensure that the control strategy is robust and operational in their presence [67]. In fact, a model is always inaccurate compared with the actual physical system because of factors such as parameter changes, unknown time delays, measurement noise, and input disturbances. The goal of the robustness test is to maintain an acceptable level of performance under these circumstances. In our study, we treat disturbances and measurement noise as external inputs, where the former are added to the control signal and the latter is present in the sensor measurements. In particular, the disturbances are modeled as an additive stochastic process \(\xi_{\mathrm{d}}\) applied to the data: \[\tilde{x}_{\mathrm{n}}=x_{\mathrm{n}}+\xi_{\mathrm{d}}. \tag{10}\] For measurement noise, we use multiplicative noise \(\xi_{\mathrm{m}}\) in the form \[\tilde{x}_{\mathrm{n}}=x_{\mathrm{n}}+x_{\mathrm{n}}\cdot\xi_{\mathrm{m}}. \tag{11}\] Both stochastic processes \(\xi_{\mathrm{d}}\) and \(\xi_{\mathrm{m}}\) follow a normal distribution of zero mean and with standard deviation \(\sigma_{\mathrm{d}}\) and \(\sigma_{\mathrm{m}}\), respectively.
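For reference, the following is a minimal sketch of the forward dynamics and kinematics defined by Eqs. (4)-(7), with the typical parameter values quoted above; the explicit-Euler update mirrors Eq. (1) of the Results section. This is our own illustrative implementation, not the authors' released code.

```python
import numpy as np

# Typical parameter values quoted in the text
m1 = m2 = 1.0
l1 = l2 = 0.5
lc1, lc2 = l1 / 2, l2 / 2
I1 = I2 = 0.03

def mass_matrix(q2):
    """Inertia matrix M(q) of Eq. (5)."""
    M11 = m1*lc1**2 + I1 + m2*(l1**2 + lc2**2 + 2*l1*lc2*np.cos(q2)) + I2
    M12 = m2*l1*lc2*np.cos(q2) + m2*lc2**2 + I2
    M22 = m2*lc2**2 + I2
    return np.array([[M11, M12], [M12, M22]])

def coriolis_matrix(q2, qd):
    """Coriolis/centrifugal matrix C(q, qdot) of Eq. (6)."""
    h = m2*l1*lc2*np.sin(q2)
    return np.array([[-h*qd[1], -h*(qd[0] + qd[1])],
                     [ h*qd[0], 0.0]])

def step(q, qd, tau, dt=0.01):
    """One explicit-Euler step of Eq. (4): M qdd + C qd = tau."""
    qdd = np.linalg.solve(mass_matrix(q[1]),
                          tau - coriolis_matrix(q[1], qd) @ qd)
    return q + qd*dt, qd + qdd*dt

def end_effector(q):
    """Forward kinematics, Eq. (7): end-effector Cartesian coordinates."""
    return np.array([l1*np.cos(q[0]) + l2*np.cos(q[0] + q[1]),
                     l1*np.sin(q[0]) + l2*np.sin(q[0] + q[1])])
```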
### Inverse design based controller formulation To develop a machine-learning based control method, it is necessary to obtain the control signal through observable states. The state of the two-arm system, i.e., the dynamical process to be controlled, is eight-dimensional, consisting of the Cartesian coordinates of the end effector and the angular positions, angular velocities and angular accelerations of the two manipulators: \[\mathbf{x}\equiv[C_{x},C_{y},q_{1},q_{2},\dot{q}_{1},\dot{q}_{2},\ddot{q}_{1}, \ddot{q}_{2}]^{T}. \tag{12}\] A general nonlinear control problem can be formulated as [60] \[\mathbf{x}(t+dt) =\mathbf{f}[\mathbf{x}(t),\mathbf{u}+\mathbf{u}\cdot\xi_{\mathrm{d }}], \tag{13}\] \[\mathbf{y}(t) =\mathbf{g}[\mathbf{x}(t)]+\mathbf{g}[\mathbf{x}(t)]\cdot\xi_{ \mathrm{m}}, \tag{14}\] where \(\mathbf{x}\in\mathbb{R}^{n}\) (\(n=8\)), \(\mathbf{u}\in\mathbb{R}^{m}\) (\(m<n\)) is the control signal, and \(\mathbf{y}\in\mathbb{R}^{k}\) (\(k\leq n\)) represents the sensor measurement. The function \(\mathbf{f}:\mathbb{R}^{n}\times\mathbb{R}^{m}\rightarrow\mathbb{R}^{n}\) is unknown to the controller. In our analysis, we assume that \(\mathbf{f}\) is Lipschitz continuous [68] with respect to \(\mathbf{x}\). The measurement function \(\mathbf{g}:\mathbb{R}^{n}\rightarrow\mathbb{R}^{k}\) fully or partially measures the states \(\mathbf{x}\). For the two-arm system, the measurement vector is chosen to be four-dimensional: \(\mathbf{y}\equiv[C_{x},C_{y},\dot{q}_{1},\dot{q}_{2}]^{T}\). The corresponding vector from the desired, reference trajectory is denoted as \(\mathbf{y}_{\mathrm{d}}(t)\). For our tracking control problem, the aim is to design a two-degree-of-freedom controller that receives the signals \(\mathbf{y}(t)\) and \(\mathbf{y}_{\mathrm{d}}(t)\) as the input and generates an appropriate control signal \(\mathbf{u}(t)\) in order for \(\mathbf{y}(t)\) to track the trajectory generating the observation \(\mathbf{y}_{\mathrm{d}}(t)\). For convenience, we use the notation \(\mathbf{f}_{\mathrm{u}}(\cdot)\equiv\mathbf{f}(\cdot,\mathbf{u})\). For a small time step \(dt\), Eq. (13) becomes \[\mathbf{x}(t+dt)\approx\mathbf{F}_{\mathrm{u}}[\mathbf{x}(t)], \tag{15}\] where \(\mathbf{F}_{\mathrm{u}}\) is a nonlinear function mapping \(\mathbf{x}(t)\) to \(\mathbf{x}(t+dt)\) under the control signal \(\mathbf{u}(t)\). For a reachable desired state, \(\mathbf{F}_{\mathrm{u}}\) is invertible, and we get \[\mathbf{u}(t)\approx\mathbf{F}_{\mathrm{u}}^{-1}[\mathbf{x}(t),\mathbf{x}(t+ dt)]. \tag{16}\] Similarly, Eq. (14) can be approximated as \(\mathbf{x}(t)\approx\mathbf{g}^{-1}[\mathbf{y}(t)]\), so Eq. (16) becomes \[\mathbf{u}(t)\approx\mathbf{F}^{-1}[\mathbf{g}^{-1}[\mathbf{y}(t)],\mathbf{g}^{ -1}[\mathbf{y}(t+dt)]]. \tag{17}\] Equation (17) is referred to as the inverse model for nonlinear control [60], which will be realized in a model-free manner using machine learning. ## Data availability The reference trajectories data generated in this study can be found in the repository: [https://doi.org/10.5281/zenodo.8044994](https://doi.org/10.5281/zenodo.8044994) [69]. ## Code availability The codes for generating all the results can be found on GitHub: [https://github.com/Zheng-Meng/TrackingControl](https://github.com/Zheng-Meng/TrackingControl) [70].
2309.07212
Probing New Physics with High-Redshift Quasars: Axions and Non-standard Cosmology
The Hubble diagram of quasars, as candidates for ``standardizable'' candles, has been used to measure the expansion history of the Universe at late times, up to very high redshifts ($z \sim 7$). It has been shown that this history, as inferred from the quasar dataset, deviates at the $\gtrsim 3 \sigma$ level from the concordance ($\Lambda$CDM) cosmology model preferred by the cosmic microwave background (CMB) and other datasets. In this article, we investigate whether new physics beyond $\Lambda$CDM (B$\Lambda$CDM) or beyond the Standard Model (BSM) could make the quasar data consistent with the concordance model. We first show that an effective redshift-dependent relation between the quasar UV and X-ray luminosities, complementing previous phenomenological work in the literature, can potentially remedy the discrepancy. Such a redshift dependence can be realized in a BSM model with axion-photon conversion in the intergalactic medium (IGM), although the preferred parameter space is in tension with various other astrophysical constraints on axions, at a level depending on the specific assumptions made regarding the IGM magnetic field. We briefly discuss a variation of the axion model that could evade these astrophysical constraints. On the other hand, we show that models beyond $\Lambda$CDM, such as one with a varying dark energy equation of state ($w$CDM) or the phenomenological cosmographic model with a polynomial expansion of the luminosity distance, cannot alleviate the tension. The code for our analysis, based on \texttt{emcee}~\cite{Foreman_Mackey_2013} and \texttt{corner.py}~\cite{corner}, is publicly available at \href{https://github.com/ChenSun-Phys/high\_z\_candles.git}{{\tt github.com/ChenSun-Phys/high\_z\_candles}}.
Chen Sun, Manuel A. Buen-Abad, JiJi Fan
2023-09-13T18:00:00Z
http://arxiv.org/abs/2309.07212v2
# Probing New Physics with High-Redshift Quasars: Axions and Non-standard Cosmology ###### Abstract The Hubble diagram of quasars, as candidates for "standardizable" candles, has been used to measure the expansion history of the Universe at late times, up to very high redshifts (\(z\sim 7\)). It has been shown that this history, as inferred from the quasar dataset, deviates at the \(\gtrsim 3\sigma\) level from the concordance (\(\Lambda\)CDM) cosmology model preferred by the cosmic microwave background (CMB) and other datasets. In this article, we investigate whether new physics beyond \(\Lambda\)CDM (B\(\Lambda\)CDM) or beyond the Standard Model (BSM) could make the quasar data consistent with the concordance model. We first show that an effective redshift-dependent relation between the quasar UV and X-ray luminosities, complementing previous phenomenological work in the literature, can potentially remedy the discrepancy. Such a redshift dependence can be realized in a BSM model with axion-photon conversion in the intergalactic medium (IGM), although the preferred parameter space could be in mild tension with various other astrophysical constraints on axions, depending on the specific assumptions made regarding the IGM magnetic field. We briefly discuss a variation of the axion model that could evade these astrophysical constraints. On the other hand, we show that models beyond \(\Lambda\)CDM, such as one with a varying dark energy equation of state (\(w\)CDM) or the phenomenological cosmographic model with a polynomial expansion of the luminosity distance, cannot alleviate the tension. The code for our analysis, based on emcee [1] and corner.py [2], is publicly available at github.com/ChenSun-Phys/high_z_candles. + Footnote †: preprint: LA-UR-23-29579 ###### Contents * I Introduction * II Cosmic Distance Inference with Quasar Data Sets * II.1 A Lightning Review * II.2 How New Physics Biases the Distance Inference * III An Effective Evolution of the UV-X-ray Relation * IV Implications for B\(\Lambda\)CDM and BSM Models * IV.1 Beyond \(\Lambda\)CDM * IV.2 Beyond SM * IV.3 Evaluation of the Fits * IV.4 Comment on the Axion Model * IV.5 Effective Evolution of \(\beta\) * V Conclusion * A Quasar Dataset * B Methodology * C Posterior of the Fits ## I Introduction Quasars, or quasi-stellar objects (QSOs), serve as probes in the ultraviolet (UV) and infrared (IR) frequencies up to high redshifts (\(z\sim 7\)). With some theoretical modeling of their intrinsic luminosities, they can be used to measure luminosity distances as a function of redshift. Recently, there has been renewed interest in using them as "standardizable" candles to measure the expansion history of the Universe at late times [3; 4; 5; 6; 7; 8; 9; 10]. This is of particular importance in light of the recent Hubble tension [11; 12; 13; 14; 15]. However, this hope of having a new standard candle has been confronted by a series of challenges in the form of various consistency checks. In early attempts at its application as a tool to constrain cosmological models, various groups have confirmed that the quasar data prefers an expansion history that stands in stark tension (at the \(\gtrsim 3\sigma\) level) with the one inferred from the cosmic microwave background (CMB) [16], which is in turn consistent with type Ia supernovae (SNIa) [17] and Baryon Acoustic Oscillation (BAO) [18; 19] measurements.
This discrepancy persists even after the application of stringent cuts to the data that correct for various biases, such as those stemming from dust reddening, X-ray absorption, and Eddington bias [20]. On the other hand, purely data-driven analyses have been implemented in Refs. [8; 9; 10], which correct the luminosity-redshift correlations in the UV and X-ray bands separately. These analyses show that the UV-X-ray relation is indeed robust, and that it does not arise from any luminosity-redshift correlation potentially caused by selection bias. Furthermore, they find that, for a given cosmology, the correlation-corrected UV and X-ray luminosities deduced from observations each present a different redshift evolution, which was previously unaccounted for. These analyses remain agnostic as to the origin of said evolutions, and limit themselves to characterizing their size and impact on the quasar data. In this paper we adopt a reverse-engineering strategy: rather than claiming that quasar data favors a cosmology in tension with that from the combined CMB+SNIa+BAO datasets, we take it as _input_ in our search for a plausible explanation of the apparent redshift evolution of observed quasar fluxes, in order to restore cosmic concordance. We define the concordance cosmology as the \(\Lambda\)CDM model, whose late-time expansion history is fixed by the dark energy density parameter \(\Omega_{\Lambda}\) and the scaling factor \(h\) of the Hubble expansion rate \(H_{0}=100h\,\text{km/s/Mpc}\) [16], \[\Omega_{\Lambda}=0.6847\pm 0.0073\,\quad h=0.6736\pm 0.0054. \tag{1}\] While it is possible that the aforementioned luminosity evolution is of a purely astrophysical nature (_e.g._, coming from a poor understanding of the quasar luminosity itself, a possible temporal bias in quasar formation history, or unaccounted-for propagation effects [8; 9; 10]), in this paper we go a step further and consider whether its origin could come, in part, from new physics. We examine the implications that this evolution has for alternative models beyond \(\Lambda\)CDM (B\(\Lambda\)CDM) or beyond the Standard Model (SM) of particle physics (BSM). Since the luminosity distance can be affected by both the expansion history of the Universe and photon attenuation, we test three benchmark models: \(w\)CDM and the cosmographic model [3; 4] as examples of B\(\Lambda\)CDM, and axion-like particles (ALPs, or axions for short) coupled to photons as an example of BSM [21; 22; 23]. We show that, while all these alternatives provide good fits to the QSO dataset, neither \(w\)CDM nor the cosmographic model resolves the tension between it and the CMB+SNIa+BAO datasets. On the other hand, axion-photon couplings allow for frequency-dependent photon disappearance, which greatly reduces the tension between all the datasets, although the model is still in tension with some other astrophysical constraints. This paper is organized as follows. We briefly review the UV-X-ray relation in quasars and how it is affected by new physics in Sec. II. We next fit \(\Lambda\)CDM to the QSO data with a flexible UV-X-ray relation parametrization in Sec. III, where we reveal the tension between it and other datasets. We demonstrate that, when allowed to change, the quasar data prefers an apparent redshift evolution in the UV-X-ray flux relation, which naturally resolves the aforementioned tension. In Sec.
IV we study the performances of two B\(\Lambda\)CDM models (\(w\)CDM and the cosmographic model) and one BSM model (axion) in alleviating the tension between the QSO and other datasets (SNIa, BAO, CMB). We show that the axion model has the best performance in resolving the tension. We conclude in Sec. V. The code we used in our numerical analysis, based on emcee [1] and corner.py [2], is publicly available at github.com/ChenSun-Phys/high_z_candles.

## II Cosmic distance inference with quasar data sets

In this section we describe the procedure through which QSO data can be used to determine cosmic luminosity distances as a function of redshift, as well as how new physics can change said procedure.

### A Lightning Review

The observed power-law behavior of the X-ray spectrum of quasars has motivated the so-called _"two-phase model"_ [24; 25] of the environment surrounding the supermassive black holes (SMBH), which are believed to power the accretion of the active galactic nucleus (AGN). In the two-phase model, ultraviolet (UV) photons are emitted by the relatively cold, optically thick accretion disk around the SMBH, while X-ray photons are produced through up-scattering of the UV photons by the hot, optically thin corona of the SMBH.1

Footnote 1: There is a third component from the reflection of the corona photons off the accretion disk, which peaks around 30 keV [26; 27].

Since both the UV and X-ray photons are related to the SMBH mass and the accretion rate, their respective luminosities can be formally written as functions of these quantities, namely \(L_{\rm X}=f_{1}(M_{\rm BH},\dot{M}_{\rm BH})\), \(L_{\rm UV}=f_{2}(M_{\rm BH},\dot{M}_{\rm BH})\) [20]. This implies a relation between \(L_{\rm X}\) and \(L_{\rm UV}\), as discussed widely in the quasar/AGN literature; see for example Refs. [28; 29; 30; 31; 32]. A common parametrization of this relation is the so-called _"Risaliti-Lusso (RL) relation"_, \(L_{\rm X}=10^{\beta}\,L_{\rm UV}^{\gamma}\), with \(L_{\rm X,UV}\) normalized to erg/s/Hz. The parameter \(\gamma\) has been confirmed to have negligible redshift evolution by binning the flux measurements in redshift [3; 20]. In addition, a particular model [20], based on the two-phase model [25], predicts a redshift-independent \(\beta\) as well.

The RL quasar luminosity relation can also be understood in terms of the _flux_ in the UV and X-ray bands. Under the standard assumption that photon number is conserved, the photon flux \(F\) measured by an observer at a luminosity distance \(D_{L}\) can be related to the luminosity \(L\) of the photon source as follows: \[F(z;\omega)=\frac{L(z;\omega)}{4\pi D_{L}(z)^{2}}\,, \tag{2}\] where \(z\) is the redshift and \(\omega\) the photon energy in question.23 The RL relation between the quasar UV and X-ray luminosities can then be written in terms of the measured flux as follows:4

Footnote 2: We denote the photon flux \(F(z;\omega_{x})\) at a specific energy \(\omega_{x}\) by \(F_{x}(z)\).

Footnote 3: Unless otherwise stated, we work in natural units.

Footnote 4: We use \(\log()\) and \(\ln()\) to denote the base-10 and natural logarithms, respectively.

\[\log\left(\frac{F_{\rm X}(z)}{\rm erg/s/Hz/cm^{2}}\right)-\gamma\log\left(\frac{F_{\rm UV}(z)}{\rm erg/s/Hz/cm^{2}}\right)=2(\gamma-1)\log\left(D_{L}(z)/{\rm cm}\right)+\beta+(\gamma-1)\log(4\pi)\,, \tag{3}\]

where \(\gamma\) and \(\beta\), which depend on quasar properties and dynamics, are treated as nuisance parameters.
The left hand side depends on the directly observed quasar UV and X-ray fluxes, which means that it can be taken as observational data, modulo the nuisance parameter \(\gamma\). The right hand side is a function of \(D_{L}(z)\), which is a prediction of the underlying cosmological model, and of the QSO nuisance parameters \(\beta\) and \(\gamma\). Thus it can be treated as the theory input.

### How New Physics Biases the Distance Inference

Both astrophysical processes and new physics could alter the _flux_ relation in Eq. (3) in a redshift-dependent way, which can be understood in terms of an effective \(\beta_{\rm eff}(z)\). If the effective evolution in \(\beta_{\rm eff}(z)\) has an astrophysical origin, it must be corrected before the UV-X-ray relation can be reliably utilized for cosmological inferences. This is indeed the subject of Refs. [8; 9; 10], where the authors eliminate the luminosity-redshift correlation, by assuming it takes the form of a given function and making use of the Efron-Petrosian method [33], before applying the UV-X-ray RL relation. As mentioned in the introduction, these studies showed that the RL relation is not an artifact of any possible selection bias effects, and that there is reason to believe that the UV and X-ray quasar luminosities, as derived from observations and within the context of a specific cosmic history, each present a different, non-negligible redshift evolution. In these investigations, the luminosities in the UV and X-ray are assumed to be each corrected by a power-law form,5 \(L_{\rm X,UV}\to L_{\rm X,UV}(1+z)^{k_{\rm X,UV}}\), to account for the apparent luminosity-redshift evolution. This effectively results in the substitution of the constant \(\beta\) with the redshift-dependent \(\beta_{\rm eff}(z)\):

Footnote 5: More sophisticated functional forms were also tested, which showed no significant difference.

\[\beta\to\beta_{\rm eff}(z)=\beta+(k_{\rm X}-\gamma k_{\rm UV})\log(1+z)\,. \tag{4}\]

However, simply correcting for the luminosity-redshift correlation using data-driven statistical methods, while at the same time remaining agnostic as to its origin, is not suited to our purposes. Indeed, any new physics that may be the source of even just part of this correlation would not be picked up by these methods, but would instead be discarded during the correction process along with any other less exotic sources. Therefore, we follow a different approach by instead fitting the quasar dataset with different \(\beta_{\rm eff}(z)\) functional forms directly predicted by new physics models. We consider two classes of new physics models in which this \(\beta_{\rm eff}(z)\) modification of \(\beta\) can arise. The first class consists of deviations from the standard \(\Lambda\)CDM cosmological expansion history. The measured flux (left-hand side of Eq. (3)) constrains the expansion history of the Universe through \(D_{L}(z)\). Anchoring the cosmology to a given concordance history \(D_{L,c}(z)\), \(\beta_{\rm eff}\) can be seen as a deviation from this concordance. Indeed, \(\beta\) in Eq. (3) is modified to \[\beta\to\beta_{\rm eff}(z)=\beta+2(\gamma-1)\log[D_{L}(z)/D_{L,c}(z)]\,. \tag{5}\] We would like to point out that such a modification of the expansion history of the Universe would manifest itself in many other independent measurements beyond those of quasars, including CMB anisotropy [16], SNIa [17], and BAO [18; 19; 34].
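To make the split between the data side and the theory side concrete, the following minimal sketch (not the released analysis code; the function names and the toy values of \(\gamma\), \(\beta\), and the alternative \(\Omega_{\Lambda}\) are illustrative assumptions) evaluates the right-hand side of Eq. (3) for a flat \(\Lambda\)CDM history, and the \(\beta_{\rm eff}\) of Eq. (5) induced by a non-concordance history:

```python
# Minimal sketch of the theory side of Eq. (3) and the beta shift of Eq. (5).
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458          # speed of light [km/s]
CM_PER_MPC = 3.0857e24       # centimeters per megaparsec

def dl_flat_lcdm_cm(z, h=0.6736, omega_lambda=0.6847):
    """Flat-LCDM luminosity distance (cf. Eq. (9) below), in cm."""
    omega_m = 1.0 - omega_lambda
    inv_e = lambda zp: 1.0 / np.sqrt(omega_m * (1.0 + zp) ** 3 + omega_lambda)
    comoving, _ = quad(inv_e, 0.0, z)
    return (1.0 + z) * (C_KM_S / (100.0 * h)) * comoving * CM_PER_MPC

def rl_theory_side(z, gamma, beta, dl_cm_of_z=dl_flat_lcdm_cm):
    """Right-hand side of Eq. (3): predicted log10 F_X - gamma * log10 F_UV."""
    return (2.0 * (gamma - 1.0) * np.log10(dl_cm_of_z(z))
            + beta + (gamma - 1.0) * np.log10(4.0 * np.pi))

def beta_eff_from_history(z, beta, gamma, dl_alt_cm, dl_conc_cm=dl_flat_lcdm_cm):
    """Eq. (5): effective beta induced by an alternative expansion history."""
    return beta + 2.0 * (gamma - 1.0) * np.log10(dl_alt_cm(z) / dl_conc_cm(z))

# Toy alternative history with a very low dark-energy density, echoing the
# low Omega_Lambda preferred by the quasar-only fit discussed in Sec. III.
dl_low_de = lambda z: dl_flat_lcdm_cm(z, omega_lambda=0.05)
print(rl_theory_side(2.0, gamma=0.6, beta=7.0))
print(beta_eff_from_history(2.0, beta=7.0, gamma=0.6, dl_alt_cm=dl_low_de))
```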
As we show below, the quasar data prefers a non-standard cosmology at odds with that favored by these other observations, which effectively limits this interpretation of the tension.

The second class of models involves frequency-dependent propagation effects that result in photon disappearance, which can also break the _flux_ relation in Eq. (3) without violating the relation between the _luminosities_ \(L_{\rm X}\) and \(L_{\rm UV}\): \[F(z;\omega)=P_{\gamma\gamma}(z;\omega)\frac{L(z;\omega)}{4\pi D_{L}(z)^{2}}\,, \tag{6}\] where \(P_{\gamma\gamma}(z;\omega)\) is the photon survival probability. This effectively introduces a redshift dependence in \(\beta\) as follows: \[\beta\to\beta_{\rm eff}(z)=\beta+\log P_{\gamma\gamma}(z;\omega_{\rm X})-\gamma\log P_{\gamma\gamma}(z;\omega_{\rm UV})\,. \tag{7}\] In other words, if there is unaccounted-for extra attenuation in the measured photon flux, a preference for non-zero en-route photon disappearance will arise within the context of a given concordance cosmological expansion history, such as \(\Lambda\)CDM; and it will manifest itself as a redshift dependence of \(\beta\) in the quasar data. In the scenario where this photon disappearance is caused by new particles, the observed flux's dependence on \(P_{\gamma\gamma}(z;\omega)\) effectively turns quasars into a probe of BSM physics.

In this paper we take axion-photon interactions as a benchmark scenario of BSM physics responsible for photon disappearance, and test for axion-photon conversion in the intergalactic medium (IGM) using the QSO dataset. We will see how this model, in conjunction with the standard \(\Lambda\)CDM, provides a very good fit to the data. While this fit is in tension with some astrophysical constraints on the axion-photon coupling, axion-photon interactions provide a concrete physics model that both greatly improves the fit to the QSO data and restores the consistency between the expansion history preferred by this data and that favored by CMB, SNIa, and BAO. We also compare its goodness of fit with a few B\(\Lambda\)CDM models.

To summarize, our approach differs from and complements the analyses in Refs. [8; 9; 10] in a few ways. Firstly, Refs. [8; 9; 10] designed a data-driven method to empirically correct any luminosity-redshift correlations. This results in the elimination of both any selection bias _and_ any new physics possibly contributing to this redshift evolution. Thus, this method could potentially hide any new physics-induced flux evolution, should there be any. Secondly, it is not our purpose to understand all of the luminosity-redshift correlation present in the quasar data. By fitting the data with a new physics-induced \(\beta_{\rm eff}\), we perform a critical examination of whether _part_ of the flux-redshift evolution may be caused by new physics, if the concordance cosmological history encoded in \(\Lambda\)CDM and Eq. (1) indeed describes nature. Lastly, while Refs. [8; 9; 10] are of critical importance in establishing the UV-X-ray RL relation on firmer ground, the templates of the luminosity-redshift evolution adopted therein only allow for smooth apparent luminosity changes across a relatively large range of redshifts. As will become clear later in this paper, the new physics model we test cannot be captured by the simpler functional forms tested in Ref. [8]. In particular, the axion-induced flux attenuation generates a somewhat abrupt change in both the UV and the X-ray bands, more akin to a step function than to a slowly-varying transition.
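Eq. (7) is straightforward to evaluate once a survival-probability model is specified. The hedged sketch below (the function name and the numerical values are placeholders, not from the paper's pipeline) illustrates the shift:

```python
# Sketch of Eq. (7): a frequency-dependent survival probability turns the
# constant beta into an effective, redshift-dependent beta_eff. The survival
# probabilities passed in are placeholders for any model of P_gammagamma.
import numpy as np

def beta_eff_from_survival(beta, gamma, p_gg_x, p_gg_uv):
    """Eq. (7): beta_eff = beta + log10 P_X - gamma * log10 P_UV."""
    return beta + np.log10(p_gg_x) - gamma * np.log10(p_gg_uv)

# Example: saturated conversion (survival probability 2/3) in one band at a
# time. The size of the UV-induced shift is gamma * |log10(2/3)| ~ 0.12 for
# gamma ~ 0.7, the number quoted later in Eq. (22), up to sign conventions.
print(beta_eff_from_survival(7.0, 0.7, p_gg_x=2/3, p_gg_uv=1.0) - 7.0)
print(beta_eff_from_survival(7.0, 0.7, p_gg_x=1.0, p_gg_uv=2/3) - 7.0)
```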
## III An effective evolution of the UV-X-ray relation

In this section, we first discuss the tension between the QSO dataset and CMB+SNIa+BAO within the context of \(\Lambda\)CDM. We then show how an effective \(\beta_{\rm eff}(z)\) with redshift dependence is preferred by quasars, once the cosmological expansion history is itself anchored by CMB+SNIa+BAO. The datasets we use include:

* \(\mathcal{B}\): Our baseline datasets. These include SNIa: Pantheon [17]; BAO: 6dFGS [34], SDSS using the MGS galaxy sample [35], CMASS and LOWZ galaxy samples of SDSS-III DR12 [36].
* \(\mathcal{G}\): Gaussian priors including \(H_{0}=67.36\pm 0.54\,\mathrm{km/s/Mpc}\) and \(r_{s}=147.09\pm 0.26\,\mathrm{Mpc}\) from Planck 2018 [16].6

Footnote 6: We also tested the Gaussian prior of \(H_{0}\) from SH0ES, which is in significant tension with the Planck results. We find the same results. This is expected as only the shape of the Hubble diagram matters in our analysis.

* \(\mathcal{Q}\): QSO dataset from Ref. [37]. We further parametrize the nuisance \(\beta\) parameter in three possible ways:
* \(\mathcal{Q}_{\beta_{0}}\): \(\beta_{\rm eff}=\beta_{0}\) is a constant across all redshifts. This is the most common parametrization in the quasar literature.
* \(\mathcal{Q}_{\beta_{\sharp}}\): \(\beta_{\rm eff}\) is a step function. At redshift \(z_{0}\), it sharply transitions from a value \(\beta_{0}\) to \(\beta_{1}\).
* \(\mathcal{Q}_{\beta_{\flat}}\): \(\beta_{\rm eff}\) goes through a smooth transition, parametrized with a \(\tanh\) function as follows \[\beta_{\rm eff}(z)=\beta_{0}+\frac{\beta_{1}-\beta_{0}}{2}\left[\tanh\left(\frac{z-z_{0}}{\delta z}\right)+1\right]. \tag{8}\]

In the most general of these parametrizations (\(\mathcal{Q}_{\beta_{\flat}}\)) there are five nuisance parameters in the QSO dataset in total: \(\gamma\), \(\beta_{0}\), \(\beta_{1}\), \(z_{0}\), and \(\delta z\). The details of the fits can be found in Appendices A and B. The 1D posteriors can be found in Appendix C.
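For concreteness, the three parametrizations can be written compactly as follows (an illustrative sketch, not the released analysis implementation; the numerical values in the example are placeholders apart from \(z_{0}\simeq 1.65\), the preferred transition point quoted below):

```python
# Illustrative implementations of the three beta_eff parametrizations.
import numpy as np

def beta_const(z, beta0):
    """Q_{beta_0}: constant beta at all redshifts."""
    return np.full_like(np.asarray(z, dtype=float), beta0)

def beta_step(z, beta0, beta1, z0):
    """Q_{beta_sharp}: sharp transition from beta0 to beta1 at z0."""
    return np.where(np.asarray(z, dtype=float) < z0, beta0, beta1)

def beta_tanh(z, beta0, beta1, z0, dz):
    """Q_{beta_flat}, Eq. (8): smooth tanh transition of width dz."""
    z = np.asarray(z, dtype=float)
    return beta0 + 0.5 * (beta1 - beta0) * (np.tanh((z - z0) / dz) + 1.0)

print(beta_tanh(np.linspace(0, 8, 5), beta0=7.0, beta1=6.9, z0=1.65, dz=0.3))
```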
We list the log-likelihood of the best fit points and the posteriors of the dark energy density parameter, \(\Omega_{\Lambda}\), in seven different runs with \(\Lambda\)CDM in Tab. 1.

\begin{table}
\begin{tabular}{c|c c c|c}
\hline
 & QSO & SNIa & BAO & \(\Omega_{\Lambda}\) \\
\hline
\(\mathcal{B}+\mathcal{G}\) & & -1177.4 & -9.2 & \(0.68\pm 0.02\) \\
\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\) & -106.6 & & & \(0.05^{+0.07}_{-0.03}\) \\
\(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{G}\) & -169.2 & & & \(0.10^{+0.27}_{-0.27}\) \\
\(\mathcal{Q}_{\beta_{\flat}}+\mathcal{G}\) & -179.1 & & & \(0.46^{+0.26}_{-0.27}\) \\
\(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\) & -66.62 & -1174.9 & -9.4 & \(0.66\pm 0.02\) \\
\(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{B}+\mathcal{G}\) & -149.2 & -1177.0 & -9.4 & \(0.67\pm 0.02\) \\
\(\mathcal{Q}_{\beta_{\flat}}+\mathcal{B}+\mathcal{G}\) & -179.1 & -1177.3 & -8.9 & \(0.68\pm 0.02\) \\
\hline
\end{tabular}
\end{table}
Table 1: The log-likelihood of the best fit points in seven different runs with \(\Lambda\)CDM.

We make a few comments on the results below. First, the aforementioned tension between the QSO and SNIa+BAO datasets can be easily seen in the posterior of \(\Omega_{\Lambda}\) from the \(\Lambda\)CDM fit to \(\mathcal{B}+\mathcal{G}\) (\(\Omega_{\Lambda}=0.68\pm 0.02\)) and to \(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\) (\(\Omega_{\Lambda}=0.05^{+0.07}_{-0.03}\)). Although \(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\) leads to a posterior in \(\Omega_{\Lambda}\) that is consistent with the concordance value, this is mostly due to the addition of SNIa+BAO. This can be seen by comparing the log-likelihood between \(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\) and \(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\). The fit to quasars is degraded significantly once the baseline data combination of SNIa and BAO is added, with a change of \(\Delta\chi^{2}=+40.0\) for the QSO dataset alone. What is more, the fit to SNIa within the \(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\) run is also slightly worse than that within \(\mathcal{B}+\mathcal{G}\), \(\Delta\chi^{2}=+2.5\). This is due to the quasars pulling the parameters away from the minimum of SNIa. Put together, these facts are indications of the incompatibility between the QSO dataset assuming a constant \(\beta\) and SNIa+BAO.

Second, the tension is greatly relieved when considering a redshift-dependent \(\beta_{\rm eff}(z)\):

* In the \(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{G}\) run, there is a clear preferred transition point at \(z_{0}=1.65^{+0.01}_{-0.01}\). While the posterior in \(\Omega_{\Lambda}\) is still largely off compared with that from \(\mathcal{B}+\mathcal{G}\), there is a huge improvement in the fit to the QSO dataset with \(\Delta\chi^{2}=-62.6\), at the cost of only two more parameters. Furthermore, by comparing \(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{B}+\mathcal{G}\) and \(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{G}\), we see the former has only a milder deterioration in the log-likelihood of the quasars, \(\Delta\chi^{2}=20.0\), while the value of \(\Omega_{\Lambda}\) is consistent with that from \(\mathcal{B}+\mathcal{G}\).
* With \(\mathcal{Q}_{\beta_{\flat}}\), the tension is completely resolved. Comparing \(\mathcal{Q}_{\beta_{\flat}}+\mathcal{G}\) with \(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\), we see an improvement in the fit to the quasar data with \(\Delta\chi^{2}=-72.5\) between the two fits. In addition, the fit to quasars is as good in \(\mathcal{Q}_{\beta_{\flat}}+\mathcal{B}+\mathcal{G}\) as in \(\mathcal{Q}_{\beta_{\flat}}+\mathcal{G}\). More importantly, \(\mathcal{Q}_{\beta_{\flat}}+\mathcal{G}\) leads to a value of \(\Omega_{\Lambda}\) that is compatible with the concordance cosmology.

We show the corresponding shape of \(\beta_{\rm eff}(z)\) in Sec. IV.5.

## IV Implications for B\(\Lambda\)CDM and BSM models

In this section, we discuss several specific B\(\Lambda\)CDM and BSM models that realize an effective redshift-dependent \(\beta_{\rm eff}(z)\), and whether they could alleviate the tension between the QSO dataset and other cosmological datasets.

### Beyond \(\Lambda\)CDM

The luminosity distance can be computed once a cosmological expansion history is given: \[D_{L}(z)=(1+z)\,\int_{0}^{z}dz^{\prime}\,\frac{c}{H(z^{\prime})}\,, \tag{9}\] where \(c\) is the speed of light. This affects the UV-X-ray relation in the measured fluxes as shown in Eq. (3), which causes an effective modification of the QSO nuisance parameter \(\beta\) given in Eq. (5). First we consider the case of \(w\)CDM, where the equation of state of the dark energy, \(w\), deviates from \(-1\).
A phenomenological parametrization commonly used in the literature is \[w(a)=w_{0}+w_{a}(1-a)\,, \tag{10}\] where \(a\) is the scale factor. This function smoothly interpolates between the current equation of state (EOS) \(w_{0}\) and its value in the early Universe (\(w_{0}+w_{a}\)). The parameters \(w_{0},w_{a}\) are allowed to vary between \(-1\) and \(+1\). The \(w\)CDM model therefore has four theory parameters to be fitted: \(\Omega_{\Lambda},H_{0},w_{0},w_{a}\).

We also follow Refs. [3; 4] and perform a free-form polynomial expansion of the luminosity distance in order to study potential alternative cosmologies in a more general way. If one fixes the low-redshift luminosity distance to \(c/H_{0}\), the so-called cosmographic approach is an effective expansion of the form: \[D_{L}(z)=\frac{c}{H_{0}}\bigg{(}\ln(1+z)+\sum_{i\geq 2}a_{i}\ln^{i}(1+z)\bigg{)}\,. \tag{11}\] We truncate the expansion at fourth order. Therefore, the model parameters are \(H_{0},a_{2},a_{3},a_{4}\). Note that Ref. [5] showed that the convergence of the series is poor when mapped to physical models (\(\Lambda\)CDM, \(w\)CDM, etc.). Therefore, we use this only as a benchmark phenomenological parametrization to compare with the literature, and refrain from making any statements about the underlying cosmological model.
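Both luminosity distances are simple to compute numerically. The sketch below (with assumed function names, a flat universe, and matter plus dark energy only) implements Eq. (9) with the equation of state of Eq. (10), and the truncated expansion of Eq. (11):

```python
# Sketch of the two BLCDM luminosity distances: flat wCDM with the CPL
# equation of state of Eq. (10), and the cosmographic log-polynomial of
# Eq. (11) truncated at fourth order.
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458

def hubble_wcdm(z, h, omega_lambda, w0, wa):
    """H(z) [km/s/Mpc] for flat wCDM; the CPL dark-energy density follows
    rho_DE(z)/rho_DE(0) = (1+z)^{3(1+w0+wa)} * exp(-3 wa z / (1+z))."""
    omega_m = 1.0 - omega_lambda
    de = (1.0 + z) ** (3.0 * (1.0 + w0 + wa)) * np.exp(-3.0 * wa * z / (1.0 + z))
    return 100.0 * h * np.sqrt(omega_m * (1.0 + z) ** 3 + omega_lambda * de)

def dl_wcdm_mpc(z, h=0.67, omega_lambda=0.68, w0=-1.0, wa=0.0):
    """Eq. (9), in Mpc."""
    integral, _ = quad(lambda zp: C_KM_S / hubble_wcdm(zp, h, omega_lambda, w0, wa),
                       0.0, z)
    return (1.0 + z) * integral

def dl_cosmographic_mpc(z, h=0.67, a2=0.0, a3=0.0, a4=0.0):
    """Eq. (11), truncated at i = 4."""
    x = np.log(1.0 + z)
    return (C_KM_S / (100.0 * h)) * (x + a2 * x**2 + a3 * x**3 + a4 * x**4)

# Sanity check: with w0 = -1 and wa = 0, wCDM reduces to LCDM.
print(dl_wcdm_mpc(2.0), dl_cosmographic_mpc(2.0, a2=1.5))
```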
### Beyond SM

We now devote our attention to a concrete particle physics model that can leave imprints in the UV-X-ray relation of Eq. (3). If axions exist and couple to photons, the number of photons may not be conserved when they propagate through a static magnetic field background, such as the IGM magnetic field. The photon disappearance probability \(P_{0}\) (within a single magnetic domain) due to an axion \(a\) with a mass \(m_{a}\) and a coupling to photons \(g_{a\gamma}aF\tilde{F}/4\), where \(g_{a\gamma}\) is the constant coupling with energy dimension \(-1\), and \(F\) (\(\tilde{F}\)) is the (dual) electromagnetic field strength, is given by the well-known formula [38; 39; 21; 40]: \[P_{0}=\frac{(2\Delta_{a\gamma})^{2}}{k^{2}}\sin^{2}\left(\frac{kx}{2}\right)\,, \tag{12}\] where \(x\) is the distance traveled by the photon, and \[k\equiv\sqrt{(2\Delta_{a\gamma})^{2}+(\Delta_{a}-\Delta_{\gamma})^{2}}\,, \tag{13}\] \[\Delta_{a\gamma}\equiv\frac{g_{a\gamma}B}{2}\,,\quad\Delta_{a}\equiv\frac{m_{a}^{2}}{2\omega}\,,\quad\Delta_{\gamma}\equiv\frac{m_{\gamma}^{2}}{2\omega}\,, \tag{14}\] with \(B\) the IGM magnetic field transverse to the photon trajectory, and \(\omega\) the photon energy. Here \(m_{\gamma}^{2}\equiv\frac{4\pi\alpha n_{e}}{m_{e}}\), where \(m_{e}\) and \(\alpha\) are the electron mass and fine-structure constant respectively, is the effective photon mass squared in the presence of an ionized plasma with an electron number density \(n_{e}\). In our analysis, we take \(n_{e}\simeq 1.6\times 10^{-8}\,\text{cm}^{-3}\) [41; 42]. There are therefore a total of four theory parameters to be fitted: \(\Omega_{\Lambda},H_{0},m_{a},g_{a\gamma}\).

We adopt the _cell model_ for the IGM magnetic field, described in Refs. [21; 22; 23; 43]. In this model the magnetic field is assumed to be split into domains ("cells"), in which it can be taken to be homogeneous. The photon path, extending from a source at some distance \(y\) to the observer, is assumed to cross a large number \(N\) of these magnetic domains. Each \(i\)-th domain has a _physical_ size \(L_{i}\) and a randomly oriented magnetic field of strength \(B_{i}\) [43], whose component perpendicular to the photon's path is assumed to be the same in each domain. With these simplifications, the resulting net probability of photon-axion conversion over many domains is then given by [44] \[P_{a\gamma}(y)=\left(1-A\right)\left(1-\prod_{i=1}^{N}\left(1-\frac{3}{2}P_{0,i}\right)\right)\,, \tag{15}\] where \(A\equiv\frac{2}{3}\big{(}1+\frac{I_{a}^{0}}{I_{\gamma}^{0}}\big{)}\) depends on the ratio of the initial intensities of axions and photons coming from the source, denoted by \(I_{a}^{0}\) and \(I_{\gamma}^{0}\) respectively; and \(P_{0,i}\) is the conversion probability in the \(i\)-th magnetic domain, which can be obtained from Eq. (12) for \(x=L_{i}\). Since \(N\) is very large, Eq. (15) can be rewritten as an integral. In order to do this, we further assume that \(y\) is a distance that scales linearly with \(N\), such that \(y/N\) remains constant as \(N\) goes to infinity. For example, for IGM propagation the domains are typically assumed to be evenly distributed in _comoving_ space, which means that each domain has comoving size \(s\) and the distance to the source is a comoving distance \(y=Ns\). Under these assumptions, we have \[P_{a\gamma}(y)=\left(1-A\right)\left(1-\exp\left[\frac{1}{s}\int\limits_{0}^{y}\mathrm{d}y^{\prime}\,\ln\left(1-\frac{3}{2}P_{0}(y^{\prime})\right)\right]\right)\,. \tag{16}\] We assume \(A=2/3\), or equivalently \(I_{a}^{0}=0\), throughout the paper. The ratio of the observed photon flux to the emitted photon flux from the source is then given by \(P_{\gamma\gamma}=1-P_{a\gamma}\).

Let us now briefly comment on the coherence length of the magnetic field domains. While the existence of IGM magnetic fields has been indirectly attested (from the absence of \(\gamma\)-ray cascade emission) and there are upper bounds as well (from CMB anisotropy as well as turbulence decay), observational constraints on the coherence length are lacking [45]. From the theoretical perspective, the coherence length of a magnetic field is related to the structures that support the magnetic field. Therefore, it is reasonable to believe that the magnetic fields at the intersections of filaments have a domain size of \(\sim\)Mpc scale. Nevertheless, the IGM magnetic field domains can be much larger due to the characteristic size of other structures in the IGM. Because voids and sheets make up the majority of the IGM volume, it is natural to suppose that the relevant length scale associated with their corresponding magnetic fields should be related to the size of these structures. From the IllustrisTNG simulation [42], one can see that the void size at low redshift (\(z\sim 1\)) can reach \(\mathcal{O}(10)\) Mpc. In Sec. IV.4, we fit the axion model to our datasets with different choices of the domain size. Elsewhere, and unless stated otherwise, we use as a benchmark a magnetic domain size of 1 Mpc and a magnetic field of \(B=1\,\)nG throughout this paper.

Lastly, we stress that the impact of axion-photon interactions only manifests itself in distance measurements that involve photon flux, such as the luminosity distances of quasars. It is otherwise largely invisible in experiments based on direct measurements of angular diameter sizes (_e.g._, CMB observations).
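Putting Eqs. (12)-(15) together, the following sketch gives a rough, self-contained numerical illustration of the cell model in natural units. It deliberately neglects the redshift evolution of \(B\), \(n_{e}\), and the photon energy, and uses \(N\) equal domains, so it is an order-of-magnitude toy rather than the full pipeline; all parameter values below are placeholders within the priors of Appendix B, not best-fit values:

```python
# Toy cell-model evaluation of Eqs. (12)-(15), with everything converted to
# powers of eV (natural units). No redshift evolution; equal domains.
import numpy as np

MPC_IN_INV_EV = 1.56e29        # 1 Mpc in eV^-1
NG_IN_EV2 = 1.95e-11           # 1 nG in eV^2
CM3_IN_EV3 = 7.68e-15          # 1 cm^-3 in eV^3
M_E, ALPHA = 5.11e5, 1.0 / 137.036

def p0_single_domain(omega_ev, m_a_ev, g_inv_gev, b_ng, length_mpc, n_e_cm3=1.6e-8):
    """Eq. (12): photon-axion conversion probability in one magnetic domain."""
    g = g_inv_gev * 1e-9                          # GeV^-1 -> eV^-1
    b = b_ng * NG_IN_EV2
    x = length_mpc * MPC_IN_INV_EV
    m_gamma2 = 4 * np.pi * ALPHA * (n_e_cm3 * CM3_IN_EV3) / M_E   # eV^2
    d_ag = g * b / 2.0                            # Delta_{a gamma}
    d_a = m_a_ev**2 / (2 * omega_ev)              # Delta_a
    d_g = m_gamma2 / (2 * omega_ev)               # Delta_gamma
    k = np.sqrt((2 * d_ag) ** 2 + (d_a - d_g) ** 2)
    return (2 * d_ag / k) ** 2 * np.sin(k * x / 2.0) ** 2

def p_gamma_gamma(omega_ev, m_a_ev, g_inv_gev, b_ng, s_mpc, y_mpc, a_frac=2.0/3.0):
    """Photon survival 1 - P_{a gamma}, via Eq. (15) over N = y/s equal domains."""
    n_domains = max(int(y_mpc / s_mpc), 1)
    p0 = p0_single_domain(omega_ev, m_a_ev, g_inv_gev, b_ng, s_mpc)
    return 1.0 - (1.0 - a_frac) * (1.0 - (1.0 - 1.5 * p0) ** n_domains)

# UV photon (omega ~ 5 eV) over ~5 Gpc with the benchmark B = 1 nG, s = 1 Mpc;
# these illustrative parameters give a sizable UV attenuation.
print(p_gamma_gamma(omega_ev=4.96, m_a_ev=1e-15, g_inv_gev=1e-11,
                    b_ng=1.0, s_mpc=1.0, y_mpc=5000.0))
```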
In addition, for the axion-photon couplings in which we are interested, the impact on SNIa luminosity distances (sensitive only to eV-energy photons) is negligible, with most effects limited to the UV photons at \(z>1.5\).

### Evaluation of the Fits

We fit each model (\(w\)CDM, cosmographic, and axion models) to either quasars alone (\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\)) or to quasars and our baseline datasets (\(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\)), including in both cases the Gaussian prior (\(\mathcal{G}\)) on \(H_{0}\) and \(r_{s}\). See Appendices A and B for details on our methodology, including the QSO dataset, the priors, and the fitting procedure. We summarize the resulting log-likelihoods of the best fit points in Tab. 2. The 1D and 2D posteriors are shown in Appendix C.

\begin{table}
\begin{tabular}{l|c c c|c}
\hline
 & QSO & SNIa & BAO & \(\chi_{c}^{2}\) \\
\hline
\(w\)CDM\((\mathcal{Q}_{\beta_{0}}+\mathcal{G})\) & -171.1 & & & 1247.5 \\
\(w\)CDM\((\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G})\) & -91.7 & -1172.0 & -9.1 & 743.0 \\
\hline
\(a_{2}a_{3}a_{4}(\mathcal{Q}_{\beta_{0}}+\mathcal{G})\) & -179.4 & & & 996.9 \\
\(a_{2}a_{3}a_{4}(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G})\) & -140.5 & -1167.7 & -8.9 & 822.5 \\
\hline
axion\((\mathcal{Q}_{\beta_{0}}+\mathcal{G})\) & -169.7 & & & 609.0 \\
axion\((\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G})\) & -147.8 & -1176.5 & -9.3 & 514.2 \\
\hline
\end{tabular}
\end{table}
Table 2: The log-likelihood of the best fit points in six tests with beyond-\(\Lambda\)CDM and beyond-SM models. The cosmographic model is labeled as \(a_{2}a_{3}a_{4}\). In the last column, we show the “theory distance” between the best fit points and the concordance cosmological model by comparing their \(\chi^{2}\) difference from the quasars likelihood; see Eq. (21). The smaller the value is, the closer the preferred cosmology is to the concordance model.

**Tension between QSO and SNIa+BAO.** We start by noting that the axion model significantly reduces the tension between QSOs (\(\mathcal{Q}_{\beta_{0}}\)) and SNIa+BAO (\(\mathcal{B}\)). In comparison, the stronger tension between \(\mathcal{Q}_{\beta_{0}}\) and \(\mathcal{B}\) remains in \(w\)CDM and the cosmographic model. This is reflected in the goodness of fit to \(\mathcal{Q}_{\beta_{0}}\) of each model. When SNIa and BAO are included, the goodness of fit to QSOs is degraded by \(\Delta\chi^{2}=79.4\) for \(w\)CDM, by \(\Delta\chi^{2}=38.9\) for the cosmographic model, and by \(\Delta\chi^{2}=21.9\) for the axion model. This shows that both \(w\)CDM and the cosmographic model take a significant deviation from the concordance cosmology over a large range of redshifts in order to accommodate the QSO dataset, with such changes disfavored by SNIa+BAO. As a result, including SNIa+BAO limits how close these models can get to the minimum of the quasar likelihood. On the other hand, the axion model allows a modification of the UV and optical photon flux starting at \(z\gtrsim 1.5\) while leaving the low-\(z\) part mostly unchanged. This minimizes the impact on SNIa, and BAO is not affected by it at all. As a result, including SNIa+BAO has only a mild effect on the goodness of fit to the QSO data. Such "flexibility" of the axion model compared to \(w\)CDM and the cosmographic model is the key to reducing the tension between the \(\mathcal{Q}_{\beta_{0}}\) and \(\mathcal{B}\) datasets.
Meanwhile, the flexibility of the axion model comes at no extra cost in terms of the introduction of more parameters than the \(w\)CDM and cosmographic models. It is interesting to note that the fits to quasars in \(a_{2}a_{3}a_{4}(\mathcal{Q}_{\beta_{0}}+\mathcal{G})\) and axion\((\mathcal{Q}_{\beta_{0}}+\mathcal{G})\) only differ by less than \(\Delta\chi^{2}=10\), with the \(a_{2}a_{3}a_{4}\) model performing slightly better. However, when \(\mathcal{B}\) is included, the axion model is preferred with \(\Delta\chi^{2}\approx-7.3\), due to precisely the aforementioned flexibility of the axion model that effectively decouples the impact on the quasar flux, the SNIa flux, and BAO.

**Restoration of the Concordance Cosmology.** Resolving the tension means the minimum of \(\mathcal{Q}_{\beta_{0}}\) should get closer to that of \(\mathcal{B}\) in the theory space. However, this does not guarantee a restoration of the concordance cosmology. For example, the minima of the two datasets could both drift toward a theory point that is far away from the concordance model. Therefore, aside from the relative distance between the minima of \(\mathcal{Q}_{\beta_{0}}\) and \(\mathcal{B}\), we would also like to evaluate the distance between the minima and a fixed point anchored by the concordance model, which is essentially the minimum of the Planck likelihood. To quantify their agreement with the well-established concordance cosmology given by Eq. (1), we compare the distance modulus computed from the best fit points of the six runs \(\{a_{2}a_{3}a_{4},w\text{CDM},\text{axion}\}\otimes\{\mathcal{Q}_{\beta_{0}}+\mathcal{G},\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\}\) with that computed from Eq. (1). More concretely, we define the concordance distance modulus as \[\mu_{c}=5\,\log_{10}\left(\frac{D_{L,c}}{10\,\text{pc}}\right)\,, \tag{17}\] where \(D_{L,c}\) is the luminosity distance computed from Eq. (9) with the Hubble constant taken from Eq. (1). The error associated with \(\mu_{c}\) is given by \[\Delta\mu_{c}=\left[\left(\frac{\partial\mu_{c}}{\partial\Omega_{\Lambda}}\right)^{2}(\Delta\Omega_{\Lambda})^{2}+\left(\frac{\partial\mu_{c}}{\partial h}\right)^{2}(\Delta h)^{2}\right]^{1/2}\,. \tag{18}\] The distance modulus at each best fit point of the six runs can be computed with the \(D_{L}\) inferred from the cosmological parameters [\((a_{2},a_{3},a_{4})\) in the cosmographic model, \((\Omega_{\Lambda},h,w_{0},w_{a})\) in \(w\)CDM, and \((\Omega_{\Lambda},h)\) in the axion model]. Since the best fit point fits the data quite well in each of the six tests, one can also use the measurement to represent the best fit theory point for simplicity, \[\mu_{\text{bf}}=\frac{5}{2(\gamma-1)}\left[\log\left(\frac{f_{\text{X}}}{f_{\text{UV}}^{\gamma}}\right)-\beta\right]-5\,\log[(4\pi)^{1/2}]-5\,\log\left[10\,\text{pc/cm}\right]\,, \tag{19}\] where \(f_{\text{X,UV}}\) are the X-ray and UV band fluxes normalized in units of erg/s/Hz/cm\({}^{2}\).7 The error of \(\mu_{\text{bf}}\) is computed as follows.

Footnote 7: In the case of the axion model, they are corrected for the extra attenuation due to the axion-photon conversion: \(f_{\rm X,UV}=\frac{F_{\rm X,UV}(z)/({\rm erg/s/Hz/cm^{2}})}{P_{\gamma\gamma,{\rm X,UV}}(z)}\).
\[\Delta\mu_{\text{bf}}=\left[\left(\frac{\partial\mu}{\partial\log F_{\text{X}}}\right)^{2}\Delta\log F_{\text{X}}^{2}+\left(\frac{\partial\mu}{\partial\log F_{\text{UV}}}\right)^{2}\Delta\log F_{\text{UV}}^{2}+\left(\frac{\partial\mu}{\partial\log m_{a}}\right)^{2}\Delta\log(m_{a})^{2}+\left(\frac{\partial\mu}{\partial\log g_{a\gamma}}\right)^{2}\Delta\log(g_{a\gamma})^{2}+\left(\frac{\partial\mu}{\partial\beta_{0}}\right)^{2}\Delta\beta_{0}^{2}+\left(\frac{\partial\mu}{\partial\gamma}\right)^{2}\Delta\gamma^{2}+\left(\frac{\partial\mu}{\partial\delta}\right)^{2}\Delta\delta^{2}\right]^{1/2}\,, \tag{20}\]

where \(\delta\) is fitted as a nuisance parameter to account for the intrinsic scattering in the quasar dataset. See Appendix A for the explicit likelihood function. We define a distance between the best fit (\(\mu_{\text{bf}}\)) and the concordance cosmology (\(\mu_{\text{c}}\)) as \[\chi_{\text{c}}^{2}=\sum_{i}\left(\frac{\mu_{\text{bf},i}-\mu_{\text{c},i}}{\Delta_{i}}\right)^{2}\,, \tag{21}\] where \(i\) is the index of each QSO data point, and \(\Delta=\sqrt{\Delta\mu_{\text{bf}}^{2}+\Delta\mu_{\text{c}}^{2}}\approx\Delta\mu_{\text{bf}}\), since \(\Delta\mu_{\text{bf}}\gg\Delta\mu_{\text{c}}\). In practice, we neglect the contributions to \(\Delta\mu_{\text{bf}}\) from \(\Delta\log(m_{a})\) and \(\Delta\log(g_{a\gamma})\) for simplicity. Since the purpose of \(\chi_{\text{c}}^{2}\) is to quantify the compatibility between the best fit point and the concordance cosmology, neglecting both makes the estimate conservative, _i.e._, it is easier to spot any potential incompatibility. We show the theory distance defined above in the last column of Tab. 2. Larger values of \(\chi_{\text{c}}^{2}\) indicate a larger incompatibility between the best fit point and the concordance cosmological expansion history. In \(w\)CDM and the cosmographic model, the incompatibility is much larger than in the axion model, even after adding SNIa and BAO. This is because the axion model mitigates some of the incompatibility by correcting the quasar flux evolution with the axion-induced attenuation \(P_{\gamma\gamma,\text{UV,X}}(z)\). After this correction the inferred expansion history becomes similar to the concordance cosmology. By contrast, in the other two models the quasar data bends the luminosity distance \(D_{L}(z)\) away from the concordance \(\Lambda\)CDM directly, leading to a larger incompatibility between the best fit points and the concordance cosmology.

### Comment on the Axion Model

As demonstrated in the last section, the axion model can alleviate the tension between the quasar and other cosmological datasets, and its best fit point is more consistent with the concordance cosmology. This motivates us to examine further the posterior of the axion model fit. In Fig. 1, we compare the 2D posterior of the axion parameters with other independent astrophysical constraints [46; 47; 48; 49]. We observe that such a large coupling is disfavored by those constraints. See Fig. 4 for more detailed 2D posterior information, and Tab. 4 for the full 1D posterior. On the other hand, since these constraints rely on the galactic magnetic field or the magnetic field in the intracluster medium, the uncertainty in the domain size of the IGM magnetic field does not affect these bounds. As we discussed in Sec. IV.2, the allowed domain size of the IGM magnetic field can easily vary from 1 Mpc to 100 Mpc.
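As a rough numerical illustration of this sensitivity, one can reuse the `p_gamma_gamma` sketch given after Eq. (16) and scan the assumed comoving domain size (again with placeholder axion parameters and no redshift evolution):

```python
# Scan over the assumed comoving domain size s, reusing the p_gamma_gamma
# sketch above; the axion parameters are illustrative, not best-fit values.
for s_mpc in (1.0, 10.0, 100.0):
    p = p_gamma_gamma(omega_ev=4.96, m_a_ev=1e-15, g_inv_gev=1e-11,
                      b_ng=1.0, s_mpc=s_mpc, y_mpc=5000.0)
    print(f"s = {s_mpc:6.1f} Mpc  ->  UV survival P_gg = {p:.3f}")
```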
Because of this, we also fit the axion model with a comoving IGM magnetic field domain length set to \(s=100\) Mpc. This results in a more efficient photon-to-axion conversion, as we showed in Ref. [23]. Therefore, the model requires a smaller \(g_{a\gamma}\) to generate the redshift-dependent feature preferred by the QSO dataset. The results from taking \(s=100\) Mpc are still in \(\sim 2\sigma\) tension with H1821+643 [49] and NGC 1275 [48],8 although they are still allowed by M87 [47] and the super star clusters [46]. Another strong constraint on the axion-photon coupling comes from CMB spectral distortions [51; 52]. While a large swath of axion mass values is thereby ruled out, the region \(10^{-14}\,\mathrm{eV}\lesssim m_{a}\lesssim 10^{-12}\,\mathrm{eV}\) can be allowed if _(i)_ multiple resonant conversions, and _(ii)_ tuning of the single resonant conversion probabilities both take place [52].

Footnote 8: The intracluster medium magnetic field modeling for the NGC 1275 bound is questioned in [50].

As an extra test to understand the axion-induced photon attenuation, we plot the photon-to-axion conversion probability at the best fit point of axion(\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\)) in Fig. 2, for different frequencies. The conversion is very effective in the X-ray frequency range, where it transitions from zero to 1/3, the saturated value, mostly below \(z\sim 0.3\). The conversion in the UV, on the other hand, is almost negligible below \(z\sim 1\), merely at the (sub)percent level, while it has a large redshift dependence at \(1\lesssim z\lesssim 2.5\). Above \(z\sim 2.5\) the UV conversion also saturates at 1/3. Put together, the changes in \(F_{\mathrm{X}}\) and \(F_{\mathrm{UV}}\) lead to a modification of Eq. (3), i.e., the UV-X-ray flux relation, with Eq. (7). Such a modification is favored by the QSO data due to its tendency to prefer what amounts to an effective change in \(\beta\) around redshift \(z\sim 2\), as shown by the \(\Lambda\mathrm{CDM}(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{G})\) and \(\Lambda\mathrm{CDM}(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{B}+\mathcal{G})\) tests in Sec. III; we discuss this point in more detail in the next section. We further test that the photon-to-axion conversion in the UV band is in the non-linear minimal mixing regime (_i.e._, \(k\approx\Delta_{a}\gg 1/s\) in Eq. (12), with \(s\) being the magnetic domain size). In this regime, the conversion probability has a power-law dependence on both the axion mass and the coupling, \(P_{a\gamma}\propto g_{a\gamma}^{2}/m_{a}^{4}\).

Figure 1: 2D posteriors of the axion mass (\(m_{a}\)) and coupling to photons (\(g_{a\gamma}\)) in the axion(\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\)) model, with the magnetic domain size fixed at a comoving length of \(s=1\) Mpc (blue) and \(s=100\) Mpc (orange). The gray curves correspond to other astrophysical constraints: the super star clusters [46], M87 [47], NGC 1275 [48], and H1821+643 [49].

Figure 2: Photon-to-axion conversion probability, at redshifts corresponding to all quasars, in the X-ray band (\(\omega=2\) keV, blue), UV band (\(\omega=4.96\) eV, orange), and optical band (\(\omega=1\) eV, black), for the axion(\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\)) best fit point. We also recompute the UV photon survival probability after increasing \(g_{a\gamma}\) by 16 times and \(m_{a}\) by 4 times (green curve).
This shows that \(P_{\mathrm{UV}}(z)\) is almost invariant along the curve \((g_{a\gamma}/g_{a\gamma,\mathrm{bf}})=(m_{a}/m_{a,\mathrm{bf}})^{2}\), where \((\cdot)_{\mathrm{bf}}\) denotes the best fit value of the parameter. The domain size is chosen to be \(s=1\) Mpc.

To verify this scaling, we shift the axion parameters in the direction of \(g_{a\gamma}\propto m_{a}^{2}\), away from the best fit point. We find that the photon-to-axion conversion is almost invariant along this curve in the \(m_{a}\)-\(g_{a\gamma}\) plane, as shown by the green and orange curves in Fig. 2. Note that this behavior corresponds to the degeneracy exhibited by the 2D posterior contours in the \(m_{a}\)-\(g_{a\gamma}\) parameter space of Fig. 1.

We can also understand why the axion island is allowed even after SNIa is added. In Fig. 2 we show the impact of the photon-to-axion conversion in the optical band (black points). Below \(z\sim 2\), which is the maximum redshift to which the Pantheon dataset extends, the modification of the photon flux in the optical band is minimal, at most at the \(\sim 2\%\) level. Therefore, this modification hardly generates any perceivable changes in the SNIa dataset even though the coupling is relatively large.

Lastly, we want to comment on a variation of the axion model, which might evade the astrophysical constraints listed in Fig. 1. It is also possible to have axion-photon conversion in a cosmic _dark_ magnetic field \(B^{\prime}\), in the scenario with a dark photon, \(\gamma^{\prime}\), and an axion-photon-dark photon coupling \(g_{a\gamma^{\prime}}aF\tilde{F}^{\prime}\), where \(\tilde{F}^{\prime}\) is the dual field strength of the gauged dark \(U(1)^{\prime}\). In this case, the conversion probability in a single magnetic domain is similar to Eqs. (12) and (13) with \(g_{a\gamma}B\) replaced by \(g_{a\gamma^{\prime}}B^{\prime}\). Assuming that the coherence length of \(B^{\prime}\) is similar to that of the IGM magnetic field (\(\sim\mathcal{O}(1-100)\) Mpc), from Fig. 1 we need \(g_{a\gamma^{\prime}}B^{\prime}\gtrsim 10^{-12}\,\text{GeV}^{-1}\times 1\,\text{nG}\) to restore the cosmic concordance. The astrophysical constraints on \(g_{a\gamma^{\prime}}\) and \(B^{\prime}\) are weaker compared to those on \(g_{a\gamma}\) and the IGM \(B\) field. More specifically, \(B^{\prime}\) could be as large as micro-Gauss, with the constraints coming mostly from \(N_{\text{eff}}\), while \(g_{a\gamma^{\prime}}\lesssim 5\times 10^{-10}\,\text{GeV}^{-1}\) due to star cooling [53; 54]. This implies a potential new physics explanation for the cosmic concordance of the quasar data, without conflict with the other constraints. We leave this for further investigation in future work.

### Effective Evolution of \(\beta\)

As we discuss in Sec. II.2, new physics modifications can be parametrized as an effective redshift evolution in \(\beta\). We summarize the results by showing this effective redshift evolution, \(\beta_{\text{eff}}(z)\), in Fig. 3. In Sec. III, when fitted to the QSO data alone, we see roughly two classes of solutions: a shallow but abrupt change in \(\beta\) around \(z\sim 1.5\), and a deep and slow change of \(\beta\) spanning the whole redshift range. The axion model leads to a relatively quick change in redshift through \(P_{\gamma\gamma,\text{UV,X}}(z)\). This serves as a good physical realization of the sharp change given by the effective parametrization in \(\mathcal{Q}_{\beta_{\sharp}}\).
As we see in Sec. IV.4, the effective change in \(\beta\) around \(z=1.5\) is due to the UV photon conversion to axions. Interestingly, the maximum amount of change allowed in the UV is given by \[\Delta\beta=\gamma\log(2/3)\approx-0.12\,. \tag{22}\] This size of the modification in the UV-X-ray relation happens to be what the data prefers, as evidenced by the run with the effective parametrization with a sharp transition in \(\beta\) (\(\beta_{\sharp}\) in the plot). We would like to highlight the fact that when the QSO dataset is allowed to have a sharp transition in the UV-X-ray relation (_i.e._, \(\mathcal{Q}_{\beta_{\sharp}}\)), the fits prefer this transition to take place at around the same redshift as in our axion model. The striking agreement between both can be seen by comparing \(\Lambda\text{CDM}(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{G})\) with axion\((\mathcal{Q}_{\beta_{0}}+\mathcal{G})\), or by comparing \(\Lambda\text{CDM}(\mathcal{Q}_{\beta_{\sharp}}+\mathcal{B}+\mathcal{G})\) and axion\((\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G})\). Adding to its appeal, the axion model outperforms non-standard cosmological models, even those as flexible as the cosmographic model, as we show in Sec. IV.3.

By contrast, the \(w\)CDM and cosmographic models exhibit large changes when the \(\mathcal{B}\) dataset is added. One should keep in mind that in the B\(\Lambda\)CDM models, the effective evolution in \(\beta\) is achieved at the price of directly modifying the expansion history of the Universe. Therefore, SNIa+BAO constrain the low-\(z\) part (\(z\lesssim 1.5\)) of \(\beta_{\rm eff}(z)\) to be as flat as possible. This has consequences for both the \(w\)CDM and the cosmographic model. In \(w\)CDM, having a flat low-\(z\) part of \(\beta_{\rm eff}(z)\) restricts the overall amount of change one can achieve from \(z=0\) to \(z\sim 8\). This is reflected by the change in the curve for \(w\)CDM(\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\)) and \(w\)CDM(\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}+\mathcal{B}\)): adding \(\mathcal{B}\) significantly reduces the amount of change in \(\beta_{\rm eff}(z)\). On the other hand, the cosmographic model has enough degrees of freedom to guarantee a relatively flat low-\(z\) \(\beta_{\rm eff}(z)\). Therefore, while restricting the flatness at low \(z\) changes the shape of the effective \(\beta\) evolution, the large modification in \(\beta\) survives. However, this large \(\beta\) change itself comes at the price of a large deviation from the concordance \(\Lambda\)CDM at \(1\lesssim z\lesssim 8\). It also remains unclear what physical models could achieve such a modification of \(D_{L}(z)\) as indicated by the cosmographic model.

## V Conclusion

Although quasar data can be used to extract information about the late-time cosmological history of the Universe, it stands in need of standardization. In light of the recent efforts along these lines, we perform a critical examination of the imprints that new physics may leave on the quasar data, in the form of unaccounted-for redshift evolution of their fluxes. This is a timely endeavor, given the ongoing Hubble tension, as well as the discrepancy between the cosmological expansion inferred from quasars and other observations. We use a flexible fitting template to identify what is required to modify the quasar flux in such a way that the concordance cosmological history given in Eq. (1) can be restored.
We test \(w\)CDM and the cosmographic model, two cosmological models beyond \(\Lambda\)CDM, as well as the axion model, a mechanism beyond the SM that can produce extra photon attenuation. We find that the axion model has an advantage over the other two, restoring the consistency between the best-fit parameters emerging from the quasar dataset and the concordance cosmology. It also outperforms the B\(\Lambda\)CDM models in improving the goodness of fit to the quasar dataset alone. Lastly, we note that although the best-fit axion model parameters are in mild tension with several astrophysical constraints, this tension can be relieved by allowing for larger magnetic domain sizes in the IGM, or for the axion-photon conversion to take place in a _dark_ magnetic field. These variants of our baseline axion model may evade other constraints while at the same time providing the redshift evolution of the quasar fluxes required by observation, maintaining the validity of the RL relation, and restoring the consistency of the quasar data with the concordance cosmological model.

_Acknowledgment_ We thank Daniele Alves, Michael Graesser, Hui Li, Nicole Lloyd-Ronning, and Elisabeta Lusso for useful comments. We thank Tomer Volansky for access to the Tel Aviv University High-Performance Computing facilities, where part of the work was performed. This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of the U.S. Department of Energy (Contract No. 89233218CNA000001). Research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project number 3W2000A-XXG900, task SUN00000. MBA is supported in part by the National Science Foundation under Grant Number PHY-2210361, and the Maryland Center for Fundamental Physics. JF is supported by the NASA grant 80NSSC22K081 and the DOE grant DE-SC-0010010.

## Appendix A Quasar Dataset

We use the quasar data sample from Ref. [37]. It contains seven different groups: the XMM-Newton \(z\sim 3\) sample, the new XMM-Newton \(z\sim 4\) quasars, the High-\(z\) sample, XXL, SDSS-4XMM, SDSS-Chandra, and local AGN. The UV-X-ray relation is expected to hold up to a certain amount of scattering that cannot be eliminated by the improvement of the measurement. Since this intrinsic scattering of the quasars is unknown, we add a third nuisance parameter, \(\delta\), describing the standard deviation of the intrinsic scattering, along with \(\gamma\) and \(\beta\). As a result, the log-likelihood function is given by \[-2\,\ln(\text{likelihood})=\sum_{i}\left[\left(x^{\rm th}-x^{\rm obs}\right)^{2}/\sigma_{i}^{2}+\ln(2\pi\sigma_{i}^{2})\right]\,, \tag{A1}\] where \(\delta\) is combined into the error of each quasar flux measurement as \(\sigma_{i}^{2}=\sigma_{i,\rm\log F_{UV,X}}^{2}+\delta^{2}\). By adopting Eq. (A1), \(\delta\) can be treated as a nuisance parameter that is fitted together with the rest in the MCMC. Therefore, specific to the QSO dataset, we have the following nuisance parameters: \(\beta,\gamma,\delta\), with \(\beta\) itself parametrized in the various ways described in Sec. III.
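A schematic implementation of Eq. (A1), with the intrinsic scatter added in quadrature to each measurement error (a simplified sketch with made-up toy numbers), reads:

```python
# Schematic QSO log-likelihood of Eq. (A1).
import numpy as np

def qso_loglike(x_obs, x_err, x_th, delta):
    """ln(likelihood) = -0.5 * sum[ (x_th - x_obs)^2 / sigma_i^2
    + ln(2 pi sigma_i^2) ], with sigma_i^2 = x_err_i^2 + delta^2."""
    sigma2 = x_err**2 + delta**2
    return -0.5 * np.sum((x_th - x_obs) ** 2 / sigma2 + np.log(2 * np.pi * sigma2))

# Toy example: three quasars with 0.1 dex flux errors and delta = 0.23.
x_obs = np.array([-31.2, -30.8, -31.5])
x_err = np.full(3, 0.1)
x_th = np.array([-31.0, -30.9, -31.4])
print(qso_loglike(x_obs, x_err, x_th, delta=0.23))
```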
## Appendix B Methodology

We use emcee [1] to fit the three models (\(w\)CDM, cosmographic, and axion) to two data combinations (\(\mathcal{Q}_{\beta_{0}}+\mathcal{G}\) and \(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\)). Our code is publicly available at github.com/ChenSun-Phys/high_z_candles. We use 100 walkers and a chain length of 40000. All runs pass our MCMC convergence condition of the chain length being 50 times longer than the auto-correlation length, except for \(w\)CDM(\(\mathcal{Q}_{\beta_{0}}+\mathcal{B}+\mathcal{G}\)), where we loosen the condition to 12 times the auto-correlation length due to its strongly bimodal posterior. The results are analyzed with corner.py [2]. Since different parameters have different auto-correlation times, we choose the burn-in to be equal to twice the largest one, and choose the thinning length to be equal to half of the smallest one.

We use the following flat priors for the theory parameters: \[\Omega_{\Lambda}\in(0,1)\,,\quad h\in(0.6,0.8)\,,\] \[w_{0}\in(-1,1)\,,\quad w_{a}\in(-1,1)\,,\] \[a_{2}\in(1,5)\,,\quad a_{3}\in(-6,6)\,,\quad a_{4}\in(-10,6)\,,\] \[\log\left[\frac{g_{a\gamma}}{\text{GeV}^{-1}}\right]\in(-18,-8)\,,\quad\log\left[\frac{m_{a}}{\text{eV}}\right]\in(-17,-11)\,. \tag{B1}\]

We set the following priors for the nuisance parameters when the corresponding datasets are used: \[\text{SNIa}:M_{0}\in(-21,-18)\] \[\text{BAO}:r_{s}/\text{Mpc}\in(120,160)\] \[\text{QSO}:\gamma\in(0.1,1)\,,\ \delta\in(0.05,0.6)\] \[\mathcal{Q}_{\beta_{0}}:\beta_{0}\in(0,10)\] \[\mathcal{Q}_{\beta_{\sharp}}:\beta_{0}\in(0,10)\,,\ \beta_{1}\in(0,10)\,,\ z_{0}\in(0,9)\] \[\mathcal{Q}_{\beta_{\flat}}:\beta_{0}\in(0,10)\,,\ \beta_{1}\in(0,10)\,,\ z_{0}\in(0,9)\,,\ \delta z\in(0.01,10)\,. \tag{B2}\]

## Appendix C Posterior of the Fits

We show the 1D, \(1\sigma\) posterior ranges of the \(\Lambda\)CDM parameters in Tab. 3. The corresponding results for the B\(\Lambda\)CDM and BSM fits are shown in Tab. 4. The 2D posteriors of the B\(\Lambda\)CDM and BSM parameters are shown in Fig. 4.
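Schematically, the sampling configuration described in Appendix B amounts to the following (a simplified sketch: the Gaussian stand-in below is a placeholder for the full QSO(+SNIa+BAO) likelihood, and the chain length is shortened for illustration):

```python
# Schematic emcee setup in the spirit of Appendix B.
import numpy as np
import emcee

def log_prior(theta):
    """Flat priors on (Omega_Lambda, h, gamma, beta0, delta), cf. Appendix B."""
    omega_l, h, gamma, beta0, delta = theta
    if (0 < omega_l < 1 and 0.6 < h < 0.8 and 0.1 < gamma < 1
            and 0 < beta0 < 10 and 0.05 < delta < 0.6):
        return 0.0
    return -np.inf

CENTER = np.array([0.68, 0.67, 0.6, 7.0, 0.2])   # arbitrary stand-in optimum

def log_prob(theta):
    lp = log_prior(theta)
    if not np.isfinite(lp):
        return -np.inf
    # Placeholder Gaussian likelihood; in the real analysis this would be the
    # QSO (+SNIa+BAO) likelihood built from Eq. (A1).
    return lp - 0.5 * np.sum((theta - CENTER) ** 2 / 0.01)

ndim, nwalkers = len(CENTER), 100
p0 = CENTER + 1e-3 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 2000, progress=True)

# Burn-in of twice the largest auto-correlation time, thinning of half the
# smallest, mirroring the choices described above.
tau = sampler.get_autocorr_time(tol=0)
flat = sampler.get_chain(discard=int(2 * tau.max()),
                         thin=max(int(tau.min() / 2), 1), flat=True)
```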
2309.14508
HEROES: Unreal Engine-based Human and Emergency Robot Operation Education System
Training and preparing first responders and humanitarian robots for Mass Casualty Incidents (MCIs) often poses a challenge owing to the lack of realistic and easily accessible test facilities. While such facilities can offer realistic scenarios post an MCI that can serve training and educational purposes for first responders and humanitarian robots, they are often hard to access owing to logistical constraints. To overcome this challenge, we present HEROES- a versatile Unreal Engine simulator for designing novel training simulations for humans and emergency robots for such urban search and rescue operations. The proposed HEROES simulator is capable of generating synthetic datasets for machine learning pipelines that are used for training robot navigation. This work addresses the necessity for a comprehensive training platform in the robotics community, ensuring pragmatic and efficient preparation for real-world emergency scenarios. The strengths of our simulator lie in its adaptability, scalability, and ability to facilitate collaboration between robot developers and first responders, fostering synergy in developing effective strategies for search and rescue operations in MCIs. We conducted a preliminary user study with an 81% positive response supporting the ability of HEROES to generate sufficiently varied environments, and a 78% positive response affirming the usefulness of the simulation environment of HEROES.
Anav Chaudhary, Kshitij Tiwari, Aniket Bera
2023-09-25T20:14:38Z
http://arxiv.org/abs/2309.14508v1
# HEROES: Unreal Engine-based Human and Emergency Robot Operation Education System

###### Abstract

Training and preparing first responders and humanitarian robots for Mass Casualty Incidents (MCIs) often poses a challenge owing to the lack of realistic and easily accessible test facilities. While such facilities can offer realistic scenarios post an MCI that can serve training and educational purposes for first responders and humanitarian robots, they are often hard to access owing to logistical constraints. To overcome this challenge, we present HEROES, a versatile Unreal Engine simulator for designing novel training simulations for humans and emergency robots in urban search and rescue operations. The proposed HEROES simulator is capable of generating synthetic datasets for machine learning pipelines that are used for training robot navigation. This work addresses the necessity for a comprehensive training platform in the robotics community, ensuring pragmatic and efficient preparation for real-world emergency scenarios. The strengths of our simulator lie in its adaptability, scalability, and ability to facilitate collaboration between robot developers and first responders, fostering synergy in developing effective strategies for search and rescue operations in MCIs. We conducted a preliminary user study with an 81% positive response supporting the ability of HEROES to generate sufficiently varied environments, and a 78% positive response affirming the usefulness of the simulation environment of HEROES.

## I Introduction

Mass Casualty Incidents (MCIs) are scenarios that can benefit heavily from the intricate integration of mobile robots in traditional human workflows involving first responders, given the scale of such scenarios [1, 2]. The nature of such incidents causes emergency service resources, human or equipment, to be overwhelmed through a sheer difference in numbers. These scenarios are often not limited to a numerical imbalance between victims and service providers, but may also comprise situations where a single interaction requires dedicated human involvement, often drawn from a pool of limited human resources. One of the prime examples of this is the concept of Urban Search and Rescue. This activity requires the deployment of a large number of resources to provide emergency services to a possibly smaller group. This creates obstacles in the proper delivery of such services. Furthermore, as the Search component yields results, resources must be diverted to the Rescue and treatment of individuals, i.e., primary triage, drawing resources away from searching for more victims.

Current applications of robot integration in MCI response workflows use a wide variety of robots, such as underwater robots for water disasters, small-sized UGVs (unmanned ground vehicles) deployed for victim search during earthquakes, and UAVs (unmanned aerial vehicles) for monitoring volatile situations such as volcanic eruptions [2]. However, such applications often utilize robots trained for navigation on naturally formed terrain [3, 4], or simulations of such terrain [5, 6]. MCIs often present a unique set of obstacles in the form of artificial terrain, such as rubble piles and partially demolished buildings, as shown in Fig. 1. Training in such environments is currently done through real-world simulations in facilities such as the Disaster City facility maintained by Texas A&M Engineering Extension Service (TEEX) [7]. However, such facilities are expensive to maintain and operate.
Furthermore, the number of such facilities is limited, and the simulations offered are also limited in the variety of situations and layouts. Therefore, the current status quo can benefit from a method of computer-aided simulation of such incidents, which can offer large variability, and be both cost-effective and scalable.

Fig. 1: _Urban post-MCI destruction environments are not represented during robot navigation training, and yet offer unique challenges. The figure showcases the highly irregular environment created by our proposed simulator HEROES with a wide variety of lighting scenarios due to structural collapse post MCIs. In addition, the figure also showcases the type of perceptual feedback that can be extracted; ordered from left to right as Color Image, Depth, and Semantic Segmentation._

Robot navigation is increasingly making use of machine learning and deep learning pipelines, which require large and diverse datasets to achieve results that translate well to real-world situations. While general computer simulation has been used to generate such datasets in sim-to-real approaches [8] and especially for perceptual agents [9], MCIs present a new form of environment not yet explored in past works. Rapid and varied generation of such environments can provide additional data for existing machine learning and deep learning pipelines that will allow robots to better adapt to such environments.

We present _HEROES - an Unreal Engine-based human and emergency robot operation education system_. HEROES is a large-scale simulation framework useful for training mobile robots capable of operating in urban MCIs and performing automated search operations through unknown and dynamic environments, as well as training data-intensive machine learning models. To this end, we present the following novel contributions:

* Traditional methods of navigating challenging terrain used by robots are trained on natural terrain or common urban terrain [3, 4], which is not an accurate analog for urban terrain affected by MCIs. Such terrain is rife with hazards and unforeseen obstacles and routes due to the presence of **physical impediments such as rubble**. We present a simulation of such terrain to be used for training robots to navigate such an environment effectively.
* The high variability presented through the simulation provides **a large number of unique simulation environments** that can be tailored by users to represent different forms of MCIs.
* By utilizing a **physical simulation of urban destruction** we present a way to generate varied environments from the same starting configuration, thus increasing the level of variety in the simulation data and also providing an environment where robots can interact with dynamic objects (rubble pieces that may move during robot interaction).

## II Related Works

In order to analyze the motivation and direction of this paper, we explore past works in the following broad categories.

### _Training professionals for MCIs through computer-aided simulations_

MCIs, occurring with increased frequency nowadays, require complex coordination and efficient action by several different entities such as first responders, medical professionals, and technical specialists. Furthermore, the random and chaotic nature of such incidents often makes it difficult to prepare for every possibility, with service providers regularly facing unprecedented scenarios. Current methods for training often lack the realism or variability to model MCIs effectively.
Computer-aided simulations, through the use of virtual environments, static and procedural, provide a compelling substitute for such training in the form of virtual-world-based hospital simulations [10] or VR-based disaster training [11]. Virtual reality simulation of MCI environments has been found to be an effective method of training medical professionals and first responders, validating the transfer of simulation training to the real world [12]. Multiple instances of MCI simulation for training have proven fruitful both in cases of initial training [13, 14, 15], and in the assessment of technical and non-technical preparedness [16]. These works have highlighted the efficacy of computer simulations in training for MCIs and thus showcase that discrete simulations of such events can represent actual events accurately.

### _Using computer simulations to train robot navigation_

Simulation has now emerged as a pragmatic and structured method of robot training. Sim-to-real transfer can be used effectively for training different kinds of robotic systems in various environments with promising results. In light of recently emerging deep reinforcement learning methods to train robots, computer simulation environments are proving to increase the effectiveness of training by providing a large number of data points (potentially infinite), and also delivering a safe environment for training [8]. Randomization techniques are employed widely to generate large amounts of unique and varied environments in an attempt to produce better results in unforeseen real-world scenarios [17, 18]. The use of simulation allows greater control over the training environment, as well as more significant variability. Xie et al. explore the use of dynamics randomization in robot training and conclude that without the use of on-robot adaptation, dynamics randomization may not always be sufficient for the transfer of learned behavior to the real world [19]. This establishes the need for learning adaptive behavior over unstable terrain where large, sudden perturbations may be experienced, such as unstable rubble piles formed in the aftermath of earthquakes. In other works, such as Gangapurwala et al. [20], the use of reinforcement learning to achieve dynamic locomotion over uneven terrain provides good results over natural uneven terrain but does not account for the unnatural terrain deformities often present in post-MCI scenarios. In such scenarios, robots trained solely on recognizing everyday obstacles may not be able to navigate over highly deformed surfaces found in MCIs. Training of perceptual agents is increasingly being conducted through deep learning methods wherein researchers are attempting to improve robot navigation through domain adaptation for visual control [21], learning semantic details about the environment rather than depending solely on visual data [22], or by using real-world executions to improve simulation parameters [23], where each approach benefits from utilization of synthetic datasets. However, these approaches tend to focus on common real-world environments and use datasets that may not translate to a post-MCI environment. Nevertheless, since these approaches do not necessarily depend on the environments contained within such datasets, augmenting them using synthetic datasets can provide the additional information needed to translate robot behavior accurately to post-MCI environments.
These works highlight the effectiveness of synthetic datasets in the training of robot navigation, but also expose the gap between currently available datasets, which provide accurate representations of normal real-world scenarios but are unable to represent scenarios where such environments may be severely deformed.

### _Use of simulation in training robot navigation for MCI environments_

MCIs often present robots with challenging tasks. These include navigation in new and complex environments which are generally not modeled during conventional training, as most attempts aim to produce behavior in normal environments where a robot must navigate around everyday objects [22]. However, such navigational cues and references are not readily available in MCIs. Instead, the robots may be required to navigate through unconventional routes, where human traversal is not possible [5]. This presents a form of the _reality gap_ problem, where robots trained through simulations are not able to perform as effectively in real-world scenarios. However, in this case, the _reality gap_ emerges not from technical disparities between the simulators and reality, but from the difference between the simulated environment and the targeted use environment of the robot. Additionally, robots will often need to traverse challenging and highly uneven terrain. Simulations have been used to train such behavior quite successfully in different kinds of robots, from quadrupedal [3, 4], and bipedal [24] to rovers [5]. However, these simulations again present a similar limitation, as MCI terrain may differ wildly from challenging terrain found in everyday conditions. In addition, when deploying robots in MCIs, it is imperative to be aware of the limitations of the robots used and the expectations placed on them [6, 25, 26, 27]. In disastrous situations, an unexpected failure of the robot can cause a loss of vital onsite time and resources, as well as a general loss of resource availability in the future. Given the limited availability of emergency service resources, any solutions deployed must be reliable and consistent. Simulation provides a safe environment to explore the limitations and expectations of robotic systems in various MCIs [25] and of different kinds of robots in similar environments, allowing both the exploration of a single robot's limitations and the comparative analysis of multiple robots. These publications shed light on the obstacles faced by robots during the navigation of post-MCI environments, thus providing insight into how MCI simulations can be used to address such issues. The review of these past works has helped us understand the current standing of robot navigation in unstable and uneven terrain, and has highlighted the need for vast and varied synthetic data pertaining to highly specific situations that may not be adequately represented by real-world datasets or synthetic datasets modeled after normal real-world scenarios. Furthermore, they highlight how computer simulations may accurately provide situations close to MCIs that are difficult to simulate in the real world, and how datasets acquired from such simulations may help address additional problems noted in robot deployment during MCIs.

## III HEROES: Post-MCI Environment Simulator

In the following sections, we describe the HEROES simulator framework in detail. First, we define and describe the individual destructible unit of the simulator (Section III-A).
Then, we explain the process of setting up the simulation environment (Section III-B). Finally, we describe the destruction events and other possible interactions used to create a variety of scenarios in the HEROES simulator (Section III-C). The Simulator can be found on the HEROES webpage1. Footnote 1: [https://ideas.cs.purdue.edu/research/HEROES](https://ideas.cs.purdue.edu/research/HEROES)

### _Individual destructible unit_

The simulation makes use of the _Chaos Physics_[28] framework provided by Unreal Engine to fracture meshes and simulate physical interactions of the rubble pieces with the environment. However, at present, the _Chaos Physics_ system does not support fracturing meshes at run-time. This precludes the use of large-scale static meshes created during use. Therefore, to overcome this drawback, we define an _Individual destructible unit_ to serve as a predefined and pre-fractured mesh, using which users can construct large-scale environments. The aforementioned unit, henceforth named the _Room_, is defined as a single predefined and pre-fractured static mesh, which is subjected to the physics simulation. A large set of Rooms are designed using 3D modeling software, such as Blender, to serve as the building blocks of the simulation environment. Such meshes are fractured using Unreal Engine's in-engine fracture editor to follow real-world destruction patterns closely. Each mesh is fractured before runtime, and every iteration of the simulation will result in the same fracture pattern; however, by simulating physical forces during and after fracturing, we are able to achieve randomness in the spread of broken rubble pieces, thus achieving randomness in the resulting layout. Using pre-fractured meshes also allows us to account for different material strengths, such as brick and wood. The fracturing is generally done using Voronoi fracturing, where each individual fractured component is defined as a Voronoi cell. Eq. (1) defines the Voronoi region \(V_{i}\) using control points (or Voronoi sites) denoted by \(P_{i}\), as the set of all points \(x\) in the 3-dimensional space \(\mathbb{R}^{3}\) that are closer to the Voronoi site \(P_{i}\) than they are to any other sites. Voronoi sites are chosen randomly during fracturing and their number can be altered to provide higher granularity in the destruction of the Room. The fracturing pattern can also be defined using randomly perturbed planes to specify specific fractured slices. A combination of using planes to define major fracture lines and Voronoi fracturing to specify smaller chunks of rubble was used in the experiments specified in this paper.

\[V_{i}=\{x\in X\mid d(P_{i},x)\leq d(P_{j},x)\;\forall\;i\neq j\} \tag{1}\]

Each Room in the set is designed to mimic different construction methods and structures, as can be seen in Fig. 2.

Fig. 2: _Multiple different types of Rooms can be used to construct the simulation environment. In a clockwise order, a simple Room with 1 doorway, an L-shaped Room with 1 doorway and 1 window, and translucent views of a 1 doorway room with 2 pillars for support, and a 1 doorway room with a beam and two wall supports are shown._

Furthermore, using a pre-fractured mesh of the room allows us to more closely simulate the common fracture patterns and points found in different structures.
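To make Eq. (1) concrete, the following minimal Python sketch (our illustration, not the engine's implementation; the point count, room extent, and random seed are assumptions) labels sample points of a Room volume by their nearest Voronoi site, which is exactly the partition a Voronoi fracture induces:

```python
import numpy as np

def voronoi_fracture_labels(points, sites):
    """Assign each mesh sample point to the nearest Voronoi site (Eq. 1).

    points: (N, 3) array of sample points on/inside the Room mesh.
    sites:  (K, 3) array of randomly chosen Voronoi sites P_i.
    Returns an (N,) array of cell indices, one rubble chunk per cell.
    """
    # Pairwise Euclidean distances d(P_i, x) between points and sites.
    d = np.linalg.norm(points[:, None, :] - sites[None, :, :], axis=2)
    return np.argmin(d, axis=1)  # index i minimizing d(P_i, x)

# Illustrative usage: fracture a 4 x 3 x 2.5 m "Room" volume into 20 cells.
rng = np.random.default_rng(seed=7)
room_extent = np.array([4.0, 3.0, 2.5])
points = rng.uniform(0.0, 1.0, size=(5000, 3)) * room_extent
sites = rng.uniform(0.0, 1.0, size=(20, 3)) * room_extent
labels = voronoi_fracture_labels(points, sites)
print("chunk sizes:", np.bincount(labels))
```

Raising the number of sites yields finer rubble, mirroring the granularity control described above.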
During the fracturing process, the meshes of each Room are broken into several individual meshes, which create a single _Geometry Collection_, and each individual mesh is connected to its neighbors. The connections between pieces are subject to a _Damage Threshold_ which aims to replicate the strength of different structures. During the physics simulation, a strain is applied to each connection to determine breakage, and the necessary forces are used to simulate the MCI.

### _Simulation environment_

The simulation environment is the collection of Rooms that are subjected to physics simulation. It is this collection of Rooms that will serve as the obstacle environment to be used during the training of robot navigation. The simulation environment can affect the behavior of the simulation by allowing Rooms to physically interact with one another during the destruction phase. Users are provided with a variety of different room configurations in order to represent a variety of construction methods, through which users can construct unique and varied simulation environments. Each Room is placed on a grid, allowing easy positioning through discrete placement positions. Furthermore, each Room can be rotated in increments of 90\({}^{\circ}\). During construction, users select Rooms and place them using a representative translucent preview as an indicator. The simulation environment can further be customized to utilize a wide variety of building materials such as brick, concrete, and wood, all of which have separate fracture patterns. In addition, users can also specify environmental parameters such as weather effects and time of day to further increase the variety of the generated data. Table I shows all the possible configuration parameters and exportable data available in the simulator. Following the construction of the simulation environment, users can introduce different MCI events by placing them around the environment and tailoring the parameters to their needs.

### _Destruction events_

A Destruction event in the simulation is an event representative of an MCI. These events can be placed in the simulation environment and will be responsible for replicating the MCI in the simulation. Each fractured component of a room is connected to its neighboring fractured components through joints \(x\), with each joint having a strain threshold of \(T_{s}\). If any joint of a fractured component is subject to a strain higher than \(T_{s}\), all joints of said fractured component are broken and it acts as an independent physics object. Destruction events use different methods to calculate the strain in an effort to model different MCI situations. There are three Destruction events modeled in the simulation, with provisions to add more, as showcased in Fig. 3. The available events are:

* **Building collapse** triggered through earthquake-sourced strain. This event seeks to replicate the collapse of buildings during an earthquake and the subsequent alteration of the navigation environment. Uniform strain is applied to all pieces of each Room's Geometry Collection throughout the simulation environment. Eq. (2) defines the strain \(S_{x}\) applied on each joint. The strain is calculated uniformly for each joint \(x\) in the set of all joints \(X\) and the magnitude of the strain \(M\) remains constant. \[S_{x}=\{M\mid x\in X\}\] (2)
* Damage to buildings due to **explosions**. This event replicates the damage caused by an explosion, with large amounts of strain applied to certain areas of the simulation environment.
Furthermore, a linear velocity is also provided for all pieces of rubble generated during the application of strain. This helps model the outward dispersion of rubble that can damage the area around the source of the explosion. Eq. (3) defines the calculation of the strain applied to a joint \(S_{x}\) having variable magnitude \(M_{s}/d^{2}\), where \(M_{s}\) is the magnitude at the center of the explosion and \(d\) is the distance between the joint and the center of the explosion, for every joint within a culling region \(R\). \[S_{x}=\{M_{s}/d^{2}\mid x\in R\;\forall\;x\in X\}\] (3) Following the application of strain, an outward force vector is applied to every component as defined in Eq. (4), where \(M_{f}\) is the magnitude of the force applied. Eq. (5) defines the direction of the force, where \(\vec{P_{x}}\) is the 3D position vector of the object whose joint is under consideration and \(\vec{C}\) is the position vector of the center of the explosion. \[\parallel\vec{F_{x}}\parallel=\{M_{f}\mid S_{x}\geq T_{s}\;and\;x\in R\;\forall\;x\in X\}\] (4) \[\hat{F_{x}}=\frac{(\vec{P_{x}}-\vec{C})}{\parallel\vec{P_{x}}-\vec{C}\parallel}\] (5)
* **Constrained building collapse** represents the collapse of certain sections of a building due to a variety of reasons. Strain is applied similarly to earthquake events but is constrained to a smaller region. This event allows only certain areas of the simulation environment to be affected, allowing a simulation of the transition from normal to affected environments. Furthermore, strain is applied incrementally over a duration, allowing users to simulate buildings with different strain thresholds \(T_{s}\) in the same environment. Eq. (6) defines the calculation of strain over a time \(t_{f}\). \[S_{x}=\{\sum_{t=0}^{t_{f}}M\mid x\in R\;\forall\;x\in X\}\] (6)

These Destruction events can further be tailored to suit the user's needs and can be placed around the simulation environment to provide variety without changing the layout of the environment.

Fig. 3: _Different forms of destruction events can be generated via the HEROES simulator. Left: Earthquake-sourced strain, Middle: damage due to explosion, Right: constrained building collapse._
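The three strain models above can be restated compactly outside the engine. The following Python sketch is an illustrative re-statement of Eqs. (2)-(6); the joint positions, thresholds, and magnitudes are made-up values rather than simulator defaults:

```python
import numpy as np

def uniform_strain(n_joints, magnitude):
    """Earthquake event, Eq. (2): the same strain M on every joint."""
    return np.full(n_joints, magnitude)

def explosion_strain(joint_pos, center, magnitude, radius):
    """Explosion event, Eq. (3): strain M_s / d^2 inside the culling region R."""
    d = np.linalg.norm(joint_pos - center, axis=1)
    return np.where(d <= radius, magnitude / np.maximum(d, 1e-6) ** 2, 0.0)

def outward_force(joint_pos, center, strain, threshold, magnitude_f):
    """Eqs. (4)-(5): force of magnitude M_f, directed away from the center,
    applied only to components whose strain exceeds the threshold T_s."""
    dirs = joint_pos - center
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.where((strain >= threshold)[:, None], magnitude_f * dirs, 0.0)

def incremental_strain(n_joints, magnitude, steps):
    """Constrained collapse, Eq. (6): strain accumulated over t = 0..t_f."""
    return np.full(n_joints, magnitude) * steps

# Illustrative joint layout and parameters (all values are assumptions).
joints = np.array([[0.5, 0.0, 2.0], [2.0, 1.0, 0.5], [6.0, 0.0, 1.0]])
center = np.array([0.0, 0.0, 0.0])
s = explosion_strain(joints, center, magnitude=50.0, radius=4.0)
f = outward_force(joints, center, s, threshold=5.0, magnitude_f=10.0)
print(s)  # the third joint lies outside R and receives zero strain
print(f)  # only joints over T_s get an outward impulse
```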
### _ROS integration_

To make the simulation compatible with current state-of-the-art robot simulation platforms such as ROS, we have added the provision of controlling the robots in the simulation using ROS through a ROSBridge. The ROSBridge allows us to use the ROS message-passing functionalities to control components inside the simulation. Robot components can be defined inside Unreal Engine as a set of physical meshes connected by joints. These joints can be controlled using ROS through the ROSBridge, allowing us to control the robots in the simulation. This will help us translate robot behavior to the physical simulation, allowing robots to interact with the physics objects (rubble) in the simulator. Such a scenario can help us address the issue of robot navigation over dynamic terrain (shifting rubble piles, for example) that is not easily replicable using normal real-world environments. Furthermore, this allows users to specify robot behavior and movement inside the simulator, enabling the user to simulate any form and number of components. Users can also export data from the simulator to ROS, including but not limited to images, as seen in Fig. 4 from the robot's point of view. This two-way integration of data defines a complete integration of the robot in the simulator with real-world transferability. Additional sensors can be simulated to increase the possible interactions a robot may have with the environment.

Fig. 4: _The ROS Integration allows us to transfer data between the simulator and ROS, which is one of the most popular software development platforms for robotics. The images above show the different forms of data that can be exported from the simulator as images of a quadrupedal robot’s point of view. They are arranged as follows; Left: Color Image, Middle: Depth Image, Right: Semantic Segmentation._

Apart from integration with ROS, the simulator can also be deployed using mixed-reality devices to help train first responders in MCI situations. Fig. 5 shows a version of the simulator deployed through a Meta Quest 2 Virtual Reality headset.

Fig. 5: _The simulator can also be deployed using mixed reality mediums so human users can directly interact with the simulated MCI environment. The figure shows the simulator deployed through a virtual reality headset._
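Since the simulator is driven through a ROSBridge, any standard rosbridge client can command joints and consume sensor streams. The sketch below uses the `roslibpy` client library as one possible counterpart on the ROS side; the topic names (`/heroes/joint_commands`, `/heroes/camera/color`) and message contents are hypothetical placeholders, not the simulator's published interface:

```python
import time
import roslibpy

# Connect to a rosbridge server (default websocket port 9090 assumed).
client = roslibpy.Ros(host='localhost', port=9090)
client.run()

# Hypothetical command topic: drive the simulated robot's joints.
cmd = roslibpy.Topic(client, '/heroes/joint_commands', 'sensor_msgs/JointState')

# Hypothetical sensor topic: color images rendered from the robot's camera.
cam = roslibpy.Topic(client, '/heroes/camera/color', 'sensor_msgs/Image')
cam.subscribe(lambda msg: print('got %dx%d image' % (msg['height'], msg['width'])))

# Publish a simple joint target at 10 Hz for a few seconds.
for step in range(30):
    cmd.publish(roslibpy.Message({
        'name': ['front_left_hip', 'front_left_knee'],
        'position': [0.3, -0.6],
    }))
    time.sleep(0.1)

cam.unsubscribe()
client.terminate()
```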
## IV Experiments

To evaluate the usefulness of HEROES, we set up a few tasks to be performed by beta testers, which were used to evaluate the ease of use and adaptability of HEROES.

### _Experiment tasks_

The simulator was tested for usability and effectiveness of integration into existing workflows for robot navigation training. To test the same, a set of beta testers was asked to operate the simulation and perform the following tasks:

* The reviewers were provided with layout details of an environment and were then asked to recreate the environment as closely as possible.
* The reviewers were tasked with creating an environment of their choosing.

The above-mentioned tasks were set to evaluate the ease of reproducing an environment, the ease of creating dense, detailed environments, and the ease of creating unique environments from a user's imagination. The users were provided with only a minimal set of instructions on the controls of the simulator. The overall ability of the simulator was then evaluated through a questionnaire filled out by all participants. The results of this questionnaire are highlighted in Table II and expounded in the next section. In addition, users also filled out a supplementary questionnaire targeted at quantifying general usability and user feedback. This supplementary questionnaire is outlined in Table III.

### _Usability survey_

The simulator's usability was tested using the previously mentioned experiments by a group of 10 beta testers, and the results were collated using questionnaires to judge the effectiveness of the simulator in two avenues. The first questionnaire targeted the user's ability to complete the tasks put forward; success was evaluated on a linear scale for each user to gauge the usability of the simulator for completing specific tasks. The results for this are presented in Table II. The second questionnaire aimed to understand the general usability of the simulator and the effectiveness that such a tool can have on existing robot training workflows, evaluated on a linear scale, the results of which are presented in Table III.

## V Conclusions

We presented HEROES, a versatile Unreal Engine simulator designed for training humans and emergency robots for urban search and rescue operations. By using destructible environments, we are able to generate scenarios where hazardous terrain presents a serious challenge to navigation. Training in such environments can help better ascertain drawbacks in existing methods of robot navigation when confronted with unstable terrain not easily replicable in the real world. Furthermore, HEROES provides a logistically feasible method to simulate a wide variety of post-MCI scenarios that cannot be performed in real-world training. In the future, we hope to extend the capabilities of the simulator to allow users to easily train multiple robots in a variety of scenarios. We also aim to expand the simulator's capabilities to allow users to train heterogeneous teams of robots, covering MCIs beyond simple destroyed urban environments and uneven terrain navigation.
2309.15616
Perception for Humanoid Robots
Purpose of Review: In the field of humanoid robotics, perception plays a fundamental role in enabling robots to interact seamlessly with humans and their surroundings, leading to improved safety, efficiency, and user experience. This scientific study investigates various perception modalities and techniques employed in humanoid robots, including visual, auditory, and tactile sensing by exploring recent state-of-the-art approaches for perceiving and understanding the internal state, the environment, objects, and human activities. Recent Findings: Internal state estimation makes extensive use of Bayesian filtering methods and optimization techniques based on maximum a-posteriori formulation by utilizing proprioceptive sensing. In the area of external environment understanding, with an emphasis on robustness and adaptability to dynamic, unforeseen environmental changes, the new slew of research discussed in this study has focused largely on multi-sensor fusion and machine learning in contrast to the use of hand-crafted, rule-based systems. Human robot interaction methods have established the importance of contextual information representation and memory for understanding human intentions. Summary: This review summarizes the recent developments and trends in the field of perception in humanoid robots. Three main areas of application are identified, namely, internal state estimation, external environment estimation, and human robot interaction. The applications of diverse sensor modalities in each of these areas are considered and recent significant works are discussed.
Arindam Roychoudhury, Shahram Khorshidi, Subham Agrawal, Maren Bennewitz
2023-09-27T12:32:11Z
http://arxiv.org/abs/2309.15616v1
# Perception for Humanoid Robots

###### Abstract

**Purpose of Review** In the field of humanoid robotics, perception plays a fundamental role in enabling robots to interact seamlessly with humans and their surroundings, leading to improved safety, efficiency, and user experience. This scientific study investigates various perception modalities and techniques employed in humanoid robots, including visual, auditory, and tactile sensing by exploring recent state-of-the-art approaches for perceiving and understanding the internal state, the environment, objects, and human activities. **Recent Findings** Internal state estimation makes extensive use of Bayesian filtering methods and optimization techniques based on maximum a-posteriori formulation by utilizing proprioceptive sensing. In the area of external environment understanding, with an emphasis on robustness and adaptability to dynamic, unforeseen environmental changes, the new slew of research discussed in this study has focused largely on multi-sensor fusion and machine learning in contrast to the use of hand-crafted, rule-based systems. Human robot interaction methods have established the importance of contextual information representation and memory for understanding human intentions. **Summary** This review summarizes the recent developments and trends in the field of perception in humanoid robots. Three main areas of application are identified, namely, internal state estimation, external environment estimation, and human robot interaction. The applications of diverse sensor modalities in each of these areas are considered and recent significant works are discussed.

**Keywords:** Humanoid Robots, Perception, Survey, Navigation, State Estimation, Human Robot Interaction

## 1 Introduction

Perception is of paramount importance for robots to establish a model of their internal state as well as the external environment. These models allow the robot to perform its task safely, efficiently and accurately. Perception is facilitated by various types of sensors which gather both proprioceptive and exteroceptive information. Humanoid robots, especially those which are mobile, pose a difficult challenge for the perception process: mounted sensors are susceptible to jerky and unstable motions due to the very high degrees of freedom afforded by the high number of articulable joints present on a humanoid's body, e.g., the legs, the hip, the manipulator arms or the neck. We organize the main areas of perception in humanoid robots into three broad yet overlapping areas for the purposes of this survey, namely, state estimation for balance and joint configurations, environment understanding for navigation, mapping and manipulation, and finally human-robot interaction for successful integration into a shared human workspace, see Fig. 1.

Figure 1: Perception for humanoid robots split into three principal areas. Left: State estimation being used to estimate derived quantities like CoM and ZMP from sensors like IMU and joint encoders. Right: Environment understanding has a very broad scope which varies from localization and mapping to environment segmentation for planning and even more application areas. Human Robot Interaction is closely related but deals exclusively with human beings rather than inanimate objects. Center: Few sensors which aid in perception for humanoid robots. Sources for labeled images- (a):[1], (b): [2] and (c): [3].

For each area we discuss the popular application areas, the challenges and recent methodologies used to surmount them. Internal state estimation is a critical aspect of autonomous systems, particularly for humanoid robots, in order to address both low-level stability and dynamics, and as an auxiliary to higher-level tasks such as localization, mapping and navigation. Legged robot locomotion is particularly challenging given its inherent under-actuation dynamics and the intermittent contact switching with the ground during motion. The application of external environment understanding has a very broad scope in humanoid robotics but can be roughly divided into navigation and manipulation.
Navigation implies the movement of the mobile bipedal base from one location to another without collision, thereby leaving the external environment configuration unchanged. On the other hand, manipulation is where the humanoid changes the physical configuration of its environment using its end-effectors. It could be argued that human robot interaction or HRI is a subset of environment understanding. However, we have separated the two areas based on their ultimate goals. The goal of environment understanding is to interact with inanimate objects, while the goal of HRI is to interact with humans. The sets of posed challenges are different, though similar principles may be reused. Human detection, gesture and activity recognition, teleoperation, object handover and collaborative actions, and social communications are some of the main areas where perception is used.

## 2 State Estimation

Recent works on humanoid and legged robot locomotion control have focused extensively on state-feedback approaches [4]. Legged robots have highly nonlinear dynamics, and they need high frequency (\(1\:kHz\)) and low latency (\(<\!\!1\:ms\)) feedback in order to have robust and adaptive control systems, thereby adding more complexity to the design and development of reliable estimators for the base and centroidal states, and contact detection.

### Challenges in State Estimation

Perceived data is often noisy and biased, and these errors get magnified in derived quantities. For instance, joint velocities tend to be noisier than joint positions, as these are obtained by numerically differentiating joint encoder values. Rotella _et al_[5] developed a method to determine joint velocities and acceleration of a humanoid robot using link-mounted Inertial Measurement Units (IMUs), resulting in less noise and delay compared to filtered velocities from numerical differentiation. An effective approach to mitigate biased IMU measurements is to explicitly introduce these biases as estimated states in the estimation framework [6], [7]. The high dimensionality of humanoids makes it computationally expensive to formulate a single filter for the entire state. As an alternative, Xinjilefu _et al_[8] proposed decoupling the full state into several independent state vectors, and used separate filters to estimate the pelvis state and joint dynamics. To account for kinematic modeling errors such as joint backlash and link flexibility, Xinjilefu _et al_[9] introduced a method using a Linear Inverted Pendulum Model (LIPM) with an offset that represented the modeling error in the Center of Mass (CoM) position and/or external forces. Bae _et al_[10] proposed a CoM kinematics estimator by including a spring and damper in the LIPM to compensate for modeling errors.
To address the issue of link flexibility in the humanoid exoskeleton _Atalante_, Vigne _et al_[11] decomposed the full state estimation problem into several independent attitude estimation problems, each corresponding to a given flexibility and a specific IMU, relying only on dependable and easily accessible geometric parameters of the system rather than the dynamic model. In the remainder of this section, we classify the recent related works on state estimation into three main categories [12]: proprioceptive state estimation, which primarily involves filtering methods that fuse high-frequency proprioceptive sensor data; multi-sensor fusion filtering, which integrates exteroceptive sensor modalities into the filtering process; and multi-sensor fusion with state smoothing, which employs advanced techniques that leverage the entire history of sensor measurements to refine estimated states. Finally, we present a list of available open-source software for state estimation from the reviewed literature in Tab. 1.

### Proprioceptive State Estimation

Proprioceptive sensors provide measurements of the robot's internal state. They are commonly used to compute leg odometry, which captures the drifting pose. For a comprehensive review of the evolution of proprioceptive filters on leg odometry, refer to [22] and [23].

#### 2.2.1 Base State Estimation

In humanoid robots, the focus is on estimating the position, velocity, and orientation of the "base" frame, typically located at the pelvis. Recent state estimation approaches in this field often fuse IMU and leg odometry. The work by Bloesch _et al_[6] was a decisive step in introducing a base state estimator for legged robots using a quaternion-based Extended Kalman Filter (EKF) approach. This method made no assumptions about the robot's gait, number of legs, or the terrain structure, and included absolute positions of the feet contact points and IMU bias terms in the estimated states. Rotella _et al_[7] extended it to humanoid platforms by considering the full foot plate and adding foot orientation to the state vector. Both works showed that as long as at least one foot remains in contact with the ground, the base absolute velocity, roll and pitch angles, and IMU biases are observable. There are also other formulations for the base state estimation using only proprioceptive sensing in [16], [24], and [25].

#### 2.2.2 Centroidal State Estimation

Centroidal states in humanoid robots include the CoM position, linear and angular momentum, and their derivatives. The CoM serves as a vital control variable for stability and robust humanoid locomotion, making accurate estimation of centroidal states crucial in control system design for humanoid robots. When the full 6-axis contact wrench is not directly available to the estimator, e.g., the robot gauge sensors measure only the contact normal force, some works have utilized simplified models of dynamics, such as the LIPM [26]. Piperakis _et al_[27] presented an EKF to estimate centroidal variables by fusing joint encoders, IMU, and force-sensitive resistors on the feet, later including visual odometry in [13]. They formulated the estimator based on the non-linear Zero Moment Point (ZMP) dynamics, which captured the coupling between the dynamic behavior in the frontal and lateral planes. Their results showed better performance than a Kalman filter formulation based on the LIPM.
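As a concrete illustration of the simplified-model estimators discussed above, the following sketch runs a discrete linear Kalman filter on the planar LIPM, whose dynamics \(\ddot{c}=(g/h)(c-p)\) couple the CoM position \(c\) to the ZMP \(p\). The noise covariances, pendulum height, and synthetic signals are illustrative assumptions, not parameters from the cited works:

```python
import numpy as np

g, h, dt = 9.81, 0.8, 0.002          # gravity, pendulum height, 500 Hz loop
w2 = g / h                            # LIPM constant g / z_c

# Euler-discretized LIPM: state s = [c, c_dot], input u = measured ZMP p.
A = np.array([[1.0, dt], [w2 * dt, 1.0]])
B = np.array([[0.0], [-w2 * dt]])
H = np.array([[1.0, 0.0]])            # kinematics gives a noisy CoM position

Q = np.diag([1e-8, 1e-6])             # process noise (assumed)
R = np.array([[1e-4]])                # measurement noise (assumed)

s = np.array([[0.0], [0.0]])          # state estimate
P = np.eye(2) * 1e-3                  # estimate covariance

def kf_step(s, P, zmp_meas, com_meas):
    # Predict through the LIPM dynamics driven by the measured ZMP.
    s = A @ s + B * zmp_meas
    P = A @ P @ A.T + Q
    # Correct with the geometry-based (kinematic) CoM reconstruction.
    y = com_meas - H @ s
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    s = s + K @ y
    P = (np.eye(2) - K @ H) @ P
    return s, P

# Toy usage with synthetic, noise-corrupted signals.
rng = np.random.default_rng(0)
for k in range(1000):
    zmp = 0.02 * np.sin(2 * np.pi * 0.5 * k * dt)
    com = zmp + rng.normal(0.0, 0.01)     # fake kinematic CoM measurement
    s, P = kf_step(s, P, zmp, com)
print("estimated CoM position/velocity:", s.ravel())
```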
Mori _et al_[28] proposed a centroidal state estimation framework for a humanoid robot based on real-time inertial parameter identification, using only the robot's proprioceptive sensors (IMU, foot Force/Torque (F/T) sensors, and joint encoders) and the sequential least squares method. They conducted successful experiments deliberately altering the robot's mass properties to demonstrate the robustness of their framework against dynamic inertia changes. By having 6-axis F/T sensors on the feet, Rotella _et al_[29] utilized the momentum dynamics of the robot to estimate the centroidal quantities. Their nonlinear observability analysis demonstrated the observability of either the biases or the external wrench. In a different approach, Carpentier _et al_[30] proposed a frequency analysis of the information sources utilized in estimating the CoM position, later extended to CoM acceleration and the derivative of angular momentum [31]. They introduced a complementary filtering technique that fuses various measurements, including ZMP position, sensed contact forces, and geometry-based reconstruction of the _CoM_ using joint encoders, according to their reliability in the respective spectral bandwidth.

#### Contact Detection and Estimation

Foot contact detection plays a crucial role in locomotion control, gait planning, and proprioceptive state estimation in humanoid robots. Recent approaches can be categorized into two main groups: those directly utilizing measured ground reaction wrenches, and methods integrating kinematics and dynamics to infer the contact status by estimating the ground reaction forces. Fallon _et al_[2] employed a Schmitt trigger with a 3-axis foot F/T sensor to classify contact forces and used a simple state machine to determine the most reliable foot for kinematic measurements. Piperakis _et al_[13] adapted a similar approach by utilizing pressure sensors on the foot. Rotella _et al_[32] presented an unsupervised method for estimating contact states by using fuzzy clustering on only proprioceptive sensor data (foot F/T and IMU sensing), surpassing traditional approaches based on measured normal force. By including the joint encoders in proprioceptive sensing, Piperakis _et al_[20] proposed an unsupervised learning framework for gait phase estimation, achieving effectiveness on uneven/rough terrain walking gaits. They also developed a deep learning framework utilizing F/T and IMU sensing in each leg to determine the contact state probabilities [33]. The generalizability and accuracy of their approach were demonstrated on different robotic platforms. Furthermore, Maravgakis _et al_[34] introduced a probabilistic contact detection model, using only IMU sensors mounted on the end effector. Their approach estimated the contact state of the feet without requiring training data or ground truth labels.
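The Schmitt-trigger style of contact classification mentioned above reduces to a two-threshold hysteresis on the measured normal force, which suppresses chattering around touchdown and lift-off. A minimal sketch, with force thresholds that are assumptions rather than values from the cited papers:

```python
class SchmittContactDetector:
    """Hysteresis-based foot contact classifier on the measured normal force.

    The foot is declared in contact once the force rises above f_high and
    declared airborne only after it drops below f_low, suppressing the
    rapid toggling a single threshold would produce near touchdown.
    """

    def __init__(self, f_low=20.0, f_high=60.0):   # Newtons, assumed values
        self.f_low, self.f_high = f_low, f_high
        self.in_contact = False

    def update(self, normal_force):
        if self.in_contact and normal_force < self.f_low:
            self.in_contact = False
        elif not self.in_contact and normal_force > self.f_high:
            self.in_contact = True
        return self.in_contact

# Toy force profile: swing -> touchdown (with bounce) -> stance -> lift-off.
detector = SchmittContactDetector()
for f in [5, 15, 70, 40, 90, 300, 250, 30, 10]:
    print(f, detector.update(f))
```

The hysteresis band keeps the detector latched through brief force dips during stance, which a single threshold would misclassify as repeated contact losses.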
Table 1: Open-source software for humanoid robot state estimation. All cited software are available as ROS packages.

| Paper | Software | Language | Description |
|---|---|---|---|
| **Extended Kalman Filtering** | | | |
| [13] | SEROW [14] | C++ | Multi-sensor state estimation (IMU, joint encoders, visual odometry) |
| [12] | PRONTO [15] | C++ | Multi-sensor state estimation (IMU, joint encoders, LiDAR, camera) |
| [16] | InEKF [17] | C++ | Invariant EKF (using IMU motion model, with different measurement models) |
| **Factor Graph** | | | |
| [18] | WOLF [19] | C++ | Multi-sensor state smoothing (IMU, joint encoders, LiDAR, camera) |
| **Learning** | | | |
| [20] | GEM [21] | Python | Unsupervised gait-phase estimation (IMU, joint encoders, F/T sensors) |

Another active research field in humanoid robots is monitoring and identifying contact points on the robot's body. Common approaches focus on proprioceptive sensing for contact localization and identification. Flacco _et al_[35] proposed using an internal residual of external momentum to isolate and identify singular contacts, along with detecting additional contacts with known locations. Manuelli _et al_[36] introduced a contact particle filter for detecting and localizing external contacts using only proprioceptive sensing, such as 6-axis F/T sensors, capable of handling up to 3 contacts efficiently. Vorndamme _et al_[37] developed a real-time method for multi-contact detection using 6-axis F/T sensors distributed along the kinematic chain, capable of handling up to 5 contacts. Vezzani _et al_[38] proposed a memory unscented particle filter algorithm for real-time 6 Degrees of Freedom (DoF) tactile localization using contact point measurements made by tactile sensors.

### Multi-Sensor Fusion Filtering

One drawback of base state estimation using proprioceptive sensing is the accumulation of drift over time, due to sensor noise. This drift is not acceptable for controlling highly dynamic motions; therefore, it is typically compensated for by integrating other sensor modalities from exteroceptive sensors, such as cameras, depth cameras, and LiDAR. Fallon _et al_[2] proposed a drift-free base pose estimation method by incorporating LiDAR sensing into a high-rate EKF estimator using a Gaussian particle filter for laser scan localization. Although their framework eliminated the drift, a pre-generated map was required as input. Piperakis _et al_[39] introduced a robust Gaussian EKF to handle outlier detection in visual/LiDAR measurements for humanoid walking in dynamic environments. To address state estimation challenges in real-world scenarios, Camurri _et al_[12] presented Pronto, a modular open-source state estimation framework for legged robots (Fig. 2). It combined proprioceptive and exteroceptive sensing, such as stereo vision and LiDAR, using a loosely-coupled EKF approach.

Figure 2: State estimation with multi-sensor filtering, integrating LiDAR for drift correction and localization. Top row, filtering people from raw point cloud. Bottom row, state estimation and localization with iterative closest point correction on filtered point cloud. From [12].
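At the core of the iterative-closest-point (ICP) correction mentioned in the figure above lies a closed-form rigid alignment of matched point pairs. The following SVD-based (Kabsch) sketch shows that inner step in isolation; it is a generic textbook construction, not code from any of the cited frameworks:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t mapping src onto dst
    (matched (N, 3) point sets), minimizing ||R @ src_i + t - dst_i||^2."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                          # proper rotation (det = +1)
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a known 20-degree yaw and a small translation.
rng = np.random.default_rng(1)
cloud = rng.normal(size=(100, 3))
ang = np.deg2rad(20.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
moved = cloud @ R_true.T + np.array([0.1, -0.05, 0.02])
R, t = rigid_align(cloud, moved)
print(np.allclose(R, R_true, atol=1e-6), t)
```

Inside an ICP loop, this alignment is recomputed after each nearest-neighbor re-matching until the pose correction converges.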
However, as the number of states and measurements increases, computational complexity becomes a limitation. Recent advancements in computing power and nonlinear solvers have popularized non-linear iterative maximum a-posteriori (MAP) optimization techniques, such as factor graph optimization. To address the issue of visual tracking loss in visual factor graphs, Hartley _et al_[40] introduced a factor graph framework that integrated forward kinematic and pre-integrated contact factors. The work was extended by incorporating the influence of contact switches and associated uncertainties [41]. Both works showed that the fusion of contact information with IMU and vision data provides a reliable odometry system for legged robots. Sola _et al_[18] presented an open-source modular estimation framework for mobile robots based on factor graphs. Their approach offered systematic methods to handle the complexities arising from multi-sensory systems with asynchronous and different-frequency data sources. This framework was evaluated on state estimation for legged robots and landmark-based visual-inertial SLAM for humanoids by Fourmy _et al_[26].

## 3 Environment Understanding

Environment understanding is a critical area of research for humanoid robots, enabling them to effectively navigate through and interact with complex and dynamic environments. This field can be broadly classified into two key categories: 1. localization, navigation and planning for the mobile base, and 2. object manipulation and grasping.

### Perception in Localization, Navigation and Planning

Localization focuses on precisely and continuously estimating the robot's position and orientation relative to its environment. Planning and navigation involve generating optimal paths and trajectories for the robot to reach its desired destination while avoiding obstacles and considering task-specific constraints.

#### 3.1.1 Localization, Mapping and SLAM

Localization and SLAM (simultaneous localization and mapping) rely primarily on visual sensors such as cameras and lasers, but often additionally use encoders and IMUs to enhance estimation accuracy.

#### Localization

Indoor environments are usually considered structured, characterized by the presence of well-defined, repeatable and often geometrically consistent objects. Landmarks can be uniquely identified by encoded vectors obtained from visual sensors such as depth or RGB cameras, allowing the robot to essentially build up a visual map of the environment and then compare newly observed landmarks against a database to localize via object or landmark identification. In recent years, the use of handcrafted image features such as SIFT and SURF and feature dictionaries such as the Bag-of-Words (BoW) model in landmark representation has been superseded by feature representations _learned_ through training on large example sets, usually by variants of artificial neural networks such as convolutional neural networks (CNNs). CNNs have also outperformed classifiers such as support vector machines (SVMs) in deriving inferences [42][43]. However, several rapidly evolving CNN architectures exist. Ovalle-Magallanes _et al_[44] performed a comparative study of four such networks while successfully localizing in a visual map. The _RoboCup Soccer League_ is popular in humanoid research due to the visual identification and localization challenges it presents. [45], [46] and [47] are some examples of real-time, CNN-based ball detection approaches utilizing RGB cameras developed specifically for RoboCup.
Cruz _et al_[48] could additionally estimate player poses, goal locations and other key pitch features using intensity images alone. Due to the low on-board computational power of the humanoids, others have used fast, low-power external mobile GPU boards such as the Nvidia Jetson to aid inference [47][49]. Unstructured and semi-structured environments are encountered outdoors or in hazardous and disaster rescue scenarios. They have a dearth of reliably trackable features and unpredictable lighting conditions, and are challenging for gathering training data. Thus, instead of features, researchers have focused on raw point clouds or on combining different sensor modalities for navigating such environments. Starr _et al_[50] presented a sensor fusion approach which combined long-wavelength infrared stereo vision and a spinning LiDAR for accurate rangefinding in smoke-obscured environments. Nobili _et al_[51] successfully localized robots constrained by a limited field-of-view LiDAR in a semi-structured environment. They proposed a novel strategy for tuning outlier filtering based on point cloud overlap which achieved good localization results in the DARPA Robotics Challenge Finals. Raghavan _et al_[52] presented simultaneous odometry and mapping by fusing LiDAR and kinematic-inertial data from IMU, joint encoders, and foot F/T sensors while navigating a disaster environment.

_SLAM_ SLAM subsumes localization through the additional map construction and loop-closing aspects, whereby the robot has to re-identify and match a place which was visited sometime in the past to its current surroundings, and adjust its pose history and recorded landmark locations accordingly. A humanoid robot which is intended to share human workspaces needs to deal with moving objects, both rapid and slow, which could disrupt its mapping and localizing capabilities. Thus, recent works on SLAM have focused on handling the presence of dynamic obstacles in visual scenes. While the most popular approach remains sensor fusion [53][54], other purely visual approaches have also been proposed, such as [55], which introduced a dense RGB-D SLAM solution that utilized optical flow residuals to achieve accurate and efficient dynamic/static segmentation for camera tracking and background reconstruction. Zhang _et al_[56] took a more direct approach which employed deep learning based human detection, and used graph-based segmentation to separate moving humans from the static environment. They further presented a SLAM benchmark dedicated to dynamic environment SLAM solutions [57]. It included RGB-D data acquired from an on-board camera on the _HRP-4_ humanoid robot, along with other sensor data. Adapting publicly available SLAM solutions and tailoring them for humanoid use is not uncommon. Sewtz _et al_[58] adapted ORB-SLAM [59] for a multi-camera setup on the DLR _Rollin' Justin_ system, while Ginn _et al_[60] adapted it for the _iGus_, a midsize humanoid platform, to achieve low computational demands.

#### Navigation and Planning

Navigation and planning algorithms use perception information to generate a safe, optimal and reactive path, considering obstacles, terrain, and other constraints.

_Local Planning_ Local planning or reactive navigation is generally concerned with local real-time decision-making and control, allowing the robot to actively respond to perceived changes in the environment and adjust its movements accordingly.
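As a toy illustration of such a perception-driven reactive loop, and not of any specific controller cited below, the following sketch converts a one-dimensional range scan into forward and angular velocity commands by steering toward the most open bearing; all gains and thresholds are arbitrary assumptions:

```python
import numpy as np

def reactive_cmd(ranges, angles, stop_dist=0.4, v_max=0.3, k_turn=1.5):
    """Map a range scan (m) at bearing angles (rad) to (v, omega) commands.

    Steers toward the bearing with the largest clearance and slows down
    as the closest obstacle in front approaches stop_dist.
    """
    target = angles[np.argmax(ranges)]            # most open direction
    front = ranges[np.abs(angles) < np.deg2rad(20)]
    clearance = front.min() if front.size else ranges.min()
    v = v_max * np.clip((clearance - stop_dist) / stop_dist, 0.0, 1.0)
    omega = k_turn * target                       # proportional steering
    return v, omega

# Toy scan: obstacle ahead, free space to the left.
angles = np.linspace(-np.pi / 2, np.pi / 2, 19)
ranges = np.where(angles > 0.3, 2.0, 0.5)
print(reactive_cmd(ranges, angles))               # slows down, turns left
```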
Especially in highly controlled applications, rule-based, perception-driven navigation is still popular and yields state-of-the-art performance both in terms of time demands and task accomplishment. Bista _et al_[61] achieved real-time navigation in indoor environments by representing the environment with key RGB images and deriving a control law based on common line segments and feature points between the current image and nearby key images. Regier _et al_[62] determined appropriate actions based on a pre-defined set of mappings between object class and action. A CNN was used to classify objects from monocular RGB vision. Ferro _et al_[63] integrated information from a monocular camera, joint encoders, and an IMU to generate a collision-free visual servo control scheme. Juang _et al_[64] developed a line follower which was able to infer forward, lateral and angular velocity commands using path curvature estimation and PID control from monocular RGB images. Magassouba _et al_[65] introduced an aural servo framework based on auditory perception, enabling robot motions to be directly linked to low-level auditory features through a feedback loop. We also see the use of a diverse array of classifiers to learn navigation schemes from perception information. Their generalization capability allows adaptation to unforeseen obstacles and events in the environment. Abiyev _et al_[66] presented a vision-based path-finding algorithm which segregated captured images into free and occupied areas using an SVM. Lobos-Tsunekawa _et al_[67] and Silva _et al_[68] proposed deep learned visual (RGB) navigation systems for humanoid robots which were able to achieve real time performance. The former used a reinforcement learning (RL) system with an actor-critic architecture, while the latter utilized a decision tree of deep neural networks deployed on a soccer-playing robot.

#### Global Planning

These algorithms operate globally, taking into account long-term objectives and optimizing movements to minimize costs, maximize efficiency, or achieve a specific outcome on the basis of a perceived environment model. Footstep Planning is a crucial part of humanoid locomotion and has generated substantial research interest in its own right. Recent works exhibit two primary trends related to perception. The first is providing humanoids the capability of rapidly perceiving changes in the environment and reacting through fast re-planning. The second endeavors to segment and/or classify uneven terrains to find stable 6 DoF footholds for highly versatile navigation. Tanguy _et al_[54] proposed a model predictive control (MPC) scheme that fused visual SLAM and proprioceptive F/T sensors for accurate state estimation. This allowed rapid reaction to external disturbances by adaptive stepping, leading to balance recovery and improved localization accuracy. Hildebrandt _et al_[69] used the point cloud from an RGB-D camera to model obstacles as swept-sphere-volumes (SSVs) and step-able surfaces as convex polygons for real-time reactive footstep planning with the _Lola_ humanoid robot. Their system was capable of handling rough terrain as well as external disturbances such as pushes (see Fig. 3).

Figure 3: Footstep planning on the humanoid _Lola_ from [69]. Top left: The robot’s vision system and a human causing disturbance. Bottom right: The collision model with geometric obstacle approximations.

Others have also used geometric primitives to aid in footstep planning, such as surface patches for foothold representation [70][71] and 2D plane segments embedded in 3D space for finding step-able regions [72][73], or have represented obstacles by their polygonal ground projections [74].
Suryamurthy _et al_[75] assigned pixel-wise terrain labels and rugosity measures using a CNN consuming RGB images for footstep planning on a _CENTAURO_ robot. Whole Body Planning in humanoid robots involves the coordinated planning and control of the robot's entire body to achieve an objective. Coverage planning is a subset of whole body planning where a minimal sequence of whole body robot poses is estimated to completely explore a 3D space via robot-mounted visual sensors [76][77]. Target finding is a special case of coverage planning where the exploration stops when the target is found [78][79]. These concepts are related primarily to view planning in computer vision. In other applications, Wang _et al_[80] presented a method for trajectory planning and formation building of a robot fleet using local positions estimated from onboard optical sensors, and Liu _et al_[81] presented a temporal planning approach for choreographing dancing robots in response to microphone-sensed music.

### Perception in Grasping and Manipulation

Manipulation and grasping in humanoid robots involve their ability to interact with objects of varying shapes, sizes, and weights, to perform dexterous manipulation tasks using their sensor-equipped end-effectors which provide visual or tactile feedback for grip adjustment.

#### Grasp Planning

Grasp planning is a lower-level task specifically focused on determining the optimal manipulator pose sequence to securely and effectively grasp an object. Visual information is used to find grasping locations and also as feedback to minimize the difference between the target grasp pose and the current end-effector pose. Schmidt _et al_[82] utilized a CNN trained on object depth images and pre-generated analytic grasp plans to synthesize grasp solutions. The system generated full end-effector poses and could generate poses not limited to the camera view direction. Vezzani _et al_[83] modeled the shape and volume of the target object captured from stereo vision in real-time using super-quadric functions, allowing grasping even when parts of the object were occluded. Vicente _et al_[84] and Nguyen _et al_[85] focused on achieving accurate hand-eye coordination in humanoids equipped with stereo vision. While the former compensated for kinematic calibration errors between the robot's internal hand model and captured images using particle-based optimization, the latter trained a deep neural network predictor to estimate the robot arm's joint configuration. Nguyen _et al_[86] proposed a combination of CNNs and dense conditional random fields (CRFs) to infer action possibilities on an object (affordances) from RGB images. Tactile sensors, such as pressure-sensitive skins or fingertip sensors, provide feedback about the contact (surface normal) forces, slip detection, object texture, and shape information during object grasping. Kaboli _et al_[87] extracted tactile descriptors for material and object classification agnostic to various sensor types such as dynamic pressure sensors, accelerometers, capacitive sensors, and impedance electrode arrays. A Nao with artificial skin used for their experiments is shown in Fig. 4.

Figure 4: Left: A Nao humanoid equipped with artificial skin cells on the chest, hand, fore arm, and upper arm. Right: Visualization of the skin cell coordinate frames on the Nao. Figure taken from [87].
Hundhausen _et al_[88] introduced a soft humanoid hand equipped with in-finger integrated cameras and an in-hand real-time image processing system based on CNNs for fast reactive grasping.

#### Manipulation Planning

Manipulation planning involves the higher-level decision-making process of determining how the robot should manipulate an object once it is grasped. It generates a sequence of motions or actions which is updated based on the continuously perceived robot and grasped object state. Deep recurrent neural networks (RNNs) are capable of predicting the next element in a sequence based on the previous elements. This property is exploited in manipulation planning by breaking down a complex task into a series of manipulation commands generated by RNNs based on past commands. These networks are capable of mapping features extracted from a sequence of RGB images, usually by CNNs, to a sequence of motion commands [89][90]. Inceoglu _et al_[91] presented a multimodal failure monitoring and detection system for robots which integrated high-level proprioceptive, auditory, and visual information during manipulation tasks. Robot-assisted dressing is a challenging manipulation task that has been addressed by multiple authors. Zhang _et al_[92] utilized a hierarchical multi-task control strategy to adapt the humanoid robot _Baxter_'s applied forces, measured using joint torques, to the user's movements during dressing. By tracking the subject human's pose in real-time using capacitive proximity sensing with low latency and high signal-to-noise ratio, Erickson _et al_[93] developed a method to adapt to human motion and adjust for errors in pose estimation during dressing assistance by the _PR2_ robot. Zhang _et al_[94] computed suitable grasping points on garments from depth images using a deep neural network to facilitate robot manipulation in robot-assisted dressing tasks.
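To make the CNN-to-RNN pattern described above concrete, the following PyTorch sketch maps a sequence of RGB frames to a sequence of motion commands. The layer sizes and the 7-dimensional command are illustrative assumptions and do not reproduce the architectures of [89][90]:

```python
import torch
import torch.nn as nn

class VisuomotorRNN(nn.Module):
    """Per-frame CNN features -> LSTM -> one motion command per time step."""

    def __init__(self, cmd_dim=7, feat_dim=128, hidden=256):
        super().__init__()
        self.cnn = nn.Sequential(                 # tiny illustrative encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.rnn = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, cmd_dim)    # e.g., a 7-DoF arm command

    def forward(self, frames):                    # frames: (B, T, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(feats)                  # context carried across steps
        return self.head(out)                     # commands: (B, T, cmd_dim)

# Toy forward pass: a batch of 2 clips, 10 frames of 64x64 RGB each.
model = VisuomotorRNN()
cmds = model(torch.randn(2, 10, 3, 64, 64))
print(cmds.shape)  # torch.Size([2, 10, 7])
```

The recurrent state is what lets the command at step \(t\) depend on the history of past frames and commands, as the text above describes.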
## 4 Human-Robot Interaction

Human-robot interaction is a subset of environment understanding which deals with interactions with humans as opposed to inanimate objects. In order to achieve this, a robot needs diverse capabilities ranging from detecting humans, recognizing their pose, gesture, and emotions, to predicting their intent and even proactively performing actions to ensure a smooth and seamless interaction. There are two main challenges to perception in HRI - perception of users, and inference, which involves making sense of the data and making predictions.

### Perception of Users

This involves identifying humans in the environment and detecting their pose, facial features, and the objects they interact with. This information is crucial for action prediction and emotion recognition [95]. Robots rely on vision-based, audio-based, tactile-based, and range sensor-based sensing techniques for detection, as explained in the survey on perception methods of social robots by [96]. Robinson _et al_[97] showed how vision-based techniques have evolved from using facial features, motion features, and body appearance to deep learning-based approaches. Motion-based features separate moving objects from the background to detect humans. Body appearance-based algorithms use shape, curves, posture, and body parts to detect humans. Deep learning models like R-CNN, Faster R-CNN, and YOLO have also been applied for human detection [96].

Pose detection is essential for understanding human body movements and postures. Sensors such as RGB cameras, stereo cameras, depth sensors, and motion tracking systems are used to extract pose information, as explained in detail by Moller _et al_[98] in their survey of human-aware robot navigation. Facial features play a significant role in pose detection, as they provide additional points of interest and enable emotion recognition [99]. A notable demonstration of detecting pose and using it for bi-manual robot control with an RGB-D range sensor was given by Hwang _et al_[100]. The system employed a CNN from the _OpenPose_ package to extract human skeleton poses, which were then mapped to drive robotic hands. The method was implemented on the _CENTAURO_ robot and successfully performed box and lever manipulation tasks in real-time. A related system offered real-time pose imitation for a mid-size humanoid robot equipped with a servo-cradle-head RGB-D vision system; using eight pre-trained neural networks, it accurately captured and imitated 3D motions performed by a target human, enabling effective pose imitation and complex motion replication on the robot. Lv _et al_[101] presented a novel motion synchronization method called _GuLiM_ for teleoperation of medical assistive robots, particularly in the context of combating the COVID-19 pandemic. Li _et al_[102] presented a multimodal mobile teleoperation system that integrated a vision-based hand pose regression network and an IMU-based arm tracking method. The system allowed real-time control of a robot hand-arm system using depth camera observations and IMU readings from the observed human hand, enabled through the _Transteleop_ neural network, which generated robot hand poses based on a depth image of a human hand.

Audio communication is vital for human interaction, and robots aim to mimic this ability. Microphones are used for audio detection, and speakers reproduce sound. Humanoid robots are usually designed to be binaural, i.e., they have two separate microphones on either side of the head which receive transmitted sound independently. Several researchers have exploited this property to localize both the sound source and the robot in complex auditory environments. Such techniques are used in speaker localization, as well as in other semantic understanding tasks such as automatic speech recognition (ASR), auditory scene analysis, emotion recognition, and rhythm recognition [96],[103]. Benaroya _et al_[104] employed non-negative tensor factorization for binaural localization of multiple sound sources within unknown environments. Schymura _et al_[105] focused on combined audio-visual speaker localization and proposed a closed-form solution to compute dynamic stream weighting between audio and visual streams, improving state estimation in a reverberant environment. That study was later extended to incorporate dynamic stream weights into nonlinear dynamical systems, improving speaker localization performance even further [106]. Davila-Chacon _et al_[107] used a spiking neural network for sound source localization and a feed-forward neural network for ego-noise removal to enhance ASR in challenging environments. Trowitzsch _et al_[108] presented a joint solution for sound event identification and localization, utilizing spatial audio stream segregation in a binaural robotic system.
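The cited systems rely on learned models, but the underlying binaural cue is simple to illustrate. The sketch below estimates source direction from the inter-microphone time delay via cross-correlation; the microphone spacing, sampling rate, and test signal are illustrative assumptions, not parameters of any cited system.

```python
import numpy as np

def binaural_direction(left, right, fs=16000, mic_distance=0.15, c=343.0):
    """Estimate source azimuth (radians, positive = right) from two ear signals."""
    # Cross-correlate to find the lag at which the two ear signals best align.
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)  # positive lag: left ear hears later
    tdoa = lag / fs                           # inter-aural time difference (seconds)
    # Far-field model: delay = mic_distance * sin(azimuth) / c.
    sin_az = np.clip(c * tdoa / mic_distance, -1.0, 1.0)
    return np.arcsin(sin_az)

# Toy example: a broadband burst arriving 4 samples earlier at the right ear.
rng = np.random.default_rng(1)
sig = rng.normal(size=2048)          # noise burst has a sharp autocorrelation peak
right = sig
left = np.roll(sig, 4)               # left ear receives it 4 samples later
print(np.degrees(binaural_direction(left, right)))  # ~35 degrees to the right
```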
Ahmad _et al_[109], in their survey on physiological signal-based emotion recognition, showed that physiological signals from the human body, such as heart rate, blood pressure, body temperature, brain activity, and muscle activation, can provide insights into emotions. Tactile interaction is an inherent part of natural interaction between humans, and the same holds true for robots interacting with humans. The type of touch can be used to infer many things, such as the human's state of mind, the nature of the object, and what is expected out of the interaction [96]. Mainly two kinds of tactile sensors are used for this purpose - sensors embedded on the robot's arms and grippers, and cover-based sensors which detect touch across entire regions or the whole body [96]. Khurshid _et al_[110] investigated the impact of grip-force, contact, and acceleration feedback on human performance in a teleoperated pick-and-place task. Results indicated that grip-force feedback improved stability and delicate control, while contact feedback improved spatial movement but may vary depending on object stiffness.

### Inference

An important aspect of inference with all the detected data from the previous section concerns aligning the perspectives of the user and the robot. This allows the robot to better understand the intent of the user regarding the objects or locations they are looking at. This skill is called _perspective taking_ and requires the robot to consider and understand other individuals through motivation, disposition, and contextual attempts. This skill, paired with a shared knowledge base, allows individuals and robots to build a reliable _theory of mind_ and collaborate effectively during various types of tasks [3]. Bera _et al_[111] proposed an emotion-aware navigation algorithm for social robots which combined emotions learned from facial expressions and walking trajectories using an onboard and an overhead camera, respectively. The approach achieved accurate emotion detection and enabled socially conscious robot navigation in low-to-medium-density environments.

## 5 Conclusion

Substantial progress has been made in all three principal areas discussed in this survey. In Tab. 2 we compile a list of the most commonly cited humanoids in the literature corresponding to the aforementioned categorization. We conclude with a summary of the trends and possible areas of further research we observed in each of these areas.

_State Estimation._ Tightly-coupled formulations of state estimation based on MAP seem promising for future work, as they offer several advantages, such as modularity, seamless integration of new sensor types, and the possibility of extending generic estimators to accommodate a wider range of perception sources in order to develop a whole-body estimation framework. By integrating high-rate control estimation and non-drifting localization based on SLAM, such a framework could provide real-time estimation for locomotion control purposes and facilitate gait and contact planning. Another important area of focus is the development of multi-contact detection and estimation methods for arbitrary unknown contact locations. By moving beyond rigid-segment assumptions for the humanoid structure and augmenting robots with additional sensors, such as strain gauges to directly measure segment deflections, multi-contact detection and compensation for modeling errors can lead to more accurate state estimation and improved human-robot interactions.
_Environment Understanding._ With the availability of improved inference hardware, learning techniques are increasingly being applied in localization, object identification, and mapping, replacing handcrafted feature descriptors. However, visual classifiers like CNNs struggle with unstructured "stuff" compared to regularly shaped objects, necessitating memory-intensive representations such as point clouds and the need for enhanced classifier capabilities. In the field of SLAM, which has robust solutions for static environments, research is focused on handling dynamic obstacles by favoring multi-sensor fusion for increased robustness. Scalability and real-time capability remain challenging due to the potential overload of a humanoid's onboard computer from wrangling multiple data streams over long sequences. Footstep planning shows a trend towards rapid environment modeling for quick responses, but consistent modeling of dynamic obstacles remains an open challenge. Manipulation and long-term global planning also rely on learning techniques to adapt to unforeseen constraints, requiring representations or embeddings of high-dimensional interactions between perceived elements for complexity reduction. However, finding more efficient, comprehensive, and accurate ways to express these relationships is an ongoing challenge.

_Human-Robot Interaction._ Research in the field of HRI has focused on understanding human intent and emotion through elements such as body pose, motion, expression, audio cues, and behavior. Though this may seem natural and trivial from a human's perspective, it is often very challenging to incorporate the same capabilities into robotic systems. Despite considerable progress in the above approaches, the ever-changing and unpredictable nature of human interaction necessitates additional steps that incorporate concepts like shared autonomy and shared perception. In this context, contextual information and memory play a crucial role in accurately perceiving the state and intentions of the humans with whom interaction is desired. Current research endeavors are actively focusing on these pivotal topics, striving to enhance the capabilities of humanoid robots in human-robot interactions while also considering trust, safety, explainability, and ethics during these interactions.

#### Acknowledgments.

This work has partially been funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under BE 4420/4-1 within the FOR 5351 - 459376902 - AID4Crops and under Germany's Excellence Strategy, EXC-2070 - 390732324 - PhenoRob.

## Declarations

* **Conflict of interest** The authors declare no competing interests.
* **Human and Animal Rights and Informed Consent** This article does not contain any studies with human or animal subjects performed by any of the authors.
2309.07574
MMEAD: MS MARCO Entity Annotations and Disambiguations
MMEAD, or MS MARCO Entity Annotations and Disambiguations, is a resource for entity links for the MS MARCO datasets. We specify a format to store and share links for both document and passage collections of MS MARCO. Following this specification, we release entity links to Wikipedia for documents and passages in both MS MARCO collections (v1 and v2). Entity links have been produced by the REL and BLINK systems. MMEAD is an easy-to-install Python package, allowing users to load the link data and entity embeddings effortlessly. Using MMEAD takes only a few lines of code. Finally, we show how MMEAD can be used for IR research that uses entity information. We show how to improve recall@1000 and MRR@10 on more complex queries on the MS MARCO v1 passage dataset by using this resource. We also demonstrate how entity expansions can be used for interactive search applications.
Chris Kamphuis, Aileen Lin, Siwen Yang, Jimmy Lin, Arjen P. de Vries, Faegheh Hasibi
2023-09-14T10:09:11Z
http://arxiv.org/abs/2309.07574v1
# MMEAD: MS MARCO Entity Annotations and Disambiguations

###### Abstract.

MMEAD, or MS MARCO Entity Annotations and Disambiguations, is a resource for entity links for the MS MARCO datasets. We specify a format to store and share links for both document and passage collections of MS MARCO. Following this specification, we release entity links to Wikipedia for documents and passages in both MS MARCO collections (v1 and v2). Entity links have been produced by the REL and BLINK systems. MMEAD is an easy-to-install Python package, allowing users to load the link data and entity embeddings effortlessly. Using MMEAD takes only a few lines of code. Finally, we show how MMEAD can be used for IR research that uses entity information. We show how to improve recall@1000 and MRR@10 on more complex queries on the MS MARCO v1 passage dataset by using this resource. We also demonstrate how entity expansions can be used for interactive search applications.

Information Retrieval, Entity Linking

+
Footnote †: journal: Information systems — Information retrieval; Ontologies; Query representation.

## 1. Introduction

The MS MARCO datasets (Moh et al., 2017) have become the _de facto_ benchmark for evaluating deep learning methods for Information Retrieval (IR). The TREC deep learning track (Chen et al., 2018), which has run since 2019, derives its datasets from the MS MARCO passage and document collections. The collections have been used in zero- and few-shot scenarios for diverse retrieval tasks and domains (Zhu et al., 2019; Wang et al., 2019; Wang et al., 2019). They also serve as primary resources for training deep learning models for downstream IR tasks such as conversational search (Kang et al., 2019) and search over knowledge graphs (Kang et al., 2019) to achieve state-of-the-art results. Purely text-based neural IR models, trained using MS MARCO collections, generally cannot reason over complex concepts in the social and physical world (Chen et al., 2018; Chen et al., 2018). In response, recently proposed neuro-symbolic methods aim to combine neural models and symbolic AI approaches, e.g., by using knowledge graphs, which map concepts to symbols and relations. An essential step in developing neuro-symbolic models is connecting text to entities that represent the world's concepts formally. This step is mainly done using _entity linking_, an intermediary between text and knowledge graphs, which detects entity mentions in the text and links them to the corresponding entries in a knowledge graph. Despite the proven effectiveness of neuro-symbolic AI - and for IR models in particular (Chen et al., 2018; Wang et al., 2019; Wang et al., 2019) - the IR community has made limited efforts to develop such models.
A primary hindrance is the annotation of large-scale collections with entities; entity linking methods are computationally expensive. Running them over a large text corpus (e.g., MS MARCO v2 with 12M documents and 140M passages) requires extensive resources. This paper aims to fill this gap by making entity annotations of the MS MARCO ranking collections readily available and easy to use.

With this work, we publish MMEAD,1 a resource that provides entity links for the MS MARCO document and passage ranking collections. Two state-of-the-art entity linking tools, namely REL (Kang et al., 2019; Wang et al., 2019) and BLINK (Wang et al., 2019), are utilized for annotating the corpora. The annotations are stored in a DuckDB database, enabling efficient analytical operations and fast access to the entities. The resource is available as a Python package and can be installed from PyPI effortlessly. The resource also includes a sample demo, enabling queries with complex compositional structures about entities.

Footnote 1: MMEAD is pronounced like the drink mead.

We envision that MMEAD will foster neuro-symbolic IR research and can be used to further improve neural retrieval models. In our experiments, we show significant improvements in recall for neural re-ranking IR models when using MMEAD annotations as bag-of-words expansions for queries and passages. Our experiments reveal that the difference in effectiveness is even greater (in terms of both recall and MRR) for complex queries that require further reasoning over entities. To show the usefulness of our resource, we also present how to enrich interactive search applications. Specifically, we demonstrate how to obtain entities' geographical locations by relating the entities found in passages to their Wikidata entries. Plotting these entities on the world map shows that the MS MARCO passages can be geo-located all over the world. We can also move from location to web text by retrieving all passages associated with a geographical location, which we present through an interactive demo.

In summary, this paper makes the following contributions:

* We annotate the documents of the MS MARCO passage and document collections and share these annotations. By sharing these annotations, we ease future research in neuro-symbolic retrieval, which extensively uses entity information. We also provide useful metadata such as Wikipedia2Vec (Wikipedia, 2017) entity embeddings.
* We provide a Python library that makes our data easy to use. All data is stored in DuckDB tables, which can be loaded and queried quickly. The library is easy to install through PyPI, and the entity annotations are available with only a few lines of code.
* We experimentally show that retrieval effectiveness measured by recall significantly increases when using MMEAD. The improvement is even greater for hard queries, where we observe low retrieval effectiveness using text-only IR models.
* We demonstrate how the data can be used in geographical applications. For example, we can plot on a static map all entities found in the MS MARCO v2 passage collection for which geographical data is available. Additionally, through an interactive demo, we can retrieve all passages associated with a geographical location.

MMEAD is publicly available at [https://github.com/informagi/mmead](https://github.com/informagi/mmead).

## 2. Background

In this section, we describe the systems used for creating entity annotations on the MS MARCO collections for MMEAD.
### REL

REL (Radboud Entity Linker) (Rendboud, 2017) is a state-of-the-art open-source entity linking tool designed for high throughput and precision. REL links entities to a knowledge graph (Wikipedia) using a three-stage approach: (1) mention detection, (2) candidate selection, and (3) entity disambiguation. We briefly explain these three steps:

1. _Mention Detection._ REL starts the entity linking process by first identifying all text spans that might refer to an entity. In this stage, it is essential that all possible entities in the text are identified, as only the output of this stage can be considered an entity by REL. These spans are identified using a named entity recognition (NER) model based on contextual word embeddings. For our experiments, we use the NER model based on Flair embeddings.
2. _Candidate Selection._ Up to seven candidate entities are considered for every mention found by Flair. Part of these entities are selected according to the prior probability \(P(e|m)\) of the mention \(m\) being linked to the entity \(e\). Precisely, the top-4 ranked entities based on \(P(e|m)=\min(1,P_{Wiki}(e|m)+P_{YAGO}(e|m))\) are selected, where \(P_{YAGO}(e|m)\) is a uniform probability from the YAGO dictionary (Pillan et al., 2017) and \(P_{Wiki}(e|m)\) is computed based on the summation of hyperlink counts in Wikipedia and the CrossWikis corpus (Kamphu et al., 2018). The remaining three candidate entities are determined according to the similarity of an entity and the context of a mention. For the top-ranked candidates based on \(P(e|m)\), the context similarity is calculated by \(\mathbf{e}^{T}\sum_{w\in c}\mathbf{w}\). Here \(\mathbf{e}\) is the entity embedding for entity \(e\), and \(\mathbf{w}\) are the word embeddings in context \(c\), with a maximum length of 100 word tokens. The entity and word embeddings are jointly learned using Wikipedia2Vec (Wikipedia, 2017).
3. _Entity Disambiguation._ The final stage selects the correct entity from the candidate entities and maps it to the corresponding entry in a knowledge graph (Wikipedia). For this, REL assumes a latent relation between entities in the text and utilizes the Ment-norm method proposed by Le and Titov (Le and Titov, 2017).

REL is designed to be a modular system, making it easy to swap, for example, the NER system with another. All necessary scripts to train the REL system are available on GitHub,2 making it easy to update REL to a more recent Wikipedia dump. Recently, a batch extension of REL, REBL (Kang et al., 2018), was released, which improves the efficiency of REL for large-scale annotations, particularly in the candidate selection and entity disambiguation stages.

Footnote 2: [https://github.com/informagi/rel](https://github.com/informagi/rel), last accessed April 20th 2023
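The prior and context scores above can be made concrete with a small sketch. The snippet below uses toy numbers and random stand-in embeddings rather than REL's actual statistics; it only illustrates how the \(P(e|m)\) prior and the \(\mathbf{e}^{T}\sum_{w\in c}\mathbf{w}\) context similarity combine to rank candidates for one mention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy candidate entities for the mention "Paris" with illustrative priors.
candidates = ["Paris", "Paris,_Texas", "Paris_Hilton"]
p_wiki = np.array([0.85, 0.10, 0.05])   # from hyperlink counts (toy values)
p_yago = np.array([0.33, 0.33, 0.33])   # uniform YAGO prior (toy values)
prior = np.minimum(1.0, p_wiki + p_yago)             # P(e|m) = min(1, P_wiki + P_yago)

# Jointly trained word/entity embeddings (random stand-ins for Wikipedia2Vec).
entity_vecs = rng.normal(size=(3, 300))
context_words = [rng.normal(size=300) for _ in range(5)]  # up to 100 tokens in REL
context_sum = np.sum(context_words, axis=0)

context_score = entity_vecs @ context_sum            # e^T * sum_w w for each candidate
# Rank candidates by prior, then inspect the context similarity of the top ones.
for i in np.argsort(-prior):
    print(f"{candidates[i]:15s} prior={prior[i]:.2f} context={context_score[i]:+.2f}")
```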
### BLINK

BLINK (Rendboud, 2017) is a BERT-based (Kang et al., 2018) model for candidate selection and entity disambiguation, which assumes that entity mentions are already given. When utilized in an end-to-end entity linking setup, BLINK achieves effectiveness scores similar to REL's. Below we describe the three steps of mention detection, candidate selection, and entity disambiguation for end-to-end entity linking using BLINK.

1. _Mention Detection._ The mention detection stage can be done using an NER model. Like REL, we utilized Flair NER (Blei et al., 2017) for mention detection.
2. _Candidate Selection._ BLINK considers ten candidates for each mention. The candidates are selected through a bi-encoder (similar to Humeau et al. (Humeau et al., 2018)) that embeds mention contexts and entity descriptions. The mention and the entity are encoded into separate vectors using the [CLS] token of BERT. The similarity score is then calculated as the dot-product of the two vectors representing the mention context and the entity.
3. _Entity Disambiguation._ For entity disambiguation, BLINK employs a cross-encoder to re-rank the top 10 candidates selected by the candidate selection stage. The cross-encoder usage is similar to the work by Humeau et al. (Humeau et al., 2018), which employs a cross-attention mechanism between the mention context and entity descriptions. The input is the concatenation of the mention text and the candidate entity description.

### DuckDB

DuckDB (DuckDB, 2017) is an in-process column-oriented database management system. It is designed with requirements that are beneficial for the MMEAD resource:

1. _Efficient analytics._ DuckDB is designed for analytical (OLAP) workloads, while many other database systems are optimized for transactional queries (OLTP). DuckDB is especially suitable for cases where analytics are more important than transactions. As we release a resource, transactions (after loading the data) are unnecessary, making an analytics-oriented database more useful than a transaction-focused one.
2. _In-process._ DuckDB runs in-process, which means no database server is necessary, and all data processing happens in-process. This allows the database to be installed from PyPI without any additional steps.
3. _Efficient data transfer._ Because DuckDB runs in-process, it can transfer data from and to the database more easily, as the address space is shared. In particular, DuckDB uses an API built around NumPy and Pandas, which makes data (almost) immediately available for further data analysis within Python. DuckDB also supports the JSON and Parquet file formats, making data loading especially fast when data is provided in such formats.
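As a small illustration of this in-process workflow (not code from the MMEAD package itself; the file name and flattened table layout are assumptions made for the example), JSONL entity-link records can be loaded and queried directly from Python:

```python
import duckdb

con = duckdb.connect()  # in-process: no database server to set up

# Load JSONL entity-link records into a table (hypothetical file name,
# assuming a flattened layout with one entity annotation per row).
con.execute("""
    CREATE TABLE links AS
    SELECT * FROM read_json_auto('msmarco_passage_links.jsonl')
""")

# Analytical query: the most frequently linked entities.
df = con.execute("""
    SELECT entity, COUNT(*) AS freq
    FROM links
    GROUP BY entity
    ORDER BY freq DESC
    LIMIT 10
""").fetchdf()  # result lands directly in a Pandas DataFrame
print(df)
```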
## 3. MMEAD

MMEAD provides links for MS MARCO collections v1 and v2 created by the REL entity linker, and links for the MS MARCO v1 passage collection by the BLINK entity linker. For REL, we use its batch entity linking extension, REBL (Krishnan et al., 2018). The knowledge graphs used for the REL and BLINK entity linkers are Wikipedia dumps from 2019-07 and 2019-08, respectively. Both dumps are publicly available from the linking systems' GitHub pages.

### Goals

The design criteria for MMEAD are based on the following goals:

* _Easy-to-use._ It should be easy to load and use the linked entities in experiments. With only a few lines of code, it should be possible to load entities and use them for analysis. Additional information should also be readily available, like where entities appear in the text and their latent representations.
* _High-quality entity links._ We wish to release high-quality entity links for the MS MARCO collections, so that applying neuro-symbolic models and reasoning over entities becomes feasible.
* _Extensibility._ It should be easy to link the collections with a different entity linking system and publish them in the same format as MMEAD. This way, we can integrate links produced by other entity linking systems and make them automatically available through the MMEAD framework.
* _Useful metadata._ Additional data that can help with experiments should be provided; this includes mapping entities to their respective identifiers and latent representations.

### Design

_Easy-to-use._ To create an easy-to-use package, we make the MMEAD data publicly available as JSONL files, which is the same format as the MS MARCO v2 collections. Each line of JSON contains entity links for one of the documents or passages in the collections; see Figure 1. The corresponding document can be identified through the JSON field that represents the document/passage identifier: docid for documents and pid for passages. Then, for every section of a document, a separate JSON field is available to access the entities in that section. For passages, there is only one section containing the entity annotations of the passage, while for MS MARCO v2 documents, we link not only the body of the document but also the header and the title. All essential information about the entity mentions and linked entities is stored in the JSON objects. Specifically, the following metadata is made available: entity_id, start_pos, end_pos, entity, and details. The field entity_id stores the identifier that refers to the entry in the knowledge graph (Wikipedia, in our case). The start_pos and end_pos fields store the start and end positions of the text span that refers to the linked entity (i.e., as a standoff annotation of the entity mention). The positions are UTF-8 indices into the text, ready to be used in Python to extract the relevant parts of the document. The field entity stores the text representation of the entity from the knowledge graph. We chose to store this field for convenience and human readability. The details field is a JSON object that stores linker-specific information; examples include the entity type available from the NER module and the confidence of the identified mention.

_High-quality entity links._ MMEAD provides entity links produced by state-of-the-art entity linking systems. For this paper, we provide links from REL for both MS MARCO v1 and v2 passages and documents, and links from BLINK for MS MARCO v1 passages. Both systems have high precision, ensuring that identified mentions and their corresponding entities are likely correct. The knowledge graphs used by the entity linkers are the same as those used in the original studies; this way, extensive research has been done to confirm the precision of the linking systems.

_Extensibility._ We ensure extensibility by clearly describing the format in which the entity links are provided. If another system shares its links in the same format, the MMEAD Python library can work with the data directly. The details field per entity annotation enables inclusion of linker-specific information. REL provides specific instructions on updating the system to newer versions of Wikipedia in its documentation, making it possible to easily release links to newer versions of Wikipedia.

_Useful metadata._ Alongside the entity links, we also provide additional useful metadata. Specifically, we release Wikipedia2Vec (Wikipedia, 2019) embeddings (300d and 500d feature vectors). REL uses the 300d Wikipedia2Vec feature vectors internally for candidate selection. These feature vectors consist of word embeddings _and_ entity embeddings mapped into the same high-dimensional feature space. These embeddings can be used directly for information retrieval research (Krishnan et al., 2018; Krishnan et al., 2018). We also release a mapping of entities to their identifiers. The entity descriptions can change in different versions of Wikipedia, but their identifiers remain constant. The identifier can also be used to find the corresponding entity in other knowledge graphs such as Wikidata.
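To illustrate how such a joint word/entity space can be used, here is a sketch with the wikipedia2vec package; the model file name is an assumption, and MMEAD's own loading API is not reproduced here.

```python
import numpy as np
from wikipedia2vec import Wikipedia2Vec

# Hypothetical path to a 300d Wikipedia2Vec model file.
model = Wikipedia2Vec.load("enwiki_20190701_300d.pkl")

montreal_word = model.get_word_vector("montreal")
montreal_entity = model.get_entity_vector("Montreal")
toronto_word = model.get_word_vector("toronto")

# Words and entities live in the same space, so dot-products are comparable.
print(np.dot(montreal_word, montreal_entity))  # related word/entity: high score
print(np.dot(montreal_word, toronto_word))     # related cities: also relatively high
```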
### An Example

A passage from the MS MARCO v1 passage ranking collection is shown below.3

Footnote 3: This is the second passage from the collection.

_The Manhattan Project and its atomic bomb helped bring an end to World War II. Its legacy of peaceful uses of atomic energy continues to have an impact on history and science._

A few text spans in this text can be considered as entities: "the Manhattan Project", "World War II", and "atomic energy." REL identifies two of these entities: the _Manhattan Project_ and _World War II_. The output of the system is converted to our JSON specification, which results in the JSON object presented in Figure 1. The value of the _mod_score field shows that Flair is more certain about "World War II" being an entity than the "Manhattan Project." Table 1 shows the number of entities found in the collections by the REL system. BLINK found 21,968,356 entity links for the v1 passage collection. For 11,177,904 entities, the two linking systems produced exactly the same output.

## 4. How to use

MMEAD comes with easy-to-use Python code, allowing users to work with the resource effortlessly. To start, MMEAD can be installed from PyPI using pip; once installed, the link data, embeddings, and identifier mappings are available with a few lines of code, as illustrated in Figures 4-6.

## 5. Entity Expansion with MMEAD

To demonstrate the usefulness of MMEAD for (neural) retrieval models, we have conducted experiments that extend existing models with MMEAD annotations. These experiments serve a demonstrative purpose only, and the full potential of this resource is to be further explored in (neuro-)symbolic IR models (Han et al., 2017; Wang et al., 2018).

### Methods

_BM25 expansion._ We experimented with three retrieval methods to show the benefits of entity annotation for passage ranking: one baseline method and two methods that use query entity expansion (Kang et al., 2019) using REL:

**a. BM25 - No Expansion.** As a baseline method, we used BM25 as implemented in Anserini (Anserini, 2017) using hyper-parameters \(k_{1}=0.82\) and \(b=0.68\), shown to be optimal for the MS MARCO dataset. MS MARCO was indexed normally, and no expansion was considered for the queries or the passages.

**b. BM25 - Entity Text Expansion.** In this method, passages and queries are expanded with the text representation of their annotated entities (from REL). Once the passages and queries have been expanded with entities, we run BM25 with the same hyper-parameter settings as described in **a**.

**c. BM25 - Entity Hash Expansion.** Instead of using the text representation of entities as an expansion, we expanded the passages and queries with the MD5 hash of the entity text (from REL). The MD5 hashing provides a consistent representation of multi-word terms and avoids partial or incorrect matching between queries and non-relevant passages; e.g., passages that contain the word "united" do not benefit if the query contains "United States" as an entity. Again, after expansion, we run BM25 with the same hyper-parameter settings described in **a**.

In these experiments, the identified entities are deduplicated. As a demonstration of the proposed text expansion methods, Figure 8 shows how the query expansion is performed using explicit and hashed forms. The added entities provide more precise context and help eliminate ambiguous terms. Figure 9 shows the expansion methods on the relevant passage for this query; a code sketch of both expansion variants follows below.
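A minimal sketch of the two expansion variants (the helper function below is hypothetical and assumes the entity annotations have already been retrieved from MMEAD):

```python
import hashlib

def expand(text, entities, use_hash=False):
    """Append deduplicated entity expansions to a query or passage."""
    terms = []
    for entity in dict.fromkeys(entities):  # deduplicate while keeping order
        if use_hash:
            terms.append(hashlib.md5(entity.encode("utf-8")).hexdigest())
        else:
            terms.append(entity)
    return text + " " + " ".join(terms)

query = "who is sacagawea"
entities = ["Sacagawea"]  # e.g., produced by REL for this query
print(expand(query, entities))                 # entity text expansion (method b)
print(expand(query, entities, use_hash=True))  # entity hash expansion (method c)
```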
The relevant passage can be found through our expansion technique. The linking system recognizes that both the query and the passage contain a reference to the entity _Sacagawea_, even though it is spelled differently in the query and the passage.

Figure 4. Example code for loading word and entity embeddings. It shows that the dot-product between "Montreal" word and entity embeddings is greater than the dot-product of embedding vectors for the word "Montreal" and a random word. The word embeddings of Montreal and Toronto, two cities in Canada, are more similar.

Figure 5. Entity names and identifiers are accessible in MMEAD. Given an entity text, we can directly find its corresponding identifier and vice versa.

Figure 6. All data is stored in DuckDB tables, and thus it is possible to directly access the tables and issue queries. In this example, we extract the identifiers of passages that contain the city of Nijmegen.

_Reciprocal Rank Fusion._ As a second series of experiments, we applied Reciprocal Rank Fusion (RRF) (Kang et al., 2019) to the runs described above. RRF is a fusion technique that can combine rankings produced by different systems. RRF creates a new ranking by only considering the rank of a document in the input rankings. Given a set of documents \(D\) and a set of rankings \(R\), RRF can be computed as:

\[RRF(d\in D)=\sum_{r\in R}\frac{1}{k+r(d)} \tag{1}\]

Here \(k\) is a hyperparameter that can be optimized, but we simply used a default value of \(k=60\) for all settings. This provides us with four new rankings: the RRF of the pairwise combinations of the three rankings described above and the RRF of all three of these runs:

**d. RRF - No Expansion + Entity Text.** RRF fusion of runs **a** and **b**. The run with no expansions and the run with entity text expansions are considered.

**e. RRF - No Expansion + Entity Hash.** RRF fusion of runs **a** and **c**. The run with no expansions and the run with entity hash expansions are considered.

**f. RRF - Entity Text + Entity Hash.** RRF fusion of runs **b** and **c**. The run with entity text expansions and the run with entity hash expansions are considered.

**g. RRF - No Expansion + Entity Text + Entity Hash.** RRF fusion of runs **a**, **b**, and **c**. All three runs are considered.
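Equation (1) translates directly into a few lines of Python. The sketch below is a generic illustration (not Anserini's implementation) that fuses ranked lists of document identifiers with the default \(k=60\):

```python
from collections import defaultdict

def rrf(rankings, k=60):
    """Fuse ranked lists of doc ids using Reciprocal Rank Fusion (Eq. 1)."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

run_a = ["d1", "d2", "d3"]   # e.g., BM25 without expansion
run_b = ["d2", "d1", "d4"]   # e.g., BM25 with entity text expansion
print(rrf([run_a, run_b]))   # -> ['d1', 'd2', 'd3', 'd4']
```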
### Experimental Setup

In our experiments, we use MMEAD as a resource to expand queries and passages with entities. The experiments are performed using the MS MARCO v1 passage ranking collection, where only queries containing at least one entity annotation are used. We do not expect meaningful differences for queries without any linked entities, as the expanded query is identical to the original query in that case (due to the simplicity of the method applied here). As we expect the linked entities to provide additional semantic information about the queries and passages, we conduct further testing on the obstinate query sets of the MS MARCO Chameleons (Chameleon et al., 2019), which consist of challenging queries from the original MS MARCO passage dataset. In general, ranking methods show poor effectiveness in finding relevant matches for these queries. Our testing focuses on the bottom 50% of the worst-performing queries from the subsets of Veiled Chameleon (Hard), Pygmy Chameleon (Harder), and Lesser Chameleon (Hardest), which represent increasing levels of difficulty. This gives us four query sets on which we evaluate: (1) all queries that contain entity annotations (_dev_ - 1984 queries), (2) all queries in the hard subset that contain entity annotations (_hard_ - 680 queries), (3) all queries in the harder subset that contain entity annotations (_harder_ - 493 queries), and lastly, (4) all queries in the hardest subset that have entity annotations (_hardest_ - 322 queries). The experiments are evaluated using Mean Reciprocal Rank (MRR) at rank ten and Recall (R) at rank one thousand. MRR@10 is the official metric for the MS MARCO passage ranking task, while R@1000 gives an upper limit on how well re-ranking systems could perform. The Anserini (Anserini, 2017) toolkit is used to generate our experiments.

### Results

Table 2 presents the results of our experiments. If we first look at lines **a-c** in the results table, we can examine the effects of our expansion methods compared to the baseline run. Looking at R@1000, we can see that more relevant passages are found using entity expansion for the _dev_ collection and its harder subsets. We do not find additional relevant documents/passages on the _dev_ set when we use the entity hashes, and entity text seems to be the better approach. There is, however, no increase in MRR@10 when using this expansion method. Entity expansions help when evaluating using R@1000, especially when the queries are more complex. The difference in recall effectiveness becomes larger the more complex the queries get. MRR@10 only improves when using entity text expansion. The reciprocal rank fusion methods are presented in lines **d-g**. When using these methods, the R@1000 increases further. Again, the subsets that contain more complex queries tend to benefit more. Regarding R@1000 effectiveness, the best RRF method fuses the ranking from the normal, non-expanded index with the one from the index expanded with entity text. Again, entity text expansion helps recall more than hash expansion. Although the RRF methods improve recall, MRR@10 does not benefit from RRF when compared to using only one of the expansion techniques.

## 6. Beyond Quantitative Results

In the previous section, we demonstrated the potential value of MMEAD using quantitative evaluations, where we leverage entities to improve retrieval effectiveness on standard benchmark datasets. Beyond these quantitative results, MMEAD can also help enrich interactive search applications in various ways. This section describes a few such examples.

\begin{table}
\begin{tabular}{l l|c c c c|c c c c} \hline \hline & & \multicolumn{4}{c|}{R@1000} & \multicolumn{4}{c}{MRR@10} \\ & & dev & hard & harder & hardest & dev & hard & harder & hardest \\ \hline a. & BM25 – No Expansion & 0.9111 & 0.7855 & 0.7444 & 0.6677 & **0.2413** & 0.0373 & 0.0137 & 0.0000 \\ b. & BM25 – Entity Text & 0.9183 & 0.8240\(\dagger\) & 0.7951\(\dagger\) & 0.7298\(\dagger\) & 0.2202 & 0.0385 & 0.0173 & **0.0057** \\ c. & BM25 – Entity Hash & 0.9105 & 0.7980 & 0.7576 & 0.6848 & 0.2199 & 0.0383 & **0.0175** & 0.0052 \\ \hline d. & RRF – No Expansion + Entity Text & **0.9338\(\dagger\)** & **0.8436\(\dagger\)** & **0.8124\(\dagger\)** & **0.7500\(\dagger\)** & 0.2372 & 0.0385 & 0.0163 & 0.0019 \\ e. & RRF – No Expansion + Entity Hash & 0.9250\(\dagger\) & 0.8260\(\dagger\) & 0.7921\(\dagger\) & 0.7205\(\dagger\) & 0.2378 & 0.0367 & 0.0152 & 0.0034 \\ f. & RRF – Entity Text \& Hash & 0.9231 & 0.8260\(\dagger\) & 0.7982\(\dagger\) & 0.7314\(\dagger\) & 0.2218 & 0.0375 & 0.0161 & 0.0053 \\ g. & RRF – No Expansion + Entity Text \& Hash & 0.9313\(\dagger\) & 0.8370\(\dagger\) & 0.8043\(\dagger\) & 0.7376\(\dagger\) & 0.2358 & **0.0391** & 0.0156 & 0.0035 \\ \hline \hline \end{tabular}
\end{table}

Table 2. Results on the MS MARCO v1 passage collection, using only the queries that have entity annotations. Bolded numbers are the highest achieved effectiveness. Scores with a dagger (\(\dagger\)) are significantly better compared to BM25 with no expansion (_run a_), following a paired t-test with Bonferroni correction. For MRR, we have not calculated significance scores due to its ordinal scale (Kang et al., 2019).
Entity links to Wikidata provide an entrée into the broader world of linked open data, which enables integration with other existing resources. This allows us to build interesting "mashups" or support search beyond simple keyword queries. As a simple example, we can take the entities referenced in MS MARCO, look up the coordinates for geographic entities, and plot them on a map. Figure 7 shows a world map with all entities found in the MS MARCO v2 passage collection mapped onto it (each shown with a transparent blue dot). The results are as expected, where the blue dots' density largely mirrors worldwide population density, although (also as expected) we observe more representation from entities in North America, Europe, and other better-developed parts of the world.

Figure 7. Locations of entities found in the MS MARCO v2 passage collection.

Figure 8. Examples of queries for the three different experiments; (a) the non-expanded query, (b) the query with entity text expansion, and (c) the query with entity hash expansion. Text expansions are shown in italics. The MD5 hashes shown in (c) are shortened in this example for formatting.

Figure 7 is a static visualization, but we can take the same underlying data and principles to create interesting interactive demonstrations. Geo-based search is an obvious idea, where users can specify a geographic region - either by dragging a box in an interactive interface to encompass a region of interest, or by specifying a geographic entity. For example, the user might ask "Show me content about tourist sites in Paris" and receive passages about the Eiffel Tower in which Paris is not mentioned explicitly. Simple reasoning based on geographic containment relationships in linked open data resources would be sufficient for answering this query. While it is possible that pretrained transformers might implicitly contain this information, they can never offer the same degree of fine-grained control provided by explicit entity linking. As a simple demonstration, we have taken MMEAD, reformatted the entity links into RDF, and ingested the results into the QLever SPARQL engine (Bartos et al., 2019).5 By combining MMEAD with RDF data from Wikidata and OpenStreetMap, we can issue SPARQL queries such as "Show me all passages in MS MARCO about France".

Footnote 5: [https://github.com/ad-freiburg/qlever](https://github.com/ad-freiburg/qlever), last accessed April 26th 2023

The query is shown in Figure 10, which gives us 122,316 entities found in the collection that have a connection with France (most of them are located in France). We can then automatically show these entities on a map, as presented in Figure 11 (showing the first 1000 entities found). Not all linked entities are located in France, however. For example, some entities are related to France (entities for which France is mentioned in their Wikidata entry), but are located elsewhere in the world.
One of the blue dots in Germany corresponds to the river _Moselle_: the river rises in France, flows through it, and then joins the _Rhine_ in Germany. Instead of querying for France, we can also query for other countries. Table 3 shows the number of entities found for a sample of countries.

## 7. Conclusion and Future Work

This research presents the resource MMEAD, or MS MARCO Entity Annotations and Disambiguations. MMEAD contains entity annotations for the passages and documents in MS MARCO v1 and v2. These annotations simplify entity-oriented research on the MS MARCO collections. Links have been provided using the REL and BLINK entity linking systems. Using DuckDB, the data can be queried quickly, making the resource easy to use. We also demonstrated that our resource can enrich interactive search applications. In particular, we present an interactive demo where all entities related to geographical locations can be positioned on a map. We experimentally show that MMEAD improves recall effectiveness significantly when using entities for query and passage expansion. When using reciprocal rank fusion, the effectiveness difference becomes even more prominent and new relevant passages are found. The question remains whether these passages can be ranked higher by new retrieval models. With MMEAD, we support information retrieval research that combines deep learning and entity information. In the future, we would like annotations from a more diverse group of linking systems. Using the MMEAD format, releasing entity links for collections beyond MS MARCO is also possible. We already showed that using entity links improves recall when using the linked entities for query expansion. What the effects are when training, e.g., DPR methods that include the entity links, is yet to be investigated - an exciting research opportunity that lies ahead.

Figure 11. First 1000 entities found that are connected to France. Entities are represented with a blue dot on the map.

\begin{table}
\begin{tabular}{l|c|c} \hline \hline Country & Wikidata ID & \# Entities \\ \hline United States & Q30 & 3,429,889 \\ Canada & Q16 & 170,833 \\ France & Q142 & 122,316 \\ New Zealand & Q664 & 19,094 \\ Peru & Q419 & 16,448 \\ Iran & Q794 & 13,633 \\ Ecuador & Q736 & 10,588 \\ South Korea & Q884 & 9,718 \\ Monaco & Q235 & 8,546 \\ Singapore & Q334 & 6,597 \\ \hline \hline \end{tabular}
\end{table}

Table 3. Number of entities found per country for some example countries where the entity has an English label.

Figure 10. SPARQL query that produces all entities in the passages of the MS MARCO v2 collection that are related to the country of France.

## Acknowledgments

This work is part of the research program Commit2Data with project number 628.011.001 (SQIREL-GRAPHS), which is (partly) financed by the Netherlands Organisation for Scientific Research (NWO), and has also been funded in part by the EU project OpenWebSearch.eu under GA 101070014. Additional support comes from the Global Water Futures program funded by the Canada First Research Excellence Fund (CFREF) and the Natural Sciences and Engineering Research Council (NSERC) of Canada.
2309.08647
Intent Detection at Scale: Tuning a Generic Model using Relevant Intents
Accurately predicting the intent of customer support requests is vital for efficient support systems, enabling agents to quickly understand messages and prioritize responses accordingly. While different approaches exist for intent detection, maintaining separate client-specific or industry-specific models can be costly and impractical as the client base expands. This work proposes a system to scale intent predictions to various clients effectively, by combining a single generic model with a per-client list of relevant intents. Our approach minimizes training and maintenance costs while providing a personalized experience for clients, allowing for seamless adaptation to changes in their relevant intents. Furthermore, we propose a strategy for using the clients' relevant intents as model features that proves to be resilient to changes in the relevant intents of clients -- a common occurrence in production environments. The final system exhibits significantly superior performance compared to industry-specific models, showcasing its flexibility and ability to cater to diverse client needs.
Nichal Narotamo, David Aparicio, Tiago Mesquita, Mariana Almeida
2023-09-15T13:15:20Z
http://arxiv.org/abs/2309.08647v1
# Intent Detection at Scale: Tuning a Generic Model using Relevant Intents

###### Abstract

Accurately predicting the intent of customer support requests is vital for efficient support systems, enabling agents to quickly understand messages and prioritize responses accordingly. While different approaches exist for intent detection, maintaining separate client-specific or industry-specific models can be costly and impractical as the client base expands. This work proposes a system to scale intent predictions to various clients effectively, by combining a single generic model with a per-client list of relevant intents. Our approach minimizes training and maintenance costs while providing a personalized experience for clients, allowing for seamless adaptation to changes in their relevant intents. Furthermore, we propose a strategy for using the clients' relevant intents as model features that proves to be resilient to changes in the relevant intents of clients -- a common occurrence in production environments. The final system exhibits significantly superior performance compared to industry-specific models, showcasing its flexibility and ability to cater to diverse client needs.

Intent detection, customer support, scalability

## I Introduction

Automatically detecting the intent of customer support requests (i.e., _tickets_) is a fundamental aspect of an efficient intelligent support system. When the intent of incoming tickets is known, customer support agents can quickly grasp the tickets' content and react accordingly, enhancing their efficiency. For instance, intent detection enables the creation of priority queues, helping agents identify the most relevant tickets. Furthermore, intent detection facilitates the mapping of intents to (semi-)automated replies, such as macros, further streamlining the support process [10]. Finally, when connected with analytic systems, intent information provides a powerful understanding of customer support activity and requests.

This paper focuses on the challenge of achieving effective intent detection for customer support at scale. In this context, the intent detection system must handle tickets from end-users belonging to multiple clients and industries. Each industry, such as financial institutions, software houses, or e-commerce platforms, typically exhibits a distinct set of relevant intents that need to be accurately classified. Intent detection is commonly performed using supervised machine learning models for classification. The model takes the ticket content as input, which, in the case of an email, can include the email's subject and body, as well as additional features referring to user or client information. The output corresponds to one or multiple intents from a predefined set.

To perform intent detection at scale, one possible approach is to have a single model supporting all clients and all their possible intents, which we call a _generic model_. Despite being cost-effective and easy to maintain, this solution can output out-of-domain intents, causing agents and clients to lose confidence in the system. Industry-specific models serve clients from similar industries. Although this approach offers flexibility, it poses challenges as the number of industries increases and assigning clients to specific industries becomes progressively more complex. Additionally, clients that fall between multiple industries complicate the assignment process and cannot be properly covered by this solution. Finally, in real-world scenarios, the list of relevant intents for a client evolves over time.
Similarly, some intents may stop being relevant for a business. This introduces the challenge of adapting the intent detection system to accommodate such changes. It is crucial for the system to remain robust, even if new intents emerge without prior communication before model retraining, so that its performance does not experience a significant decline. Our proposed system, shown in Fig. 1 (c), features a single generic model that considers both the ticket content and a list of the client's most relevant intents. The list of relevant intents can be derived from the client's historical data or predefined by the client themselves, and serves as a more detailed representation of the client's industry and specific requirements. This approach also leverages the relevant intents as valuable client information, enhancing the accuracy of intent prediction for each ticket. Additionally, a filtering module is employed to eliminate intents irrelevant to that client in an agile way that is independent of retraining schedules. The main contribution of the paper is the development of a scalable intent detection system which:

* Allows clients to have their own personalized set of intents.
* Removes the need to assign clients to industries and to deploy multiple industry-specific models.
* Has superior performance when compared against a generic model by using the list of relevant intents as features (Section III-D).
* Includes a training procedure that is robust to changes in the list of relevant intents, allowing for a fast and flexible way of adjusting per-client relevant intents over time.

## II Method

### _Problem formulation_

Let us formulate the intent detection task as a classification problem where the goal is to predict one intent \(y\), from a predefined list of intents \(\mathcal{I}\), given as input a list of features \(X\), which can contain a representation of the ticket (e.g., the concatenation of the ticket's subject and description) as well as other features providing additional information, such as client or end-user features. More formally,

\[\hat{y}=M(X)\in\mathcal{I}, \tag{1}\]

where \(M\) denotes the model used to perform the prediction, which receives \(X\) as input and predicts one intent \(\hat{y}\in\mathcal{I}\). In the customer support domain, various approaches can be employed for intent detection (Fig. 1). We assume \(n\) clients and \(m\) industries, with \(m<<n\). Arguably the simplest approach uses a single generic model \(M_{G}\) that handles all incoming tickets without considering their origin (Fig. 1 (a)). Alternatively, we can use \(m\) industry-specific models where \(M_{I}^{i}\) handles tickets from clients within the respective industry \(i\), with intents limited to the relevant intents for that industry, i.e., \(\mathcal{I}^{i}\subseteq\mathcal{I}\) (Fig. 1 (b)). In all cases, we assume that the models solely utilize ticket-related information, denoted as \(X=t\), as their input features. More formally,

\[\hat{y}=M_{G}(t)\in\mathcal{I},\qquad\hat{y}=m_{i}(c,\{M_{I}^{1}(t),\ldots,M_{I}^{m}(t)\})\in\mathcal{I}^{i_{c}}, \tag{2}\]

where \(m_{i}\) is a function that maps client \(c\) to its industry \(i_{c}\), selects the corresponding industry model \(M_{I}^{i_{c}}\), and returns its predicted intent for \(c\). Each of the aforementioned strategies has its own advantages and limitations.
The generic model offers simplicity in terms of training, deployment, and maintenance since only one model is used. However, it may produce irrelevant intents for clients, which can affect client perception. Industry-specific models have the advantage of being tailored to specific use-cases, thus reducing the likelihood of producing irrelevant intents (e.g., predicting the "request product refund" intent for a client associated with the Healthcare industry). However, these models have associated costs, as previously mentioned, and require client-to-industry assignment, which can be challenging when clients do not fit into a single industry category.

### _Proposed solution_

We propose a generic model that leverages not only the ticket input \(t\) from client \(c\) but also incorporates a list of relevant intents \(I_{c}\), i.e., \(X=(t,I_{c})\) (Fig. 1 (c)). Our goal is to combine the strengths of a generic model and client-specific models, by using a single generic model while producing meaningful intents tailored to each client. This approach significantly reduces model training time, deployment time, and production costs. The list of client-specific relevant intents \(I_{c}\) can be obtained through client feedback, where clients suggest new intents to be added, or through automated methods by considering the client's history (e.g., past predictions from a production model), or through both. More formally,

\[\hat{y}=M_{G}(t,I_{c})\in\mathcal{I}. \tag{3}\]

Our architecture incorporates a _filtering mechanism_ applied after inference to ensure valid outputs. When the model predicts an intent that is not within the list of relevant intents, that prediction is caught by the filter. When filtering is enabled, we evaluate the model's performance by either (i) considering the prediction as incorrect or (ii) ignoring the top-1 prediction, selecting the first model prediction that matches an intent in the relevant intents, and then assessing whether it is correct. We call this second approach _"generic with search"_; in Section III we clarify which option is chosen in each experiment.

Fig. 1: Approaches for intent detection serving various clients. (a) A generic model that serves all clients. (b) Industry-specific models process tickets of clients assigned to the respective industry. (c) Our approach leverages a single generic model by integrating the clients' relevant intents, enhancing performance through additional input to the model and ensuring valid outputs by employing a client-tailored filter.

### _Model architecture_

We employ a transformer encoder architecture, namely XLM-RoBERTa [9], to encode the ticket's message. We choose XLM-RoBERTa for its ability to handle multilingual requests, addressing the diverse language requirements in customer support scenarios. In the classification head of our model (Fig. 2), we incorporate the list of relevant intents for the client, providing personalized intent prediction. The prediction is obtained as follows:

\[\hat{y}=h\left(\texttt{[CLS]}\oplus I_{c}\right), \tag{4}\]

where [CLS] corresponds to the output embedding of the special classification token of XLM-RoBERTa, \(h()\) corresponds to the classification head, and \(\oplus\) denotes the concatenation operator. In the modified classification head \(h()\), the [CLS] embedding and the list of relevant intents are processed using an Input Mapper, which reduces the dimensionality of the vectors. In particular, the [CLS] embedding is multiplied by an identity matrix to keep the same dimension, while the relevant intents array is projected into a 16-dimensional space with embedding dropout. The subsequent module, the Aggregator, combines the embeddings (e.g., through concatenation, sum, or mean). This joint embedding is then fed into a linear projection layer, followed by a configurable number of residual layers. Each residual layer consists of a linear layer, a dropout layer, and a non-linear layer (e.g., _tanh_). Finally, the resulting embedding is passed through a final classification neural network to provide the intent prediction. More details on the architecture options, such as the Aggregator and the number of residual layers, are provided in the experimental setup in Section III-B.
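Pulling these pieces together, the following PyTorch sketch shows one plausible reading of the head. The multi-hot encoding of \(I_{c}\), the residual wiring, and the inner dropout rate are our assumptions; the 16-d intent projection, 128-d linear projection, 90% embedding dropout, concatenation aggregator, and single _tanh_ residual layer follow the settings stated in this section and in Section III-B.

```python
import torch
import torch.nn as nn

NUM_INTENTS = 683  # size of the intent vocabulary (Section III-A)

class IntentHead(nn.Module):
    """Sketch of the classification head h([CLS] + I_c) described above."""

    def __init__(self, cls_dim=768, intents_dim=16, proj_dim=128):
        super().__init__()
        # Input Mapper: [CLS] passes through unchanged (identity); the multi-hot
        # relevant-intents vector is projected to 16 dims with embedding dropout.
        self.intent_proj = nn.Sequential(nn.Dropout(0.9), nn.Linear(NUM_INTENTS, intents_dim))
        self.projection = nn.Linear(cls_dim + intents_dim, proj_dim)
        # One residual layer: linear -> dropout -> tanh (inner rate is assumed).
        self.residual = nn.Sequential(nn.Linear(proj_dim, proj_dim), nn.Dropout(0.1), nn.Tanh())
        self.classifier = nn.Linear(proj_dim, NUM_INTENTS)

    def forward(self, cls_embedding, relevant_intents):
        # Aggregator: concatenation of the two mapped inputs.
        joint = torch.cat([cls_embedding, self.intent_proj(relevant_intents)], dim=-1)
        hidden = self.projection(joint)
        hidden = hidden + self.residual(hidden)  # residual connection
        return self.classifier(hidden)           # logits over all intents

head = IntentHead()
cls = torch.randn(2, 768)                        # [CLS] embeddings from XLM-RoBERTa
intents = torch.zeros(2, NUM_INTENTS)
intents[:, [3, 10, 42]] = 1.0                    # multi-hot list of relevant intents
logits = head(cls, intents)
```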
In particular, the [CLS] embedding is multiplied by an identity matrix to keep the same dimension, while the relevant intents array is projected into a 16-dimensional space with embedding dropout. The subsequent module, the Aggregator, combines the embeddings (e.g., through concatenation, sum, or mean). This joint embedding is then fed into a linear projection layer, followed by a configurable number of residual layers. Each residual layer consists of a linear layer, a dropout layer, and a non-linear layer (e.g., _tanh_). Finally, the resulting embedding is passed through a final classification neural network to provide the intent prediction. More details on the architecture options, such as the Aggregator, the number of residual layers, etc., are provided in the experimental setup in Section III-B.

## III Results

### _Dataset_

In our experiments we use an in-house customer support dataset comprised of real-world anonymized tickets; for data privacy concerns, we are unable to release this dataset. The input for the intent detection models is the concatenation of the ticket's subject with its description. The dataset encompasses tickets in nine languages, covering 683 different intents related to customer support, such as "add new item to order", "delete account", or "refund request", from various industries. To split the dataset, we employ stratified sampling, allocating 15% of the data to the validation set while preserving the proportion of examples from each intent class. To account for industry-specific variations, we generate three additional industry-specific datasets. These datasets contain only tickets with intents that are valid for the industry. The mapping of tickets to intents, of intents to industries, and of clients to industries was done by our in-house experts. We create a test set for each dataset by selecting a subset of the validation set that exclusively includes clients manually assigned to that specific industry. This choice was made to ensure that during training, the model learned from tickets that were consistently relevant to the specific intent, while during testing, the focus was on evaluating the model's performance in real production scenarios, where industry-specific models exclusively handle tickets from clients within their respective industries. Additionally, we remove samples from the test set if they have an annotated intent that does not exist in the list of relevant intents for that industry. Table I presents relevant statistics for the four datasets (the generic dataset and three datasets corresponding to three industries). Our approach requires a list of relevant intents for each client as input. Collecting this information from clients is an ongoing process and, thus, we utilize historical predictions of a model as a proxy for the client's relevant intents. Furthermore, we introduce a coverage parameter to limit the size of the relevant intents lists. For example, if we set the coverage to 100%, all intents predicted by the production model are part of the list of relevant intents, while reducing it to 99% ensures that at least 99% of the tickets are covered by the most frequent intents. This coverage parameter allows us to create different sets of relevant intents of different sizes. To achieve this, we sort the intents by frequency and remove tickets with the lowest-frequency intent until we reach the desired coverage. Table II presents details on the median and maximum number of intents per client based on different coverage levels.
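As an illustration of how such coverage-limited lists can be derived from a client's historical predictions, consider the following sketch; the function name and data structures are our own assumptions for illustration, not the production code.

```python
from collections import Counter
from typing import List

def relevant_intents(history: List[str], coverage: float) -> List[str]:
    """Build a client's relevant-intents list from historical model
    predictions: keep the most frequent intents until at least
    `coverage` (e.g. 0.99) of the client's tickets are accounted for."""
    if not history:
        return []
    counts = Counter(history)
    total = sum(counts.values())
    kept, covered = [], 0
    for intent, freq in counts.most_common():
        if covered / total >= coverage:
            break  # the remaining (rarest) intents are dropped from the list
        kept.append(intent)
        covered += freq
    return kept
```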
Similar to the generic model, the generic model with the list of relevant intents is trained using all training tickets and evaluated on the full generic test set as well as the industry-specific test sets.

Fig. 2: Proposed solution architecture. The input is the ticket's subject and description, as well as the client's list of relevant intents.

### _Experimental setup_

All models are XLM-RoBERTa-base models, which contain 125M parameters. We train all models using cross-entropy loss with a batch size of 512, for a maximum of 30 epochs with early stopping based on the validation loss and a patience of 3 epochs. We use the AdamW optimizer with an initial learning rate of 1e-6 and a weight decay of 10%. To reduce training time we use adapters in the last 3 layers of the model, following Pfeiffer's configuration [12]. In the classification head we use concatenation as the aggregation function. We set the embedding dropout to 90%. In the linear projection layer the dimensionality is reduced to 128. Finally, we use just one residual layer. To ensure robustness, all performances reported below are obtained by averaging the individual performance obtained using 4 different training seeds. Furthermore, we perform statistical analysis to determine statistically significant differences between model performances. We use the ranx library [1] to extract 1,000 random subsets from the test set. We evaluate the performance of the models on these subsets and compute the p-values using the paired t-Test. In all comparisons, we assess the performance of each approach against the baseline without using the relevant intents as features. Statistical significance is determined when the p-value is below 0.001.

### _Generic model with relevant intents vs Industry models_

We start by comparing the performance of the generic model (Fig. 1 (a)) against the industry-specific models (Fig. 1 (b)). As maintaining multiple industry-specific models incurs high costs, it is desirable to have a single model that can effectively handle tickets from various industries. To make this comparison, we train three industry-specific models specifically for the Software, E-commerce, and Finance industries. It is important to note that the training data for these models consists of tickets from the entire dataset that are associated with intents belonging to their respective industries. Note that these subsets may not be entirely disjoint, since certain intents may be relevant to multiple industries. Table III presents the average accuracy of the models across four different training seeds. We observe that the generic model performs on par with the industry-specific models in their respective industries, with only a slight decrease in performance observed for the E-commerce subset. This outcome supports the notion that a single generic model can effectively handle the intent detection task. Additionally, we introduce the results of the _generic model with search_ setting, which incorporates a filtering component (similar to Fig. 1 (c)) that selects the most suitable model prediction from the list of relevant intents, as explained in Section II-B. We observe improved performance when employing this strategy, which passes the significance test for all subsets except Finance. For the remainder of this work, we assume that the filtering mechanism consistently outputs "incorrect" if the model prediction falls outside the list of relevant intents, as described in Section II-B.
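A minimal sketch of this post-inference filter, including the _generic with search_ variant of Section II-B, might look as follows; it assumes access to the model's ranked intent predictions and is our own illustration rather than released code.

```python
from typing import List, Optional, Set

def filter_prediction(ranked_intents: List[str],
                      relevant: Set[str],
                      search: bool = False) -> Optional[str]:
    """Post-inference filter ensuring only client-relevant intents leave
    the system.

    search=False: keep the top-1 prediction only if it is relevant;
    otherwise return None (scored as incorrect during evaluation).
    search=True ("generic with search"): walk down the ranked predictions
    and return the first intent found in the relevant list, if any.
    """
    if not search:
        top1 = ranked_intents[0]
        return top1 if top1 in relevant else None
    for intent in ranked_intents:
        if intent in relevant:
            return intent
    return None
```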
Furthermore, since we already assessed the performance of the generic model across industries, our subsequent experiments only concern results on the more complete generic set.

### _Robustness_

The list of relevant intents for a client provides valuable information to improve model performance. However, relevant intents can evolve over time, underscoring the importance of ensuring the production model's robustness to such changes without the need for model retraining. Retraining the model every time the list of a client's relevant intents is modified incurs significant costs. Thus, it becomes imperative to develop a production model that can effectively handle variations in the list of relevant intents without requiring frequent re-deployment. This not only saves resources but also enhances the model's overall efficiency and scalability. To assess the impact of changes in the list of relevant intents on performance, we train generic models with relevant intents lists using different coverage values, such as 100%, 99%, etc. Then, we evaluate these models by providing input intents lists with varying coverages. This experimental setup simulates scenarios where clients add or remove relevant intents over time. In the first row of Table IV, we show the performance of the baseline generic model with the output filter, which ensures outputs within the relevant intents. Note that, although it does not utilize the list of intents as input, the performance of the baseline also decreases due to the filtering mechanism, which classifies predictions outside of the list of relevant intents as incorrect. The subsequent rows show the delta in accuracy when comparing the generic model trained with different coverages of relevant intents lists against the performance of the baseline generic model on the same coverage level. Overall, our results indicate that incorporating the list of relevant intents as features leads to performance improvements, as evidenced by the average gains across various coverages when compared to the generic model. We also observe that the gains in performance are statistically significant for most direct comparisons. However, it is noteworthy that performance tends to deteriorate when the training coverage differs from the testing coverage. For instance, training and evaluating the model with a coverage of 100% achieves an accuracy of 69.2%, whereas training with the same coverage and evaluating with a list coverage of 96% results in an accuracy of 59.4%, a drop of \(\approx 10\%\). While some of the expected degradation was reduced through the use of high values of dropout (\(90\%\)) in the embedding layer of the relevant intents, these results can still raise some concerns regarding potential overfitting. In an attempt to enhance the model's resilience to changes in the relevant intents lists, we introduce synthetic noise during the training process. The hypothesis is that by exposing the model to different intents lists for the same client during training it will become more resistant to real-world changes in the lists. This means that the same client may have different relevant intents lists during the training phase. To introduce the noise in the relevant intents lists, we flip the membership of each intent with probability \(k\)%: if an intent is not in the relevant intents list for a client, we add it with probability \(k\)%, and the reverse is done if the intent is present in the list. We train models with the following noise values: 0% (no noise), 5%, 10%, 20% and 50%.
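The noise-injection step can be sketched as follows; this is an illustrative implementation under our reading of the procedure above, not the paper's released code.

```python
import random
from typing import Iterable, Optional, Set

def noisy_relevant_intents(relevant: Set[str],
                           all_intents: Iterable[str],
                           k: float,
                           rng: Optional[random.Random] = None) -> Set[str]:
    """Flip each intent's membership in the client's relevant-intents list
    independently with probability k (e.g. k=0.05 for 5% noise): absent
    intents are added, present intents are removed."""
    rng = rng or random.Random()
    noisy = set(relevant)
    for intent in all_intents:
        if rng.random() < k:
            if intent in noisy:
                noisy.discard(intent)
            else:
                noisy.add(intent)
    return noisy
```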
To reduce the number of combinations, we fix the training coverage to 98%; we then evaluate those models on different coverage values to assess the models' robustness regarding list changes. We chose 98% coverage since it is the setting with the highest average performance (from Table IV) and because we can analyse its performance when removing intents (e.g., 97% and 96% coverages) and also when adding intents (e.g., 99% and 100% coverages). From the results in Table V we observe that the model trained without any training noise in the relevant intents lists shows high degradation in terms of accuracy when subject to changes in the coverage values: when the coverage is 98%, the model's accuracy is 66.4%, which drops to 63.8% when the coverage is 100%. This indicates that simply training a model on the lists seen during training does not make it robust to subsequent changes in the relevant intents lists. The models trained with 5% noise are the most robust to changes in the lists and have high performance overall. Models trained with 10% and 20% noise also show gains in robustness but notable drops in accuracy, while the model trained with 50% noise in the relevant intents list has very similar (but lower) performance to the generic model without relevant intents lists, suggesting that the model simply learned to ignore the relevant intents lists and rely only on the ticket content. We show the delta in performance of our approach versus the baseline generic model and highlight statistically significant results.

### _Out-of-domain evaluation_

As a final experiment, we further assess our models' robustness by evaluating them on tickets from out-of-domain clients, i.e., from clients whose tickets were not seen during training. To simulate that, we create an out-of-domain split by segregating all tickets from a few clients as test, and leaving the other clients for train and validation. The final distribution of tickets in the out-of-domain dataset is 77.7%, 11.7% and 10.6% for the train, validation and test splits, respectively. We train and evaluate models in the out-of-domain context, namely one generic model without any extra information and two generic models with relevant intents lists, one trained without noise in the list of relevant intents and the other trained with 5% noise in the intents. The list of relevant intents is fixed to have 98% training coverage and the evaluation is performed on lists with 100%, 99%, 98%, 97%, and 96% coverage, and we average out the results across all coverages, as we did for the previous experiment. We also include the previous results on the in-domain sets for comparison. We observe that the models trained with the list of relevant intents perform better than the generic model across both dataset splits (Table VI). This result indicates that the list of relevant intents actually helps boost the model performance while keeping its ability to generalize and, thus, the model is not simply learning to memorize the input lists of relevant intents. We also notice that adding training noise further increases performance, highlighting the gains in model robustness.

## IV Related work

In the customer support scenario, machine learning methods have been widely explored to improve the analysis and handling of requests.
Some examples include using deep learning models to assign issues to the appropriate workers [7], exploring ensembles of deep learning architectures to classify customer support tickets [14], or classifying incoming emails using machine learning models and directing them to pre-defined queues based on their topic [2], allowing agents to select queues that align with their field of expertise. Here we are only concerned with the task of automatically detecting intents, not with matching tickets to predefined queues, nor with aligning agents with priority queues. Furthermore, none of the cited works explore the practical consideration of scaling a system in production for various clients, considering a solution that accounts for costs, maintenance, usability expectations and performance quality. Transformer-based architectures [13] like BERT [6] have achieved state-of-the-art results in NLP classification tasks, making them suitable for intent detection. XLM-RoBERTa [5] is particularly relevant as tickets can be in languages other than English. Other architectures were also explored for intent detection, including dual-encoders [3] and recurrent neural networks (RNNs) [8]. In particular, Casanueva et al. [3] employed a setup which used the combination of two dual encoders to efficiently train a multi-layer perceptron on top of the representations given by the encoder models. Chen et al. [4] utilized the BERT model [6] to perform joint intent classification and slot filling, using the output embedding of the special [CLS] token for intent classification. In scenarios where inputs consist of different types of data, multi-modal machine learning methods have gained popularity. These approaches handle inputs that combine text, numerical features, audio, and other modalities [11]. Here, we are mainly concerned with merging textual inputs with a list of integers representing the relevant intents. In the context of customer support, tickets often contain valuable non-textual information alongside the text message, which can enhance natural language understanding tasks. A common approach is to use a Transformer encoder, such as BERT [6], to process the text features and extract the embedding of the special [CLS] token, which captures information from the entire text sequence. This representation is then combined with vectorized non-text features and fed into a feed-forward network; we employ a similar approach.

## V Conclusions

We introduce a new system to perform intent classification of customer requests that accurately scales with the number of clients and businesses. We developed an architecture that utilizes a generic model augmented with a per-client relevant intents list. Our approach eliminates the need for assigning clients to specific industries and reduces maintenance costs by deploying a single model. Furthermore, the incorporation of relevant intents lists obtained directly from clients allows for a more personalized experience that can easily adapt to changes in clients' needs. In terms of performance, our approach surpasses industry-specific and generic models. We further improve our model's performance by augmenting it with a list of relevant intents as features, and show that the model is robust to changes in the list of relevant intents when trained with added synthetic noise.
Finally, we conducted an out-of-domain evaluation using a testing set that included tickets from clients unseen during training, and concluded that our model effectively utilizes the relevant intents lists as valuable information for classification, rather than relying solely on them as client identifiers.
2308.16750
The non-two-primes graph of a finite group
To any finite group $G$, we may associate a graph whose vertices are the elements of $G$ and where two distinct vertices $x$ and $y$ are adjacent if and only if the order of the subgroup $\langle x, y\rangle$ is divisible by at least 3 distinct primes. We prove that the subgraph of this graph induced by the non-isolated vertices is connected and has diameter at most 5.
Karmele Garatea-Zaballa, Andrea Lucchini
2023-08-31T14:12:06Z
http://arxiv.org/abs/2308.16750v2
# The non-two-primes graph of a finite group

###### Abstract.

To any finite group \(G,\) we may associate a graph whose vertices are the elements of \(G\) and where two distinct vertices \(x\) and \(y\) are adjacent if and only if the order of the subgroup \(\langle x,y\rangle\) is divisible by at least 3 distinct primes. We prove that the subgraph of this graph induced by the non-isolated vertices is connected and has diameter at most 5.

2020 Mathematics Subject Classification: 20D60, 05C25

## 1. Introduction

Let \(\mathfrak{F}\) be a class of finite groups and \(G\) a finite group. We may consider a graph \(\widetilde{\Gamma}_{\mathfrak{F}}(G)\) whose vertices are the elements of \(G\) and where two vertices \(g,h\in G\) are connected if and only if \(\langle g,h\rangle\notin\mathfrak{F}\). We denote by \(\mathcal{I}_{\mathfrak{F}}(G)\) the set of isolated vertices of \(\widetilde{\Gamma}_{\mathfrak{F}}(G)\). We define the non-\(\mathfrak{F}\)-graph \(\Gamma_{\mathfrak{F}}(G)\) of \(G\) as the subgraph of \(\widetilde{\Gamma}_{\mathfrak{F}}(G)\) obtained by deleting the isolated vertices. In the particular case when \(\mathfrak{F}\) is the class \(\mathfrak{A}\) of the abelian groups, the graph \(\Gamma_{\mathfrak{A}}(G)\) was introduced by Erdos and is known under the name of non-commuting graph (see for example [1], [10]). If \(\mathfrak{F}\) is the class \(\mathfrak{N}\) of the finite nilpotent groups, then \(\Gamma_{\mathfrak{N}}(G)\) is the non-nilpotent graph, studied for example in [2]. When \(\mathfrak{F}\) is the class \(\mathfrak{S}\) of the finite soluble groups, we obtain the non-soluble graph (see [3]). The subset \(\mathcal{I}_{\mathfrak{F}}(G)\) is not in general a subgroup of \(G\); however, this occurs for several saturated formations (see for example [7, Theorems 1.1 and 1.3]) and, more generally, we call a class \(\mathfrak{F}\) semiregular if \(\mathcal{I}_{\mathfrak{F}}(G)\) is a subgroup of \(G\) for every finite group \(G\). Moreover, we call a class \(\mathfrak{F}\) connected if the graph \(\Gamma_{\mathfrak{F}}(G)\) is connected for every finite group \(G\). The results obtained in [7] indicate that often a semiregular formation is connected. This occurs for example for the formations of abelian groups, nilpotent groups, soluble groups, supersoluble groups, groups with nilpotent derived subgroup, and groups with Fitting length less than or equal to \(t\) for any \(t\in\mathbb{N}.\) The question whether a semiregular class is necessarily connected has been investigated in [8]. The answer is negative; however, the results presented in [8] indicate that in many significant cases the connectivity of a class can be proved using its semiregularity property. In this paper we investigate the connectivity properties of the non-\(\mathfrak{F}\)-graph when \(\mathfrak{F}\) is the class of the finite groups whose order is divisible by at most two different primes. Notice that by Burnside's \(p^{a}q^{b}\)-theorem, this class is contained in the class of finite solvable groups. It is interesting to notice that the class \(\mathfrak{F}\) is not semiregular. For example, if \(G=\langle a,b\mid a^{15}=1,b^{2}=1,(ab)^{2}=1\rangle\) is the dihedral group of order 30, then \(\mathcal{I}_{\mathfrak{F}}(G)=\{1,a^{3},a^{5},a^{6},a^{9},a^{10},a^{12}\}\) is not a subgroup of \(G\). Thus the methods developed in [7] and [3] cannot help to investigate whether the non-\(\mathfrak{F}\)-graph is connected and ad-hoc arguments are needed.
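To make the dihedral example concrete, one checks directly from the definitions that \(a^{5}\) (of order 3) is isolated: every element of \(G\) is a rotation \(a^{j}\) or a reflection \(a^{k}b\), and
\[\langle a^{5},a^{j}\rangle\leq\langle a\rangle,\qquad\langle a^{5},a^{k}b\rangle\ \text{is dihedral of order }6,\]
so in both cases the order of the generated subgroup is divisible by at most two primes (the same argument applies to the other elements of order 3 or 5). On the other hand, \(a^{8}=a^{3}a^{5}\) has order 15 and \(\langle a^{8},b\rangle=G\) has order \(30=2\cdot 3\cdot 5\), so \(a^{8}\notin\mathcal{I}_{\mathfrak{F}}(G)\), which shows directly that \(\mathcal{I}_{\mathfrak{F}}(G)\) is not closed under multiplication.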
Our main result is the following:

**Theorem 1**.: _Let \(\mathfrak{F}\) be the class of the finite groups whose order is divisible by at most two different primes. For every finite group \(G\), the non-\(\mathfrak{F}\)-graph \(\Gamma_{\mathfrak{F}}(G)\) is connected and its diameter is at most 5._

We don't know whether the bound on the diameter is sharp, but we may exhibit an example of a group \(G\) such that \(\Gamma_{\mathfrak{F}}(G)\) has diameter at least 3. Indeed let \(H=\operatorname{SL}(2,3).\) Then \(H\) has a faithful irreducible action on \(A=C_{3}\times C_{3}\) and a non-trivial action on \(B=C_{7}.\) Let \(G=(A\times B)\rtimes H\) be the semidirect product with respect to these actions and let \(x\) and \(y\) be two elements of \(G\) of order, respectively, 7 and 2. If an element \(g\) of \(G\) is adjacent to \(x\) in \(\Gamma_{\mathfrak{F}}(G)\) then \(|g|=6,\) while if \(g\) is adjacent to \(y\) then \(|g|\in\{14,21,28\},\) so the distance in \(\Gamma_{\mathfrak{F}}(G)\) between \(x\) and \(y\) is at least 3.

## 2. Notations and preliminary results

Given a finite group \(G\), we will denote by \(\Gamma(G)\) the graph \(\Gamma_{\mathfrak{F}}(G),\) where \(\mathfrak{F}\) is the class of the finite groups whose order is divisible by at most two different primes. We will denote by \(\mathcal{I}(G)\) the set of the isolated vertices of \(\Gamma(G),\) by \(\operatorname{diam}(\Gamma(G))\) the diameter of \(\Gamma(G)\) and by \(d(x,y)\) the distance between two vertices \(x,y\) of \(\Gamma(G).\) Moreover we will denote by \(\pi(G)\) the set of the prime divisors of the order of \(G\) and by \(\tilde{\pi}(G)\) the cardinality of \(\pi(G).\) In a similar way, if \(g\in G,\) then \(\pi(g)\) will denote the set of the prime divisors of the order of \(g\) and \(\tilde{\pi}(g)\) the cardinality of this set. With \(\Sigma(G)\) we will denote the set of the elements \(g\) of \(G\) with \(\tilde{\pi}(g)\geq 2.\) Finally, for every product \(n=p_{1}\cdots p_{t}\) of distinct primes, we will denote by \(\Omega_{n}(G)\) the set of elements \(g\in G\setminus\mathcal{I}(G)\) whose order is divisible by \(n,\) but not by any prime \(q\notin\{p_{1},\ldots,p_{t}\}.\) The following result by Higman will play a crucial role in our proofs.

**Theorem 2**.: _[_5_, Theorem 1]_ _Let \(G\) be a finite solvable group all of whose elements have prime power order. Then \(G\) has order divisible by at most two primes._

**Corollary 3**.: _If \(G\) is a finite solvable group and \(\mathcal{I}(G)\neq G,\) then \(\Sigma(G)\neq\varnothing.\)_

**Lemma 4**.: _Let \(G\) be a finite group and let \(N\) be a non-trivial normal subgroup of \(G\) such that \(|N|=p^{a}\) for some prime \(p.\) For every pair \(x_{1},x_{2}\) of elements of \(G,\) there exist \(n_{1},n_{2}\in N\) such that \(p\) divides the order of \(\langle x_{1}n_{1},x_{2}n_{2}\rangle.\)_

Proof.: It is not restrictive to assume that \(G=\langle x_{1},x_{2}\rangle N\) and that \(N\) is a minimal normal subgroup of \(G\). Let \(H=\langle x_{1},x_{2}\rangle.\) We may assume that \(p\) does not divide \(|H|,\) which implies that \(H\) is a complement of \(N\) in \(G.\) Assume that our statement is false. Then \(\langle x_{1}n_{1},x_{2}n_{2}\rangle\) is a complement of \(N\) in \(G\) for every \(n_{1},n_{2}\in N.\) Moreover if \(\langle x_{1}n_{1},x_{2}n_{2}\rangle=\langle x_{1}m_{1},x_{2}m_{2}\rangle\) with \(n_{1},n_{2},m_{1},m_{2}\in N,\) then \(n_{1}=m_{1}\) and \(n_{2}=m_{2}.\) Consequently, there are \(|N|^{2}\) complements for \(N\) in \(G\).
However, by the Schur-Zassenhaus theorem, all these complements are conjugate to \(H\), so \(|N|^{2}\leq|G:N_{G}(H)|\leq|G:H|=|N|,\) a contradiction.

Let \(A\) and \(N\) be finite groups, and suppose that \(A\) acts on \(N\) via automorphisms. The action of \(A\) on \(N\) is said to be Frobenius if \(n^{a}\neq n\) whenever \(n\in N\) and \(a\in A\) are nonidentity elements. Equivalently, the action of \(A\) on \(N\) is Frobenius if and only if \(C_{N}(a)=1\) for all nonidentity elements \(a\in A.\)

**Lemma 5**.: _Let \(N\) be a non-trivial normal subgroup of a finite group \(G\) and \(x,y\in G\). If \(N\) is a \(p\)-group and \(C_{N}(x)=1,\) then there exists \(n\in N\) such that \(p\) divides the order of \(\langle x,yn\rangle.\)_

Proof.: By Lemma 4, there exist \(n_{1},n_{2}\in N\) such that \(p\) divides the order of \(\langle xn_{1},yn_{2}\rangle.\) Since \(C_{N}(x)=1,\) there exists \(m\in N\) such that \(x=(xn_{1})^{m}\). But then \(p\) divides the order of \(\langle(xn_{1})^{m},(yn_{2})^{m}\rangle=\langle x,yn\rangle,\) with \(n=[y,m]n_{2}^{m}.\)

**Lemma 6**.: _If there exists \(g\in G\) with \(\tilde{\pi}(g)\geq 3,\) then \(\mathcal{I}(G)=\varnothing\) and \(\mathrm{diam}(\Gamma(G))\leq 2.\)_

Proof.: If \(\tilde{\pi}(g)\geq 3,\) then every other element of \(G\) is adjacent to \(g\) in \(\Gamma(G).\) This implies \(\mathrm{diam}(\Gamma(G))\leq 2.\)

**Lemma 7**.: _If \(x,y\in\Sigma(G)\) and \(\tilde{\pi}(G)\geq 3,\) then \(d(x,y)\leq 2.\)_

Proof.: By the previous lemma we may assume \(\tilde{\pi}(g)=2\) for every \(g\in\Sigma(G).\) If \(\pi(x)\neq\pi(y),\) then \(\pi(x)\cup\pi(y)\) contains at least \(3\) different primes; consequently \(\tilde{\pi}(\langle x,y\rangle)\geq 3,\) and \(x\) and \(y\) are adjacent vertices of \(\Gamma(G).\) If \(\pi(x)=\pi(y),\) then there exists a prime \(p\) in \(\pi(G)\setminus\pi(x).\) If \(z\) is an element of \(G\) of order \(p,\) then \(x-z-y\) is a path in \(\Gamma(G).\)

**Lemma 8**.: _Let \(G\) be a finite solvable group with \(\mathcal{I}(G)\neq G\). Then for every \(x\notin\mathcal{I}(G)\) there exists \(y\in\Sigma(G)\) such that \(d(x,y)\leq 2.\)_

Proof.: We prove the statement by induction on \(|G|\). Let \(x\in G\setminus\mathcal{I}(G)\). If \(\tilde{\pi}(x)\geq 2\) the statement is clear, so we may assume that \(\tilde{\pi}(x)=1\). As \(x\notin\mathcal{I}(G),\) there exists \(y\in G\) such that \(x\) and \(y\) are adjacent in \(\Gamma(G).\) If \(\langle x,y\rangle\neq G,\) then by induction there exists \(z\in\langle x,y\rangle,\) with \(\tilde{\pi}(z)\geq 2,\) such that \(d(x,z)\leq 2\). So we may assume that \(G=\langle x,y\rangle.\) Let \(N\) be a minimal normal subgroup of \(G\). There exists a prime \(r\) such that \(N\) is an elementary abelian \(r\)-group. Notice that for every \(1\neq n\in N\) and \(g\in G\), \(\pi(\langle n,g\rangle)=\{r\}\cup\pi(g)\). Thus either \(N\subseteq\mathcal{I}(G)\) or \(G\) contains an element \(\tilde{g}\) whose order is divisible by two primes in \(\pi(G)\setminus\{r\}\) and all the non-trivial elements of \(N\) are adjacent to \(\tilde{g}\) in \(\Gamma(G).\) In particular we may assume \(x\notin N.\) If \(\tilde{\pi}(G/N)>2,\) then, since \(G/N=\langle xN,yN\rangle,\) it follows that \(xN\notin\mathcal{I}(G/N),\) so by induction \(d(xN,zN)\leq 2,\) and consequently \(d(x,z)\leq 2,\) for some element \(z\) with \(2\leq\tilde{\pi}(zN)\leq\tilde{\pi}(z).\) So we may assume \(\pi(G/N)=\{p,q\}\) with \(r\notin\{p,q\}.\) Moreover, without loss of generality, we may assume that \(x\) is a \(p\)-element.
If there exists \(1\neq n\in N\) such that \([x,n]=1,\) then \(\pi(xn)=\{p,r\}\) and \(xn\) is adjacent in \(\Gamma(G)\) to \(y,\) implying \(d(x,xn)=2\). So we may assume that \(C_{N}(x)=1.\) By Corollary 3, there exists an element \(z\in G\) with \(\tilde{\pi}(z)\geq 2\). If \(pq\) divides \(|z|,\) then by Lemma 5 there exists \(n\in N\) such that \(r\) divides the order of \(\langle x,zn\rangle.\) Thus \(x\) is adjacent in \(\Gamma(G)\) to the element \(zn,\) whose order is divisible by \(pq.\) If \(qr\) divides \(|z|\) then \(x\) and \(z\) are adjacent in \(\Gamma(G).\) Finally, assume that \(pr\) divides \(|z|.\) In this case let \(v\) be an element of \(G\) of order \(q.\) By Lemma 5 there exists \(n\in N\) such that \(r\) divides the order of \(\langle x,vn\rangle,\) and \(x-vn-z\) is a path in \(\Gamma(G).\)

Recall that the prime graph \(\Pi(G)\) of a finite group is the graph whose vertices are the primes dividing the order of \(G\) and where two vertices \(p\) and \(q\) are joined by an edge if there is an element in \(G\) of order \(pq.\)

**Lemma 9**.: _Let \(G\) be a finite solvable group with \(G\neq\mathcal{I}(G)\) and \(\tilde{\pi}(G)=3.\) If \(\operatorname{diam}(\Gamma(G))>4,\) then \(\Pi(G)\) is a path graph of length three._

Proof.: Let \(\pi(G)=\{p,q,r\}\) and assume \(\operatorname{diam}(\Gamma(G))>4.\) Our aim is to prove that, for a suitable labeling of the three vertices, the only edges of the prime graph \(\Pi(G)\) of \(G\) are \((p,r)\) and \((r,q).\) To that end we will see that if \(\Pi(G)\) is the complete graph \(K_{3}\) or if it contains an isolated vertex, then \(\operatorname{diam}(\Gamma(G))\leq 4.\) It can be easily seen that if \(\Pi(G)\) is complete, then \(\operatorname{diam}(\Gamma(G))\leq 3.\) So we may assume that one of the vertices of \(\Pi(G),\) for example \(r,\) is isolated. In this case \(\Pi(G)\) has two components: \(\{p,q\}\) and \(\{r\}.\) It follows from a result of Gruenberg and Kegel (see [12, Corollary]) that \(G\) is Frobenius or \(2\)-Frobenius and one of the two components consists of the primes dividing the lower Frobenius complement. We distinguish two cases:

1. \(G\) is a Frobenius group. In this case \(G=N\rtimes H\) where \(N\) is nilpotent and the action of \(H\) on \(N\) is Frobenius.
2. \(G\) has normal subgroups \(N\) and \(K\) such that \(K\) is a Frobenius group with Frobenius kernel \(N\), and \(G/N\) is a Frobenius group with Frobenius kernel \(K/N.\)

In case (1), we have two possibilities:

a) \(N\) is a \(\{p,q\}\)-group and \(H\) is an \(r\)-group.
b) \(N\) is an \(r\)-group and \(H\) is a \(\{p,q\}\)-group.

In case (a) all the elements of \(G\setminus N\) are \(r\)-elements and the \(p\)-elements and \(q\)-elements of \(G\) belong to \(N\) and are isolated in \(\Gamma(G).\) So if \(g\in G\setminus\mathcal{I}(G),\) either \(g\) is an \(r\)-element or \(pq\) divides \(|g|.\) This implies that \(\operatorname{diam}(\Gamma(G))\leq 2.\) In case (b) every element of \(\Omega_{pq}(G)\) is adjacent to every element of \(\Omega_{r}(G)\).
We need to investigate the behavior of the elements in \(\Omega:=\Omega_{p}(G)\cup\Omega_{q}(G).\) Let \(x\in\Omega.\) By Corollary 3, there exists \(z\in\Omega_{pq}(G).\) Since \(N\) is the Frobenius kernel of \(G\) and \(x\notin N,\) we have \(C_{N}(x)=1\) so, by Lemma 5, there exists \(n\) in \(N\) such that \(r\) divides the order of \(\langle x,zn\rangle.\) Thus \(x\) is adjacent to \(zn\in\Omega_{pq}(G).\) We have thus proved that every element in \(\Omega\) is adjacent in \(\Gamma(G)\) to some element in \(\Omega_{pq}(G).\) By Lemma 7, we can conclude that \(\operatorname{diam}(\Gamma(G))\leq 4.\)

Let us move to case (2). Again we have two possibilities:

a) \(K/N\) is a nilpotent \(\{p,q\}\)-group and \(G/K\) and \(N\) are \(r\)-groups.
b) \(K/N\) is an \(r\)-group and \(G/K\) and \(N\) are \(\{p,q\}\)-groups.

In case (a) we may repeat the previous argument: all the elements of \(\Omega:=\Omega_{p}(G)\cup\Omega_{q}(G)\) belong to \(K\) and are adjacent to an element of \(\Omega_{pq}(G).\) In case (b) we start by noticing that \(\Omega\cap N=\varnothing.\) First assume that \(G\) contains an element \(z\) with \(|zK|=pq.\) Let \(x\in\Omega.\) It must be that \(C_{K/N}(xK)=1,\) otherwise \(G\) would contain an element of order \(rt,\) where \(t\) is the prime in \(\{p,q\}\) dividing the order of \(x.\) So by Lemma 5, there exists \(k\in K\) such that \(r\) divides the order of \(\langle x,zk\rangle N/N.\) In particular \(x\) and \(zk\) are adjacent vertices in \(\Gamma(G).\) We have thus proved that every element of \(\Omega\) is adjacent to a vertex in \(\Omega_{pq}(G)\) and this suffices to conclude that \(\operatorname{diam}(\Gamma(G))\leq 4.\) To conclude our discussion of case (b), it remains to consider the case when \(G/K\) does not contain elements of order \(pq.\) We claim that in this case \(\tilde{\pi}(G/K)=1.\) To prove this, recall that every Sylow subgroup of a Frobenius complement is cyclic or generalized quaternion (see for example [6, Corollary 6.17]). This implies in particular that the Frobenius complement \(K/N\) is either a cyclic \(r\)-group or a generalized quaternion group. If \(K/N\) is cyclic, then the Frobenius complement \(G/K\), being isomorphic to a subgroup of \(\operatorname{Aut}(K/N)\), is an abelian group; its Sylow subgroups must be cyclic and therefore \(G/K\) is a cyclic \(\{p,q\}\)-group. However we are assuming that \(G/K\) does not contain elements of order \(pq\), so \(G/K\) is a \(p\)-group or a \(q\)-group. If \(K/N\) is generalized quaternion, then \(G/K\) is a Frobenius complement of odd order. Hence, by [11, Lemma 2.4], any two elements in \(G/K\) of coprime order commute, so if \(p\) and \(q\) both divide the order of \(G/K\), then there must be an element in \(G/K\) whose order is divisible by \(pq\), contrary to our assumption. So our claim is proved and we may assume that \(G/K\) is a \(p\)-group. In this case all the \(q\)-elements of \(G\) belong to \(N\) and are isolated in \(\Gamma(G).\) Thus the vertex set of \(\Gamma(G)\) coincides with \(\Omega_{p}(G)\cup\Omega_{r}(G)\cup\Omega_{pq}(G).\) Now we claim that every element \(g\in\Omega_{p}(G)\) is adjacent to an element in \(\Omega_{r}(G)\cup\Omega_{pq}(G).\) This will allow us to conclude that \(\operatorname{diam}(\Gamma(G))\leq 4.\) In order to prove the claim, let \(y\) be an element of \(G\) of order \(r\). If \(g\) is adjacent to \(y\) we are done, so assume that \(\langle g,y\rangle\) is a \(\{p,r\}\)-group. Now let \(Q\) be the Sylow \(q\)-subgroup of \(N\), and let \(Z=Z(Q)\) be the center of \(Q\).
If \([gy,z]=1\) for some \(1\neq z\in Z\), then \(gyz\in\Omega_{pq}\) and the order of \(\langle g,gyz\rangle=\langle g,yz\rangle\) is divisible by \(pqr.\) This implies that \(g\) and \(gyz\) are adjacent vertices of \(\Gamma(G).\) So we may assume that \(C_{Z}(gy)=1\). Since the action of \(G/K\) over \(K/N\) is Frobenius, \(gy=g^{x}n\) for some \(x\in G\) and \(n\in N\) and, for any \(1\neq z\in Z\), \([g,z]=([g^{x},z^{x}])^{x^{-1}}=([gyn^{-1},z^{x}])^{x^{-1}}=[gy,z^{x}]^{x^{-1}}\neq 1\), hence \(C_{Z}(g)=1.\) By Lemma 5 there exists \(\tilde{z}\in Z\) such that \(\langle g,y\tilde{z}\rangle\) has order divisible by \(q\). This implies that \(g\) is adjacent to the \(r\)-element \(y\tilde{z}.\)

## 3. Proof of the main result

**Lemma 10**.: _If \(G\) is soluble and \(\tilde{\pi}(G)\geq 4,\) then \(\operatorname{diam}(\Gamma(G))\leq 3.\)_

Proof.: By Lemma 6, we may assume \(\tilde{\pi}(x)\leq 2\) for every \(x\in G.\) If \(p,q,r\) are three distinct primes in \(\pi(G),\) then \(G\) contains an element whose order is the product of two of these primes (see [9, Proposition 1]). In particular assume \(\tilde{\pi}(G)\geq 4\) and let \(x\) and \(y\) be two distinct nonidentity elements of \(G.\) Let \(\pi=\pi(x)\cup\pi(y).\) If \(|\pi|>2,\) then \(x\) and \(y\) are adjacent vertices of \(\Gamma(G).\) So we may assume \(|\pi|\leq 2.\) If there exists \(p\in\pi(x)\cap\pi(y),\) then take three different primes \(q_{1},q_{2},q_{3}\in\pi(G)\setminus\{p\}.\) Then \(G\) contains an element \(z\) whose order is the product of two of these primes and \(x-z-y\) is a path in \(\Gamma(G).\) So we may assume \(\pi(x)=\{p_{1}\}\) and \(\pi(y)=\{p_{2}\},\) with \(p_{1}\neq p_{2}.\) Let \(r_{1},r_{2}\) be two different primes in \(\pi(G)\setminus\{p_{1},p_{2}\}\). There exists in \(G\) an element \(z_{1}\) whose order is the product of two primes in \(\{r_{1},r_{2},p_{2}\}.\) If \(|z_{1}|=r_{1}r_{2},\) then \(x-z_{1}-y\) is a path in \(\Gamma(G).\) If \(p_{2}\) divides \(|z_{1}|,\) then we may consider an element \(z_{2}\) whose order is the product of two primes in \(\{r_{1},r_{2},p_{1}\}.\) If \(|z_{2}|=r_{1}r_{2},\) then \(x-z_{2}-y\) is a path in \(\Gamma(G).\) If \(p_{1}\) divides \(|z_{2}|,\) then \(x-z_{1}-z_{2}-y\) is a path in \(\Gamma(G).\)

**Lemma 11**.: _If \(G\) is soluble and \(\tilde{\pi}(G)=3,\) then \(\operatorname{diam}(\Gamma(G))\leq 5.\)_

Proof.: By Lemma 9, we may assume that the prime graph of \(G\) is of the form \(p-r-q.\) Notice that if \(x\notin\Omega_{r}(G),\) then \(d(x,z)=1\) for some \(z\in\Sigma(G)\). Indeed, if \(p\) divides \(|x|\) then \(x\) is adjacent to an element in \(\Omega_{rq}(G),\) while if \(q\) divides \(|x|\) then \(x\) is adjacent to an element in \(\Omega_{rp}(G)\).
If \(x\in\Omega_{r}(G),\) then, by Lemma 8, \(d(x,z)\leq 2\) for some \(z\in\Sigma(G).\) Hence, using Lemma 7, we conclude that if \(x\) and \(y\) are non-isolated vertices of \(\Gamma(G)\) and \(x\) is not an \(r\)-element, then \(d(x,y)\leq 5.\) Assume now that \(g\in\Omega_{r}(G)\) and that \(g\) is adjacent to \(x\notin\Omega_{r}(G).\) Let \(y\notin\mathcal{I}(G).\) By Lemma 7, there exists \(z\in\Sigma(G)\) with \(d(z,y)\leq 2.\) If \(x\) and \(z\) are adjacent in \(\Gamma(G),\) then \(d(g,y)\leq d(g,x)+d(x,z)+d(z,y)\leq 4.\) If not, then \(\pi(x)\subseteq\pi(z)\) and there exists \(v\in\Sigma(G)\) with \(\pi(v)\cup\pi(x)=\{p,q,r\}.\) But then \(d(g,y)\leq d(g,x)+d(x,v)+d(v,z)+d(z,y)\leq 5.\)

It remains to consider the case when \(g\) belongs to the set \(\Lambda\) of the elements of \(\Omega_{r}(G)\) whose neighbourhood is contained in \(\Omega_{r}(G).\) We are going to solve this case by proving the following claim: there exist \(a_{1},a_{2}\in G\setminus\mathcal{I}(G)\) with the following properties:

1. \(d(g,a_{1})=d(g,a_{2})=2;\)
2. \(\pi(a_{1})\cup\pi(a_{2})=\{p,q,r\}.\)

In order to prove the claim, let \(g\in\Lambda\) and let \(z\in\Omega_{r}(G)\) be adjacent to \(g\) in \(\Gamma(G).\) Let \(H=\langle g,z\rangle\) and \(N=O_{r}(H).\) Notice that neither \(g\) nor \(z\) belongs to \(N\), since \(\tilde{\pi}(\langle n,h\rangle)\leq 2\) for all \(n\in N\) and \(h\in H\). Let \(A/N\) be a minimal normal subgroup of \(H/N\). Then \(A/N\) is either a \(p\)-group or a \(q\)-group. Assume for example that it is a \(p\)-group (a symmetric argument works if \(A/N\) is a \(q\)-group). It must be that \(C_{A/N}(zN)=N/N.\) Indeed, if \([z,a]\in N\) for some \(a\in A\setminus N\), then \(rp\) divides \(|za|\) and consequently \(pqr\) divides \(|\langle g,za\rangle|\) and \(g\) is adjacent to \(za\), contradicting the definition of \(\Lambda.\) Notice also that it follows from Lemma 5 that if \(h\) is an \(r\)-element of \(H\) and \(C_{A/N}(hN)=N/N,\) then \(h\) is adjacent to an element of \(H\) whose order is divisible by \(q.\) In particular, since by definition all the elements of \(G\) that are adjacent to \(g\) are \(r\)-elements, \(C_{A/N}(gN)\neq N/N\), so \([g,a]\in N\) for some \(a\in A\setminus N\). But then \(pr\) divides \(|ga|\) and \(\Gamma(G)\) contains the path \(g-z-ga.\) Moreover, from what we said above, since \(C_{A/N}(zN)=N/N,\) the graph \(\Gamma(G)\) contains a path \(g-z-u\) with \(q\) dividing the order of \(u\). So we have proved the claim.

We can now prove that if \(y_{1},y_{2}\in\Lambda,\) then \(d(y_{1},y_{2})\leq 5\). Choose \(z_{1},z_{2}\in\Sigma(G)\) such that \(d(y_{1},z_{1})=d(y_{2},z_{2})=2\). If \(\pi(z_{1})\neq\pi(z_{2}),\) then \(d(z_{1},z_{2})=1\) and therefore \(d(y_{1},y_{2})\leq 5\). If \(\pi(z_{1})=\pi(z_{2}),\) then, by our last claim, there exists \(u\) such that \(d(y_{1},u)\leq 2\) and \(\pi(u)\cup\pi(z_{1})=\{p,q,r\}.\) But then \(d(y_{1},y_{2})\leq d(y_{1},u)+d(u,z_{2})+d(z_{2},y_{2})\leq 5.\)

Proof of Theorem 1.: Let \(R=R(G)\) be the soluble radical of \(G\) and consider the following subsets of \(G:\) \[A=\mathcal{I}(R)\setminus\mathcal{I}(G),\quad B=R\setminus\mathcal{I}(R),\quad C =G\setminus R.\] Let us summarize a list of facts concerning \(\Gamma(G)\) that will be useful to prove our bound on the diameter of \(\Gamma(G).\)

(i) \(d(c_{1},c_{2})\leq 2\) for every \(c_{1},c_{2}\in C\).
This follows from [4, Theorem 6.4], stating that if \(c_{1},c_{2}\notin R,\) then there exists \(z\in G\) such that \(\langle c_{1},z\rangle\) and \(\langle c_{2},z\rangle\) are not solvable.

(ii) For every \(a\in A,\) there exists \(c\in C\) such that \(d(a,c)=1\). This follows immediately from the definitions of \(A\) and \(C.\)

(iii) If \(B\neq\varnothing,\) then for every \(b\in B,\) there exists \(s\in\Sigma(G)\cap R\) with \(d(b,s)\leq 2.\) This follows immediately from Lemma 8.

(iv) \(d(c,s)\leq 3\) for every \(c\in C\) and \(s\in\Sigma(G).\) If \(\tilde{\pi}(s)\geq 3,\) then \(c\) and \(s\) are adjacent in \(\Gamma(G).\) So we may assume \(\tilde{\pi}(s)=2.\) Since \(G/R\) is not solvable, \(\tilde{\pi}(G/R)\geq 3,\) so there exists \(p\in\pi(G/R)\setminus\pi(s).\) Let \(z\) be a \(p\)-element in \(C.\) Then \(d(c,z)\leq 2\) by (i), so \(d(c,s)\leq d(c,z)+d(z,s)\leq 3.\)

By (i), \(\mathcal{I}(G)\subseteq\mathcal{I}(R)\), so \(G\setminus\mathcal{I}(G)\) is the disjoint union of the three subsets \(A,B,C\). We can conclude our proof by analyzing the different possibilities:

If \(a_{1},a_{2}\in A\), then by (ii) there exist \(c_{1},c_{2}\in C\) with \(d(a_{1},c_{1})=d(a_{2},c_{2})=1\). Hence, by (i), \(d(a_{1},a_{2})\leq d(a_{1},c_{1})+d(c_{1},c_{2})+d(c_{2},a_{2})\leq 1+2+1=4\).

If \(a\in A\) and \(b\in B\), then, by (ii), there exists \(c\in C\) which is adjacent to \(a\) in \(\Gamma(G)\). Let \(H=R\langle c\rangle.\) Notice that \(H\) is solvable and \(a\) and \(b\) are non-isolated vertices of \(\Gamma(H)\), so, by Lemmas 10 and 11, \(d(a,b)\leq 5\).

If \(a\in A\) and \(c\in C\), then, by (ii), there exists \(c^{*}\in C\) such that \(d(a,c^{*})=1\), and, by (i), \(d(c,c^{*})\leq 2\), thus \(d(a,c)\leq d(a,c^{*})+d(c^{*},c)\leq 3\).

If \(b_{1},b_{2}\in B\), then \(b_{1},b_{2}\) are non-isolated vertices of \(\Gamma(R)\), so, by Lemmas 10 and 11, \(d(b_{1},b_{2})\leq 5\).

If \(b\in B\) and \(c\in C\), then, by (iii), there exists \(s\in\Sigma(G)\) such that \(d(b,s)\leq 2\) and, by (iv), \(d(b,c)\leq d(b,s)+d(s,c)\leq 2+3=5\).
2309.09559
A queer Kac-Moody construction
We introduce a new, Kac-Moody-flavoured construction for Lie superalgebras, which seeks to incorporate phenomena of the queer Lie superalgebra. The idea of the generalization is to replace the maximal torus by a maximal quasitoral subalgebra, which has the representation theory of a family of (degenerate) Clifford superalgebras. Remarkably, we find that the theory is quite rigid, and a natural class of Lie superalgebras becomes apparent, which we call queer Kac-Moody algebras. We classify finite growth queer Kac-Moody algebras, which includes an $\mathfrak{so}(3)$-superconformal algebra, and give a new perspective on the distinctiveness of the queer Lie superalgebra.
Alexander Sherman, Lior Silberberg
2023-09-18T08:14:14Z
http://arxiv.org/abs/2309.09559v1
# A queer Kac-Moody construction

###### Abstract.

We introduce a new, Kac-Moody-flavoured construction for Lie superalgebras, which seeks to incorporate phenomena of the queer Lie superalgebra. The idea of the generalization is to replace the maximal torus by a maximal quasitoral subalgebra, which has the representation theory of a family of (degenerate) Clifford superalgebras. Remarkably, we find that the theory is quite rigid, and a natural class of Lie superalgebras becomes apparent, which we call queer Kac-Moody algebras. We classify finite growth queer Kac-Moody algebras, which includes an \(\mathfrak{so}(3)\)-superconformal algebra, and give a new perspective on the distinctiveness of the queer Lie superalgebra.

## 1. Introduction

In 1968, Kac [K3] and Moody [M] introduced a class of Lie algebras that are now known as Kac-Moody algebras. On the surface, the theory of such algebras is a simple yet beautiful extension of the theory of semisimple Lie algebras as developed by Lie, Killing, and Cartan. Namely, from the data of an \(n\times n\) complex matrix \(A\) (often assumed to satisfy certain conditions), one produces a Lie algebra \(\mathfrak{g}(A)\), and if \(A\) is nice enough (i.e. symmetrizable, generalized Cartan) then \(\mathfrak{g}(A)\) admits a neat presentation in terms of generators (the Chevalley generators) and relations (the Chevalley-Serre relations). Although the applications were not immediately apparent, in the 70s and 80s it became clear that Kac-Moody algebras, and in particular the class of affine Lie algebras, have rich connections to physics, number theory, modular forms, theta functions, differential equations, the Virasoro algebra, and more. We refer to [K1] for a comprehensive study of such algebras and their applications to other fields, as well as a wealth of references to other works. Somewhat in parallel, the theory of Lie superalgebras got its start, with a major milestone being Kac's seminal 1977 paper [K2] in which, among other things, the simple finite-dimensional Lie superalgebras were classified. It was realized already in [K2] that the theory of Kac-Moody Lie algebras admits an obvious extension to the super setting, where one starts with an \(n\times n\) matrix \(A\) along with a parity function \(p:\{1,\ldots,n\}\to\mathbb{Z}/2\mathbb{Z}\). One obtains (up to central quotients and derived subalgebras) all simple, basic (admitting an even invariant form) classical Lie superalgebras from this approach. Finite growth Kac-Moody superalgebras were classified later on, in the works of Kac [K4] (the case with no isotropic simple roots) and van de Leur [Leur] (the symmetrizable case), with the classification finished by Hoyt and Serganova (see [H] and [HS]). The theory of affine Lie superalgebras has connections once again to physics and number theory, amongst other fields, and such connections have been explored in, e.g., [KW1], [KW2], and [KW3]. See [GHS] for a more recent approach to Kac-Moody superalgebras, which nicely clarifies the role of the Weyl groupoid.

### It's here and it's queer

Amidst the progress in understanding Kac-Moody Lie superalgebras, a certain Lie superalgebra was left out: the queer Lie superalgebra.
In the theory of finite-dimensional associative superalgebras over the complex numbers, there are two families of central simple superalgebras: namely, \(\operatorname{End}(\mathbb{C}^{m|n})\) and \(Q(n)\), where the latter is the queer associative superalgebra; it can be viewed as the subalgebra of \(\operatorname{End}(\mathbb{C}^{n|n})\) consisting of endomorphisms commuting with a particular odd automorphism. One may present \(Q(n)\) as matrices of the form \[\begin{bmatrix}A&B\\ B&A\end{bmatrix}\] where \(A,B\) are arbitrary \(n\times n\) matrices. Thus one arrives at the Lie superalgebra \(\mathfrak{q}(n)\), which is the natural Lie superalgebra obtained from \(Q(n)\) with the supercommutator operation. On the face of it, \(\mathfrak{q}(n)\) is a nice Lie superalgebra, whose even part is \(\mathfrak{gl}(n)\) and whose odd part is the adjoint representation of \(\mathfrak{gl}(n)\). However, it is distinctive in that its Cartan subalgebra, \[\mathfrak{h}=\begin{bmatrix}D&D^{\prime}\\ D^{\prime}&D\end{bmatrix},\] where \(D,D^{\prime}\) are diagonal, is _not_ purely even or commutative. It is instead what we call _quasitoral_, i.e. it satisfies \([\mathfrak{h}_{\overline{0}},\mathfrak{h}]=0\). This property prevents it from ever being described in classical Kac-Moody terms, because in that setup one always begins with a self-normalizing torus. In fact the root system of \(\mathfrak{q}(n)\) is nothing but the classical \(A_{n-1}\) root system. Thus it is clear that to have a Kac-Moody type construction that gives rise to the queer Lie superalgebra, one needs to begin with a self-normalizing quasitoral subalgebra.

### The classical construction

Without going into too many details, we explain the general construction we provide here. Let us start with the classical construction. Starting with an \(n\times n\) matrix \(A=(a_{ij})\) and parity function \(p:\{1,\dots,n\}\to\mathbb{Z}/2\mathbb{Z}\), one begins by finding a finite-dimensional abelian Lie algebra \(\mathfrak{h}\) along with linearly independent sets \[\Pi=\{\alpha_{1},\dots,\alpha_{n}\}\subseteq\mathfrak{h}^{*},\ \ \text{and}\ \ \Pi^{\vee}=\{h_{1},\dots,h_{n}\}\subseteq\mathfrak{h},\ \text{satisfying}\ \alpha_{j}(h_{i})=a_{ij}.\] Then one takes the algebra generated by \(\mathfrak{h}\) and Chevalley generators \(e_{1},\dots,e_{n},f_{1},\dots,f_{n}\), where the parity of \(e_{i}\) and \(f_{i}\) is \(p(i)\), and we have the relations \[[h,e_{i}]=\alpha_{i}(h)e_{i},\ \ \ [h,f_{i}]=-\alpha_{i}(h)f_{i},\ \ \ [e_{i},f_{j}]=\delta_{ij}h_{i}.\] One then defines \(\mathfrak{g}(A)\) to be the quotient of this algebra by the maximal set of relations that don't kill any elements in \(\mathfrak{h}\); if \(A\) is generalized Cartan and symmetrizable, then these are just the Serre relations. For the Lie superalgebra case, see [LS] for more on the relations. To understand our generalization, one should view the choices of \(\alpha_{i}\in\mathfrak{h}^{*}\) and \(p(i)\in\mathbb{Z}/2\mathbb{Z}\) as a choice of irreducible representations of \(\mathfrak{h}\): indeed, \(\alpha_{i}\) tells you the action, and \(p(i)\) tells you the parity of the representation. Thus the Chevalley generators are just nonzero elements (i.e. bases) of corresponding irreducible representations of \(\mathfrak{h}\).

### The queer construction

To generalize to the queer setting, one should start with a quasi-toral Lie superalgebra \(\mathfrak{h}\), where we set \(\mathfrak{t}:=\mathfrak{h}_{\overline{0}}\).
Then the irreducible representations of \(\mathfrak{h}\) are in bijection (up to parity) with \(\mathfrak{t}^{*}\); however, even when \(\mathfrak{t}\) acts semisimply, which we assume, the category of \(\mathfrak{h}\)-modules is not semisimple, which adds both complication and richness to the theory. With \(\mathfrak{h}\) in hand, one should take a linearly independent set \(\Pi=\{\alpha_{1},\dots,\alpha_{n}\}\subseteq\mathfrak{t}^{*}\). The natural generalization of the Chevalley generators is to choose irreducible representations \[\mathfrak{g}_{\alpha_{1}},\dots,\mathfrak{g}_{\alpha_{n}},\mathfrak{g}_{- \alpha_{1}},\dots,\mathfrak{g}_{-\alpha_{n}}\] of \(\mathfrak{h}\) with specified weights \(\pm\alpha_{1},\dots,\pm\alpha_{n}\). Finally, to have coroots, one needs (nontrivial) \(\mathfrak{h}\)-equivariant maps \([-,-]_{i}:\mathfrak{g}_{\alpha_{i}}\otimes\mathfrak{g}_{-\alpha_{i}}\to \mathfrak{h}\). We call the package of data specified above a Cartan datum, \(\mathcal{A}\), and one may now construct a Lie superalgebra \(\mathfrak{g}(\mathcal{A})\) in a completely analogous fashion as in the classical Kac-Moody setup. In particular \(\mathfrak{g}(\mathcal{A})\) will contain \(\mathfrak{h}\) as a self-normalizing subalgebra, it will admit a root decomposition, it will contain \(\mathfrak{g}_{\pm\alpha_{i}}\) for all \(i\), and it will admit a natural triangular decomposition \(\mathfrak{g}(\mathcal{A})=\mathfrak{n}^{-}\oplus\mathfrak{h}\oplus\mathfrak{n}\).

### Clifford Kac-Moody algebras

The above setup is very general, however (in fact one doesn't even need the \(\mathfrak{h}\)-modules \(\mathfrak{g}_{\alpha_{i}}\) to be irreducible), and to get something with more structure it makes sense to impose the condition of integrability (see Definition 4.1). This gives us the class of what we call Clifford Kac-Moody algebras. Here Clifford indicates that the simple root spaces are representations of Clifford algebras. It is natural to then classify Clifford Kac-Moody algebras with one simple root, and determine how the roots can interact with one another. We have done this in Sections 5 and 6, and the answer we obtain is depicted in the diagram below (Figure 1). To explain, the 'nodes' of the diagram represent possible Clifford Kac-Moody algebras with one simple root \(\alpha\), and we have drawn an arrow \(\alpha\to\beta\) if it is possible (under our integrability assumption) that \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\), where \(\mathfrak{h}_{\alpha}=[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]\) are the \(\alpha\)-coroots. This is analogous to the condition that \(\alpha_{j}(h_{i})\neq 0\) in a Kac-Moody algebra. In particular, a loop at a node means that it is possible for two simple roots of that type to interact. Here the node 'Super KM' contains the three possible root subalgebras \(\mathfrak{sl}(2),\mathfrak{osp}(1|2)\), and \(\mathfrak{sl}(1|1)\). These subalgebras can interact with each other in every possible way, so to speak, which is what the self arrow represents. They give rise to classical Kac-Moody superalgebras.
Similarly, the node 'Queer KM' represents the three possible root subalgebras which appear in our definition of a _queer Kac-Moody_ algebra: they are (up to central quotient) of the form \(\mathfrak{ts}:=\mathfrak{s}\otimes\mathbb{C}[\xi]\oplus\mathbb{C}\langle c\rangle\), where \(\mathbb{C}[\xi]=\mathbb{C}\langle 1,\xi\rangle\) is a supersymmetric polynomial algebra on one odd variable, \(c\) is central, and \(\mathfrak{s}=\mathfrak{sl}(2),\mathfrak{osp}(1|2)\), or \(\mathfrak{sl}(1|1)\). The formula for the bracket is given in Example 4.4, and is entirely analogous to the loop algebra construction. Such an 'odd' affinization is what we refer to as a 'Takiff' construction, following an established convention, and in recognition of Takiff's work [T]. Thus we see that queer Kac-Moody nodes are obtained from super Kac-Moody nodes by applying the Takiff construction, so to speak. Further, the diagram shows that queer Kac-Moody nodes can interact with each other in interesting ways. The other nodes are all of 'Heisenberg' type, which means that \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\alpha}]=0\). The diagram illustrates that they cannot interact as nicely with the other nodes under our integrability assumption. (See Section 6 for further explanation and for the meaning of the dashed arrow).

### Queer Kac-Moody algebras

As has already been stated, queer Kac-Moody (qKM) algebras are those constructed from simple roots of type \(\mathfrak{ts}\), where \(\mathfrak{s}=\mathfrak{sl}(2),\mathfrak{osp}(1|2)\), or \(\mathfrak{sl}(1|1)\). Note also that \(\mathfrak{tsl}(2)\cong\mathfrak{sq}(2)=[\mathfrak{q}(2),\mathfrak{q}(2)]\). Thus arises the question of classification of such algebras, and in particular the classification of the finite growth qKM algebras. One simple way to produce a qKM algebra is to start with a Kac-Moody superalgebra \(\mathfrak{s}\) and apply the Takiff construction to obtain \(\mathfrak{s}\otimes\mathbb{C}[\xi]\oplus\langle c,\partial_{\xi},\dots\rangle\). The ellipsis indicates that there are certain central extensions and derivations that may be added, but we ignore this for now. Although this is a beautiful and important superalgebra (and in the finite-dimensional case a slight variant of it was studied by [CC]), we view the Takiff construction as somewhat degenerate: in particular every simple root shares the central element \(c\) as a coroot. We thus call such algebras 'completely coupled' (or \(Y\)-coupled, to be more precise). There are in fact notions of being completely \(X\)-coupled, completely \(Y\)-coupled, and completely uncoupled, the latter being the most 'interesting' case (see Definition 8.1). In Section 9 we prove the following:

**Theorem 1.1**.: _An indecomposable, finite growth qKM algebra is either completely \(X\)-coupled, completely \(Y\)-coupled, or completely uncoupled._

This allows us to separate our study into the three separate classes. For the following, we note that qKM algebras have Dynkin diagrams with vertices \(\diamondsuit\) for \(\mathfrak{tsl}(2)\) simple roots, \(\blacklozenge\) for \(\mathfrak{tosp}(1|2)\) simple roots, and \(\otimes\) for \(\mathfrak{tsl}(1|1)\) simple roots. We put labels on edges to indicate values of pairings of roots with coroots (see Section 7.4).

**Theorem 1.2**.: _Let \(\mathfrak{g}\) be an indecomposable, completely \(X\)-coupled algebra with more than one simple root, without any assumptions on growth conditions. Then \(\mathfrak{g}\) is of finite growth with GK dimension 1, and has one of three possible Dynkin diagrams._
_Then \(\mathfrak{g}\) is of finite growth with GK dimension 1, and has one of three possible Dynkin diagrams (see Theorem 9.2)._

Figure 1. Clifford Kac-Moody algebras

For the next theorem, let \(\Theta\) denote the map which takes a Dynkin diagram of a qKM algebra and turns the diamonds into circles, thus giving the Dynkin diagram of a Kac-Moody superalgebra.

**Theorem 1.3**.: _Suppose that \(\mathfrak{g}\) is a finite growth qKM algebra with Dynkin diagram \(D\). Let \(\mathfrak{s}\) be the Kac-Moody superalgebra obtained from the Dynkin diagram \(\Theta(D)\). If \(\mathfrak{g}\) is completely \(Y\)-coupled, then \(\mathfrak{g}\) is constructed from \(\mathfrak{s}\) via the Takiff construction (see Theorem 9.3 for a precise statement)._

### Completely uncoupled qKM algebras

Theorem 1.3 tells us that any possible finite-growth Dynkin diagram can appear in our construction. However if we pass to the completely uncoupled situation, things become more rigid. Namely we prove the following:

**Theorem 1.4**.: _The possible Dynkin diagrams of an indecomposable completely uncoupled qKM algebra of finite growth with more than one simple root are the following:_

1. _Type \(A(n)\) for \(n\in\mathbb{Z}_{\geq 2}\), giving rise to \(\mathfrak{q}(n+1)\): a diagram with \(n\) vertices._
2. _Type \(A(n)^{(1)}\) for \(n\in\mathbb{Z}_{\geq 2}\), giving rise to \(\mathfrak{q}(n+1)^{(1)}\), an affinization of \(\mathfrak{q}(n+1)\): a diagram with \(n+1\) vertices._
3. _Type \(A(1)^{(1)}\), giving rise to two superalgebras \(\mathfrak{q}^{\pm}(2,2)\)._

Note that \(\mathfrak{q}(2)^{(1)}\) is actually \(Y\)-coupled, and comes from a Takiff construction. One outcome of Theorem 1.4 is that the only finite-type, completely uncoupled qKM root system we can obtain is \(A(n)\), and ultimately the only algebra we can get (up to central extensions/quotients and derivations/commutator subalgebras) is \(\mathfrak{q}(n)\). Thus it shows that there cannot be a queer version of the other simple root systems, which is clear from the lack of any simple algebra of this form, but difficult to perceive otherwise. Another outcome is the existence of two nontrivial completely uncoupled finite-growth Lie superalgebras \(\mathfrak{q}^{\pm}(2,2)\). These Lie superalgebras both have root system of type \(A(1)^{(1)}\), and contain \(\widehat{\mathfrak{sl}}_{2}\) as a subalgebra. In a future work it will be shown that \(\mathfrak{q}^{+}(2,2)\) is the \(\mathfrak{so}(3)\)-superconformal algebra \(K(3,1,0,0)\) in the notation of [KL].

### Serre relations

Establishing Serre relations is important in the study of Kac-Moody superalgebras, in particular for quantizations of such superalgebras. It is natural to ask when qKM algebras admit nice presentations. We have the following, which is proven in [Si].

**Theorem 1.5**.: _Let \(\mathfrak{g}\) be a finite growth, completely uncoupled qKM algebra with simple roots \(\{\alpha_{1},\dots,\alpha_{n}\}\), where \(n>1\). Let \(A=(a_{ij})\) be its Cartan matrix (see Section 7.4). Then \(\mathfrak{g}\) is generated by \(\mathfrak{h}\) and simple root spaces \(\mathfrak{g}_{\pm\alpha_{1}},\dots,\mathfrak{g}_{\pm\alpha_{n}}\), subject to the following relations:_

1. _Chevalley-type relations (those defining \(\mathfrak{g}(\mathcal{A})\), as listed in Section 3.1); and_
2.
_Serre relations: for_ \(i\neq j\) _we have_ \[\operatorname{ad}(\mathfrak{g}_{\alpha_{i}})^{1-a_{ij}}(\mathfrak{g}_{\alpha_{j}})=0,\quad\operatorname{ad}(\mathfrak{g}_{-\alpha_{i}})^{1-a_{ij}}(\mathfrak{g}_{-\alpha_{j}})=0.\]

### Future work

The importance of affine Lie (super)algebras to other areas of mathematics and physics cannot be overstated, and thus an important problem, which will be addressed in a future work, is an explicit realization of the finite growth qKM algebras described above. So far, we have determined that \(\mathfrak{q}^{+}(2,2)\) is the \(\mathfrak{so}(3)\)-superconformal algebra \(K(3,1,0,0)\), but there are \(4\) others that we do not yet have realizations for: \(\mathfrak{q}^{-}(2,2)\), and the three \(X\)-coupled superalgebras in Theorem 9.2. It is very important to understand whether these are already known superalgebras, and if they are, whether there are further insights that the qKM structure can provide. Another important avenue is the study of the representation theory of qKM algebras. One can ask for descriptions of integrable representations, whether there are character formulas and denominator identities, whether one can compute the Shapovalov determinant (to extend the work of [G]), and more. In particular we hope that nice identities can be found in the Grothendieck ring of \(\mathfrak{h}\), as was begun for \(\mathfrak{q}(n)\) in [GSS].

### Acknowledgements

The authors would like to thank Maria Gorelik and Vera Serganova for many helpful discussions and suggestions. The first author was partially supported by ARC grant DP210100251. The second author was partially supported by the funding received from the MINERVA Stiftung with the funds from the BMBF of the Federal Republic of Germany.

### Table of contents

* 1 Introduction
* 2 Quasi-toral Lie superalgebras
* 3 General Construction of \(\mathfrak{g}(\mathcal{A})\)
* 4 Clifford Kac-Moody algebras
* 5 Cartan datum with one simple root
* 6 On Connectivity of simple roots
* 7 Queer Kac-Moody algebras
* 8 Two simple roots in a qKM algebra
* 9 Completely coupled vs. completely uncoupled
* 10 Completely uncoupled qKM algebras
* 11 Finite growth, completely uncoupled qKM algebras

### List of notation

* \(\mathcal{C}\ell(V,B)\) the Clifford superalgebra on a vector space \(V\) with symmetric bilinear form \(B\);
* \(\mathcal{S}(V)\) the supersymmetric algebra on \(V\);
* \(\mathfrak{h}\) a finite-dimensional quasi-toral Lie superalgebra (Sec. 2.2);
* \(\mathfrak{t}=\mathfrak{h}_{\overline{0}}\) the even part of a quasi-toral Lie superalgebra;
* \(\mathcal{F}(\mathfrak{h})\) the category of finite-dimensional \(\mathfrak{h}\)-modules with semisimple action of \(\mathfrak{t}\) (Sec. 2.2);
* \(B_{\lambda}\) the symmetric bilinear form on \(\mathfrak{h}_{\overline{1}}\) given by \(B_{\lambda}(x,y)=\lambda([x,y])\) (Sec. 2.2);
* \(C_{\lambda}\) an irreducible representation of \(\mathfrak{h}\) of weight \(\lambda\) (Sec. 2.2);
* \(V^{\vee}:=V^{\omega_{\mathfrak{h}}^{-1}}\) the twist of an \(\mathfrak{h}\)-module \(V\) by \(\omega_{\mathfrak{h}}^{-1}\) (Sec. 2.3);
* \(\mathcal{A}\) a Cartan datum (Def. 3.1);
* \(\Pi=\{\alpha_{1},\ldots,\alpha_{n}\}\subseteq\mathfrak{t}^{*}\) simple roots (Sec. 3.1);
* \(\mathfrak{h}_{\alpha}:=[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]\) the coroot space of \(\alpha\) (Def. 3.3);
* \(\tilde{\mathfrak{g}}(\mathcal{A})\) the Lie superalgebra constructed in Sec. 3.1;
* \(\mathfrak{g}(\mathcal{A}):=\tilde{\mathfrak{g}}(\mathcal{A})/\mathfrak{r}\), see Sec.
3.3;
* \(\omega\) the Chevalley automorphism on \(\mathfrak{g}(\mathcal{A})\), see Cor. 3.3;
* \(\mathfrak{g}\langle\alpha\rangle\) the subalgebra generated by \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\) (Sec. 5);
* \(T\mathfrak{s}\), \(\mathfrak{ts}\), \(\mathfrak{pts}\) are Takiff constructions, see Ex. 4.4;
* \(\mathfrak{he}(n)\), \(\mathfrak{he}(n)^{\Pi}\) Heisenberg superalgebras determined by a rank \(n\) simple root, see Sec. 5.1;
* \(Tak(\mathfrak{s})\) a type for a simple root, see the table at the start of Sec. 6;
* \(e_{i},E_{i},f_{i},F_{i}\) are 'Chevalley generators' in a qKM algebra (Sec. 7.3);
* \(H_{i},h_{i},c_{i}\) are the pure coroots of \(\alpha_{i}\) in a qKM algebra (Sec. 7.3);
* \(x_{ij},y_{ij}\) satisfy \([H_{i},e_{j}+E_{j}]=x_{ij}E_{j}+y_{ij}e_{j}\) (Sec. 7.3).

## 2. Quasi-toral Lie superalgebras

In this section we review some known results on the representation theory of quasi-toral Lie superalgebras (see Definition 2.1), which will be used frequently in our work. For more details, we refer the reader to [G] and [GSS].

### Clifford superalgebras

Let \(V\) be a finite-dimensional vector space and let \(B\) be a symmetric (not necessarily non-degenerate) bilinear form on \(V\). Let \(\mathcal{C}\ell(V,B)\) denote the corresponding Clifford superalgebra, with the elements of \(V\) being odd. Explicitly, \(\mathcal{C}\ell(V,B)\) is the quotient of the tensor superalgebra \(\mathcal{T}V:=\bigoplus\limits_{n\geq 0}V^{\otimes n}\) by the ideal generated by expressions of the form \(v\otimes w+w\otimes v-B(v,w)\), where \(v,w\in V\). Because this ideal is homogeneous (in particular it is generated by even elements), the quotient inherits the structure of an associative superalgebra. The form \(B\) induces a non-degenerate symmetric bilinear form on \(V/\ker B\), which we shall also denote by \(B\). We have a well-known, non-canonical isomorphism of superalgebras \[\mathcal{C}\ell(V,B)\simeq\mathcal{C}\ell(V/\ker B,B)\otimes\mathcal{S}(\ker B), \tag{1}\] where \(\mathcal{S}(\ker B)\) is the superalgebra of supercommutative polynomials in elements of \(\ker B\), a purely odd vector space. In particular \(\mathcal{S}(\ker B)\) is isomorphic as an algebra to the Grassmann algebra on \(\ker B\) viewed as an even vector space, and thus is finite-dimensional. Let \(m\) denote the dimension of \(V\), and suppose that \(B\) is non-degenerate. For the following facts we refer to [CW], Exercise 3.11.

* \(\mathcal{C}\ell(V,B)\) is always a simple superalgebra;
* all modules over \(\mathcal{C}\ell(V,B)\) are completely reducible;
* if \(m\) is odd, \(\mathcal{C}\ell(V,B)\) admits a unique, parity invariant, irreducible module;
* if \(m\) is even, \(\mathcal{C}\ell(V,B)\) admits two irreducible modules which differ by parity;
* if \(m\neq 0\) and \(E\) is an irreducible \(\mathcal{C}\ell(V,B)\)-module, then \(\dim E_{\overline{0}}=\dim E_{\overline{1}}=2^{\lfloor\frac{m-1}{2}\rfloor}\), where \(\lfloor-\rfloor\) denotes the floor function.

### Irreducible \(\mathfrak{h}\)-modules

For general background on Lie superalgebras, we refer to [CW] or [Mu].

**Definition 2.1**.: A Lie superalgebra \(\mathfrak{h}\) is said to be quasi-toral if \([\mathfrak{h}_{\overline{0}},\mathfrak{h}]=0\), i.e. \(\mathfrak{h}_{\overline{0}}\) is central in \(\mathfrak{h}\).

**Example 2.1**.: For \(n\geq 0\), let \(\mathfrak{h}(n)\) be the Lie superalgebra with \(\mathfrak{h}(n)_{\overline{0}}=\mathbb{C}\langle c\rangle\), and \(\mathfrak{h}(n)_{\overline{1}}=\mathbb{C}^{n}\).
We impose that \(c\) is a nonzero, central element of \(\mathfrak{h}(n)\), and for \(x,y\in\mathfrak{h}(n)_{\overline{1}}\) we set \([x,y]=(x,y)c\), where \((-,-)\) is a fixed, nondegenerate symmetric bilinear form on \(\mathbb{C}^{n}\). Then \(\mathfrak{h}(n)\) is a quasi-toral Lie superalgebra.

For \(\mathfrak{h}\) a finite-dimensional quasi-toral Lie superalgebra, we write \(\mathfrak{t}:=\mathfrak{h}_{\overline{0}}\). Throughout the paper, we will consider only \(\mathfrak{h}\)-modules with semisimple \(\mathfrak{t}\)-action. Write \(\mathcal{F}(\mathfrak{h})\) for the category of finite-dimensional \(\mathfrak{h}\)-modules with semisimple action of \(\mathfrak{t}\). To a weight \(\lambda\in\mathfrak{t}^{*}\), we associate a symmetric bilinear form \(B_{\lambda}\) on \(\mathfrak{h}_{\overline{1}}\), given by \(B_{\lambda}(H_{1},H_{2})=\lambda([H_{1},H_{2}])\).

**Definition 2.2**.: For a weight \(\lambda\in\mathfrak{t}^{*}\), we define the rank of \(\lambda\) to be \(\operatorname{rk}\lambda:=\operatorname{rk}B_{\lambda}\in\mathbb{Z}_{\geq 0}\), where \(\operatorname{rk}B_{\lambda}\) is the rank of the bilinear form \(B_{\lambda}\) on \(\mathfrak{h}_{\overline{1}}\).

Observe that \(\operatorname{rk}c\lambda=\operatorname{rk}\lambda\) for \(c\in\mathbb{C}^{\times}\). We consider the universal enveloping algebra \(\mathcal{U}(\mathfrak{h})\) as an algebra over the central polynomial subalgebra \(\mathcal{U}(\mathfrak{t})=\mathcal{S}(\mathfrak{t})\). Let \(\lambda\in\mathfrak{t}^{*}\) and let \(I(\lambda)\) be the kernel of the algebra homomorphism \(\operatorname{ev}_{\lambda}:\mathcal{S}(\mathfrak{t})\to\mathbb{C}\), induced by evaluation at \(\lambda\). We consider the Clifford superalgebra \[\mathcal{C}\ell(\lambda):=\mathcal{C}\ell(\mathfrak{h}_{\overline{1}},B_{\lambda})=\mathcal{U}(\mathfrak{h})/I(\lambda)\mathcal{U}(\mathfrak{h}).\] Then we have a non-canonical isomorphism of superalgebras (see (1)) \[\mathcal{C}\ell(\lambda)\simeq\mathcal{C}\ell(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda},B_{\lambda})\otimes\mathcal{S}(\ker B_{\lambda}),\] where \(B_{\lambda}\), by abuse of notation, also denotes the induced form on \(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda}\). We have a decomposition of \(\mathcal{F}(\mathfrak{h})\) according to central character given by \[\mathcal{F}(\mathfrak{h})=\bigoplus_{\lambda\in\mathfrak{t}^{*}}\mathcal{F}_{\lambda},\] where \(\mathcal{F}_{\lambda}\) consists of those modules on which \(\mathfrak{t}\) acts by \(\lambda\). Further, \(\mathcal{F}_{\lambda}\) is equivalent to the category of \(\mathcal{C}\ell(\lambda)\)-modules. Thus all irreducible \(\mathfrak{h}\)-modules arise as \(\mathcal{C}\ell(\lambda)\)-modules, for some \(\lambda\in\mathfrak{t}^{*}\). We denote by \(C_{\lambda}\) a choice of the (up to parity) unique irreducible \(\mathfrak{h}\)-module on which \(\mathfrak{t}\) acts via \(\lambda\), where we assume that \(C_{0}\) is the trivial module \(\mathbb{C}\). Recall from Section 2.1 that if \(m=\operatorname{rk}\lambda\) with \(m\neq 0\), then \(\dim(C_{\lambda})_{\overline{0}}=\dim(C_{\lambda})_{\overline{1}}=2^{\lfloor\frac{m-1}{2}\rfloor}\). Note that the blocks of \(\mathcal{F}(\mathfrak{h})\) are more finely parameterized in the following way. If \(B_{\lambda}\) is degenerate, then \(\mathcal{F}_{\lambda}\) is a block, and as stated above, this block is equivalent to the category of finite-dimensional modules over \(\mathcal{C}\ell(\lambda)\).
If \(B_{\lambda}\) is non-degenerate, then \(\mathcal{F}_{\lambda}\) is semisimple; if \(\operatorname{rk}\lambda\) is odd there is exactly one simple module in \(\mathcal{F}_{\lambda}\), and if \(\operatorname{rk}\lambda\) is even there are two.

### Dualities on \(\mathcal{F}(\mathfrak{h})\)

The category \(\mathcal{F}(\mathfrak{h})\) admits the usual duality \((-)^{*}\) which is induced by the anti-automorphism of \(\mathcal{U}(\mathfrak{h})\) defined by \(-\mathrm{id}_{\mathfrak{h}}\). We have another duality on \(\mathcal{F}(\mathfrak{h})\) which we denote by \((-)^{\#}\). It is induced by the anti-automorphism of \(\mathcal{U}(\mathfrak{h})\) defined by \(\mathrm{id}_{\mathfrak{h}_{\overline{0}}}\oplus(\sqrt{-1}\cdot\mathrm{id}_{\mathfrak{h}_{\overline{1}}})\). We have the formula \[C_{\lambda}^{\#}\cong\begin{cases}C_{\lambda}&\text{ if }\operatorname{rk}\lambda\text{ is odd or }\operatorname{rk}\lambda\equiv 0\pmod{4},\\ \Pi C_{\lambda}&\text{ if }\operatorname{rk}\lambda\equiv 2\pmod{4}.\end{cases}\] The quasi-toral Lie superalgebra \(\mathfrak{h}\) admits an automorphism \(\omega_{\mathfrak{h}}=(-\mathrm{id}_{\mathfrak{h}_{\overline{0}}})\oplus(\sqrt{-1}\cdot\mathrm{id}_{\mathfrak{h}_{\overline{1}}})\). Given an irreducible \(\mathfrak{h}\)-module \(C_{\lambda}\), we define the twisted module \[C_{\lambda}^{\vee}\coloneqq C_{\lambda}^{\omega_{\mathfrak{h}}^{-1}}\] (note the inverse: it will simplify notation later). On \(\mathcal{F}(\mathfrak{h})\), we have a natural isomorphism of functors \((-)^{\vee}\cong((-)^{*})^{\#}\). From Theorem 3.6.4 of [GSS] we obtain the following isomorphisms of \(\mathfrak{h}\)-modules which we use later: if \(\operatorname{rk}\lambda\) is odd, then \[C_{\lambda}\otimes C_{\lambda}^{\vee}\simeq\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda})\oplus\Pi\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda}), \tag{2}\] where \(\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda})\) is the Grassmann algebra on \(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda}\), and has the natural action by \(\mathcal{C}\ell(0)\cong\mathcal{S}(\mathfrak{h}_{\overline{1}})\) from left multiplication in the quotient. In particular \(\mathfrak{t}=\mathfrak{h}_{\overline{0}}\) will act trivially. Note that \(\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda})\) has a simple socle and top, and thus in particular is indecomposable. If \(\operatorname{rk}\lambda\in 4\mathbb{Z}\), then we have \[C_{\lambda}\otimes C_{\lambda}^{\vee}\simeq\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda}), \tag{3}\] while if \(\operatorname{rk}\lambda\in 4\mathbb{Z}+2\), then \[C_{\lambda}\otimes C_{\lambda}^{\vee}\simeq\Pi\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda}). \tag{4}\] We now obtain the following corollary:

**Corollary 2.1**.: _Let \(\lambda\in\mathfrak{t}^{*}\) and let \(\phi:C_{\lambda}\otimes C_{\lambda}^{\vee}\to\mathfrak{h}\) be an \(\mathfrak{h}\)-module homomorphism._

1. _If \(\operatorname{rk}\lambda\in 4\mathbb{Z}\), then \(\operatorname{Im}\phi\) is purely even and at most one-dimensional; and_
2. _if \(\operatorname{rk}\lambda\in 4\mathbb{Z}+2\), then the odd part of the image of \(\phi\) is at most one-dimensional, and it generates \(\operatorname{Im}\phi\) as an \(\mathfrak{h}\)-module._

Proof.: This follows immediately from \([\mathfrak{h}_{\overline{0}},\mathfrak{h}]=0\) and the structure of \(C_{\lambda}\otimes C_{\lambda}^{\vee}\) given in (3) and (4).
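To illustrate (2) in the smallest case, consider \(\operatorname{rk}\lambda=1\) (this dimension count is ours). Then \(\dim C_{\lambda}=(1|1)\) and \(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda}\) is one-dimensional, so \(\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda})\) has dimension \((1|1)\) and both sides of (2) have dimension \((2|2)\):
\[\dim\big(C_{\lambda}\otimes C_{\lambda}^{\vee}\big)=(2|2)=\dim\big(\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda})\oplus\Pi\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\lambda})\big).\]
Each summand is indecomposable with simple socle and top, but not irreducible; this gives a concrete instance of the failure of semisimplicity of \(\mathcal{F}(\mathfrak{h})\) noted in the introduction.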
### Nondegenerate morphisms

**Definition 2.3**.: Let \(\mathfrak{h}\) be a quasi-toral Lie superalgebra, and let \(U,V,\) and \(W\) be \(\mathfrak{h}\)-modules. We say a morphism of \(\mathfrak{h}\)-modules \(\phi:U\otimes V\to W\) is nondegenerate if for any nonzero submodules \(U^{\prime}\subseteq U\) and \(V^{\prime}\subseteq V\) we have both \(\phi(U^{\prime}\otimes V)\neq 0\) and \(\phi(U\otimes V^{\prime})\neq 0\).

Observe that if \(U\) and \(V\) are simple \(\mathfrak{h}\)-modules, then a map \(\phi:U\otimes V\to W\) is nondegenerate if and only if \(\phi\neq 0\).

### Realization of \(C_{\lambda}\)

We consider the polynomial superalgebra \(\mathbb{C}[\xi_{1},...,\xi_{m}]\), where \(\xi_{1},\ldots,\xi_{m}\) are odd, supercommuting variables, i.e. \(\xi_{i}\xi_{j}=-\xi_{j}\xi_{i}\) for all \(i,j\). In particular \(\xi_{i}^{2}=0\). For a set \(I=\{i_{1},...,i_{k}\}\subseteq\{1,...,m\}\), we denote \(\xi_{I}:=\xi_{i_{1}}...\xi_{i_{k}}\) for \(i_{1}<...<i_{k}\) (in particular \(\xi_{\varnothing}=1\)). We now explain how to concretely realize the irreducible \(\mathfrak{h}\)-modules \(C_{\lambda}\) as polynomial superalgebras, depending on the rank of \(\lambda\). If \(\operatorname{rk}\lambda=2m\) for some \(m\in\mathbb{Z}_{\geq 0}\), then there exist linearly independent vectors \(H_{1},...,H_{m},\bar{H}_{1},...,\bar{H}_{m}\) of \(\mathfrak{h}_{\overline{1}}\) which satisfy \[B_{\lambda}(H_{i},\bar{H}_{j})=\delta_{ij},\ \ B_{\lambda}(H_{i},H_{j})=0,\ \ B_{\lambda}(\bar{H}_{i},\bar{H}_{j})=0,\] for \(1\leq i,j\leq m\). We can identify \(C_{\lambda}\) (up to parity) with \(\mathbb{C}[\xi_{1},...,\xi_{m}]\), subject to the following action of \(\mathfrak{h}_{\overline{1}}\): \(H_{i}\) acts by multiplication by \(\xi_{i}\) and \(\bar{H}_{i}\) acts by the odd derivation \(\partial_{\xi_{i}}\). If, on the other hand, \(\operatorname{rk}\lambda=2m+1\) for some \(m\in\mathbb{Z}_{\geq 0}\), then there exist linearly independent vectors \(H_{1},...,H_{m},\bar{H}_{1},...,\bar{H}_{m},\bar{H}\) of \(\mathfrak{h}_{\overline{1}}\) which satisfy \[B_{\lambda}(H_{i},\bar{H}_{j})=\delta_{ij},\ \ B_{\lambda}(H_{i},H_{j})=0,\ B_{\lambda}(\bar{H}_{i},\bar{H}_{j})=0,\] \[B_{\lambda}(\bar{H},\bar{H})=2,\ \ \ B_{\lambda}(H_{i},\bar{H})=0,\ B_{\lambda}(\bar{H}_{i},\bar{H})=0,\] for \(1\leq i,j\leq m\). The \(\mathfrak{h}\)-module \(C_{\lambda}\) can be realized as \(\mathbb{C}[\xi_{1},...,\xi_{m+1}]\) with the following action of \(\mathfrak{h}_{\overline{1}}\): \(H_{i}\) acts by multiplication by \(\xi_{i}\), \(\bar{H}_{i}\) acts by the odd derivation \(\partial_{\xi_{i}}\), and \(\bar{H}\) acts by \(\xi_{m+1}+\partial_{\xi_{m+1}}\).

## 3. General Construction of \(\mathfrak{g}(\mathcal{A})\)

In this section we give a construction, in the spirit of the Kac-Moody approach, for Lie superalgebras admitting a quasi-toral Cartan subalgebra. It is a generalization of the constructions given in [K1] and [K3] for Lie algebras, and [K2] for Lie superalgebras.

### Cartan datum and \(\tilde{\mathfrak{g}}(\mathcal{A})\)

**Definition 3.1**.: A Cartan datum \(\mathcal{A}\) consists of the following information:

1. A finite-dimensional quasi-toral Lie superalgebra \(\mathfrak{h}\). We write \(\mathfrak{t}:=\mathfrak{h}_{\overline{0}}\) and \([-,-]_{\mathfrak{h}}\) for the Lie bracket on \(\mathfrak{h}\);
2. A linearly independent subset \(\Pi=\{\alpha_{1},...,\alpha_{n}\}\subseteq\mathfrak{t}^{*}\);
3.
For each \(\alpha\in\pm\Pi\), an \(\mathfrak{h}\)-module \(\mathfrak{g}_{\alpha}\) in \(\mathcal{F}_{\alpha}\); i.e., as a \(\mathfrak{t}\)-module, \(\mathfrak{g}_{\alpha}\) is a weight space of weight \(\alpha\). We write \(m_{\alpha}:\mathfrak{h}\times\mathfrak{g}_{\alpha}\to\mathfrak{g}_{\alpha}\) for the action of \(\mathfrak{h}\) on \(\mathfrak{g}_{\alpha}\);
4. For each \(\alpha\in\Pi\), a nondegenerate morphism (see Section 2.4) of \(\mathfrak{h}\)-modules \([-,-]_{\alpha}:\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\to\mathfrak{h}\).

**Remark 3.1**.: Observe that for any \(\alpha\in\Pi\), the morphism \([-,-]_{\alpha}\) must vanish on the submodule generated by odd elements in the radical (as an \(\mathfrak{h}\)-module) of \(\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\).

To a Cartan datum \(\mathcal{A}\) we associate a Lie superalgebra \(\tilde{\mathfrak{g}}(\mathcal{A})\), which is generated by the super vector space \[\mathcal{C}=\bigoplus_{i=1}^{n}\mathfrak{g}_{-\alpha_{i}}\oplus\mathfrak{h}\oplus\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}, \tag{5}\] subject to the following relations:

1. For any \(h_{1},h_{2}\in\mathfrak{h}\) we have \([h_{1},h_{2}]=[h_{1},h_{2}]_{\mathfrak{h}}\).
2. For any \(h\in\mathfrak{h}\), \(\alpha\in\pm\Pi\) and \(x\in\mathfrak{g}_{\alpha}\), we have \([h,x]=m_{\alpha}(h,x)\).
3. For any \(\alpha,\beta\in\Pi\) with \(x\in\mathfrak{g}_{\alpha}\) and \(y\in\mathfrak{g}_{-\beta}\), we have \([x,y]=[x,y]_{\alpha}\) if \(\alpha=\beta\) and \([x,y]=0\) otherwise.

We denote by \(\tilde{\mathfrak{n}}^{+}\) and \(\tilde{\mathfrak{n}}^{-}\) the subalgebras of \(\tilde{\mathfrak{g}}(\mathcal{A})\) generated by \(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}\) and \(\bigoplus_{i=1}^{n}\mathfrak{g}_{-\alpha_{i}}\), respectively. We set \(Q_{+}:=\mathbb{Z}_{\geq 0}\Pi\). We obtain the following theorem, whose statement and proof are direct generalizations of Theorem 1.2 in [K1].

**Theorem 3.1**.: _Let \(\tilde{\mathfrak{g}}:=\tilde{\mathfrak{g}}(\mathcal{A})\) and \(\tilde{\mathfrak{n}}^{\pm}\) be as above. Then_

1. _The subalgebras_ \(\tilde{\mathfrak{n}}^{+}\) _and_ \(\tilde{\mathfrak{n}}^{-}\) _are freely generated as Lie superalgebras by_ \(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}\) _and_ \(\bigoplus_{i=1}^{n}\mathfrak{g}_{-\alpha_{i}}\)_, respectively;_
2. \(\tilde{\mathfrak{g}}=\tilde{\mathfrak{n}}^{-}\oplus\mathfrak{h}\oplus\tilde{\mathfrak{n}}^{+}\) _as super vector spaces;_
3. _As a_ \(\mathfrak{t}\)_-module,_ \(\tilde{\mathfrak{g}}\) _admits a root space decomposition:_ \[\tilde{\mathfrak{g}}=\bigoplus_{\begin{subarray}{c}\alpha\in Q_{+}\\ \alpha\neq 0\end{subarray}}\tilde{\mathfrak{g}}_{-\alpha}\oplus\mathfrak{h}\oplus\bigoplus_{\begin{subarray}{c}\alpha\in Q_{+}\\ \alpha\neq 0\end{subarray}}\tilde{\mathfrak{g}}_{\alpha}.\] _In particular,_ \(\mathfrak{h}\) _is the centralizer of_ \(\mathfrak{t}\) _in_ \(\tilde{\mathfrak{g}}\)_, and is self-normalizing;_
4. \(\tilde{\mathfrak{g}}\) _contains a unique maximal ideal_ \(\mathfrak{r}\) _that intersects_ \(\mathfrak{h}\) _trivially; it satisfies_ \(\mathfrak{r}=(\mathfrak{r}\cap\tilde{\mathfrak{n}}^{+})\oplus(\mathfrak{r}\cap\tilde{\mathfrak{n}}^{-})\)_._

Proof.: Let \(W\) be any representation of \(\mathfrak{h}\), and consider the super vector space \(V=\mathcal{T}(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}})\otimes W\). The space \(V\) is naturally graded, with \(V_{s}=(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}})^{\otimes s}\otimes W\).
We define a representation \(\pi_{W}:\tilde{\mathfrak{g}}\to\operatorname{End}(V)\) by the following action of the generators of \(\tilde{\mathfrak{g}}\). Let \(e\in\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}\), \(f\in\bigoplus_{i=1}^{n}\mathfrak{g}_{-\alpha_{i}}\), and \(h\in\mathfrak{h}\) be homogeneous elements. We let \(f\) act on \(V_{0}\) trivially and \(h\) act on \(V_{0}=W\) by the existing action of \(\mathfrak{h}\) on \(W\). For \(a\in V\), we define inductively: \[e(a)=e\otimes a,\] \[h(e\otimes a)=[h,e]\otimes a+(-1)^{\bar{h}\bar{e}}e\otimes h(a)\] \[f(e\otimes a)=[f,e]\otimes a+(-1)^{\bar{f}\bar{e}}e\otimes f(a).\] To show this defines an action of \(\tilde{\mathfrak{g}}\) on \(V\), we need to check it respects the defining relations imposed on \(\tilde{\mathfrak{g}}\), which is an immediate yet technical generalization of the proof of Theorem 1.2 in [K1]. We demonstrate the general argument of the proof by showing, inductively, that \([h,f](a)=(hf-(-1)^{\bar{h}\bar{f}}fh)(a)\) for all \(a\in V\). Assuming this equality holds for \(a\in V_{s}\), then \[(hf-(-1)^{\bar{h}\bar{f}}fh)(e\otimes a)\] \[=h[f,e](a)+(-1)^{\bar{f}\bar{e}}h(e\otimes f(a))-(-1)^{\bar{h}\bar{f}}f([h,e]\otimes a)-(-1)^{\bar{h}(\bar{f}+\bar{e})}f(e\otimes h(a))\] \[=h[f,e](a)+(-1)^{\bar{f}\bar{e}}[h,e]\otimes f(a)+(-1)^{(\bar{f}+\bar{h})\bar{e}}e\otimes hf(a)-(-1)^{\bar{h}\bar{f}}[f,[h,e]](a)\] \[-(-1)^{\bar{e}\bar{f}}[h,e]\otimes f(a)-(-1)^{\bar{h}(\bar{f}+\bar{e})}[f,e]h(a)-(-1)^{\bar{e}(\bar{f}+\bar{h})+\bar{h}\bar{f}}e\otimes fh(a)\] \[=[h,[f,e]](a)-(-1)^{\bar{h}\bar{f}}[f,[h,e]](a)+(-1)^{\bar{e}(\bar{f}+\bar{h})}e\otimes(hf-(-1)^{\bar{h}\bar{f}}fh)(a)\] \[=[[h,f],e](a)+(-1)^{\bar{e}(\bar{f}+\bar{h})}e\otimes[h,f](a)=[h,f](e\otimes a).\] The equality \([h,f](a)=(hf-(-1)^{\bar{h}\bar{f}}fh)(a)\) for \(a\in V\) follows by induction. As the representation \(\pi_{W}\) is defined for every \(\mathfrak{h}\)-module \(W\), we obtain that \(\mathfrak{h}\) injects into \(\tilde{\mathfrak{g}}\). Moreover, the action of \(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}\) via \(\pi_{W}\) shows that \(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}\) injects into \(\tilde{\mathfrak{g}}\). Let \(W_{0}\) denote the trivial \(\mathfrak{h}\)-module. The map \(\phi:\tilde{\mathfrak{n}}^{+}\to\mathcal{T}(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}})\) given by \(n\mapsto\pi_{W_{0}}(n)(1)\) establishes \(\mathcal{T}(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}})\) as an enveloping algebra of \(\tilde{\mathfrak{n}}^{+}\), and is easily verified to be its universal enveloping algebra. In particular, \(\tilde{\mathfrak{n}}^{+}\) is freely generated by \(\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}\). The result for \(\tilde{\mathfrak{n}}^{-}\) is analogous. This proves 1. We certainly have \(\tilde{\mathfrak{g}}=\tilde{\mathfrak{n}}^{-}+\mathfrak{h}+\tilde{\mathfrak{n}}^{+}\). Suppose \(n^{-}+h+n^{+}=0\) for some \(n^{\pm}\in\tilde{\mathfrak{n}}^{\pm}\) and \(h\in\mathfrak{h}\). Then \(0=\pi_{W}(n^{-}+h+n^{+})(a)\) for \(a\in V_{0}=W\). For \(W=W_{0}\) we obtain \(\phi(n^{+})=\pi_{W_{0}}(n^{-}+h+n^{+})(1)=0\), so \(n^{+}=0\). Now \(0=\pi_{W}(n^{-}+h)(a)=\pi_{W}(h)(a)\) for all \(a\in V_{0}=W\), which implies \(h=0\). We conclude that \(n^{-}=0\), which establishes 2. Part 3 is an immediate consequence of 2.
Finally, a standard linear algebraic argument gives that any ideal \(I\) of \(\tilde{\mathfrak{g}}\) decomposes as a \(\mathfrak{t}\)-module in the following way: \[I=\bigoplus_{\begin{subarray}{c}\alpha\in Q_{+}\\ \alpha\neq 0\end{subarray}}(I\cap\tilde{\mathfrak{g}}_{-\alpha})\oplus(I\cap\mathfrak{h})\oplus\bigoplus_{\begin{subarray}{c}\alpha\in Q_{+}\\ \alpha\neq 0\end{subarray}}(I\cap\tilde{\mathfrak{g}}_{\alpha})=(I\cap\tilde{\mathfrak{n}}^{-})\oplus(I\cap\mathfrak{h})\oplus(I\cap\tilde{\mathfrak{n}}^{+}). \tag{6}\] Therefore, if \(I\) intersects \(\mathfrak{h}\) trivially then \(I=(I\cap\tilde{\mathfrak{n}}^{-})\oplus(I\cap\tilde{\mathfrak{n}}^{+})\). Summing over all such ideals of \(\tilde{\mathfrak{g}}\), we get a unique maximal ideal \(\mathfrak{r}\) intersecting \(\mathfrak{h}\) trivially, which satisfies \(\mathfrak{r}=(\mathfrak{r}\cap\tilde{\mathfrak{n}}^{-})\oplus(\mathfrak{r}\cap\tilde{\mathfrak{n}}^{+})\), according to (6). This proves part 4.

**Remark 3.2**.: By the same argument, Theorem 3.1 also holds in the case that \(\mathfrak{g}_{\pm\alpha_{i}}\) are generalized weight spaces of weight \(\pm\alpha_{i}\) as \(\mathfrak{t}\)-modules.

The following lemma is a direct generalization of Lemma 1.5 in [K1], with the same proof.

**Lemma 3.1**.: _If \(x\in\tilde{\mathfrak{n}}^{+}\) is such that \([\mathfrak{g}_{-\alpha},x]\in\mathfrak{r}\) for all \(\alpha\in\Pi\), then \(x\in\mathfrak{r}\). Analogously, if \(y\in\tilde{\mathfrak{n}}^{-}\) satisfies \([\mathfrak{g}_{\alpha},y]\in\mathfrak{r}\) for all \(\alpha\in\Pi\), then \(y\in\mathfrak{r}\)._

### Chevalley automorphism of \(\tilde{\mathfrak{g}}(\mathcal{A})\)

Recall the automorphism \(\omega_{\mathfrak{h}}\) of \(\mathfrak{h}\) defined in Section 2.2.

**Definition 3.2**.: Let \(\mathcal{A}\) be a Cartan datum; a Chevalley automorphism of \(\tilde{\mathfrak{g}}(\mathcal{A})\) is an extension \(\tilde{\omega}\) of \(\omega_{\mathfrak{h}}\) to an automorphism of all of \(\tilde{\mathfrak{g}}(\mathcal{A})\).

In contrast with the usual Kac-Moody construction, the Lie superalgebra \(\tilde{\mathfrak{g}}(\mathcal{A})\) is not guaranteed to admit a Chevalley automorphism; in particular, a necessary condition is that \(\mathfrak{g}_{-\alpha}\cong\mathfrak{g}_{\alpha}^{\vee}\) for all roots \(\alpha\). We now describe a condition which ensures the existence of such an automorphism. Suppose \(\mathcal{A}\) is a Cartan datum for which we have identifications \(\mathfrak{g}_{-\alpha}=\mathfrak{g}_{\alpha}^{\vee}\) for all \(\alpha\in\Pi\), where \(\mathfrak{g}_{\alpha}^{\vee}\) is the twist by the automorphism \(\omega_{\mathfrak{h}}^{-1}\) as introduced in Section 2.2. For each \(\alpha\in\Pi\), let \(\omega_{\alpha}:\mathfrak{g}_{\alpha}\to\mathfrak{g}_{\alpha}^{\vee}\) be the identity map of the underlying super vector spaces. We will also write \(x^{\vee}:=\omega_{\alpha}(x)\) for \(x\in\mathfrak{g}_{\alpha}\). We set \(\omega_{-\alpha}:=\delta_{\mathfrak{g}_{\alpha}}\circ\omega_{\alpha}^{-1}:\mathfrak{g}_{\alpha}^{\vee}\to\mathfrak{g}_{\alpha}\), where \(\delta_{\mathfrak{g}_{\alpha}}(v)=(-1)^{\overline{v}}v\) is the grading operator on \(\mathfrak{g}_{\alpha}\).
It follows immediately that for \(\alpha\in\Pi\), \(x\in\mathfrak{g}_{\alpha}\), \(y\in\mathfrak{g}_{-\alpha}\) and \(h\in\mathfrak{h}\) we have \[\omega_{\alpha}([h,x])=[\omega_{\mathfrak{h}}(h),\omega_{\alpha}(x)],\ \ \omega_{-\alpha}([h,y])=[\omega_{\mathfrak{h}}(h),\omega_{-\alpha}(y)].\]

**Theorem 3.2**.: _Let \(\mathcal{A}\) be a Cartan datum satisfying \(\mathfrak{g}_{-\alpha}=\mathfrak{g}_{\alpha}^{\vee}\) for all \(\alpha\in\Pi\). Assume \(\omega_{\mathfrak{h}}([x,y])=[\omega_{\alpha}(x),\omega_{-\alpha}(y)]\) for all \(x\in\mathfrak{g}_{\alpha}\) and \(y\in\mathfrak{g}_{-\alpha}\), for all \(\alpha\in\Pi\). Then there exists a Chevalley automorphism \(\tilde{\omega}\) of \(\tilde{\mathfrak{g}}(\mathcal{A})\) of order \(4\), which preserves \(\mathfrak{r}\) and satisfies \(\tilde{\omega}|_{\mathfrak{h}}=\omega_{\mathfrak{h}}\) and \(\tilde{\omega}|_{\mathfrak{g}_{\alpha}}=\omega_{\alpha}\) for \(\alpha\in\pm\Pi\)._

Proof.: Let \(\omega_{\mathcal{C}}:=\bigoplus_{i=1}^{n}\omega_{-\alpha_{i}}\oplus\omega_{\mathfrak{h}}\oplus\bigoplus_{i=1}^{n}\omega_{\alpha_{i}}\) be the endomorphism of the super vector space \(\mathcal{C}\) defined in (5). It is immediate that \(\omega_{\mathcal{C}}\) is an automorphism and that \(\omega_{\mathcal{C}}^{2}=\delta_{\mathcal{C}}\), where \(\delta_{\mathcal{C}}\) is the grading operator of \(\mathcal{C}\). We wish to extend \(\omega_{\mathcal{C}}\) to a Lie superalgebra endomorphism \(\tilde{\omega}\) of \(\tilde{\mathfrak{g}}(\mathcal{A})\). By assumption we know that \(\omega_{\mathcal{C}}([x,y])=[\omega_{\mathcal{C}}(x),\omega_{\mathcal{C}}(y)]\) whenever \(x,y,[x,y]\in\mathcal{C}\). Moreover, because \(\omega_{\mathcal{C}}\) is an even linear map, we have \[[\omega_{\mathcal{C}}(x),\omega_{\mathcal{C}}(y)]+(-1)^{\bar{x}\bar{y}}[\omega_{\mathcal{C}}(y),\omega_{\mathcal{C}}(x)]=0\] and \[[[\omega_{\mathcal{C}}(x),\omega_{\mathcal{C}}(y)],\omega_{\mathcal{C}}(z)]=[\omega_{\mathcal{C}}(x),[\omega_{\mathcal{C}}(y),\omega_{\mathcal{C}}(z)]]+(-1)^{\bar{x}\bar{y}}[\omega_{\mathcal{C}}(y),[\omega_{\mathcal{C}}(x),\omega_{\mathcal{C}}(z)]]\] for homogeneous \(x,y,z\in\mathcal{C}\). It follows that we may extend \(\omega_{\mathcal{C}}\) to an endomorphism \(\tilde{\omega}\) of \(\tilde{\mathfrak{g}}(\mathcal{A})\) by requiring that \(\tilde{\omega}([x,y])=[\tilde{\omega}(x),\tilde{\omega}(y)]\) for \(x,y\in\tilde{\mathfrak{g}}(\mathcal{A})\) and \(\tilde{\omega}|_{\mathcal{C}}:=\omega_{\mathcal{C}}\). From the definition of \(\omega_{\mathcal{C}}\) it follows that \(\tilde{\omega}^{2}=\delta_{\tilde{\mathfrak{g}}(\mathcal{A})}\), where \(\delta_{\tilde{\mathfrak{g}}(\mathcal{A})}\) is the grading operator of \(\tilde{\mathfrak{g}}(\mathcal{A})\). In particular, we obtain that \(\tilde{\omega}\) is an automorphism of order \(4\). It is immediate that \(\tilde{\omega}(\tilde{\mathfrak{g}}_{\alpha})=\tilde{\mathfrak{g}}_{-\alpha}\) for any \(\alpha\in\pm Q_{+}\). This implies \(\tilde{\omega}(\mathfrak{r})\cap\mathfrak{h}=0\), so \(\tilde{\omega}(\mathfrak{r})\subseteq\mathfrak{r}\). A similar argument gives \(\tilde{\omega}^{-1}(\mathfrak{r})\subseteq\mathfrak{r}\). Altogether, we obtain \(\tilde{\omega}(\mathfrak{r})=\mathfrak{r}\).

**Proposition 3.1**.: _Let \(\mathcal{A}\) be a Cartan datum such that \(\mathfrak{g}_{-\alpha}=\mathfrak{g}_{\alpha}^{\vee}\) for all \(\alpha\in\Pi\). Assume, moreover, that \(\mathfrak{g}_{\alpha}\) is irreducible and that \(\operatorname{rk}\alpha\leq 2\) for all \(\alpha\in\Pi\).
Then the conditions of Theorem 3.2 hold, and thus \(\tilde{\mathfrak{g}}(\mathcal{A})\) admits a Chevalley automorphism._

Proof.: Fix a simple root \(\alpha\in\Pi\). If \(\operatorname{rk}\alpha=0\), then \(\mathfrak{g}_{\alpha}\) is \(1\)-dimensional with basis \(\{v\}\). Then \(\{v^{\vee}\}\) is a basis for \(\mathfrak{g}_{-\alpha}\) (where \(v^{\vee}=\omega_{\alpha}(v)\)) and \([v,v^{\vee}]\in\mathfrak{t}\). Therefore \[\omega_{\mathfrak{h}}([v,v^{\vee}])=-[v,v^{\vee}]=-[(-1)^{\bar{v}}\omega_{-\alpha}(v^{\vee}),\omega_{\alpha}(v)]=[\omega_{\alpha}(v),\omega_{-\alpha}(v^{\vee})].\] If \(\operatorname{rk}\alpha=1\) or \(2\), then \(\mathfrak{g}_{\alpha}\) is \((1|1)\)-dimensional. Let \(\{v,w\}\) be a homogeneous basis for \(\mathfrak{g}_{\alpha}\). From the irreducibility of \(\mathfrak{g}_{\alpha}\), there exists \(H\in\mathfrak{h}_{\overline{1}}\) such that \(H\cdot v=w\). As \([v,v^{\vee}]\in\mathfrak{t}\), we have \([\mathfrak{h},[v,v^{\vee}]]=0\). So \[0=[H,[v,v^{\vee}]]=[H\cdot v,v^{\vee}]+(-1)^{\bar{v}}[v,H\cdot v^{\vee}]=[w,v^{\vee}]-\sqrt{-1}\cdot(-1)^{\bar{v}}[v,w^{\vee}].\] As \([w,v^{\vee}]\in\mathfrak{h}_{\overline{1}}\), we must have \[\omega_{\mathfrak{h}}([w,v^{\vee}])=\sqrt{-1}[w,v^{\vee}]=-(-1)^{\bar{v}}[v,w^{\vee}]=-[\omega_{-\alpha}(v^{\vee}),\omega_{\alpha}(w)]=[\omega_{\alpha}(w),\omega_{-\alpha}(v^{\vee})].\] A similar computation gives \(\omega_{\mathfrak{h}}([v,w^{\vee}])=[\omega_{\alpha}(v),\omega_{-\alpha}(w^{\vee})]\). Finally, the computation in the \(\operatorname{rk}\alpha=0\) case gives \(\omega_{\mathfrak{h}}([v,v^{\vee}])=[\omega_{\alpha}(v),\omega_{-\alpha}(v^{\vee})]\) and \(\omega_{\mathfrak{h}}([w,w^{\vee}])=[\omega_{\alpha}(w),\omega_{-\alpha}(w^{\vee})]\). The claim follows by linearity.

### The Lie superalgebra \(\mathfrak{g}(\mathcal{A})\)

We define \(\mathfrak{g}(\mathcal{A})\coloneqq\tilde{\mathfrak{g}}(\mathcal{A})/\mathfrak{r}\), the quotient of \(\tilde{\mathfrak{g}}(\mathcal{A})\) by the maximal ideal intersecting \(\mathfrak{h}\) trivially, as in Theorem 3.1. We say \(\mathfrak{g}(\mathcal{A})\) is the Lie superalgebra associated to the Cartan datum \(\mathcal{A}\). We also write \(\mathfrak{g}\) instead of \(\mathfrak{g}(\mathcal{A})\) when clear from context.

**Lemma 3.2**.: _The natural map_ \[\mathcal{C}=\bigoplus_{i=1}^{n}\mathfrak{g}_{-\alpha_{i}}\oplus\mathfrak{h}\oplus\bigoplus_{i=1}^{n}\mathfrak{g}_{\alpha_{i}}\to\mathfrak{g}(\mathcal{A})\] _is an embedding of super vector spaces. Thus we may identify \(\mathfrak{h}\) and \(\mathfrak{g}_{\alpha}\) for \(\alpha\in\pm\Pi\) with their images in \(\mathfrak{g}(\mathcal{A})\)._

Proof.: This immediately follows from the definition of \(\mathfrak{r}\) and the assumption of nondegeneracy on our maps \(\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\to\mathfrak{h}\).

Since \(\mathfrak{t}\) acts on \(\mathfrak{g}\) semisimply, we have a weight space decomposition \(\mathfrak{g}=\bigoplus_{\alpha\in\mathfrak{t}^{*}}\mathfrak{g}_{\alpha}\). We call \(\alpha\in\mathfrak{t}^{*}\setminus\{0\}\) a root of \(\mathfrak{g}\) if \(\mathfrak{g}_{\alpha}\neq 0\). We denote by \(\Delta\) the set of roots of \(\mathfrak{g}\), and set \(\Delta_{+}\coloneqq\Delta\cap Q_{+}\). If \(\alpha=\sum_{i=1}^{n}k_{i}\alpha_{i}\) is a root for some \(k_{i}\in\mathbb{Z}_{\geq 0}\), we say \(\alpha\) is of height \(\sum_{i=1}^{n}k_{i}\). We denote by \(\mathfrak{n}^{+}\) and \(\mathfrak{n}^{-}\) the images of \(\tilde{\mathfrak{n}}^{+}\) and \(\tilde{\mathfrak{n}}^{-}\) in \(\mathfrak{g}\), respectively.
Theorem 3.1 implies the triangular decomposition \(\mathfrak{g}=\mathfrak{n}^{-}\oplus\mathfrak{h}\oplus\mathfrak{n}^{+}\) as a super vector space and the decomposition \[\mathfrak{g}=\bigoplus_{\alpha\in\Delta_{+}}\mathfrak{g}_{-\alpha}\oplus\mathfrak{h}\oplus\bigoplus_{\alpha\in\Delta_{+}}\mathfrak{g}_{\alpha}\] as a \(\mathfrak{t}\)-module. We notice that \(\mathfrak{g}_{\alpha}\) is an \(\mathfrak{h}\)-module for any \(\alpha\in Q_{+}\), so the decomposition also holds as an \(\mathfrak{h}\)-module. Further, \(\mathfrak{h}\) is the centralizer of \(\mathfrak{t}\) in \(\mathfrak{g}(\mathcal{A})\), and is self-normalizing.

**Definition 3.3**.: For a root \(\alpha\in\Delta\), we call \(\mathfrak{h}_{\alpha}\coloneqq[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]\) the _coroot-space_ corresponding to \(\alpha\); it is an ideal of \(\mathfrak{h}\). An element of \(\mathfrak{h}_{\alpha}\) shall be called an \(\alpha\)_-coroot_, or a coroot corresponding to \(\alpha\). We say an \(\alpha\)-coroot \(h\) is _pure_ if \(h=[x,y]\) for homogeneous non-zero \(x\in\mathfrak{g}_{\alpha}\) and \(y\in\mathfrak{g}_{-\alpha}\).

**Proposition 3.2**.: _Let \(\Pi_{1},\Pi_{2}\subseteq\Pi\) be disjoint subsets such that_ \[[\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=[\mathfrak{h}_{\beta},\mathfrak{g}_{\alpha}]=0\] _for any \(\alpha\in\Pi_{1}\), \(\beta\in\Pi_{2}\). Let \(Q_{s}^{+}=\mathbb{Z}_{\geq 0}\Pi_{s}\) for \(s=1,2\). If \(\theta\in\Delta\) is a root of \(\mathfrak{g}\) that satisfies \(\theta\in Q_{1}^{+}+Q_{2}^{+}\), then necessarily \(\theta\in Q_{1}^{+}\) or \(\theta\in Q_{2}^{+}\)._

Proof.: Let us show that \([\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]=0\) for \(\alpha\in\Pi_{1}\), \(\beta\in\Pi_{2}\). The Jacobi identity implies \[[\mathfrak{g}_{-\alpha},[\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]]\subseteq[[\mathfrak{g}_{-\alpha},\mathfrak{g}_{\alpha}],\mathfrak{g}_{\beta}]+[\mathfrak{g}_{\alpha},[\mathfrak{g}_{-\alpha},\mathfrak{g}_{\beta}]].\] But \([[\mathfrak{g}_{-\alpha},\mathfrak{g}_{\alpha}],\mathfrak{g}_{\beta}]=0\) by assumption and \([\mathfrak{g}_{-\alpha},\mathfrak{g}_{\beta}]=0\) by the defining relations of \(\mathfrak{g}\). So \([\mathfrak{g}_{-\alpha},[\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]]=0\). A similar argument shows \([\mathfrak{g}_{-\beta},[\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]]=0\). We certainly have \([\mathfrak{g}_{-\gamma},[\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]]=0\) for any \(\gamma\in\Pi\setminus\{\alpha,\beta\}\). As \([\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]\subseteq\mathfrak{n}^{+}\) and \([\mathfrak{g}_{-\gamma},[\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]]=0\) for any \(\gamma\in\Pi\), Corollary 3.2 implies \([\mathfrak{g}_{\alpha},\mathfrak{g}_{\beta}]=0\). Let \(\mathfrak{g}^{(s)}\) be the subalgebra of \(\mathfrak{g}\) generated by \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\) for \(\alpha\in\Pi_{s}\). Then what we have shown so far implies \([\mathfrak{g}^{(1)},\mathfrak{g}^{(2)}]=0\). Because \(\theta\in Q_{1}^{+}+Q_{2}^{+}\) we must have that \(\mathfrak{g}_{\theta}\) is contained in the algebra generated by \(\mathfrak{g}^{(1)}\) and \(\mathfrak{g}^{(2)}\). But then \(\mathfrak{g}_{\theta}\) is contained in either \(\mathfrak{g}^{(1)}\) or \(\mathfrak{g}^{(2)}\), which means \(\theta\in Q_{1}^{+}\) or \(\theta\in Q_{2}^{+}\).

We now state several obvious corollaries:

**Corollary 3.1**.: _Let \(\mathfrak{c}\) be the center of \(\mathfrak{g}\)._
_Then \(\mathfrak{c}\subseteq\bigcap_{i=1}^{n}(\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha_{i}}\cap\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{-\alpha_{i}})\)._ (This generalizes Lemma 1.6 in [K1].)

Lemma 3.1 immediately implies the following corollary:

**Corollary 3.2**.: _If \(x\in\mathfrak{n}^{+}\) satisfies \([\mathfrak{g}_{-\alpha},x]=0\) for all \(\alpha\in\Pi\), then \(x=0\). Similarly, if \(y\in\mathfrak{n}^{-}\) is such that \([\mathfrak{g}_{\alpha},y]=0\) for all \(\alpha\in\Pi\), then \(y=0\)._

Theorem 3.2 and Proposition 3.1 give the following corollary:

**Corollary 3.3**.: _Let \(\mathcal{A}\) be a Cartan datum satisfying \(\mathfrak{g}_{-\alpha}=\mathfrak{g}_{\alpha}^{\vee}\) for all \(\alpha\in\Pi\). Assume_ \[\omega_{\mathfrak{h}}([x,y])=[\omega_{\alpha}(x),\omega_{-\alpha}(y)] \tag{7}\] _for all \(x\in\mathfrak{g}_{\alpha}\) and \(y\in\mathfrak{g}_{-\alpha}\), for all \(\alpha\in\Pi\) (see Section 3.2 for notation). Then \(\mathfrak{g}(\mathcal{A})\) admits a Chevalley automorphism \(\omega\) with \(\omega^{2}=\delta\), where \(\delta(v)=(-1)^{\overline{v}}v\)._

_In particular, if the \(\mathfrak{h}\)-modules \(\mathfrak{g}_{\alpha}\) are irreducible and \(\operatorname{rk}\alpha\leq 2\) for all \(\alpha\in\Pi\), then (7) holds._

### Cartan subdata and morphisms

Let \(\mathfrak{h}^{\prime}\subseteq\mathfrak{h}\) be any Lie subalgebra of \(\mathfrak{h}\), and for each \(\alpha\in\pm\Pi\) choose \(\mathfrak{h}^{\prime}\)-submodules \(\mathfrak{g}_{\alpha}^{\prime}\subseteq\mathfrak{g}_{\alpha}\) such that \([\mathfrak{g}_{\alpha}^{\prime},\mathfrak{g}_{-\alpha}^{\prime}]\subseteq\mathfrak{h}^{\prime}\). Then we obtain what we call a _Cartan subdatum_ \(\mathcal{A}^{\prime}\) of \(\mathcal{A}\), which by definition consists of the above information, and constitutes a Cartan datum of its own apart from possibly failing the condition that the map \(\mathfrak{g}_{\alpha}^{\prime}\otimes\mathfrak{g}_{-\alpha}^{\prime}\to\mathfrak{h}^{\prime}\) is nondegenerate. Nevertheless, we may define \(\tilde{\mathfrak{g}}(\mathcal{A}^{\prime})\) in the natural way, and we will have an obvious map \(\tilde{\mathfrak{g}}(\mathcal{A}^{\prime})\to\mathfrak{g}(\mathcal{A})\). There are two distinguished types of Cartan subdata. Let \(\Pi^{\prime}\subseteq\Pi\) be a subset of simple roots. Then we obtain a natural Cartan subdatum \(\mathcal{A}^{\prime}\coloneqq\mathcal{A}_{\Pi^{\prime}}\) of \(\mathcal{A}\) with \(\mathfrak{h}^{\prime}=\mathfrak{h}\), and \(\mathfrak{g}_{\alpha}^{\prime}=\mathfrak{g}_{\alpha}\) if \(\pm\alpha\in\Pi^{\prime}\), and \(\mathfrak{g}_{\alpha}^{\prime}=0\) otherwise. In this case \(\mathcal{A}^{\prime}\) will be a Cartan datum in its own right. We call this a _full_ Cartan subdatum. The other important case is when \(\mathfrak{h}^{\prime}=\mathfrak{t}\), and we choose vectors \(e_{1},\dots,e_{n},f_{1},\dots,f_{n}\) with \(e_{i}\in\mathfrak{g}_{\alpha_{i}}\), \(f_{i}\in\mathfrak{g}_{-\alpha_{i}}\), \(e_{i},f_{i}\) of the same parity. We call a Cartan subdatum of this form _classical_. Setting \(h_{i}\coloneqq[e_{i},f_{i}]\), we obtain in this way a Cartan matrix \(A^{\prime}=(\alpha_{j}(h_{i}))\), and with it a natural map \(\tilde{\mathfrak{g}}^{\prime}(A^{\prime})\to\mathfrak{g}(\mathcal{A})\), where \(\tilde{\mathfrak{g}}^{\prime}(A^{\prime})\) is the derived subalgebra of \(\tilde{\mathfrak{g}}(A^{\prime})\). The following result will be of use later on.
**Corollary 3.4**.: _If there exists a classical Cartan subdatum of \(\mathcal{A}\) with Cartan matrix \(A^{\prime}\) for which \(\mathfrak{g}(A^{\prime})\) is not of finite growth, then \(\mathfrak{g}(\mathcal{A})\) is also not of finite growth._

Proof.: By the setup described above, we have a map \(\tilde{\mathfrak{g}}^{\prime}(A^{\prime})\to\mathfrak{g}(\mathcal{A})\). The image of this map admits as a subquotient the quotient of \(\mathfrak{g}^{\prime}(A^{\prime})\) by its center, where \(\mathfrak{g}^{\prime}(A^{\prime})\) is the derived subalgebra of \(\mathfrak{g}(A^{\prime})\). Since this quotient is of finite growth if and only if \(\mathfrak{g}(A^{\prime})\) is, we are done.

## 4. Clifford Kac-Moody algebras

**Definition 4.1**.: We say that a Cartan datum \(\mathcal{A}\) (and by extension, the Lie superalgebra \(\mathfrak{g}(\mathcal{A})\)) is integrable if for any \(\alpha,\beta\in\pm\Pi\) there exists \(n\in\mathbb{N}\) such that \((\operatorname{ad}\mathfrak{g}_{\alpha})^{n}\mathfrak{g}_{\beta}=0\).

**Definition 4.2**.: We call \(\mathfrak{g}(\mathcal{A})\) Clifford Kac-Moody if it satisfies the following properties:

1. The \(\mathfrak{h}\)-module \(\mathfrak{g}_{\alpha}\) is irreducible for any \(\alpha\in\pm\Pi\); and
2. \(\mathcal{A}\) is integrable.

In this situation, we will also say that the Cartan datum \(\mathcal{A}\) is Clifford Kac-Moody.

**Remark 4.1**.: For any full Cartan subdatum \(\mathcal{A}^{\prime}\) of \(\mathcal{A}\) (see Section 3.4), \(\mathcal{A}^{\prime}\) is Clifford Kac-Moody whenever \(\mathcal{A}\) is.

**Remark 4.2**.: For symmetrizable Kac-Moody superalgebras admitting odd isotropic simple roots, one is led to consider non-conjugate positive systems obtained via odd reflections. Thus a better-behaved notion of integrability is to require that a root vector acts ad-locally integrably whenever it is a simple root vector for some base obtained from some number of odd reflections from a fixed starting one. We ignore this subtlety here, since it is more important when discussing representation theory, which for now we do not discuss.

**Remark 4.3**.: For any \(\alpha\in\Pi\), choose a non-zero pure even coroot \(h_{\alpha}\) and set \(A:=(\beta(h_{\alpha}))_{\alpha,\beta\in\Pi}\). Then the integrability of \(\mathfrak{g}(\mathcal{A})\) implies that if \(A\) is elemental as in Sec. 2 of [HS], then it satisfies the condition of Lem. 3.1 in [HS].

### Examples of Clifford Kac-Moody algebras

**Example 4.1**.: Let \(\mathfrak{g}=\mathfrak{q}(n)\) denote the Lie subalgebra of \(\mathfrak{gl}(n|n)\) consisting of matrices of the form \[\begin{bmatrix}A&B\\ B&A\end{bmatrix},\] where \(A\) and \(B\) are arbitrary. Set \(\mathfrak{h}\subseteq\mathfrak{q}(n)\) to be the subalgebra of matrices of the form \[\begin{bmatrix}D&D^{\prime}\\ D^{\prime}&D\end{bmatrix},\] where \(D,D^{\prime}\) are diagonal. Then \(\mathfrak{h}\) is quasi-toral and self-normalizing in \(\mathfrak{g}\). Let \(\mathfrak{t}=\mathfrak{h}_{\overline{0}}\), the subalgebra of block diagonal matrices, and let \(\epsilon_{1},\dots,\epsilon_{n}\in\mathfrak{t}^{*}\) be the coordinate projections. Then if we set \(\alpha_{i}:=\epsilon_{i}-\epsilon_{i+1}\), we obtain a Cartan datum from \(\mathfrak{h}\) and \(\mathfrak{g}_{\pm\alpha_{1}},\dots,\mathfrak{g}_{\pm\alpha_{n-1}}\), and all roots are of rank \(2\). The root spaces \(\mathfrak{g}_{\alpha_{i}}\) are irreducible, and since integrability is easily checked, we see that \(\mathfrak{g}\) is Clifford Kac-Moody.
Write \(e_{i}:=e_{ii}+e_{n+i,n+i}\) and \(E_{i}:=e_{i,n+i}+e_{n+i,i}\). Then the pure coroots in \(\mathfrak{h}_{\alpha_{i}}\) are given by \(h_{i}:=e_{i}-e_{i+1}\), \(c_{i}:=e_{i}+e_{i+1}\), and \(H_{i}:=E_{i}-E_{i+1}\). It is interesting to note that the pure coroots are distinct for different simple roots. One should view \(\mathfrak{q}(n)\) as the primary, motivating example of a Clifford Kac-Moody algebra, and of our construction at large. **Example 4.2**.: Let \(\mathfrak{g}=\mathfrak{sq}(n)=[\mathfrak{q}(n),\mathfrak{q}(n)]\). Then \(\mathfrak{g}\) has Cartan subalgebra given by \(\mathfrak{h}\) the set of matrices of the form \[\begin{bmatrix}D&D^{\prime}\\ D^{\prime}&D\end{bmatrix},\] where \(\operatorname{tr}(D^{\prime})=0\). The rest of the Cartan datum remains unchanged from that of \(\mathfrak{q}(n)\), and \(\mathfrak{sq}(n)\) is Clifford Kac-Moody whenever \(n\geq 3\) (notice that \(\mathfrak{g}_{\alpha_{1}}\) is not irreducible in \(\mathfrak{sq}(2)\)). **Example 4.3**.: Let \(\mathfrak{g}=\mathfrak{psq}(n):=\mathfrak{sq}(n)/\mathbb{C}I_{n|n}\). Similarly to the previous example, one can verify that \(\mathfrak{g}\) is Clifford Kac-Moody if \(n\geq 3\). **Example 4.4** (Takiff Superalgebras).: An interesting class of examples of Clifford Kac-Moody algebras is obtained via extensions of Takiff superalgebras, which can be described as follows (see also Example 5.1 in [S2]). Let \(\mathfrak{s}\) be a symmetrizable (see Example 4.5 for when \(\mathfrak{s}\) is not symmetrizable) integrable (as in Sec. 3 of [HS]) Kac-Moody superalgebra with invariant form \((-,-)\). We define \[T\mathfrak{s}:=\mathfrak{s}\otimes\mathbb{C}[\xi]/(\xi^{2})\oplus\mathbb{C} \langle\partial_{\xi},c,z\rangle,\] where \(\xi\) and \(\partial_{\xi}\) are odd, \(c\) and \(z\) even and central, and we have the following bracket (note that there are signs missing in the formula for the bracket in [S2]): \[[x\otimes p+a\partial_{\xi},y\otimes q+b\partial_{\xi}]=\] \[(-1)^{\overline{p}\overline{q}}[x,y]\otimes pq+(-1)^{\overline{p} }ay\otimes q^{\prime}+(-1)^{\overline{p}}bx\otimes p^{\prime}+(-1)^{\overline{ p}}(x,y)Res(p^{\prime}q)c+abz\] for \(x,y\in\mathfrak{s}\), \(a,b\in\mathbb{C}\), and \(p,q\in\mathbb{C}[\xi]\). Here we write e.g. \(p^{\prime}:=\partial_{\xi}p\), and \(Res(a+b\xi)=b\). To check that \(\mathfrak{g}:=T\mathfrak{s}\) is Clifford Kac-Moody, let \(\overline{t}\subseteq\mathfrak{s}\) be a maximal torus of \(\mathfrak{s}\), and let \(\mathfrak{h}=\overline{t}\otimes\mathbb{C}[\xi]\oplus\mathbb{C}\langle \partial_{\xi},c,z\rangle\). Then \(\mathfrak{h}\) is quasi-toral and self-normalizing in \(T\mathfrak{s}\). Write \(\mathfrak{t}:=\mathfrak{h}_{\overline{0}}\). If \(\overline{\Pi}=\{\overline{\alpha_{1}},\ldots,\overline{\alpha_{n}}\}\subseteq( \overline{t})^{*}\) denotes the simple roots of \(\mathfrak{s}\), let \(\alpha_{i}\in\mathfrak{t}^{*}\) denote the weight satisfying \(\alpha_{i}(t)=\overline{\alpha_{i}}(t)\) for \(t\in\overline{t}\), and \(\alpha_{i}(c)=\alpha_{i}(z)=0\). Then \(\Pi=\{\alpha_{1},\ldots,\alpha_{n}\}\subseteq\mathfrak{t}^{*}\) is linearly independent, and \(\mathfrak{h}\) along with the root spaces \(\mathfrak{g}_{\pm\alpha_{i}}\) give rise to a Cartan datum. The root spaces \(\mathfrak{g}_{\alpha_{i}}\) are spanned by \(e_{i}\otimes 1,e_{i}\otimes\xi\) where \(e_{i}\) is the Chevalley generator of \(\mathfrak{s}\) for the root \(\overline{\alpha_{i}}\), and it is easy to see this forms an irreducible \(\mathfrak{h}\)-module (with the help of \(\partial_{\xi}\)). 
Finally, \(\mathfrak{g}\) will be integrable because the same is true of \(\mathfrak{s}\). Observe that every simple root of \(T\mathfrak{s}\) is of rank \(2\). The \(\alpha\) coroot-space \(\mathfrak{h}_{\alpha}\) for a simple root \(\alpha\) is given by \(h_{\overline{\alpha}}\otimes\mathbb{C}[\xi]\oplus\mathbb{C}\langle c\rangle\), where \(h_{\overline{\alpha}}\) denotes the coroot in \(\mathfrak{s}\) of the simple root \(\overline{\alpha}\). Thus the pure coroots are \(h_{\overline{\alpha}}\otimes 1\), \(h_{\overline{\alpha}}\otimes\xi\), and \(c\); in particular every coroot space shares a pure coroot, and this coroot is central. As a special case of the above construction, we have that \[T\mathfrak{sl}(2)/(c-z)\cong\mathfrak{q}(2).\] We will see that in some ways the property of two coroot spaces sharing a pure, central coroot is characteristic of the superalgebras \(T\mathfrak{s}\). This construction represents the only known 'general' construction of nontrivial Clifford Kac-Moody algebras, i.e. ones with simple roots of rank bigger than \(0\).

**Example 4.5**.: In the above example, we can drop the condition that \(\mathfrak{s}\) be symmetrizable at the cost of removing the central extension \(c\). We will still obtain a Clifford Kac-Moody algebra in this way, and we will still call this superalgebra \(T\mathfrak{s}\).

**Example 4.6**.: For examples of superalgebras that are not Clifford Kac-Moody, let \(\mathfrak{s}\) be a symmetrizable Kac-Moody Lie superalgebra, and write \(\mathfrak{ts}\) for the subalgebra of \(T\mathfrak{s}\) spanned by \(\mathfrak{s}\otimes\mathbb{C}[\xi]\) and \(c\). If \(\overline{t}\subseteq\mathfrak{s}\) is a maximal torus of \(\mathfrak{s}\), then we set \(\mathfrak{h}=\overline{t}\otimes\mathbb{C}[\xi]\oplus\mathbb{C}\langle c\rangle\) to obtain a self-normalizing quasi-toral subalgebra. However in this case the root spaces of simple roots, spanned by \(e_{i}\otimes 1\) and \(e_{i}\otimes\xi\) for a Chevalley generator \(e_{i}\) of \(\mathfrak{s}\), will not be irreducible \(\mathfrak{h}\)-modules because we no longer have the derivation \(\partial_{\xi}\). Thus \(\mathfrak{ts}\) is not Clifford Kac-Moody (however it does arise naturally from our more general construction, as \(\mathfrak{ts}\) contains no nontrivial ideals that intersect \(\mathfrak{h}\) trivially). Another example is obtained by considering \(\mathfrak{pts}:=\mathfrak{ts}/(c)\cong\mathfrak{s}\otimes\mathbb{C}[\xi]\); and indeed if \(\mathfrak{s}\) is not symmetrizable then this is the natural superalgebra to consider. This case is again clearly not Clifford Kac-Moody, but once again arises naturally from our construction. Finally, we note that \[\mathfrak{tsl}(2)\cong\mathfrak{sq}(2)\ \ \text{and}\ \ \mathfrak{ptsl}(2)\cong\mathfrak{psq}(2).\]

## 5. Cartan datum with one simple root

In light of Section 3.4 and Remark 4.1, it is wise to begin our study with Clifford Kac-Moody algebras \(\mathfrak{g}(\mathcal{A})\) having one simple root. Thus let \(\mathcal{A}\) be a Cartan datum with \(\Pi=\{\alpha\}\) and irreducible \(\mathfrak{h}\)-modules \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\). The integrability condition on \(\mathfrak{g}(\mathcal{A})\) means that \((\text{ad}\ \mathfrak{g}_{\alpha})^{n}\mathfrak{g}_{\alpha}=0\) and \((\text{ad}\ \mathfrak{g}_{-\alpha})^{n}\mathfrak{g}_{-\alpha}=0\) for some \(n\in\mathbb{N}\).
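Let us record a small consequence of this condition (our observation, used implicitly below): since \(\mathfrak{n}^{+}\) is generated by \(\mathfrak{g}_{\alpha}\), the root space \(\mathfrak{g}_{(k+1)\alpha}\) is spanned by \((\operatorname{ad}\mathfrak{g}_{\alpha})^{k}\mathfrak{g}_{\alpha}\) for \(k\geq 0\), so
\[\mathfrak{g}_{(k+1)\alpha}=\operatorname{span}\,(\operatorname{ad}\mathfrak{g}_{\alpha})^{k}\mathfrak{g}_{\alpha}=0\quad\text{for }k\geq n,\]
and similarly for negative multiples of \(\alpha\). In particular \(\Delta\) is finite and \(\mathfrak{g}(\mathcal{A})\) is finite-dimensional, a fact used in the proof of Lemma 5.4 below.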
For a simple root \(\alpha\) of any Cartan datum, write \[\mathfrak{g}\langle\alpha\rangle:=\mathfrak{h}_{\alpha}\oplus\bigoplus_{n\in\mathbb{Z}_{\neq 0}}\mathfrak{g}_{n\alpha}.\] Said otherwise, \(\mathfrak{g}\langle\alpha\rangle\) is the subalgebra generated by \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\).

### Roots of Heisenberg type

**Definition 5.1**.: We say that a simple root \(\alpha\) is of Heisenberg type if any of the following equivalent conditions hold:

1. \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\alpha}]=0\);
2. \(\mathfrak{h}_{\alpha}\subseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\);
3. \(\mathfrak{h}_{\alpha}\) is central in \(\mathfrak{g}\langle\alpha\rangle\).

The one part of the above definition that may not be clear is why \(\mathfrak{h}_{\alpha}\) need be abelian; however if there exist \(H_{1},H_{2}\in(\mathfrak{h}_{\alpha})_{\overline{1}}\), then \([H_{1},H_{2}]\in[H_{1},[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]]=0\).

**Lemma 5.1**.: _If \(\alpha\) is of Heisenberg type, then \(\Delta=\{\pm\alpha\}\)._

Proof.: The Jacobi identity implies \[[\mathfrak{g}_{-\alpha},[\mathfrak{g}_{\alpha},\mathfrak{g}_{\alpha}]]\subseteq[\mathfrak{g}_{\alpha},[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]].\] Hence our assumption \([\mathfrak{g}_{\alpha},[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]]=0\) yields \([\mathfrak{g}_{-\alpha},[\mathfrak{g}_{\alpha},\mathfrak{g}_{\alpha}]]=0\). From Corollary 3.2 we obtain \([\mathfrak{g}_{\alpha},\mathfrak{g}_{\alpha}]=0\). A similar argument shows \([\mathfrak{g}_{-\alpha},\mathfrak{g}_{-\alpha}]=0\).

**Definition 5.2**.: For each \(n\geq 0\), let \(\mathfrak{he}(n)\) denote the following Lie superalgebra: let \(\mathfrak{h}(n)\) be the Lie superalgebra constructed in Example 2.1, and let \(C_{1}\) denote an irreducible representation of \(\mathfrak{h}(n)\) in which \(c\) acts as the identity; assume that \(C_{1}\) is purely even if \(n=0\). Now set \(C_{-1}:=C_{1}^{\vee}\), and let \(\mathfrak{h}\) denote the quotient of the \(\mathfrak{h}(n)\)-module \(C_{1}\otimes C_{-1}\) by the submodule generated by odd elements lying in its radical (see Remark 3.1). Then \(C_{-1}\oplus\mathfrak{h}\oplus\mathfrak{h}(n)\oplus C_{1}\) is Clifford Kac-Moody, where \(\mathfrak{h}\) acts trivially on \(C_{\pm 1}\), and the bracket map \([-,-]:C_{1}\otimes C_{-1}\to\mathfrak{h}\oplus\mathfrak{h}(n)\) is the quotient map onto \(\mathfrak{h}\). We set \(\mathfrak{he}(n):=C_{-1}\oplus\mathfrak{h}\oplus C_{1}\). Similarly let \(\mathfrak{h}^{\Pi}\) denote the quotient of \(C_{1}\otimes\Pi C_{-1}\) by the submodule generated by odd elements lying in the radical. Then \(\Pi C_{-1}\oplus\mathfrak{h}^{\Pi}\oplus\mathfrak{h}(n)\oplus C_{1}\) will be Clifford Kac-Moody in a similar way, and we set \(\mathfrak{he}(n)^{\Pi}:=\Pi C_{-1}\oplus\mathfrak{h}^{\Pi}\oplus C_{1}\).

**Example 5.1**.: If \(n=0\) in the above, we obtain the purely even algebra \(\mathfrak{he}(0)=\mathbb{C}\langle e,h,f\rangle\) with \(h\) central and \([e,f]=h\). On the other hand, \(\mathfrak{he}(0)^{\Pi}=\mathbb{C}\langle e,h,f\rangle\) where \(e,h\) are odd, \(f\) is even, \(h\) is central, and \([e,f]=h\). Finally we note that \(\mathfrak{he}(2)\cong\mathfrak{sl}(1|1)\cong\mathfrak{th}(0)\).

The following lemma is straightforward, and mostly given for purposes of clarity.

**Lemma 5.2**.: _If \(\alpha\) is a simple root of rank \(n\) and of Heisenberg type, then \(\mathfrak{g}\langle\alpha\rangle\) is a quotient of one of the following by a central ideal contained in \(\mathfrak{h}\):_

1.
1. _if \(n\) is odd, then it is a quotient of \(\mathfrak{he}(n)\);_
2. _if \(n\equiv 2\) (mod 4), then it is a quotient of either \(\mathfrak{he}(n)\) or \(\mathfrak{he}(n)^{\Pi}\);_
3. _if \(n\equiv 0\) (mod 4), then it is a quotient of either \(\mathfrak{he}(n)\), \(\mathfrak{he}(n)^{\Pi}\), or \(\Pi C_{-1}\oplus\mathfrak{h}\oplus\Pi C_{1}\) in the notation of Definition 5.2._

### Simple roots of non-Heisenberg type

The goal for the rest of this section is to prove the following theorem:

**Theorem 5.1**.: _Suppose \(\mathfrak{h}_{\alpha}\nsubseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\). Then \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\), and one of the following happens:_

1. \(\operatorname{rk}\alpha=0\)_,_ \(\Delta=\{\pm\alpha\}\)_, and_ \(\mathfrak{g}(\alpha)\) _is isomorphic to_ \(\mathfrak{sl}(2)\)_._
2. \(\operatorname{rk}\alpha=0\)_,_ \(\Delta=\{\pm\alpha,\pm 2\alpha\}\)_, and_ \(\mathfrak{g}(\alpha)\) _is isomorphic to_ \(\mathfrak{osp}(1|2)\)_._
3. \(\operatorname{rk}\alpha=2\)_,_ \(\Delta=\{\pm\alpha\}\)_, and_ \(\mathfrak{g}(\alpha)\) _is isomorphic to either_ \(\mathfrak{tsl}(2)\cong\mathfrak{sq}(2)\) _or_ \(\mathfrak{ptsl}(2)\cong\mathfrak{psq}(2)\) _(see Examples 4.2, 4.3, and 4.6)._
4. \(\operatorname{rk}\alpha=2\)_,_ \(\Delta=\{\pm\alpha,\pm 2\alpha\}\)_, and_ \(\mathfrak{g}(\alpha)\) _is isomorphic to either_ \(\mathfrak{t}\mathfrak{osp}(1|2)\) _or_ \(\mathfrak{pt}\mathfrak{osp}(1|2)\) _(see Example 4.6)._

Stated more simply, the above theorem says that if \(\alpha\) is not of Heisenberg type, then it gives rise to an \(\mathfrak{sl}(2)\), an \(\mathfrak{osp}(1|2)\), or a Takiff construction on one of these. We deduce Theorem 5.1 from the following results.

**Lemma 5.3**.: _Suppose \(\mathfrak{h}_{\alpha}\nsubseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\). There exist \(e\in\mathfrak{g}_{\alpha}\) and \(f\in\mathfrak{g}_{-\alpha}\) such that the subalgebra they generate is isomorphic to either \(\mathfrak{sl}(2)\) or \(\mathfrak{osp}(1|2)\)._

Proof.: First let us show that we can find an even \(\alpha\)-coroot \(h\in\mathfrak{h}_{\alpha}\) such that \(\alpha(h)\neq 0\). By assumption, there exists a homogeneous \(\alpha\)-coroot that does not annihilate \(\mathfrak{g}_{\alpha}\). If it is even, take \(h\) to be this coroot. If it is odd, denote it by \(H\). It follows from \([H,\mathfrak{g}_{\alpha}]\neq 0\) and from the irreducibility of \(\mathfrak{g}_{\alpha}\) that \(H\notin\ker B_{\alpha}\). So there exists \(K\in\mathfrak{h}_{\overline{1}}\) such that \(\alpha([K,H])\neq 0\). Then \(h:=[K,H]\in\mathfrak{h}_{\alpha}\) is the even coroot we are looking for.

Now, by definition, there exists a finite index set \(I\) and homogeneous elements \(e_{i}\in\mathfrak{g}_{\alpha}\) and \(f_{i}\in\mathfrak{g}_{-\alpha}\) of the same parity for each \(i\in I\) such that \(h=\sum_{i\in I}[e_{i},f_{i}]\). Hence \(\alpha(h)=\alpha(\sum_{i\in I}[e_{i},f_{i}])\neq 0\), which implies that \(\alpha([e_{j},f_{j}])\neq 0\) for some \(j\in I\). Now by our integrability assumption, it is straightforward to check that \(e:=e_{j}\), \(f:=f_{j}\) generate a subalgebra isomorphic to either \(\mathfrak{sl}(2)\) or \(\mathfrak{osp}(1|2)\), depending on whether \(e,f\) are even or odd.

We denote by \(\mathfrak{s}\) the subalgebra isomorphic to either \(\mathfrak{sl}(2)\) or \(\mathfrak{osp}(1|2)\) obtained in Lemma 5.3.
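To sketch the verification left to the reader in the last step: if \(e,f\) are even, rescale \(e\) so that \(h:=[e,f]\) satisfies \(\alpha(h)=2\); since \(h\) acts on \(\mathfrak{g}_{\pm\alpha}\) by the scalar \(\pm\alpha(h)\), we obtain
\[[h,e]=2e,\qquad[h,f]=-2f,\qquad[e,f]=h,\]
which are the defining relations of \(\mathfrak{sl}(2)\). If \(e,f\) are odd, then \(h:=[e,f]\) is even with \(\alpha(h)\neq 0\), and \([e,e],\,e,\,h,\,f,\,[f,f]\) span a copy of \(\mathfrak{osp}(1|2)\) (compare the quintuple appearing in Section 7.3).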
**Lemma 5.4**.: _Suppose \(\mathfrak{h}_{\alpha}\nsubseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\), and let \(\mathfrak{s}\) be as above._

1. _If_ \(\mathfrak{s}\simeq\mathfrak{sl}(2)\)_, then_ \(\Delta=\{\pm\alpha\}\)_._
2. _If_ \(\mathfrak{s}\simeq\mathfrak{osp}(1|2)\)_, then_ \(\Delta=\{\pm\alpha,\pm 2\alpha\}\)_._

_Moreover, \(\operatorname{rk}\alpha\leq 2\) and \(\dim\mathfrak{g}_{\beta}=\dim\mathfrak{g}_{\alpha}\) for any \(\beta\in\Delta\)._

Proof.: The integrability of \(\mathfrak{g}(\mathcal{A})\) implies that it is finite-dimensional, and in particular a finite-dimensional \(\mathfrak{s}\)-module. If \(e\in\mathfrak{s}\cap\mathfrak{g}_{\alpha}\) is non-zero, then by the finite-dimensional representation theory of \(\mathfrak{sl}(2)\) and \(\mathfrak{osp}(1|2)\), the map \[\text{ad }e:\mathfrak{g}_{k\alpha}\to\mathfrak{g}_{(k+1)\alpha} \tag{8}\] is surjective for all \(k\geq 0\). In particular \(\dim\mathfrak{g}_{k\alpha}\leq\dim\mathfrak{g}_{\alpha}\) for all \(k>0\). Since \(\mathfrak{g}_{\alpha}\) is irreducible and \(\operatorname{rk}\alpha=\operatorname{rk}k\alpha\) for all \(k\neq 0\), we have \(\dim\mathfrak{g}_{k\alpha}\geq\dim\mathfrak{g}_{\alpha}\) whenever \(\mathfrak{g}_{k\alpha}\neq 0\). As a consequence, \(\text{ad }e:\mathfrak{g}_{k\alpha}\to\mathfrak{g}_{(k+1)\alpha}\) is a linear isomorphism for \(k>0\) whenever \(\mathfrak{g}_{(k+1)\alpha}\neq 0\). If \(\mathfrak{s}\simeq\mathfrak{sl}(2)\), then \([e,e]=0\), hence \(\mathfrak{g}_{2\alpha}=0\). If \(\mathfrak{s}\simeq\mathfrak{osp}(1|2)\), then \([e,e]\neq 0\) and \([e,[e,e]]=0\), so \(\mathfrak{g}_{3\alpha}=0\). The description of \(\Delta\) follows immediately.

Finally, from (8), we have \([e,\mathfrak{h}]=\mathfrak{g}_{\alpha}\). As \([e,\mathfrak{t}]=\mathbb{C}(e)\), we obtain that either \(\dim(\mathfrak{g}_{\alpha})_{\overline{0}}=1\) or \(\dim(\mathfrak{g}_{\alpha})_{\overline{1}}=1\). Therefore \(\operatorname{rk}\alpha\leq 2\) by our dimension formulas for irreducibles given in Section 2.

**Lemma 5.5**.: _If \(\mathfrak{h}_{\alpha}\nsubseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\), then \(\operatorname{rk}\alpha=0\) or \(\operatorname{rk}\alpha=2\)._

Proof.: Let us assume \(\operatorname{rk}\alpha\neq 0\). Then according to Lemma 5.4, we have \(1\leq\operatorname{rk}\alpha\leq 2\). The irreducibility of \(\mathfrak{g}_{\alpha}\) as an \(\mathfrak{h}\)-module implies \(\dim\mathfrak{g}_{\alpha}=(1|1)\). Let \(k\in\mathbb{Z}\) be the maximal integer such that \(k\alpha\in\Delta\) and fix \(\beta=k\alpha\). Then \(\dim\mathfrak{g}_{\beta}=(1|1)\) from Lemma 5.4. Choose non-zero elements \(x\in(\mathfrak{g}_{\beta})_{\overline{0}}\), \(X\in(\mathfrak{g}_{\beta})_{\overline{1}}\) and \(y\in(\mathfrak{g}_{-\beta})_{\overline{0}}\), and set \(H:=[X,y]\). We notice that \(x\) and \(y\) belong to an \(\mathfrak{sl}(2)\)-subalgebra, so \(\beta([x,y])\neq 0\). We have \[[H,x]=[[X,y],x]=[X,[y,x]]=\beta([x,y])X\neq 0,\] and \[[H,X]=[[X,y],X]=[X,[y,X]]=[[y,X],X]=-[[X,y],X]=-[H,X].\] Thus \([H,X]=0\), and we conclude that \([H,\mathfrak{g}_{\beta}]\neq 0\) and \(\beta([H,H])=0\). This is impossible if \(\operatorname{rk}\beta=1\), and thus we must have \(\operatorname{rk}\beta=\operatorname{rk}\alpha=2\).
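The rank-\(2\) phenomenon in Lemma 5.5 can be seen concretely in \(\mathfrak{q}(2)\); the following is a sketch in the standard matrix model, where we write \((A,B)\) for the matrix \(\begin{pmatrix}A&B\\ B&A\end{pmatrix}\). Take \(x=(E_{12},0)\), \(X=(0,E_{12})\), and \(y=(E_{21},0)\). Then
\[H=[X,y]=(0,E_{11}-E_{22}),\qquad[H,x]=(0,2E_{12})=2X\neq 0,\qquad[H,H]=(2I,0),\]
and the root \(\beta=\varepsilon_{1}-\varepsilon_{2}\) vanishes on \((I,0)\), so \(\beta([H,H])=0\). This is exactly the configuration that rules out \(\operatorname{rk}\beta=1\) in the proof above.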
**Corollary 5.1**.: _If \(\mathfrak{h}_{\alpha}\nsubseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\), then \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\)._

Proof.: If \(\operatorname{rk}\alpha=0\), then \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\) because \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\) are of the same parity. If \(\operatorname{rk}\alpha=2\), then according to (4) in Section 2.2, we have \(\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\simeq\Pi\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\alpha})\) if \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\) and \(\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\simeq\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\alpha})\) otherwise. However in the latter situation, the top of \(\mathcal{S}(\mathfrak{h}_{\overline{1}}/\ker B_{\alpha})\) as an \(\mathfrak{h}\)-module is even, and thus \(\mathfrak{h}_{\alpha}=\operatorname{Im}([-,-]:\mathfrak{g}_{\alpha}\otimes\mathfrak{g}_{-\alpha}\to\mathfrak{h})\) would be purely even. However the proof of Lemma 5.5 has shown that \(\mathfrak{h}_{\alpha}\) contains a non-trivial odd element. Our result follows.

### Realization of \(\mathfrak{g}(\alpha)\) as in Theorem 5.1

Again, suppose \(\mathfrak{h}_{\alpha}\nsubseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\). We continue building on the results of Lemmas 5.4 and 5.5 along with Corollary 5.1. If \(\operatorname{rk}\alpha=0\), then it is clear that \(\mathfrak{g}(\alpha)\) is isomorphic to either \(\mathfrak{sl}(2)\) or \(\mathfrak{osp}(1|2)\). So assume \(\operatorname{rk}\alpha=2\). In this case \(\mathfrak{g}_{\alpha}\) is \((1|1)\)-dimensional. If \(\Delta=\{\pm\alpha\}\), let \(e\) and \(E\) be even and odd basis vectors of \(\mathfrak{g}_{\alpha}\), respectively. As \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\), under this identification we may write \(f\coloneqq e^{\vee}\) and \(F\coloneqq\sqrt{-1}E^{\vee}\) as in Proposition 3.1, to form a (unique up to scalar) homogeneous basis for \(\mathfrak{g}_{-\alpha}\). If \(\Delta=\{\pm\alpha,\pm 2\alpha\}\), we instead let \(E\) and \(e\) be even and odd basis vectors of \(\mathfrak{g}_{\alpha}\) (note the change of order), and set \(f=\sqrt{-1}e^{\vee}\) and \(F=E^{\vee}\). Now in either case, we define the following pure coroots: \[h\coloneqq[e,f],\ \ c\coloneqq[E,F],\ \ H\coloneqq[E,f].\] If \(\Delta=\{\pm\alpha\}\), then we normalize \(e\) so that \(\alpha(h)=2\), and if \(\Delta=\{\pm\alpha,\pm 2\alpha\}\) we normalize \(e\) so that \(\alpha(h)=1\). Then the proof of Proposition 3.1 and a direct computation give the following relations: \[H=[e,F],\ \ [H,e]=\alpha(h)E,\ \ [H,E]=0,\ \ [H,H]=2c,\ \ \alpha(c)=0.\] It is now straightforward to check that \(\mathfrak{g}(\alpha)\) is isomorphic to either \(\mathfrak{ts}\) or \(\mathfrak{pts}\) where \(\mathfrak{s}=\mathfrak{sl}(2)\) if \(\Delta=\{\pm\alpha\}\) and \(\mathfrak{s}=\mathfrak{osp}(1|2)\) if \(\Delta=\{\pm\alpha,\pm 2\alpha\}\).

Proof of Theorem 5.1.: The theorem is an immediate consequence of Lemmas 5.3, 5.4, and 5.5, along with Corollary 5.1 and Section 5.3.

## 6. On Connectivity of simple roots

Let \(\mathfrak{g}(\mathcal{A})\) be a Clifford Kac-Moody algebra. For a simple root \(\alpha\in\Pi\), let \(\mathfrak{g}(\alpha)\) denote the subalgebra of \(\mathfrak{g}(\mathcal{A})\) generated by \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\).
Using the results of Section 5, we define the root type of a simple root \(\alpha\in\Pi\) according to the following table:

\begin{tabular}{|c|c|}
\hline
root type & the root \(\alpha\) satisfies \\
\hline\hline
\(\mathfrak{sl}(2)\) & \(\mathfrak{g}(\alpha)\cong\mathfrak{sl}(2)\) \\
\hline
\(\mathfrak{osp}(1|2)\) & \(\mathfrak{g}(\alpha)\cong\mathfrak{osp}(1|2)\) \\
\hline
\(\mathfrak{sl}(1|1)\) & \(\mathfrak{g}(\alpha)\cong\mathfrak{sl}(1|1)\) \\
\hline
\(\mathfrak{he}(0)\) & \(\mathfrak{g}(\alpha)\) a purely even Heisenberg of rank \(0\) (see Example 5.1) \\
\hline
\(\mathfrak{he}(0)^{\Pi}\) & \(\mathfrak{g}(\alpha)\) a 'mixed'-parity Heisenberg of rank \(0\) (see Example 5.1) \\
\hline
\(Tak(\mathfrak{sl}(2))\) & \(\operatorname{rk}\alpha=2\) and \(\mathfrak{g}(\alpha)\) a central quotient of \(\mathfrak{tsl}(2)\) (case (iii) of Theorem 5.1) \\
\hline
\(Tak(\mathfrak{osp}(1|2))\) & \(\operatorname{rk}\alpha=2\) and \(\mathfrak{g}(\alpha)\) a central quotient of \(\mathfrak{t}\mathfrak{osp}(1|2)\) (case (iv) of Theorem 5.1) \\
\hline
\(Tak(\mathfrak{sl}(1|1))\) & \(\operatorname{rk}\alpha=2\) and \(\mathfrak{g}(\alpha)\) a central quotient of \(\mathfrak{tsl}(1|1)\) (see Example 4.6) \\
\hline
\(H_{n}\) & \(\alpha\) is of Heisenberg type and \(\operatorname{rk}\alpha=n\) (see Section 5.1) \\
\hline
\end{tabular}

We emphasize that each simple root in a Clifford Kac-Moody algebra has a well-defined type. The above table hints that simple roots of rank \(0\) and rank \(2\) are distinguished among all simple roots. In this section, we indeed show that if we want 'interesting' interactions between simple roots, i.e. without any obvious ideals, we should ask that they are all of rank \(0\) or rank \(2\).

### Connectivity

The next natural question to address is when two simple roots can interact, according to their type in the above table. To be precise, we are interested in the following question:

**Question:** for which ordered pairs of simple roots \((\alpha,\beta)\) of given type in the above table do the following equivalent conditions hold:

1. \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\);
2. \([(\mathfrak{h}_{\alpha})_{\overline{0}},\mathfrak{g}_{\beta}]\neq 0\);
3. there exists a pure, even \(\alpha\)-coroot \(h\) (see Section 3.3) for which \(\beta(h)\neq 0\).

Notice that when the above question has a positive answer, we will have that \(\mathfrak{g}_{\alpha+\beta}\neq 0\), i.e. \(\alpha+\beta\in\Delta\). Equivalently, the above question asks when the root types, as listed in the above table, force \(\mathfrak{h}_{\alpha}\subseteq\operatorname{Ann}_{\mathfrak{h}}\mathfrak{g}_{\beta}\). As an example, if \(\alpha\) is of \(\mathfrak{he}(0)\)-type and \(\beta\) is of any other type, then we always have \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\). This is because even Heisenbergs have no finite-dimensional representations with nontrivial central character.

This section seeks to answer the above question, and the answer is depicted in the picture below. Namely, in the below diagram we position all root types in separate places, and then we draw an arrow from root type \(\alpha\) to root type \(\beta\) if it is possible that \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\) in a Clifford Kac-Moody algebra. First of all, we note this is an expanded version of Figure 1 from the introduction.
In the above diagram, the bold-faced downward arrows \(\Rightarrow\) from \(\mathfrak{sl}(2)\), \(\mathfrak{osp}(1|2)\), and \(\mathfrak{sl}(1|1)\) signify that if \(\alpha\) is one of any of these three root types and \(\beta\) is another root type, then it is possible that \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\). We have used this notation to avoid an overwhelming thicket of arrows from these three root types. Further, the dashed arrow pointing to \(H_{1}\) is meant to signify that if \(\alpha\) is of type \(Tak(\mathfrak{sl}(2))\) and \(\beta\) is of type \(H_{1}\), and if we have \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\), then \([\mathfrak{h}_{\beta},\mathfrak{g}_{\gamma}]=0\) for any simple root \(\gamma\), i.e. \(\beta\) is forced to become a 'sink'.

### Sinks

In light of condition (3) in our above question, an important question is when all pure, even coroots of \(\mathfrak{g}(\alpha)\) lie in an even Heisenberg triple, i.e. lie in a subalgebra isomorphic to \(\mathfrak{he}(0)\). In this case \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\) for all simple roots \(\beta\), i.e. \((\mathfrak{h}_{\alpha})_{\overline{0}}\) will be central in \(\mathfrak{g}(\mathcal{A})\). One of the main results of this section is the following:

**Proposition 6.1**.: _All pure, even coroots of \(\mathfrak{g}(\alpha)\) lie in an even Heisenberg triple whenever \(\alpha\) is one of the following types:_

1. \(\mathfrak{he}(0)\)_;_
2. \(\mathfrak{he}(0)^{\Pi}\)_;_
3. \(\mathfrak{he}(2)^{\Pi}\)_;_
4. \(H_{n}\) _for_ \(n\geq 3\)_._

_In particular, in these cases, \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\) for all simple roots \(\beta\)._

The above proposition tells us that if \(\alpha\) is a simple root of the above type, then \(\mathfrak{g}(\alpha)\) generates an ideal containing no other simple roots. Thus they are 'sinks' in the theory, which we see from the diagram above.

Proof of Proposition 6.1.: The statement for \(\mathfrak{he}(0)\) is clear, and for \(\mathfrak{he}(0)^{\Pi}\) there are no even coroots, so that statement is vacuously true. For \(\mathfrak{he}(2)^{\Pi}\), let \(e\) and \(E\) be even and odd basis vectors of \(\mathfrak{g}_{\alpha}\), and \(f\) and \(F\) even and odd basis vectors of \(\mathfrak{g}_{-\alpha}\). Then our claim reduces to showing \([E,F]\in\mathbb{C}([e,f])\). Choose \(K\in\mathfrak{h}_{\overline{1}}\) such that \(\alpha(K^{2})\neq 0\), so that \(K\) defines an odd automorphism of \(\mathfrak{g}_{\pm\alpha}\). Then \([K,e]=aE\) and \([K,F]=bf\) for non-zero \(a,b\in\mathbb{C}\). We know from Section 2.3 that \(\mathfrak{h}_{\alpha}\) is one-dimensional and purely even, so that \([e,F]=0\). Thus \[0=[K,[e,F]]=a[E,F]+b[e,f],\] which gives our result.

Finally, we deal with the case of \(H_{n}\) for \(n\geq 3\). In the proof, we will use the realization of a simple \(\mathfrak{h}\)-module given in Section 2.5. We recall that \(\mathbb{C}[\xi_{1},...,\xi_{m}]\) is the superalgebra of polynomials in odd variables \(\xi_{1},...,\xi_{m}\) satisfying the relation \(\xi_{i}\xi_{j}=-\xi_{j}\xi_{i}\) for all \(i,j\). For a set \(I=\{i_{1},...,i_{k}\}\subseteq\{1,...,m\}\), we write \(\xi_{I}:=\xi_{i_{1}}\cdot...\cdot\xi_{i_{k}}\) for \(i_{1}<...<i_{k}\). We first consider the case where \(\operatorname{rk}\alpha\) is even; let \(m\in\mathbb{Z}\) be such that \(\operatorname{rk}\alpha=2m\). Then our assumption on \(\operatorname{rk}\alpha\) implies \(m\geq 2\).
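For the computations below, we recall the elementary relations among the multiplication and derivation operators on \(\mathbb{C}[\xi_{1},\ldots,\xi_{m}]\), which constitute the Clifford-module structure invoked from Section 2.5:
\[\xi_{i}\partial_{\xi_{j}}+\partial_{\xi_{j}}\xi_{i}=\delta_{ij},\qquad\xi_{i}\xi_{j}+\xi_{j}\xi_{i}=0,\qquad\partial_{\xi_{i}}\partial_{\xi_{j}}+\partial_{\xi_{j}}\partial_{\xi_{i}}=0.\]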
According to Section 2.5, we can identify \(\mathfrak{g}_{\alpha}\) as \(\mathbb{C}[\xi_{1},...,\xi_{m}]\) up to parity, and choose \(H_{1},...,H_{m},\bar{H}_{1},...,\bar{H}_{m}\in\mathfrak{h}_{\overline{1}}\) such that \(H_{i}\) acts on \(\mathfrak{g}_{\alpha}\) via multiplication by \(\xi_{i}\), and \(\bar{H}_{i}\) via the derivation \(\partial_{\xi_{i}}\). Similarly, we can identify \(\mathfrak{g}_{-\alpha}\) as \(\mathbb{C}[\phi_{1},...,\phi_{m}]\) up to parity, such that \(H_{i}\) acts on \(\mathfrak{g}_{-\alpha}\) via multiplication by \(-\phi_{i}\), and \(\bar{H}_{i}\) via the derivation \(\partial_{\phi_{i}}\).

Let \(\xi_{I}\) and \(\phi_{J}\) be odd elements; we will show that \([\xi_{I},\phi_{J}]\) is either zero or a bracket of two even elements from \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\). We will use the fact that \(\mathfrak{h}_{\alpha}=[\mathfrak{g}_{\alpha},\mathfrak{g}_{-\alpha}]\subseteq\mathfrak{h}\), and so \([\mathfrak{h}_{\overline{1}},[\mathfrak{h}_{\overline{1}},\mathfrak{h}_{\alpha}]]=0\) because \(\mathfrak{h}\) is quasi-toral.

If \(|I\cap J|\geq 2\), then we can find \(i,j\in I\cap J\) such that \(i\neq j\), so \[0=[H_{i},[H_{j},[\xi_{I\setminus\{i,j\}},\phi_{J}]]]=\pm[\xi_{I},\phi_{J}].\] If \(I=J=\{i\}\), then because \(m\geq 2\), we can find \(j\notin I\) so that \[0=[H_{i},[\bar{H}_{j},[\xi_{j},\phi_{i}]]]=[H_{i},[\xi_{\varnothing},\phi_{i}]]=[\xi_{i},\phi_{i}].\] If \(I=J=\varnothing\), then because \(m\geq 2\), we can find \(i,j\in\{1,...,m\}\) such that \(i\neq j\), so \[0=[\bar{H}_{i},[\bar{H}_{j},[\xi_{i},\phi_{j}]]]=\pm[\bar{H}_{i},[\xi_{i},\phi_{\varnothing}]]=\pm[\xi_{\varnothing},\phi_{\varnothing}].\] Finally, if none of the above happens, then it must be that \(I\neq J\). If \(i\in I\setminus J\), then \[0=[\bar{H}_{i},[H_{i},[\xi_{I},\phi_{J}]]]=[\bar{H}_{i},[\xi_{I},-\phi_{i}\phi_{J}]]=\pm[\xi_{I\setminus\{i\}},\phi_{J\cup\{i\}}]\pm[\xi_{I},\phi_{J}].\] If \(i\in J\setminus I\), then \[0=[\bar{H}_{i},[H_{i},[\xi_{I},\phi_{J}]]]=[\bar{H}_{i},[\xi_{i}\xi_{I},\phi_{J}]]=[\xi_{I},\phi_{J}]\pm[\xi_{I\cup\{i\}},\phi_{J\setminus\{i\}}].\] In the above two equations, we see that our term of interest, \([\xi_{I},\phi_{J}]\), is equal to a bracket of two even elements, as desired.

We now consider the case where \(\operatorname{rk}\alpha\) is odd; let \(m\in\mathbb{Z}\) be such that \(\operatorname{rk}\alpha=2m+1\). Again, from Section 2.5, we can find \(H_{1},...,H_{m},\bar{H}_{1},...,\bar{H}_{m},\tilde{H}\in\mathfrak{h}_{\overline{1}}\) such that \(\mathfrak{g}_{\alpha}\) can be realized as \(\mathbb{C}[\xi_{1},...,\xi_{m+1}]\), such that \(H_{i}\) acts by \(\xi_{i}\), \(\bar{H}_{i}\) acts by \(\partial_{\xi_{i}}\), and \(\tilde{H}\) acts by \(\xi_{m+1}+\partial_{\xi_{m+1}}\). Similarly, \(\mathfrak{g}_{-\alpha}\) can be realized as \(\mathbb{C}[\phi_{1},...,\phi_{m+1}]\), such that \(H_{i}\) acts by \(-\phi_{i}\), \(\bar{H}_{i}\) acts by \(\partial_{\phi_{i}}\), and \(\tilde{H}\) acts by \(-\phi_{m+1}+\partial_{\phi_{m+1}}\). As in the previous case, let \(\xi_{I}\) and \(\phi_{J}\) be odd elements. We notice that in the current setting \((\operatorname{rk}\alpha=2m+1)\), it must be that \(|I|\) and \(|J|\) are odd integers. We will show that \([\xi_{I},\phi_{J}]\) is either zero or a bracket of two even elements from \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\).
The only computations left (which are not analogous to the case when \(\operatorname{rk}\alpha\) is even) are for \(I\setminus\{m+1\}=J\setminus\{m+1\}\) and \(|I\setminus\{m+1\}|\leq 1\). If \(I=J\) and \(I\setminus\{m+1\}=\{i\}\), then \[0=[\tilde{H},[H_{i},[\xi_{m+1},\phi_{i}]]]=[\tilde{H},[\xi_{i}\xi_{m+1},\phi_{i}]]=\pm[\xi_{i},\phi_{i}]\pm[\xi_{i}\xi_{m+1},\phi_{i}\phi_{m+1}].\] If \(I=J=\{m+1\}\), then \[0=[\tilde{H},[\bar{H}_{1},[\xi_{1},\phi_{m+1}]]]=[\tilde{H},[\xi_{\varnothing},\phi_{m+1}]]=[\xi_{m+1},\phi_{m+1}]\pm[\xi_{\varnothing},\phi_{\varnothing}].\] This concludes our proof.

### Connectivity properties of \(Tak(\mathfrak{sl}(2))\), \(Tak(\mathfrak{osp}(1|2))\), and \(Tak(\mathfrak{sl}(1|1))\)

In this section we assume that \(\mathfrak{g}(\alpha)\) is of type \(Tak(\mathfrak{sl}(2))\), \(Tak(\mathfrak{osp}(1|2))\), or \(Tak(\mathfrak{sl}(1|1))\); equivalently we assume that \(\operatorname{rk}\alpha=2\) and \(\mathfrak{g}_{\alpha}^{\vee}\cong\mathfrak{g}_{-\alpha}\). In this case, by Section 2.3, \(\mathfrak{h}_{\alpha}\) is generated by an odd element which we call \(H_{\alpha}\).

**Lemma 6.1**.: _If \(\beta\) is a simple root with \(\operatorname{rk}\beta=0\), we have \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\)._

Proof.: Since \(\operatorname{rk}\beta=0\), the space \(\mathfrak{g}_{\beta}\) is one-dimensional and homogeneous, so the odd element \(H_{\alpha}\) must act on it by zero. As \(H_{\alpha}\) generates \(\mathfrak{h}_{\alpha}\) as an ideal of \(\mathfrak{h}\), we obtain \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\), and we are done.

**Lemma 6.2**.: _Suppose that \(\beta\) is of rank 1. Then_

1. _if_ \(\alpha\) _is of type_ \(Tak(\mathfrak{sl}(1|1))\) _or_ \(Tak(\mathfrak{osp}(1|2))\)_, then_ \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\)_;_
2. _if_ \(\alpha\) _is of type_ \(Tak(\mathfrak{sl}(2))\) _and_ \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\)_, then_ \([\mathfrak{h}_{\beta},\mathfrak{g}_{\gamma}]=0\) _for all simple roots_ \(\gamma\) _(i.e. the root_ \(\beta\) _of type_ \(H_{1}\) _'becomes' a sink, justifying the dashed arrow in our diagram)._

Proof.: For (1), we have \(H_{\alpha}^{2}=0\) for \(Tak(\mathfrak{sl}(1|1))\), while \(H_{\alpha}^{2}\) is the central element of an even Heisenberg triple for \(Tak(\mathfrak{osp}(1|2))\). Thus we necessarily have \(\beta(H_{\alpha}^{2})=0\) for these two cases. Since \(\beta\) is of rank one this implies that \([H_{\alpha},\mathfrak{g}_{\beta}]=0\), and we are done.

For (2), if \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\), then clearly \(H_{\alpha}\) acts by an automorphism on \(\mathfrak{g}_{\beta}\). Write \(e,E\) and \(f,F\) for homogeneous bases of \(\mathfrak{g}_{\beta}\) and \(\mathfrak{g}_{-\beta}\) respectively, and set \(H_{\beta}=[E,f]\). Then we see that \[[H_{\alpha},H_{\beta}]=a[e,f]+b[E,F],\] for some nonzero \(a,b\in\mathbb{C}\). On the other hand we may write \(\mathfrak{h}_{\alpha}\ni[H_{\alpha},H_{\beta}]=rh_{\alpha}+sc_{\alpha}\), where \(h_{\alpha}\) is the pure \(\alpha\)-coroot lying in an \(\mathfrak{sl}(2)\)-triple, and \(c_{\alpha}\) is central in \(\mathfrak{g}(\alpha)\). Further, because \(H_{\beta}^{2}=0\), we see that either \(r=0\) or \(s=0\), and perhaps both are \(0\). Now since \[a[e,f]+b[E,F]=rh_{\alpha}+sc_{\alpha},\] and it is clear that \(\beta(a[e,f]+b[E,F])=0\), we also must have \(r\beta(h_{\alpha})+s\beta(c_{\alpha})=0\). By \(\mathfrak{sl}(2)\)-representation theory we must have \(\beta(h_{\alpha})\neq 0\), so this forces \(r=0\), which tells us that \[a[e,f]+b[E,F]=sc_{\alpha}.\] However since \(\alpha(c_{\alpha})=0\), this forces \(a\alpha([e,f])+b\alpha([E,F])=0\). Since \([e,f]\) lies in a Heisenberg triple, we must have \(\alpha([e,f])=0\), so necessarily \(\alpha([E,F])=0\).
Thus \(\alpha((\mathfrak{h}_{\beta})_{\overline{0}})=0\), which implies \([H_{\alpha},H_{\beta}]=0\). It follows that \[a[e,f]+b[E,F]=0,\] meaning that \([E,F]\in\mathbb{C}([e,f])\), so that all pure, even coroots of \(\mathfrak{g}(\beta)\) lie in a Heisenberg triple. This completes the argument.

### Connectivity properties of \(H_{1}\)

**Lemma 6.3**.: _Let \(\alpha\in\Pi\) be such that \(\operatorname{rk}\alpha=1\), and let \(\beta\in\Pi\) be any simple root with either \(\operatorname{rk}\beta<2\) or \(\operatorname{rk}\beta=2\) of type \(Tak(\mathfrak{sl}(2))\) or \(Tak(\mathfrak{osp}(1|2))\). Then \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\)._

Proof.: For the proof, let us first realize the subalgebra of \(\mathfrak{g}(\mathcal{A})\) generated by \(\mathfrak{g}_{\alpha}\) and \(\mathfrak{g}_{-\alpha}\). Because \(\operatorname{rk}\alpha=1\), we have \(\mathfrak{g}_{-\alpha}\cong\mathfrak{g}_{\alpha}^{\vee}\), and thus we identify \(\mathfrak{g}_{-\alpha}\) and \(\mathfrak{g}_{\alpha}^{\vee}\). Let \(e\) and \(E\) be even and odd basis vectors for \(\mathfrak{g}_{\alpha}\), respectively. Then \(f:=e^{\vee}\) and \(F:=\sqrt{-1}E^{\vee}\) form a homogeneous basis for \(\mathfrak{g}_{-\alpha}\). Let \(h:=[e,f]\), \(c:=[E,F]\) and \(H:=[e,F]\). We notice that \(e,h,f\) form a Heisenberg subalgebra, so the integrability of \(\mathfrak{g}(\mathcal{A})\) implies \(\beta(h)=0\).

Suppose first that \([H,\mathfrak{g}_{\beta}]=0\). The irreducibility of \(\mathfrak{g}_{\alpha}\) implies the existence of \(K\in\mathfrak{h}_{\overline{1}}\) such that \([K,\mathfrak{g}_{\alpha}]=\mathfrak{g}_{\alpha}\). In particular, \([K,H]=ah+bc\) for some non-zero \(a,b\in\mathbb{C}\). Therefore \[0=[[K,H],\mathfrak{g}_{\beta}]=[ah+bc,\mathfrak{g}_{\beta}].\] As \(\beta(h)=0\), we must have \(\beta(c)=0\). Thus we obtain \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\), as desired. We conclude: \[[H,\mathfrak{g}_{\beta}]=0\implies[\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0. \tag{9}\]

We now prove the lemma case by case, depending on the properties of \(\beta\). If \(\operatorname{rk}\beta=0\), we must have \([H,\mathfrak{g}_{\beta}]=0\), so we are done. Similarly if \(\operatorname{rk}\beta=1\), then since \(H^{2}=0\) we must have \([H,\mathfrak{g}_{\beta}]=0\), so we are done. Suppose now \(\beta\) is of \(Tak(\mathfrak{sl}(2))\)-type or \(Tak(\mathfrak{osp}(1|2))\)-type. In the latter case, \(\mathfrak{g}(\beta)\) contains \((\mathfrak{p})\mathfrak{sq}(2)\) as a subalgebra with the same coroots, so it suffices to assume we are in the former case. However by Lemma 6.2 (applied with the roles of \(\alpha\) and \(\beta\) exchanged), if \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]\neq 0\) then necessarily \([\mathfrak{h}_{\beta},\mathfrak{g}_{\alpha}]=0\). If we write \(H_{\beta}\) for a nonzero odd coroot of \(\mathfrak{h}_{\beta}\), then this implies that \([H,H_{\beta}]=0\). Thus \(H,H_{\beta}\) have commuting actions on \(\mathfrak{g}_{\beta}\), so since \(H\) acts nontrivially it must act nontrivially on the \(\mathfrak{sl}(2)\)-triple inside \(\mathfrak{g}(\beta)\). However if \(h_{\beta}\) denotes the even coroot of said \(\mathfrak{sl}(2)\)-triple, then this implies \(\alpha(h_{\beta})\neq 0\), a contradiction.
## 7. Queer Kac-Moody algebras

### Regularity and indecomposability

**Definition 7.1**.: We say that a Cartan datum \(\mathcal{A}\) (and by extension, the Lie superalgebra \(\mathfrak{g}(\mathcal{A})\)) is regular if \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\) implies \([\mathfrak{h}_{\beta},\mathfrak{g}_{\alpha}]=0\) for any \(\alpha,\beta\in\Pi\).

From Proposition 3.2 we deduce:

**Corollary 7.1**.: _If \(\mathfrak{g}\) is regular and \(\alpha,\beta\in\Pi\) are such that \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=0\), then \(\alpha+\beta\) is not a root of \(\mathfrak{g}\)._

**Definition 7.2**.: Let \(\mathcal{A}\) be a Cartan datum. We say that \(\mathcal{A}\) is indecomposable if there does not exist a nontrivial partition \(\Pi=\Pi_{1}\sqcup\Pi_{2}\) such that \([\mathfrak{h}_{\alpha},\mathfrak{g}_{\beta}]=[\mathfrak{h}_{\beta},\mathfrak{g}_{\alpha}]=0\) for all \(\alpha\in\Pi_{1}\), \(\beta\in\Pi_{2}\).

Our work in the previous section immediately implies:

**Theorem 7.1**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a regular, indecomposable Clifford Kac-Moody algebra with \(|\Pi|>1\). Then one of two possibilities can occur:_

1. \(\operatorname{rk}\alpha=0\) _and_ \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\) _for all_ \(\alpha\in\Pi\)_;_
2. \(\operatorname{rk}\alpha=2\) _and_ \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\) _for all_ \(\alpha\in\Pi\)_._

### Queer Kac-Moody superalgebras

Following Theorem 7.1, it makes sense to consider the class of Clifford Kac-Moody algebras in which any simple root \(\alpha\in\Pi\) satisfies \(\operatorname{rk}\alpha=2\) and \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\).

**Definition 7.3**.: Let \(\mathcal{A}\) be a Cartan datum. We say the Lie superalgebra \(\mathfrak{g}(\mathcal{A})\) is queer Kac-Moody (qKM) if the following conditions are satisfied:

(KM1) \(\mathcal{A}\) is integrable, see Definition 4.1;
(KM2) \(\mathcal{A}\) is regular, see Definition 7.1;
(KM3) every \(\alpha\in\Pi\) is of rank \(2\);
(KM4) for any \(\alpha\in\Pi\), the \(\mathfrak{h}\)-module \(\mathfrak{g}_{\alpha}\) is irreducible, and \(\mathfrak{g}_{-\alpha}\simeq\mathfrak{g}_{\alpha}^{\vee}\);
(KM5) for any \(\alpha\in\Pi\) we have \(\mathfrak{h}_{\alpha}\nsubseteq\sum_{\beta\in\Pi\smallsetminus\{\alpha\}}\mathfrak{h}_{\beta}\).

In this situation we will also say that \(\mathcal{A}\) is a queer Kac-Moody Cartan datum.

**Remark 7.1**.: We notice that every qKM algebra is a Clifford Kac-Moody algebra, and the possible simple root types are \(Tak(\mathfrak{sl}(2))\), \(Tak(\mathfrak{osp}(1|2))\), and \(Tak(\mathfrak{sl}(1|1))\).

**Remark 7.2**.: From Corollary 3.3 we deduce that any qKM algebra admits a Chevalley automorphism \(\omega\) extending \(\omega_{\mathfrak{h}}\) and satisfying \(\omega^{2}=\delta\).

**Remark 7.3**.: Condition (KM2) should be viewed as a convenient assumption; we expect that if we removed the regularity condition, we would obtain algebras with a nontrivial ideal generated by 'non-regular' simple roots. However, for the sake of tightening the scope of this paper, we limit our considerations.

**Remark 7.4**.: Property (KM5) is a condition on the linear independence of the simple coroots of \(\mathfrak{g}(\mathcal{A})\). In a Clifford Kac-Moody algebra with all simple roots of rank \(0\), this condition coincides with the usual linear independence of coroots condition as in [K1].
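To see (KM5) in the motivating example, one can realize \(\mathfrak{q}(n)\) as pairs \((A,B)\), standing for the matrix \(\left(\begin{smallmatrix}A&B\\ B&A\end{smallmatrix}\right)\), with root vectors \(e_{i}=(E_{i,i+1},0)\), \(E_{i}=(0,E_{i,i+1})\), and \(f_{i}=(E_{i+1,i},0)\) (a sketch; these choices agree with those of Section 7.3 below up to the rescalings permitted there). Then
\[[E_{i},f_{i}]=(0,E_{ii}-E_{i+1,i+1}),\qquad i=1,\ldots,n-1,\]
and these odd coroots are visibly linearly independent, as Lemma 7.1 below requires.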
**Lemma 7.1**.: _Let \(\mathcal{A}\) be a Cartan datum with simple roots \(\Pi=\{\alpha_{1},\ldots,\alpha_{n}\}\) satisfying (KM1)-(KM4) of Definition 7.3. Then \(\mathcal{A}\) satisfies (KM5) if and only if the odd coroots in \(\mathfrak{h}_{\alpha_{1}},\ldots,\mathfrak{h}_{\alpha_{n}}\) are linearly independent._

Proof.: The backward direction is clear. For the forward direction, suppose that \(\mathcal{A}\) satisfies (KM5) but the odd coroots are not linearly independent. Since \(\dim(\mathfrak{h}_{\alpha_{i}})_{\overline{1}}=1\) for all \(i\), let \(H_{i}\in(\mathfrak{h}_{\alpha_{i}})_{\overline{1}}\) be any nonzero element. Then by our assumption, for some \(i\) we have \(H_{i}\in\sum_{j\neq i}\mathfrak{h}_{\alpha_{j}}\). However \(H_{i}\) generates \(\mathfrak{h}_{\alpha_{i}}\) as an \(\mathfrak{h}\)-module; thus \[\mathfrak{h}_{\alpha_{i}}=\mathbb{C}\langle H_{i}\rangle+[\mathfrak{h},H_{i}]\subseteq\sum_{j\neq i}\mathfrak{h}_{\alpha_{j}}+\sum_{j\neq i}[\mathfrak{h},\mathfrak{h}_{\alpha_{j}}]\subseteq\sum_{j\neq i}\mathfrak{h}_{\alpha_{j}},\] contradicting (KM5).

**Remark 7.5**.: It follows from our work that if \(\mathcal{A}\) is qKM, then for a simple root \(\alpha\), the subalgebra \(\mathfrak{g}(\alpha)\) is isomorphic to a central quotient of \(\mathfrak{ts}\) for a Kac-Moody Lie superalgebra with one simple root (see Example 4.6). This illustrates a beauty in the theory of queer Kac-Moody superalgebras: they are built by 'glueing together' extensions of Takiffs of Kac-Moody superalgebras with one simple root. A trivial way to glue together such Takiffs is via the Takiff construction applied to an arbitrary Kac-Moody Lie superalgebra; however, in a sense, this is a less interesting type of glueing. We will find that if we avoid such constructions, we are led to more 'interesting' Lie superalgebras such as \(\mathfrak{q}(n)\), and also that the theory becomes very rigid.

### Cartan datum for qKM algebras

Because qKM algebras only have simple roots of type \(Tak(\mathfrak{s})\) for \(\mathfrak{s}=\mathfrak{sl}(2),\mathfrak{osp}(1|2)\), or \(\mathfrak{sl}(1|1)\), we can more explicitly present their Cartan datum. We do that now, while also introducing notation that will be used henceforth throughout the paper.

Let \(\mathcal{A}\) be a qKM Cartan datum, and let \(\Pi=\{\alpha_{1},\ldots,\alpha_{n}\}\) be the simple roots. For each \(i\), we have the subalgebra \(\mathfrak{g}(\alpha_{i})\) generated by \(\mathfrak{g}_{\alpha_{i}}\) and \(\mathfrak{g}_{-\alpha_{i}}\). We choose homogeneous bases of \(\mathfrak{g}_{\pm\alpha_{i}}\), using the identification \(\mathfrak{g}_{-\alpha_{i}}=\mathfrak{g}_{\alpha_{i}}^{\vee}\), as follows:

1. if \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(2))\), set \(e_{i}\) and \(E_{i}\) to be nonzero even and odd elements of \(\mathfrak{g}_{\alpha_{i}}\), and let \(f_{i}:=e_{i}^{\vee}\) and \(F_{i}:=\sqrt{-1}E_{i}^{\vee}\). Rescale the choice of \(e_{i}\) so that \(\alpha_{i}([e_{i},f_{i}])=2\);
2. if \(\alpha_{i}\) is of type \(Tak(\mathfrak{osp}(1|2))\) or \(Tak(\mathfrak{sl}(1|1))\), set \(E_{i}\) and \(e_{i}\) to be nonzero even and odd elements of \(\mathfrak{g}_{\alpha_{i}}\) (notice the change in order), and let \(f_{i}:=\sqrt{-1}e_{i}^{\vee}\) and \(F_{i}:=E_{i}^{\vee}\). If \(\alpha_{i}\) is of type \(Tak(\mathfrak{osp}(1|2))\), then rescale the choice of \(e_{i}\) so that \(\alpha_{i}([e_{i},f_{i}])=1\).

It follows from the above choices that:

1. if \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(2))\), then \(e_{i},[e_{i},f_{i}],f_{i}\) form an \(\mathfrak{sl}(2)\)-triple;
2. if \(\alpha_{i}\) is of type \(Tak(\mathfrak{osp}(1|2))\), then \([e_{i},e_{i}],e_{i},[e_{i},f_{i}],f_{i},[f_{i},f_{i}]\) form an \(\mathfrak{osp}(1|2)\)-quintuple;
3. if \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(1|1))\), then \(e_{i},[e_{i},f_{i}],f_{i}\) form an \(\mathfrak{sl}(1|1)\)-triple.

One should view the \(e_{i},E_{i},f_{i}\), and \(F_{i}\) as the 'Chevalley generators' for a qKM algebra.

#### 7.3.1. Uniqueness of generators

Before going further, we note that the above choices are not unique: in particular we may simultaneously rescale \(e_{i}\) and \(f_{i}\) by \((-1)\), or simultaneously rescale \(E_{i}\) and \(F_{i}\) by \(\lambda\in\mathbb{C}^{\times}\). For \(Tak(\mathfrak{sl}(1|1))\) we may also rescale \(e_{i},f_{i}\) by any \(\lambda\in\mathbb{C}^{\times}\).

#### 7.3.2. Pure coroots

Set \[H_{i}:=[E_{i},f_{i}];\] one may check that \(H_{i}\neq 0\), and we also always have \[H_{i}=[e_{i},F_{i}].\] Further set: \[h_{i}:=[e_{i},f_{i}],\qquad c_{i}:=[E_{i},F_{i}].\]

The following table summarizes some of the important properties, which are easy to prove, of the pure coroots we have introduced.

\begin{tabular}{|c|c|c|c|}
\hline
 & \(Tak(\mathfrak{sl}(2))\) & \(Tak(\mathfrak{osp}(1|2))\) & \(Tak(\mathfrak{sl}(1|1))\) \\
\hline
\(H_{i}^{2}=\) & \(c_{i}\) & \(c_{i}\) & \(0\) \\
\hline
\(c_{i}\) lies in... & \(\mathfrak{sl}(1|1)\)-triple, & even Heisenberg triple, & even Heisenberg triple, \\
 & \(\alpha_{i}(c_{i})=0\) & \(\beta(c_{i})=0\) for all roots \(\beta\) & \(\beta(c_{i})=0\) for all roots \(\beta\) \\
\hline
\(h_{i}\) lies in... & \(\mathfrak{sl}(2)\)-triple, & \(\mathfrak{osp}(1|2)\)-quintuple, & \(\mathfrak{sl}(1|1)\)-triple, \\
 & \(\alpha_{i}(h_{i})=2\) & \(\alpha_{i}(h_{i})=1\) & \(\alpha_{i}(h_{i})=0\) \\
\hline
\end{tabular}

**Remark 7.6**.: A more concrete viewpoint on the elements described above is given using the fact that each \(\mathfrak{g}(\alpha_{i})\) is a central quotient of \(\mathfrak{ts}=\mathfrak{s}\otimes\mathbb{C}[\xi]\oplus\mathbb{C}\langle c\rangle\) for \(\mathfrak{s}=\mathfrak{sl}(2),\mathfrak{osp}(1|2)\), or \(\mathfrak{sl}(1|1)\). In these terms, \(e_{i},f_{i}\) are the Chevalley generators of \(\mathfrak{s}\), \(h_{i}\) is the coroot of \(\mathfrak{s}\), \(c_{i}=c\), and \(H_{i}=h_{i}\otimes\xi\).

#### 7.3.3. \(X\) and \(Y\) matrices

Let \(\mathcal{A}\) be a qKM Cartan datum. We now introduce two \(n\times n\) matrices \(X(\mathcal{A})=X=(x_{ij})\) and \(Y(\mathcal{A})=Y=(y_{ij})\) of complex numbers, which encode much of the Cartan datum. The entries are defined as follows: \[[H_{j},e_{i}]=x_{ji}E_{i},\ \ \text{and}\ \ [H_{j},E_{i}]=y_{ji}e_{i}.\] Then because \(\mathfrak{g}_{-\alpha_{i}}\cong\mathfrak{g}_{\alpha_{i}}^{\vee}\), we have \[[H_{i},f_{j}]=-(-1)^{\overline{e_{i}}}x_{ij}F_{j},\ \ \text{and}\ \ [H_{i},F_{j}]=(-1)^{\overline{e_{i}}}y_{ij}f_{j}.\]

**Remark 7.7**.: The entries of the matrices \(X(\mathcal{A})\) and \(Y(\mathcal{A})\) are not unique; indeed, in addition to permuting indices, if we rescaled our choices of generators \(e_{i}\) and \(E_{i}\) (where permitted) we would scale certain entries in the matrices.
Indeed, if we rescale \(E_{i}\) by \(\lambda\) the entries change as follows: \[y_{ij},y_{ji}\mapsto\lambda y_{ij},\lambda y_{ji},\ \ \ \ x_{ij},x_{ji}\mapsto\lambda x_{ij},x_{ji}/\lambda.\]

#### 7.3.4. Formulas

**Lemma 7.2**.: _We have the following formulas:_

_(1) for any \(i,j\):_ \[[H_{i},H_{j}]=x_{ij}c_{j}+y_{ij}h_{j}=x_{ji}c_{i}+y_{ji}h_{i};\]

_(2) for any \(i,j,k\):_ \[\alpha_{k}([H_{i},H_{j}])=x_{ik}y_{jk}+x_{jk}y_{ik},\] _in particular,_ \[\alpha_{j}(H_{i}^{2})=x_{ij}y_{ij};\]

_(3)_ \[y_{ii}=0\ \ \text{for all }i;\]

_(4)_ \[x_{ii}=\alpha_{i}(h_{i})=\begin{cases}2&\text{if }\alpha_{i}\text{ of type }Tak(\mathfrak{sl}(2)),\\ 1&\text{if }\alpha_{i}\text{ of type }Tak(\mathfrak{osp}(1|2)),\\ 0&\text{if }\alpha_{i}\text{ of type }Tak(\mathfrak{sl}(1|1)).\end{cases}\]

_(5)_ \[\alpha_{i}([H_{i},H_{j}])=y_{ji}\alpha_{i}(h_{i}).\]

Proof.: Formula (1) follows from applying the Jacobi identity to \([H_{i},H_{j}]\), writing either \(H_{i}=[E_{i},f_{i}]\) or \(H_{j}=[E_{j},f_{j}]\). Formula (2) holds because \(\alpha_{k}([H_{i},H_{j}])\) is given by the scalar action of \([H_{i},H_{j}]\) on \(\mathfrak{g}_{\alpha_{k}}\); however this is the action of the operator \(H_{i}H_{j}+H_{j}H_{i}\), and one sees that it acts by the given scalar. Formulas (3) and (4) are straightforward, and formula (5) follows from (2), (3), and (4).

**Lemma 7.3**.: _For two simple roots \(\alpha_{i},\alpha_{j}\) of type \(Tak(\mathfrak{sl}(2))\) or \(Tak(\mathfrak{osp}(1|2))\) we have_ \[x_{ij}x_{ji}y_{ji}+y_{ij}\alpha_{i}(h_{j})=y_{ji}\alpha_{i}(h_{i}),\] _and_ \[y_{ij}y_{ji}(\alpha_{i}(h_{i})-\alpha_{j}(h_{j}))=y_{ij}^{2}\alpha_{i}(h_{j})-y_{ji}^{2}\alpha_{j}(h_{i}).\] _In particular if \(\alpha_{i}(h_{i})=\alpha_{j}(h_{j})\), then_ \[y_{ij}^{2}\alpha_{i}(h_{j})=y_{ji}^{2}\alpha_{j}(h_{i}).\]

Proof.: Indeed, if we apply \(\alpha_{i}\) to formula (1) of Lemma 7.2 we obtain \[x_{ij}\alpha_{i}(c_{j})+y_{ij}\alpha_{i}(h_{j})=y_{ji}\alpha_{i}(h_{i}).\] Now we use that for \(Tak(\mathfrak{sl}(2))\) and \(Tak(\mathfrak{osp}(1|2))\) type we have \(H_{i}^{2}=c_{i}\), and apply (2) of Lemma 7.2 to get \(\alpha_{i}(c_{j})=x_{ji}y_{ji}\); this yields the first formula. To obtain the second formula, swap the indices \(i\) and \(j\) in the first formula, multiply the two resulting formulas by \(y_{ij}\) and \(y_{ji}\) respectively, and subtract one from the other. The third formula is immediate.

### Dynkin diagrams

**Definition 7.4**.: Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra, with notation as explained in Section 7.3. We call \(A=(a_{ij})=(\alpha_{j}(h_{i}))\) the Cartan matrix of \(\mathfrak{g}(\mathcal{A})\). Define \(D=D(\mathcal{A})\) to be the Dynkin diagram associated with the Cartan datum \(\mathcal{A}\) as follows: draw a vertex for each \(i=1,\ldots,n\) according to the following rule:

1. draw a white diamond \(\diamondsuit\) if \(a_{ii}=2\), or equivalently \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(2))\);
2. draw a black diamond \(\blacklozenge\) if \(a_{ii}=1\), or equivalently \(\alpha_{i}\) is of type \(Tak(\mathfrak{osp}(1|2))\);
3. draw a crossed diamond if \(a_{ii}=0\), or equivalently \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(1|1))\).

Next we draw an edge from vertex \(i\) to vertex \(j\) if \(a_{ij}a_{ji}\neq 0\), and we label the edge by the pair \(a_{ij},a_{ji}\). We will sometimes draw an unlabeled edge between two vertices to indicate that we assume \(a_{ij}a_{ji}\neq 0\). If we disregard the type of a simple root, we draw \(\Box\) for its corresponding vertex.

**Example 7.1**.: The Dynkin diagram of \(\mathfrak{q}(n)\) is a chain of \(n-1\) white diamonds, \(\diamondsuit\!-\!\diamondsuit\!-\!\cdots\!-\!\diamondsuit\). This is also the Dynkin diagram of \(T\mathfrak{sl}(n)\).
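For \(\mathfrak{q}(n)\), the matrices of Section 7.3.3 can be computed directly in the matrix model sketched after Remark 7.4 (again up to the rescalings of Remark 7.7). With \(H_{j}=(0,E_{jj}-E_{j+1,j+1})\), one finds
\[[H_{j},e_{i}]=(2\delta_{ij}-\delta_{i,j+1}-\delta_{i+1,j})E_{i},\qquad[H_{j},E_{i}]=(\delta_{i+1,j}-\delta_{j+1,i})e_{i},\]
so that \(X=A\) is the Cartan matrix of type \(A_{n-1}\), every vertex has \(a_{ii}=2\), and \(Y\) has \(y_{j,j-1}=1\), \(y_{j,j+1}=-1\), and zeros elsewhere. In particular \(x_{ij}x_{ji}\neq 0\) and \(y_{ij}y_{ji}\neq 0\) for adjacent \(i,j\); compare Example 8.2 below.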
**Remark 7.8**.: From the results of Section 8 we see that two simple roots \(\alpha\) and \(\beta\) of a qKM algebra \(\mathfrak{g}(\mathcal{A})\) are connected in the associated Dynkin diagram \(D(\mathcal{A})\) if and only if \(\alpha+\beta\) is a root of \(\mathfrak{g}(\mathcal{A})\), that is, \(\mathfrak{g}_{\alpha+\beta}\neq 0\).

**Definition 7.5**.: We say two distinct simple roots \(\alpha,\beta\) are connected if the same is true in the Dynkin diagram. Note that by the following lemma, this is equivalent to \(\alpha_{i}(h_{j})\alpha_{j}(h_{i})=a_{ij}a_{ji}\neq 0\).

**Lemma 7.4**.: _Suppose that \(\mathcal{A}\) is a qKM Cartan datum, and \(\alpha_{i},\alpha_{j}\) are distinct simple roots. Then the following are equivalent:_

1. \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are connected;_
2. \(\mathfrak{g}_{\alpha_{i}+\alpha_{j}}\neq 0\)_;_
3. \([\mathfrak{h}_{\alpha_{i}},\mathfrak{g}_{\alpha_{j}}]\neq 0\)_;_
4. \([H_{i},\mathfrak{g}_{\alpha_{j}}]\neq 0\)_;_
5. \(x_{ij}\) _and_ \(y_{ij}\) _are not both zero;_
6. \(\alpha_{j}((\mathfrak{h}_{\alpha_{i}})_{\overline{0}})\neq 0\)_;_
7. \(\alpha_{j}(h_{i})\neq 0\)_._

Proof.: The implications between all but the last condition are a straightforward application of regularity, the equivalence of the conditions in the question at the start of Section 6, and the fact that \(H_{i}\) generates \(\mathfrak{h}_{\alpha_{i}}\) as an ideal of \(\mathfrak{h}\). The final condition is equivalent by \(\mathfrak{sl}(2)\)-representation theory if \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(2))\) or \(Tak(\mathfrak{osp}(1|2))\), and for \(Tak(\mathfrak{sl}(1|1))\) it is because \(\alpha_{j}(c_{i})=0\) for all roots \(\alpha_{j}\) (see the table in Section 7.3).

**Remark 7.9**.: Consider the map \(\Theta\) from Dynkin diagrams of qKM algebras to Dynkin diagrams of Kac-Moody superalgebras, obtained by turning diamonds into circles (see [K2] for more on Dynkin diagrams of Kac-Moody superalgebras). We notice that if \(D\) is the Dynkin diagram of a qKM algebra \(\mathfrak{g}(\mathcal{A})\) and \(\Theta(D)\) corresponds to a superalgebra of infinite growth, then \(\mathfrak{g}(\mathcal{A})\) is also of infinite growth, by Section 3.4.

## 8. Two simple roots in a qKM algebra

**Lemma 8.1**.: _Let \(\alpha_{i},\alpha_{j}\) be connected simple roots in a qKM algebra \(\mathfrak{g}(\mathcal{A})\). Then either \(x_{ij}x_{ji}\neq 0\) or \(y_{ij}y_{ji}\neq 0\)._

Proof.: Suppose that \(x_{ij}x_{ji}=y_{ij}y_{ji}=0\); because \(\alpha_{i}\) and \(\alpha_{j}\) are connected, we may assume WLOG that \(x_{ij}=y_{ji}=0\) and \(x_{ji}y_{ij}\neq 0\). Then by (1) of Lemma 7.2, we learn that \[x_{ji}c_{i}=y_{ij}h_{j}.\] However since \(\alpha_{i}(c_{i})=0\), this implies \(\alpha_{i}(h_{j})=0\), contradicting Lemma 7.4.

**Definition 8.1**.: Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra. We say two distinct simple roots \(\alpha_{i},\alpha_{j}\in\Pi\) are coupled if they are connected and we have \(\alpha_{i}(c_{j})=\alpha_{j}(c_{i})=0\). Equivalently, by Lemma 8.1, either \(x_{ij}=x_{ji}=0\), in which case we say \(\alpha_{i}\) and \(\alpha_{j}\) are \(X\)-coupled, or \(y_{ij}=y_{ji}=0\), in which case we say \(\alpha_{i}\) and \(\alpha_{j}\) are \(Y\)-coupled.

**Remark 8.1**.: By (1) of Lemma 7.2, we have for two simple roots \(\alpha_{i}\) and \(\alpha_{j}\) the relation: \[x_{ij}c_{j}+y_{ij}h_{j}=x_{ji}c_{i}+y_{ji}h_{i}.\] Thus if \(\alpha_{i}\) and \(\alpha_{j}\) are coupled, the above formula gives a nontrivial relation between one pure even coroot on each side.
In particular, if they are \(X\)-coupled we obtain \[y_{ij}h_{j}=y_{ji}h_{i},\qquad y_{ij}y_{ji}\neq 0,\] and if they are \(Y\)-coupled we obtain \[x_{ij}c_{j}=x_{ji}c_{i},\qquad x_{ij}x_{ji}\neq 0.\]

**Example 8.1**.: The prototypical example of a qKM algebra with coupled simple roots is the extension of the Takiff superalgebra \(T\mathfrak{s}\) for any Kac-Moody superalgebra \(\mathfrak{s}\). Indeed, in this case we have \(Y=0\), i.e. all connected simple roots are \(Y\)-coupled.

**Example 8.2**.: For \(\mathfrak{g}=\mathfrak{q}(n)\), no distinct simple roots are coupled with one another.

The aim of this section is to prove the following theorem:

**Theorem 8.1**.: _Suppose that \(\mathcal{A}\) is qKM and \(\alpha_{i},\alpha_{j}\in\Pi\) are distinct simple roots. Then one of the following cases must occur:_

1. \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are not connected;_
2. \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are_ \(Y\)_-coupled;_
3. \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are_ \(X\)_-coupled,_ \(\alpha_{i}(h_{j})\alpha_{j}(h_{i})=4\)_, and one of the following holds up to swapping the indices_ \(i\) _and_ \(j\)_:_ (a) \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are of type_ \(Tak(\mathfrak{sl}(2))\)_; or_ (b) \(\alpha_{i}\) _is of type_ \(Tak(\mathfrak{sl}(2))\)_,_ \(\alpha_{j}\) _is of type_ \(Tak(\mathfrak{osp}(1|2))\)_, and we have_ \(\alpha_{j}(h_{i})=-2\) _and_ \(\alpha_{i}(h_{j})=-1\)_;_
4. \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are not coupled, both of type_ \(Tak(\mathfrak{sl}(2))\)_, and_ \(y_{ij}y_{ji}\neq 0\)_._

Note that the following is false for the entries of the \(X\)-matrix (in particular it fails for \(\mathfrak{q}_{(2,2)}^{-}\), see Section 11.3).

**Corollary 8.1**.: _If \(\mathcal{A}\) is qKM, then \(y_{ij}=0\Rightarrow y_{ji}=0\)._

Proof.: This follows immediately from Theorem 8.1.

The proof of Theorem 8.1 will occupy the rest of the section, and we will also prove auxiliary results which will be important in future sections. We begin by dealing with the case of \(Tak(\mathfrak{sl}(1|1))\):

**Lemma 8.2**.: _Suppose that \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(1|1))\) and is connected to \(\alpha_{j}\). Then \(\alpha_{i}\) and \(\alpha_{j}\) are \(Y\)-coupled._

Proof.: Suppose that \(\alpha_{i}\) and \(\alpha_{j}\) are connected, so that \(\alpha_{i}(h_{j})\alpha_{j}(h_{i})\neq 0\). If \(\alpha_{i}\) is of type \(Tak(\mathfrak{sl}(1|1))\), then \(H_{i}^{2}=0\), implying that \(x_{ij}y_{ij}=0\). Suppose that \(x_{ij}=0\), so that (1) of Lemma 7.2 gives \[y_{ij}h_{j}=x_{ji}c_{i}+y_{ji}h_{i}.\] Applying \(\alpha_{i}\) to the RHS we obtain \(0\), implying that \(\alpha_{i}(h_{j})=0\) (note that \(y_{ij}\neq 0\) by Lemma 7.4), a contradiction. Thus we must have \(y_{ij}=0\), implying that \[x_{ij}c_{j}=x_{ji}c_{i}+y_{ji}h_{i}.\] If we apply \(\alpha_{j}\), we know from the table in Section 7.3 that \(\alpha_{j}(c_{j})=\alpha_{j}(c_{i})=0\), so we obtain \(y_{ji}\alpha_{j}(h_{i})=0\); since \(\alpha_{j}(h_{i})\neq 0\), this forces \(y_{ji}=0\). Thus we have proven that \(\alpha_{i}\) is \(Y\)-coupled to \(\alpha_{j}\).

**Lemma 8.3**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra and let \(\alpha\in\Pi\) be of type \(Tak(\mathfrak{sl}(2))\). Let \(e,E,f,F,h,c,H\) be the spanning set for \(\mathfrak{g}(\alpha)\cong(\mathfrak{p})\mathfrak{sq}(2)\) as defined in Section 7.3. Let \(\beta\in\Pi\) be a simple root different from \(\alpha\). Then:_

1. \(\mathfrak{g}_{\beta-\beta(h)\alpha}\neq 0\)_, and_ \(\mathfrak{g}_{\beta+(1-\beta(h))\alpha}=0\)_;_
2. _if_ \(\beta(h)=-1\) _and_ \(v\in\mathfrak{g}_{\beta}\)_, then_ \[[E,v]+[e,[H,v]]=0.\]

Proof.: The first statement follows from the integrability of the action of \(\mathfrak{sl}(2)\). Now suppose that \(\beta(h)=-1\). Then \(\mathfrak{g}_{\beta+2\alpha}=0\) so that \([e,[E,v]]=0\). Therefore \[0 =-[f,[e,[E,v]]]=-[[f,e],[E,v]]-[e,[[f,E],v]]\] \[=[h,[E,v]]+[e,[H,v]]=[E,v]+[e,[H,v]].\]

**Proposition 8.1**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra and let \(\alpha_{i},\alpha_{j}\in\Pi\) be two simple roots of type \(Tak(\mathfrak{sl}(2))\). If \(\alpha_{i}\) and \(\alpha_{j}\) are connected, then exactly one of the following happens:_

1. \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are coupled, in which case either:_ (a) \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are_ \(Y\)_-coupled; or_ (b) \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are_ \(X\)_-coupled and_ \(\alpha_{i}(h_{j})\alpha_{j}(h_{i})=4\)_._
2. \(\alpha_{i}(h_{j})=\alpha_{j}(h_{i})=-1\) _and_ \(\alpha_{i}(c_{j})\alpha_{j}(c_{i})\neq 0\)_;_
3. \(\alpha_{i}(h_{j})=\alpha_{j}(h_{i})=-2\)_, and exactly one of_ \(x_{ij}\)_,_ \(x_{ji}\) _is zero;_
4. \(\alpha_{i}(h_{j}),\alpha_{j}(h_{i})\in\mathbb{Z}_{\leq-2}\) _and_ \(\alpha_{i}(c_{j})\alpha_{j}(c_{i})\neq 0\)_._

Proof.: Because we assume \(\alpha_{i},\alpha_{j}\) are connected, we have \(\alpha_{i}(h_{j})\alpha_{j}(h_{i})\neq 0\). Now because \(\alpha_{i}(h_{i})=\alpha_{j}(h_{j})\), Lemma 7.3 implies that: \[y_{ij}^{2}\alpha_{i}(h_{j})=y_{ji}^{2}\alpha_{j}(h_{i}). \tag{10}\] So if \(y_{ij}y_{ji}=0\), it must be that both \(y_{ij}=y_{ji}=0\), implying \(\alpha_{i}\) and \(\alpha_{j}\) are \(Y\)-coupled. If, on the other hand, \(x_{ij}x_{ji}=0\), then Lemma 7.3 gives: \[2y_{ji}=y_{ij}\alpha_{i}(h_{j}),\qquad 2y_{ij}=y_{ji}\alpha_{j}(h_{i}).\] Thus \((y_{ij},y_{ji})\) is a non-trivial solution to the linear system defined by the matrix \[\begin{bmatrix}2&-\alpha_{i}(h_{j})\\ -\alpha_{j}(h_{i})&2\end{bmatrix},\] implying that the determinant is \(0\), which gives \[\alpha_{i}(h_{j})\alpha_{j}(h_{i})=4.\] In particular, this shows that if \(\alpha_{i}\) and \(\alpha_{j}\) are \(X\)-coupled, then \(\alpha_{i}(h_{j})\alpha_{j}(h_{i})=4\), as desired. On the other hand, if \(\alpha_{i}\) and \(\alpha_{j}\) are not coupled but we still have \(x_{ij}x_{ji}=0\), then exactly one of \(x_{ij}\), \(x_{ji}\) is zero, and we have \(\alpha_{i}(h_{j})\alpha_{j}(h_{i})=4\). It remains to show then that \(\alpha_{i}(h_{j})=\alpha_{j}(h_{i})=-2\), or equivalently that \(\alpha_{i}(h_{j})\neq-1\) and \(\alpha_{j}(h_{i})\neq-1\), which will follow from our final argument.

Thus suppose that \(\alpha_{i}(h_{j})=-1\) and \(\alpha_{i}\), \(\alpha_{j}\) are not coupled. Then Lemma 8.3 implies \([E_{j},e_{i}]+x_{ji}[e_{j},E_{i}]=0\). As a result \[0=[f_{i},[E_{j},e_{i}]+x_{ji}[e_{j},E_{i}]]=[E_{j},-h_{i}]+x_{ji}[e_{j},-H_{i}]=(\alpha_{j}(h_{i})+x_{ji}x_{ij})E_{j},\] and \[0=[F_{i},[E_{j},e_{i}]+x_{ji}[e_{j},E_{i}]]=[E_{j},H_{i}]+x_{ji}[e_{j},c_{i}]=(y_{ij}-x_{ij}x_{ji}y_{ij})e_{j}.\] Because \(\alpha_{i}\) and \(\alpha_{j}\) are not coupled, we have \(y_{ij}\neq 0\), so \(x_{ij}x_{ji}=1\) and \(\alpha_{j}(h_{i})=-1\). This completes our proof.

**Proposition 8.2**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra. Let \(\alpha_{i}\in\Pi\) be of \(Tak(\mathfrak{sl}(2))\)-type and \(\alpha_{j}\in\Pi\) be of \(Tak(\mathfrak{osp}(1|2))\)-type.
If \(\alpha_{i}\) is connected to \(\alpha_{j}\), then either they are \(Y\)-coupled, or they are \(X\)-coupled and we have \(\alpha_{j}(h_{i})=-2\) and \(\alpha_{i}(h_{j})=-1\)._

Proof.: First of all, we have \(\alpha_{i}(c_{j})=0\) by the table from Section 7.3. In particular \(x_{ji}y_{ji}=0\). We may consider the \(\mathfrak{sq}(2)\)-subquotient of \(\mathfrak{g}(\alpha_{j})\) generated by \(\mathfrak{g}_{2\alpha_{j}}\) and \(\mathfrak{g}_{-2\alpha_{j}}\). Then Proposition 8.1 tells us that \(y_{ji}=0\) implies \(y_{ij}=0\) as well, so that \(\alpha_{i}\) and \(\alpha_{j}\) are \(Y\)-coupled. Let us assume they are not \(Y\)-coupled, so that \(x_{ji}=0\), \(y_{ij}y_{ji}\neq 0\). Then Proposition 8.1 tells us that \(\alpha_{i}(h_{j})(2\alpha_{j})(h_{i})=4\). If \(\alpha_{j}(h_{i})=-1\), then Lemma 8.3 gives \([E_{i},e_{j}]+[e_{i},[H_{i},e_{j}]]=0\), thus \[0=[F_{j},[E_{i},e_{j}]+[e_{i},[H_{i},e_{j}]]]=-[E_{i},H_{j}]+[e_{i},[F_{j},[H_{i},e_{j}]]]. \tag{11}\] As \([H_{i},e_{j}]\in\mathbb{C}(E_{j})\), we have \([F_{j},[H_{i},e_{j}]]\in\mathbb{C}(c_{j})\), so \([e_{i},[F_{j},[H_{i},e_{j}]]]=0\). Together with (11), we obtain \[0=[E_{i},H_{j}]=y_{ji}e_{i}\neq 0,\] a contradiction. Thus we must have \(\alpha_{j}(h_{i})=-2\) and \(\alpha_{i}(h_{j})=-1\). Now if \(\alpha_{i}\) and \(\alpha_{j}\) were not \(X\)-coupled, then by Proposition 8.1 we would have \(2\alpha_{j}(h_{i})=\alpha_{i}(h_{j})=-2\), which is not the case, meaning instead they must be \(X\)-coupled, and we are done.

We now finish the proof of Theorem 8.1 with the following proposition.

**Proposition 8.3**.: _If \(\alpha_{i}\) and \(\alpha_{j}\) are both of type \(Tak(\mathfrak{osp}(1|2))\) and are connected, then they cannot be \(X\)-coupled (in particular they must be \(Y\)-coupled)._

Proof.: Suppose that \(\alpha_{i}\) and \(\alpha_{j}\) are \(X\)-coupled. By considering \(2\alpha_{i}\) and \(\alpha_{j}\) as \(X\)-coupled simple roots, Proposition 8.2 tells us that we have \(\alpha_{j}(h_{i})=-2\) and \((2\alpha_{i})(h_{j})=-1\). However this implies that \(\alpha_{i}(h_{j})=-1/2\), which is impossible.

## 9. Completely coupled vs. completely uncoupled

It is interesting to understand the possible configurations of simple roots of a qKM algebra. If we consider qKM algebras of finite growth, the coupling property is very well behaved. First, a definition.

**Definition 9.1**.: Let \(\mathcal{A}\) be a qKM Cartan datum with matrices \(X=X(\mathcal{A})\) and \(Y=Y(\mathcal{A})\) as in Section 7.3. We say that \(\mathcal{A}\), and also \(\mathfrak{g}(\mathcal{A})\), is

1. completely coupled if any two simple roots that are connected are coupled;
2. completely \(X\)-coupled if \(x_{ij}=0\) for all \(i\neq j\);
3. completely \(Y\)-coupled if \(Y=0\), i.e. \(y_{ij}=0\) for all \(i,j\);
4. completely uncoupled if for all \(i\neq j\), either \(\alpha_{i}\) and \(\alpha_{j}\) are not connected, or they are connected and uncoupled.

The main theorem to be proven in this section is:

**Theorem 9.1**.: _Suppose that \(\mathfrak{g}(\mathcal{A})\) is an indecomposable, finite growth qKM algebra. Then \(\mathcal{A}\) is either completely \(X\)-coupled, completely \(Y\)-coupled, or completely uncoupled._

We also have explicit descriptions of completely \(X\)-coupled and completely \(Y\)-coupled algebras.
**Theorem 9.2**.: _If \(\mathfrak{g}(\mathcal{A})\) is an indecomposable, completely \(X\)-coupled qKM algebra, then its Cartan datum satisfies one of the following up to isomorphism, and \(\mathfrak{g}(\mathcal{A})\) is finite growth:_

1. \(\Pi=\{\alpha_{1},\alpha_{2}\}\) _with_ \(\alpha_{1},\alpha_{2}\) _both of type_ \(Tak(\mathfrak{sl}(2))\)_, and_ \(h_{1}=-h_{2}\)_. We have_ \[X=\begin{bmatrix}2&0\\ 0&2\end{bmatrix},\qquad Y=\begin{bmatrix}0&-1\\ 1&0\end{bmatrix}.\] _The Dynkin diagram consists of two vertices_ \(\diamondsuit\)_,_ \(\diamondsuit\) _joined by an edge labeled_ \(-2,-2\)_._
2. \(\Pi=\{\alpha_{1},\alpha_{2}\}\) _with_ \(\alpha_{1},\alpha_{2}\) _both of type_ \(Tak(\mathfrak{sl}(2))\)_, and_ \(h_{1}=-2h_{2}\)_. We have_ \[X=\begin{bmatrix}2&0\\ 0&2\end{bmatrix},\qquad Y=\begin{bmatrix}0&-2\\ 1&0\end{bmatrix}.\] _The Dynkin diagram consists of two vertices_ \(\diamondsuit\)_,_ \(\diamondsuit\) _joined by an edge labeled_ \(-4,-1\)_._
3. \(\Pi=\{\alpha_{1},\alpha_{2}\}\) _with_ \(\alpha_{1}\) _of type_ \(Tak(\mathfrak{sl}(2))\)_,_ \(\alpha_{2}\) _of type_ \(Tak(\mathfrak{osp}(1|2))\)_, and_ \(h_{1}=-2h_{2}\)_. We have_ \[X=\begin{bmatrix}2&0\\ 0&1\end{bmatrix},\qquad Y=\begin{bmatrix}0&-2\\ 1&0\end{bmatrix}.\] _The Dynkin diagram consists of a vertex_ \(\diamondsuit\) _and a vertex_ \(\blacklozenge\) _joined by an edge labeled_ \(-2,-1\)_._

(In each case the edge labels are \(a_{12}=\alpha_{2}(h_{1})\) and \(a_{21}=\alpha_{1}(h_{2})\), computed from the stated relation between \(h_{1}\) and \(h_{2}\).) For the completely \(Y\)-coupled case, see Theorem 9.3.

### \(X\)-couplings

**Lemma 9.1**.: _Suppose that \(\mathcal{A}\) is an indecomposable qKM Cartan datum. Then if \(\alpha_{i},\alpha_{j}\) are distinct, connected, \(X\)-coupled simple roots, then \(\Pi=\{\alpha_{i},\alpha_{j}\}\), i.e. there are no other simple roots._

Proof.: By Theorem 8.1, our assumption implies that \(\alpha_{k}(h_{k})=1\) or \(2\) for \(k=i,j\). On the other hand, since \(x_{ij}=x_{ji}=0\) by assumption, (1) of Lemma 7.2 implies that \(h_{i}=\lambda h_{j}\) for \(\lambda=y_{ij}/y_{ji}\neq 0\). We see that \[0<\alpha_{i}(h_{i})=\lambda\alpha_{i}(h_{j}).\] Since \(\alpha_{i}(h_{j})\in\mathbb{Z}_{<0}\), this forces \(\lambda\in\mathbb{R}_{<0}\). If \(\gamma\) were any other simple root in \(\mathcal{A}\), then we must have \(\gamma(h_{i}),\gamma(h_{j})\leq 0\); on the other hand this means that \(0\leq\lambda\gamma(h_{j})=\gamma(h_{i})\). Thus \(\gamma(h_{i})=0\), and similarly \(\gamma(h_{j})=0\). Thus \(\gamma\) is not connected to either \(\alpha_{i}\) or \(\alpha_{j}\), which by indecomposability implies that \(\Pi=\{\alpha_{i},\alpha_{j}\}\), and we are done.

Proof of Theorem 9.2.: It remains to check that the only possible Cartan data are the ones listed. However this follows from (3) of Theorem 8.1, which tells us the possible simple root types and the values of \(\alpha_{i}(h_{j})\), and we can rescale \(y_{ij}\) and \(y_{ji}\) to make them as above, by rescaling our choice of generators (see Remark 7.7). The relations on \(h_{1}\) and \(h_{2}\) come from (1) of Lemma 7.2.

### Complete coupling for finite growth qKM algebras

**Lemma 9.2**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra. Let \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) be simple roots whose corresponding full subdiagram of the Dynkin diagram of \(\mathfrak{g}(\mathcal{A})\) is the path \(\Box\!-\!\Box\!-\!\Box\) with \(\alpha_{2}\) the middle vertex (so \(\alpha_{1}\) and \(\alpha_{3}\) are not connected). If \(\alpha_{1}\) and \(\alpha_{2}\) are coupled, then \(\alpha_{2}\) and \(\alpha_{3}\) are coupled._

Proof.: Since we may assume by Lemma 9.1 that \(\alpha_{1}\) and \(\alpha_{2}\) are \(Y\)-coupled, (1) of Lemma 7.2 implies that \[x_{12}c_{2}=x_{21}c_{1},\] where \(x_{12},x_{21}\neq 0\). Thus we obtain that \(\alpha_{1}(c_{2})=0\), since the same is true of \(c_{1}\).
Applying \(\alpha_{1}\) to the formula

\[x_{23}c_{3}+y_{23}h_{3}=x_{32}c_{2}+y_{32}h_{2}\]

gives

\[0=x_{32}\alpha_{1}(c_{2})+y_{32}\alpha_{1}(h_{2})=y_{32}\alpha_{1}(h_{2}).\]

Since \(\alpha_{1}(h_{2})\neq 0\), this forces \(y_{32}=0\), so by Lemma 8.1 we obtain \(y_{23}=0\) also, and we are done. 

**Lemma 9.3**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra of finite growth. Let \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) be simple roots whose corresponding full subdiagram of the Dynkin diagram of \(\mathfrak{g}(\mathcal{A})\) is a triangle on \(\alpha_{1},\alpha_{2},\alpha_{3}\) (the rendered diagram was lost in extraction). If \(\alpha_{1}\) and \(\alpha_{2}\) are coupled, then \(\alpha_{3}\) is coupled with \(\alpha_{1}\) and \(\alpha_{2}\)._

Proof.: We again may assume by Lemma 9.1 that any coupling that occurs here is \(Y\)-coupling. If \(\alpha_{3}\) is of type \(Tak(\mathfrak{sl}(1|1))\) or \(Tak(\mathfrak{osp}(1|2))\), then we are done by Theorem 8.1. If \(\alpha_{1}\) is of type \(Tak(\mathfrak{sl}(1|1))\) or \(Tak(\mathfrak{osp}(1|2))\), then it must be \(Y\)-coupled to \(\alpha_{2}\) and \(\alpha_{3}\), so by Remark 8.1 we learn that \(c_{2}\) and \(c_{3}\) are proportional. This implies that \(\alpha_{2}(c_{3})=\alpha_{3}(c_{2})=0\), so \(\alpha_{2}\) and \(\alpha_{3}\) are coupled. Similar considerations apply if \(\alpha_{2}\) is of type \(Tak(\mathfrak{sl}(1|1))\) or \(Tak(\mathfrak{osp}(1|2))\).

Thus we may assume that \(\alpha_{1}\), \(\alpha_{2}\) and \(\alpha_{3}\) are all of type \(Tak(\mathfrak{sl}(2))\), and we know that \(y_{12}=y_{21}=0\), since \(\alpha_{1}\) and \(\alpha_{2}\) are \(Y\)-coupled. If either \(\alpha_{1}\) or \(\alpha_{2}\) were coupled to \(\alpha_{3}\), then we would have that \(c_{1}\) is a nonzero multiple of \(c_{2}\), which is a nonzero multiple of \(c_{3}\), meaning that \(\alpha_{i}(c_{j})=0\) for all \(i,j\), so all roots are coupled. Thus we assume \(\alpha_{3}\) is not coupled to either \(\alpha_{1}\) or \(\alpha_{2}\); in particular \(y_{13}y_{31}y_{23}y_{32}\neq 0\). Now suppose that \(x_{13}=0\); then (1) of Lemma 7.2 gives

\[y_{13}h_{3}=x_{31}c_{1}+y_{31}h_{1}.\]

Since \(\alpha_{2}(c_{1})=\alpha_{1}(c_{1})=0\), evaluating \(\alpha_{1}\) and \(\alpha_{2}\) on the above equation gives

\[y_{13}a_{31}=2y_{31},\qquad y_{13}a_{32}=y_{31}a_{12}.\]

However this implies \(a_{31}/a_{32}=2/a_{12}\), which is impossible, as \(a_{31},a_{32},a_{12}<0\). Thus we must have \(x_{13}\neq 0\), and for similar reasons \(x_{23}\neq 0\). By (1) of Lemma 7.2 we have \([H_{1},H_{2}]=x_{21}c_{1}=x_{12}c_{2}\), and by (2) of the same lemma we obtain:

\[x_{13}y_{23}+x_{23}y_{13}=x_{21}x_{13}y_{13}=x_{12}x_{23}y_{23}.\]

We may rescale \(E_{1}\) and \(E_{2}\) such that \(y_{13}=y_{23}=1\), so from the above equation we obtain

\[x_{13}(1-x_{21})+x_{23}=x_{13}+x_{23}(1-x_{12})=0.\]

As \(x_{13}x_{23}\neq 0\), these equations imply that \((1-x_{21})(1-x_{12})=1\), so

\[x_{12}+x_{21}=x_{12}x_{21}. \tag{12}\]

We again use equations (1) and (2) of Lemma 7.2 to deduce that

\[x_{12}y_{32}=\alpha_{2}([H_{1},H_{3}])=y_{31}a_{12},\qquad x_{21}y_{31}=\alpha_{1}([H_{2},H_{3}])=y_{32}a_{21}.\]

From these equations we obtain \(x_{12}x_{21}=a_{12}a_{21}\) and \(\frac{x_{12}}{x_{21}}=\frac{y_{31}^{2}a_{12}}{y_{32}^{2}a_{21}}\). 
As in the proof of Proposition 8.1, we must have \(y_{31}^{2}a_{13}=y_{13}^{2}a_{31}=a_{31}\) and \(y_{32}^{2}a_{23}=y_{23}^{2}a_{32}=a_{32}\), so \[\frac{x_{12}}{x_{21}}=\frac{a_{12}a_{23}a_{31}}{a_{21}a_{13}a_{32}}>0.\] Squaring (12) and dividing by \(x_{12}x_{21}\), we get \[\frac{x_{12}}{x_{21}}+\frac{x_{21}}{x_{12}}+2=x_{12}x_{21}\] But \(\frac{x_{12}}{x_{21}}>0\) and \(x_{12}x_{21}=a_{12}a_{21}\), so \(a_{12}a_{21}>2\). It is now easy to see that \(A\) is not a finite type Cartan matrix, meaning \(\mathfrak{g}(\mathcal{A})\) is not of finite growth by Remark 7.9. We immediately obtain the following corollary: **Corollary 9.1**.: _If \(\mathfrak{g}(\mathcal{A})\) is an indecomposable qKM algebra of finite growth, then it is either completely coupled or completely uncoupled._ ### \(Y\)-couplings: realization via the Takiff construction **Theorem 9.3**.: _Let \(\mathfrak{g}(\mathcal{A})\) be an indecomposable, completely \(Y\)-coupled qKM algebra (i.e. \(Y=0\)). Let \(A=(a_{ij})=(\alpha_{j}(h_{i}))\) be the Cartan matrix of \(\mathfrak{g}(\mathcal{A})\), and set \(\mathfrak{s}=\mathfrak{s}(A)\) to be the corresponding contragredient, Kac-Moody Lie superalgebra, with root lattice \(Q_{\mathfrak{s}}\). Finally, let \(T\mathfrak{s}\) be as in Example 4.4. Set_ \[J_{\mathcal{A}}:=\bigcap_{i=1}^{n}\operatorname{Ann}_{\mathfrak{h}}\mathfrak{ g}_{\alpha_{i}},\hskip 14.226378ptJ_{\mathfrak{s}}:=\bigcap_{\alpha\in Q _{\mathfrak{s}}}\operatorname{Ann}_{T\mathfrak{s}}(\mathfrak{s}_{\alpha} \otimes\mathbb{C}[\xi]).\] _Then:_ 1. _Up to rescaling of generators (see Remark_ 7.7_), we have an equality of matrices_ \(X=A\)_;_ 2. _we have an embedding_ \[\mathfrak{g}(\mathcal{A})/J_{\mathcal{A}}\hookrightarrow T\mathfrak{s}(A)/J_{ \mathfrak{s}},\] _which is an isomorphism on all root spaces._ Thus Theorem 9.3 says if \(Y=0\), \(\mathfrak{g}(\mathcal{A})\) is, up to extensions and derivations in \(\mathfrak{h}\), a Takiff superalgebra. Proof of Theorem 9.3.: Fix \(\alpha_{i}\) a simple root of \(\mathfrak{g}(\mathcal{A})\). We first show that after rescaling \(E_{1},\ldots,E_{n}\), we obtain that \(X=A\). The irreducibility of \(\mathfrak{g}_{\alpha_{i}}\) as an \(\mathfrak{h}\)-module implies the existence of \(I\in\mathfrak{h}_{\overline{1}}\) such that \([I,E_{i}]=e_{i}\). For each \(j\), let \(y_{j}\in\mathbb{C}\) be the scalar satisfying \([I,E_{j}]=y_{j}e_{j}\). We compute: \[x_{ij}y_{j}=\alpha_{j}([I,H_{i}])=\alpha_{j}(h_{i})=a_{ij}\] Thus it suffices to show that we can rescale \(E_{1},\ldots,E_{n}\) so that \(y_{j}=1\) for all \(j\); or equivalently that \(y_{j}\neq 0\) for all \(j\). However the above formula clearly shows that if \(\alpha_{j}\) is connected to \(\alpha_{i}\), then \(y_{i}\neq 0\Rightarrow y_{j}\neq 0\). Since the Dynkin diagram is connected, we are done, and we obtain, after rescaling, \(x_{ij}=a_{ij}\) for all \(i,j\). For any \(X\in\mathfrak{h}_{\overline{1}}\) define the complex numbers \(\langle X,e_{i}\rangle\) and \(\langle X,E_{i}\rangle\) by the formula: \[[X,e_{i}+E_{i}]=\langle X,e_{i}\rangle E_{i}+\langle X,E_{i}\rangle e_{i}.\] Repeating the argument given for \(I\) and the fact that \(X=A\), we have \(\langle X,E_{i}\rangle=\langle X,E_{j}\rangle\) for any \(i,j\). Write \(\mathfrak{t}_{\mathfrak{s}}\) for the Cartan subalgebra of \(\mathfrak{s}\), and let \(t_{1},\ldots,t_{n}\in\mathfrak{t}_{\mathfrak{s}}\) be such that \(\alpha_{i}(t_{j})=\delta_{ij}\). 
Define a map \(\tilde{\phi}:\tilde{\mathfrak{g}}(\mathcal{A})\to T\mathfrak{s}(A)/J_{\mathfrak{s}}\) as follows:

\[\begin{aligned}
e_{i}&\mapsto\tilde{e}_{i}, &\qquad E_{i}&\mapsto\tilde{e}_{i}\otimes\xi,\\
f_{i}&\mapsto\tilde{f}_{i}, &\qquad F_{i}&\mapsto\tilde{f}_{i}\otimes\xi,\\
\mathfrak{t}\ni h&\mapsto\sum_{i}\alpha_{i}(h)t_{i}, &\qquad \mathfrak{h}_{\overline{1}}\ni X&\mapsto\langle X,E_{1}\rangle\partial_{\xi}+\sum_{i=1}^{n}\langle X,e_{i}\rangle t_{i}\otimes\xi.
\end{aligned}\]

It is an immediate verification that the above assignment determines a well defined homomorphism of Lie superalgebras. We further notice that \((\tilde{\phi}(\tilde{\mathfrak{g}}(\mathcal{A})))_{\tilde{\alpha}}=(T\mathfrak{s}(A)/Z_{\mathfrak{s}})_{\tilde{\alpha}}\) for any nonzero \(\tilde{\alpha}\in\mathbb{Z}(\tilde{\alpha}_{1},...,\tilde{\alpha}_{n})\). As \(T\mathfrak{s}(A)/Z_{\mathfrak{s}}\) admits no ideals intersecting \((T\mathfrak{s}(A)/Z_{\mathfrak{s}})_{\overline{0}}\) (its Cartan subalgebra) trivially, we see that the map \(\tilde{\phi}\) factors through a map \(\phi:\mathfrak{g}(\mathcal{A})\to T\mathfrak{s}(A)/J_{\mathfrak{s}}\), which is bijective on all root spaces. Finally, it is easy to check that \(\phi(J_{\mathcal{A}})=0\), so that \(\phi\) factors through a map \(\mathfrak{g}(\mathcal{A})/J_{\mathcal{A}}\to T\mathfrak{s}(A)/J_{\mathfrak{s}}\), which is easily checked to be injective. 

## 10. Completely uncoupled qKM algebras

In this section, we consider the structure of completely uncoupled qKM algebras \(\mathfrak{g}(\mathcal{A})\), by which we mean that \(\alpha\) and \(\beta\) are uncoupled for any distinct \(\alpha,\beta\in\Pi\). In particular, we will be interested in the possible Dynkin diagrams, and a classification of those of finite growth. By Theorem 8.1, we may assume all simple roots are of \(Tak(\mathfrak{sl}(2))\)-type. One should view this as the more nontrivial setting, far away from Takiff superalgebras. We will see in fact that being completely uncoupled makes a qKM algebra very rigid.

We begin with a lemma stating some of the properties of completely uncoupled algebras that we will use.

**Lemma 10.1**.: _Let \(\mathcal{A}\) be a completely uncoupled qKM Cartan datum. Then:_

1. _any full Cartan subdatum (see Section 3.4)_ \(\mathcal{A}^{\prime}\) _of_ \(\mathcal{A}\) _is also completely uncoupled;_
2. _two distinct simple roots_ \(\alpha_{i}\) _and_ \(\alpha_{j}\) _are connected if and only if_ \([H_{i},H_{j}]\neq 0\)_;_
3. _for all_ \(i\neq j\) _with_ \(\alpha_{i}\) _connected to_ \(\alpha_{j}\)_, we have_ \(y_{ij}y_{ji}\neq 0\)_, and:_ \[y_{ij}^{2}\alpha_{i}(h_{j})=y_{ji}^{2}\alpha_{j}(h_{i}).\]

Proof.: Part (1) is clear, and (3) is an immediate application of Lemma 7.3 and Corollary 8.1. For part (2), the backward direction is obvious. For the forward direction, (1) of Lemma 7.2 tells us that

\[[H_{i},H_{j}]=x_{ij}c_{j}+y_{ij}h_{j}.\]

By (3) we have \(y_{ij}\neq 0\), and since \(h_{j}\) cannot be a multiple of \(c_{j}\), we must have \([H_{i},H_{j}]\neq 0\). 

**Theorem 10.1**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a completely uncoupled qKM algebra and let \(A\) be its Cartan matrix. Let \(\mathfrak{s}=\mathfrak{s}(A)\) be the (central quotient of the) Kac-Moody Lie algebra corresponding to \(A\), with Cartan subalgebra \(\mathfrak{t}\) (see footnote 1), set of simple roots \(\Pi\) and simple coroots \(h_{1},...,h_{n}\). Then the root system \(\Delta\) of \(\mathfrak{g}(\mathcal{A})\) and the root system \(\Delta_{\mathfrak{s}}\) of \(\mathfrak{s}\) coincide (as subsets of \(\mathfrak{t}^{*}\)). 
Moreover, \(\mathfrak{g}(\mathcal{A})\) is finite-dimensional if and only if \(\Delta\) is finite._

Footnote 1: this is a Kac-Moody Lie algebra in the sense of [GHS], with the condition on the dimension of the Cartan subalgebra \(\mathfrak{t}\) more relaxed.

Proof.: It is immediate that \(\Delta_{\mathfrak{s}}\subseteq\Delta\). Suppose \(\Delta_{\mathfrak{s}}\neq\Delta\); then the Chevalley automorphism on \(\mathfrak{g}\) (see Remark 7.2) implies that \(\Delta^{+}\setminus\Delta_{\mathfrak{s}}\neq\emptyset\). Let \(\beta\in\Delta^{+}\setminus\Delta_{\mathfrak{s}}\) be of minimal height. Then \(\beta\) is not proportional to any simple root. Since \(\beta\) is a root of \(\mathfrak{g}(\mathcal{A})\), there exists \(\alpha\in\Pi\) such that \(\beta-\alpha\in\Delta^{+}\), so the minimality of the height of \(\beta\) implies \(\beta-\alpha\in\Delta_{\mathfrak{s}}^{+}\). Let \(\mathfrak{s}_{\alpha}\subseteq\mathfrak{s}\) be the \(\mathfrak{sl}(2)\) subalgebra corresponding to \(\alpha\). From the integrability of \(\mathfrak{g}(\mathcal{A})\) we obtain that \(\bigoplus_{k\in\mathbb{Z}}\mathfrak{g}_{\beta+k\alpha}\) is a finite-dimensional \(\mathfrak{s}_{\alpha}\)-module. But then \(r_{\alpha}(\beta)\) is a positive root of \(\mathfrak{g}(\mathcal{A})\) of smaller height than \(\beta\), so \(r_{\alpha}(\beta)\in\Delta_{\mathfrak{s}}^{+}\). This implies that \(\beta\in\Delta_{\mathfrak{s}}\), a contradiction. Hence \(\Delta=\Delta_{\mathfrak{s}}\). If \(\Delta\) is finite, then since \(\dim\mathfrak{g}_{\beta}<\infty\) for all \(\beta\in\Delta\), the algebra \(\mathfrak{g}\) is finite-dimensional. 

**Lemma 10.2**.: _There does not exist a completely uncoupled qKM algebra \(\mathfrak{g}(\mathcal{A})\) whose Dynkin diagram has four vertices \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\), with \(\alpha_{1}\) the central vertex connected to each of the others (the rendered diagram was lost in extraction)._

Proof.: We write \(\alpha_{1},\alpha_{2},\alpha_{3},\alpha_{4}\) for the simple roots and assume \(\alpha_{1}\) corresponds to the middle vertex of the Dynkin diagram. Lemma 10.1 implies that \([H_{1},H_{2}]\neq 0\). As \([H_{1},H_{2}]\in\mathfrak{h}_{\alpha_{1}}\cap\mathfrak{h}_{\alpha_{2}}\), we have \([H_{1},H_{2}]\in\ker\alpha_{4}\). So

\[[H_{1},H_{2}]\in\mathbb{C}\langle c_{1},h_{1}\rangle\cap\ker\alpha_{4}.\]

A symmetric argument implies

\[[H_{1},H_{3}]\in\mathbb{C}\langle c_{1},h_{1}\rangle\cap\ker\alpha_{4}.\]

However, since \(\alpha_{4}(h_{1})\neq 0\), the above subspaces are all one-dimensional and thus equal. Hence:

\[\mathbb{C}\langle[H_{1},H_{2}]\rangle=\mathbb{C}\langle c_{1},h_{1}\rangle\cap\ker\alpha_{4}=\mathbb{C}\langle[H_{1},H_{3}]\rangle.\]

Therefore \(\mathbb{C}\langle[H_{1},H_{2}]\rangle\subseteq(\mathfrak{h}_{\alpha_{3}})_{\overline{0}}\subseteq\ker\alpha_{2}\). Thus by Lemma 7.2 we have \(0=\alpha_{2}([H_{1},H_{2}])=y_{12}\alpha_{2}(h_{2})\), i.e. \(y_{12}=0\), contradicting uncoupledness. 

**Lemma 10.3**.: _Suppose \(\mathfrak{g}(\mathcal{A})\) is a completely uncoupled qKM algebra whose Dynkin diagram is the path \(\alpha_{1}-\alpha_{2}-\alpha_{3}\) (the rendered diagram was lost in extraction). Then \(\mathfrak{g}(\mathcal{A})\) is necessarily of type \(A_{3}\), that is, its Cartan matrix is \(A=\begin{pmatrix}2&-1&0\\ -1&2&-1\\ 0&-1&2\end{pmatrix}\)._

Proof.: Using the notation of Section 7.3, we have, by the proof of Proposition 8.1,

\[[H_{i},H_{j}]=x_{ij}c_{j}+y_{ij}h_{j}=x_{ji}c_{i}+y_{ji}h_{i}, \tag{13}\]

\[y_{ij}^{2}\alpha_{i}(h_{j})=y_{ji}^{2}\alpha_{j}(h_{i}). \tag{14}\]

As \(\mathfrak{g}(\mathcal{A})\) is completely uncoupled, we have \(y_{12}y_{21}y_{23}y_{32}\neq 0\). 
We also have

\[\alpha_{2}([H_{1},H_{3}])e_{2}=[[H_{1},H_{3}],e_{2}]=(x_{32}y_{12}+x_{12}y_{32})e_{2}.\]

By Lemma 7.4 we have \([H_{1},H_{3}]=0\), so

\[0=\alpha_{2}([H_{1},H_{3}])=x_{32}y_{12}+x_{12}y_{32}. \tag{15}\]

For \(i=2\), \(j=3\), equation (13) evaluated on \(\alpha_{1}\) gives

\[0=x_{23}\alpha_{1}(c_{3})+y_{23}\alpha_{1}(h_{3})=x_{32}\alpha_{1}(c_{2})+y_{32}\alpha_{1}(h_{2}). \tag{16}\]

As \(y_{32}\neq 0\), the linear system (15) and (16) in the variables \(x_{32}\) and \(y_{32}\) admits a non-trivial solution. Therefore

\[0=\left|\begin{array}{cc}\alpha_{1}(c_{2})&\alpha_{1}(h_{2})\\ y_{12}&x_{12}\end{array}\right|=x_{12}\alpha_{1}(c_{2})-y_{12}\alpha_{1}(h_{2}). \tag{17}\]

Using (13) again for \(i=1\), \(j=2\) and evaluating on \(\alpha_{1}\) we obtain

\[x_{12}\alpha_{1}(c_{2})+y_{12}\alpha_{1}(h_{2})=x_{21}\alpha_{1}(c_{1})+y_{21}\alpha_{1}(h_{1})=2y_{21}.\]

Together with (17), we deduce \(y_{12}\alpha_{1}(h_{2})=y_{21}\). Finally, (14) implies

\[\alpha_{1}(h_{2})(y_{21}^{2}\alpha_{2}(h_{1}))=\alpha_{1}(h_{2})(y_{12}^{2}\alpha_{1}(h_{2}))=(y_{12}\alpha_{1}(h_{2}))^{2}=y_{21}^{2},\]

hence \(\alpha_{1}(h_{2})\alpha_{2}(h_{1})=1\); since \(\alpha_{1}(h_{2}),\alpha_{2}(h_{1})\in\mathbb{Z}_{<0}\), this forces \(\alpha_{1}(h_{2})=\alpha_{2}(h_{1})=-1\). A symmetric argument gives \(\alpha_{3}(h_{2})\alpha_{2}(h_{3})=1\), and our claim follows. 

Using Remark 7.9, we obtain the following as an immediate corollary to Lemmas 10.2 and 10.3, and Proposition 8.1:

**Corollary 10.1**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a completely uncoupled indecomposable qKM algebra (with at least two simple roots). If \(\mathfrak{g}(\mathcal{A})\) is of finite growth, then its Dynkin diagram is one of the following types: [the list of diagram types was lost with a missing page of the source]._

**Remark 10.1**.: In the next section we will construct qKM algebras associated to each of the above diagrams, and see that they are of finite growth. 
### Question on general Dynkin diagrams

**Lemma 10.4**.: _Suppose that \(\mathfrak{g}(\mathcal{A})\) is a completely uncoupled qKM algebra whose Dynkin diagram is a triangle on \(\alpha_{1},\alpha_{2},\alpha_{3}\) (the rendered diagram was lost in extraction). If \(a_{12}a_{21}\neq 1\) and \(a_{23}a_{32}a_{31}a_{13}\neq 1\), then \(a_{12}a_{23}a_{31}=a_{21}a_{32}a_{13}\)._

The upshot of Lemma 10.4 is that a Dynkin diagram whose underlying graph is a triangle generically cannot correspond to a completely uncoupled qKM algebra. This section in general shows that completely uncoupled qKM algebras are very rigid. We do not prove this lemma here since we do not use it, but we state it in order to motivate the following question.

**Question.** Do the Dynkin diagrams listed in Corollary 10.1, along with _[a diagram that did not survive extraction]_, exhaust the possible Dynkin diagrams of completely uncoupled qKM algebras? 
## 11. _[Section title lost in extraction]_

### Type \(A(n)^{(1)}\)

Now let \(\mathcal{A}\) be a completely uncoupled Cartan datum with Dynkin diagram of type \(A(n)^{(1)}\). Denote the simple roots of \(\mathfrak{g}(\mathcal{A})\) by \(\alpha_{0},...,\alpha_{n}\); we will consider all indices mod \(n+1\). A direct computation (see [Si]) shows that we can rescale \(E_{i}\) such that \(y_{i+1,i}=-y_{i,i+1}=1\) and \(x_{i+1,i}=x_{i,i+1}=-1\). In particular \(x_{ij}=a_{ij}\) for any \(i,j\). Moreover, it is shown there that \(\sum_{i=0}^{n}h_{i}=0\).

We notice that \(\sum_{i=0}^{n}H_{i}\in\bigcap_{\alpha\in\Pi}\text{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\), and that any other linear combination of \(H_{0},\dots,H_{n}\) that lies in \(\bigcap_{\alpha\in\Pi}\text{Ann}_{\mathfrak{h}}\mathfrak{g}_{\alpha}\) must be proportional to \(\sum_{i=0}^{n}H_{i}\). We denote \(K:=\sum_{i=0}^{n}H_{i}\) and notice that \([K,K]=0\). Part (1) of Lemma 7.2 and the above values of \(x_{ij}\) and \(y_{ij}\) imply:

\[-c_{0}-h_{0}=-c_{n}+h_{n},\qquad -c_{0}+h_{0}=-c_{1}-h_{1},\]

so that

\[2c_{0}=c_{1}+h_{1}+c_{n}-h_{n}. \tag{18}\]

Finally, letting \(\alpha_{0},...,\alpha_{n},\beta_{1},...,\beta_{m}\) be a basis for \(\mathfrak{h}_{\overline{0}}^{*}\), we let \(\alpha_{0}^{*},...,\alpha_{n}^{*},\beta_{1}^{*},...,\beta_{m}^{*}\) be the dual basis for \(\mathfrak{h}_{\overline{0}}\).

**Theorem 11.2**.: _Let \(\mathfrak{g}(\mathcal{A})\) be a qKM algebra with Dynkin diagram of type \(A(n)^{(1)}\) for \(n\in\mathbb{Z}_{\geq 2}\). Then \(\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\) can be identified with a subquotient of \(\mathfrak{g}(\mathcal{A})\), where \(\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\) is the odd affinization of \(\mathfrak{psq}_{n}\) with respect to its non-degenerate form._

_Concretely, there exists a qKM algebra \(\mathfrak{g}(\mathcal{A}^{\prime})\) that can be identified with an ideal of \(\mathfrak{g}(\mathcal{A})\) such that \(\mathfrak{g}(\mathcal{A})=\mathfrak{g}(\mathcal{A}^{\prime})+\mathfrak{h}_{\overline{1}}\), and \(\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\simeq\mathfrak{g}(\mathcal{A}^{\prime})/\mathfrak{c}_{\overline{0}}^{\prime}\), where \(\mathfrak{c}_{\overline{0}}^{\prime}\) is the even part of the center of \(\mathfrak{g}(\mathcal{A}^{\prime})\)._

Proof.: We use the notation of [G] for a basis of \(\mathfrak{psq}_{n}\). Let \(\mathfrak{h}^{\prime}\) be the subalgebra of \(\mathfrak{h}\) given by \(\mathfrak{t}^{\prime}=\mathfrak{t}\) and \(\mathfrak{h}^{\prime}_{\overline{1}}=\mathbb{C}\langle K,H_{1},...,H_{n}\rangle\) (in the notation above). Let \(\mathcal{A}^{\prime}\) be the Cartan datum obtained from \(\mathcal{A}\) by replacing \(\mathfrak{h}\) by \(\mathfrak{h}^{\prime}\) (and restricting the \(\mathfrak{h}\)-modules \(\mathfrak{g}_{\alpha}\) to \(\mathfrak{h}^{\prime}\)). 
Clearly \(\mathfrak{g}(\mathcal{A}^{\prime})\) is a qKM algebra and can be embedded naturally as an ideal of \(\mathfrak{g}(\mathcal{A})\) satisfying \(\mathfrak{g}(\mathcal{A})=\mathfrak{g}(\mathcal{A}^{\prime})+\mathfrak{h}_{\overline{1}}\). Now define a map from \(\tilde{\mathfrak{g}}(\mathcal{A}^{\prime})\) onto \(\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\) as follows:

\[\begin{aligned}
e_{i}&\mapsto X_{E_{i,i+1},0}\otimes 1, &\qquad f_{i}&\mapsto X_{E_{i+1,i},0}\otimes 1, &&1\leq i\leq n,\\
E_{i}&\mapsto X_{0,E_{i,i+1}}\otimes 1, &\qquad F_{i}&\mapsto X_{0,E_{i+1,i}}\otimes 1, &&1\leq i\leq n,\\
e_{0}&\mapsto X_{E_{n,0},0}\otimes t, &\qquad f_{0}&\mapsto X_{E_{0,n},0}\otimes t^{-1},\\
E_{0}&\mapsto X_{0,E_{n,0}}\otimes t, &\qquad F_{0}&\mapsto X_{0,E_{0,n}}\otimes t^{-1},\\
\alpha_{0}^{*}&\mapsto t\partial_{t}, &\qquad \beta_{i}^{*}&\mapsto 0, &&K\mapsto K,\\
h_{i}&\mapsto X_{E_{i,i}-E_{i+1,i+1},0}\otimes 1, &\qquad H_{i}&\mapsto X_{0,E_{i,i}-E_{i+1,i+1}}\otimes 1, &&1\leq i\leq n.
\end{aligned}\]

It is straightforward yet tedious to verify that the above assignment determines a well defined homomorphism of Lie superalgebras, which surjects onto \(\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\). As \(\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\) has no non-trivial ideals intersecting its Cartan subalgebra trivially, this homomorphism factors through a map \(\mathfrak{g}(\mathcal{A}^{\prime})\to\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\). It is clear that the kernel of this map lies entirely in the even part of the center. 

It should be noted that the odd part of the Cartan subalgebra of \(\mathfrak{g}(\mathcal{A})\) cannot be much more complicated than that of \(\mathfrak{p}\hat{\mathfrak{s}}\mathfrak{q}_{n}\). Indeed let \(Y\in\mathfrak{h}_{\overline{1}}\). Because \(x_{ij}=a_{ij}\) and the submatrix of \(A\) obtained by removing the first row and the first column is non-degenerate, there is a unique linear combination \(\sum_{i=1}^{n}t_{i}H_{i}\) such that \([Y-\sum_{i=1}^{n}t_{i}H_{i},e_{j}]=0\) for \(1\leq j\leq n\). So we assume \([Y,e_{j}]=0\) for all \(1\leq j\leq n\). Let \(r\in\mathbb{C}\) be the scalar satisfying \([Y,E_{1}]=re_{1}\). Then \([Y,H_{1}]=rh_{1}\), so

\[ra_{12}e_{2}=[rh_{1},e_{2}]=[[Y,H_{1}],e_{2}]=x_{12}[Y,E_{2}]=a_{12}[Y,E_{2}],\]

implying \([Y,E_{2}]=re_{2}\). Repeating this argument, we obtain \([Y,E_{j}]=re_{j}\) for all \(1\leq j\leq n\). For \(j=0\), we obtain the following:

\[-re_{0}=ra_{10}e_{0}=[rh_{1},e_{0}]=[[H_{1},Y],e_{0}]=[H_{1},[Y,e_{0}]]+a_{10}[Y,E_{0}]=[H_{1},[Y,e_{0}]]-[Y,E_{0}],\]
\[-re_{0}=ra_{n0}e_{0}=[rh_{n},e_{0}]=[[H_{n},Y],e_{0}]=[H_{n},[Y,e_{0}]]+a_{n0}[Y,E_{0}]=[H_{n},[Y,e_{0}]]-[Y,E_{0}].\]

These equations imply that \([H_{1},[Y,e_{0}]]=[H_{n},[Y,e_{0}]]\); however, since \(y_{n,0}=-y_{1,0}\), this implies that \([Y,e_{0}]=0\). Thus we obtain further that \([Y,E_{0}]=re_{0}\). It is immediate that \([Y,Y]\) is central.

### Type \(A(1)^{(1)}\)

Let \(s\in\{\pm 1\}\) and \(x_{12},x_{21},y_{12},y_{21}\in\mathbb{C}\) satisfy \(\frac{y_{12}}{y_{21}}=s\sqrt{\frac{a_{12}}{a_{21}}}\) and \(x_{12}x_{21}=2+s\sqrt{a_{12}a_{21}}=2(1+s)\). We assume without loss of generality that \(x_{12}\neq 0\), and define a datum \(\mathcal{A}^{s}_{(2,2)}\). Let \(\mathfrak{h}\) be a \((3|2)\)-dimensional quasi-toral superalgebra, defined as follows. Let \(\{h_{1},h_{2},h_{3}\}\) be a basis for \(\mathfrak{t}\) and \(\{H_{1},H_{2}\}\) a basis for \(\mathfrak{h}_{\overline{1}}\). 
Let \(\alpha_{1},\alpha_{2},\alpha_{3}\) be a basis of \(\mathfrak{t}^{*}\), defined by

\[\big(\alpha_{j}(h_{i})\big)_{i,j}=\begin{pmatrix}2&-2&0\\ -2&2&1\\ 0&1&0\end{pmatrix}.\]

We also write \(\delta=\alpha_{1}+\alpha_{2}\). We let \(c_{1}=x_{12}y_{12}h_{3}\) and \(c_{2}=\frac{1}{x_{12}}(x_{21}c_{1}+y_{21}h_{1}-y_{12}h_{2})\). We define the relations in \(\mathfrak{h}\) by

\[[H_{i},H_{i}]=2c_{i},\qquad [H_{1},H_{2}]=x_{12}c_{2}+y_{12}h_{2}=x_{21}c_{1}+y_{21}h_{1}.\]

We set \(\Pi=\{\alpha_{1},\alpha_{2}\}\). It is clear that \(\alpha_{1}\) and \(\alpha_{2}\) are linearly independent in \(\mathfrak{t}^{*}\). We let \(\mathfrak{g}_{\alpha_{i}}\) be an irreducible \(\mathfrak{h}\)-module with even and odd basis vectors \(e_{i}\) and \(E_{i}\), the action of \(\mathfrak{h}\) being given by

\[H_{i}\cdot(e_{i}+E_{i})=2E_{i},\qquad H_{j}\cdot(e_{i}+E_{i})=x_{ji}E_{i}+y_{ji}e_{i}\quad(j\neq i).\]

Our definition of \(\mathfrak{h}\) ensures this is a well-defined action. Now we let \(\mathfrak{g}_{-\alpha_{i}}:=\mathfrak{g}_{\alpha_{i}}^{\vee}\) and let \(f_{i}:=e_{i}^{\vee}\) and \(F_{i}:=\sqrt{-1}\cdot E_{i}^{\vee}\) be a basis for it. Finally, we let \([-,-]_{\alpha_{i}}:\mathfrak{g}_{\alpha_{i}}\otimes\mathfrak{g}_{-\alpha_{i}}\rightarrow\mathfrak{h}\) be the \(\mathfrak{h}\)-module homomorphism defined by \([e_{i},F_{i}]_{\alpha_{i}}=H_{i}\). All this information defines a Cartan datum \(\mathcal{A}^{s}_{(2,2)}\), and it then follows that \(\mathfrak{g}(\mathcal{A}^{s}_{(2,2)})\) is a qKM algebra. We write \(\mathfrak{q}^{+}_{(2,2)}:=\mathfrak{g}(\mathcal{A}^{1}_{(2,2)})\) and \(\mathfrak{q}^{-}_{(2,2)}:=\mathfrak{g}(\mathcal{A}^{-1}_{(2,2)})\).

**Theorem 11.3**.: \(\mathfrak{q}^{s}_{(2,2)}\) _are of finite growth, their real root spaces are of dimension \((1|1)\), and their imaginary root spaces are of dimension \((2|2)\)._

Proof.: Let \(\mathfrak{g}\) denote either \(\mathfrak{q}^{+}_{(2,2)}\) or \(\mathfrak{q}^{-}_{(2,2)}\). Let \(\mathfrak{s}\) be the Kac-Moody Lie algebra corresponding to \(\mathfrak{g}\) as in Theorem 10.1. Then \(\mathfrak{s}\simeq\mathfrak{sl}(2)^{(1)}\) and is of finite growth. We can associate to \(\mathfrak{g}\) the same Weyl group \(W\) as that of \(\mathfrak{s}\), and every reflection of \(W\) can be lifted to an automorphism of \(\mathfrak{g}\). As \(\dim\mathfrak{g}_{\alpha}=(1|1)\) for any \(\alpha\in\Pi\), we have \(\dim\mathfrak{g}_{\beta}=(1|1)\) for any real root \(\beta\in\Delta\). Writing \(\delta=\alpha_{1}+\alpha_{2}\) as before, the last result can be stated as \(\dim\mathfrak{g}_{\alpha+k\delta}=(1|1)\) for any \(\alpha\in\Pi\), \(k\in\mathbb{Z}\). Finally, Proposition 8.1 implies, without loss of generality, that \(\alpha_{2}(c_{1})\neq 0\). We consider the \(\mathfrak{sq}(2)\) subalgebra of \(\mathfrak{g}\) corresponding to \(\alpha_{1}\). The representation theory of \(\mathfrak{sq}(2)\) implies that \(\dim\mathfrak{g}_{k\delta}\geq(2|2)\) for any \(k\in\mathbb{Z}\smallsetminus\{0\}\). We notice that \(\delta(c_{1})=\alpha_{2}(c_{1})\neq 0\). If \(\dim\mathfrak{g}_{k\delta}>(2|2)\), then there exists \(\theta\in\mathfrak{g}_{k\delta}\) such that \([\mathfrak{g}_{-\alpha_{1}},\theta]=0\). This implies \(0=[c_{1},\theta]=k\delta(c_{1})\theta\), so \(\delta(c_{1})=0\), a contradiction. We conclude that \(\dim\mathfrak{g}_{k\delta}=(2|2)\) for any \(k\in\mathbb{Z}\smallsetminus\{0\}\), completing the proof.
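Returning to the construction of \(\mathcal{A}^{s}_{(2,2)}\), here is a quick consistency check on its parameters (an illustrative verification, not part of the original argument): since here \(a_{12}=a_{21}=-2\), the defining conditions read

\[\frac{y_{12}}{y_{21}}=s\sqrt{\frac{a_{12}}{a_{21}}}=s,\qquad x_{12}x_{21}=2+s\sqrt{a_{12}a_{21}}=2+2s=\begin{cases}4,&s=+1,\\ 0,&s=-1,\end{cases}\]

so for \(s=-1\) the assumption \(x_{12}\neq 0\) forces \(x_{21}=0\), while for \(s=+1\) both \(x_{12}\) and \(x_{21}\) are nonzero.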
2309.11316
Dynamic Pricing of Applications in Cloud Marketplaces using Game Theory
The competitive nature of Cloud marketplaces, as a new concern in the delivery of services, makes pricing policies a crucial task for firms, so pricing strategies have recently attracted many researchers. Since game theory handles such competition well, this concern is addressed by designing a normal-form game between providers in the current research. A committee is considered in which providers register to improve their competition-based pricing policies. The machinery of game theory is applied to design dynamic pricing policies. The use of the committee makes the game a complete-information one, in which each player is aware of every other player's payoff function. The players enhance their pricing policies to maximize their profits. The contribution of this paper is the quantitative modeling of Cloud marketplaces in the form of a game that yields novel dynamic pricing strategies; the model is validated by proving the existence and the uniqueness of the Nash equilibrium of the game.
Safiye Ghasemi, Mohammad Reza Meybodi, Mehdi Dehghan Takht-Fooladi, Amir Masoud Rahmani
2023-09-20T13:41:45Z
http://arxiv.org/abs/2309.11316v1
# Dynamic Pricing of Applications in Cloud Marketplaces using Game Theory ###### Abstract The competitive nature of Cloud marketplaces, as a new concern in the delivery of services, makes pricing policies a crucial task for firms, so pricing strategies have recently attracted many researchers. Since game theory handles such competition well, this concern is addressed by designing a normal-form game between providers in the current research. A committee is considered in which providers register to improve their competition-based pricing policies. The machinery of game theory is applied to design dynamic pricing policies. The use of the committee makes the game a complete-information one, in which each player is aware of every other player's payoff function. The players enhance their pricing policies to maximize their profits. The contribution of this paper is the quantitative modeling of Cloud marketplaces in the form of a game that yields novel dynamic pricing strategies; the model is validated by proving the existence and the uniqueness of the Nash equilibrium of the game. Index Terms: Application, Cloud computing marketplace, competition-based pricing, game theory, Nash equilibrium. ## 1 Introduction Since 2007, Cloud computing has emerged as one of the most attractive technologies in the IT industry [1, 2, 3, 4]. An increasing number of companies are taking advantage of services provided by Cloud computing. The services are offered as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS), supplied by different providers. IaaS providers prepare computing and storage resources [1, 2] in the form of virtual machines (VMs). These computing resources are requested by PaaS/SaaS providers or industrial/academic organizations so that their applications can be run without the need to maintain the underlying infrastructure [2, 3]. Nowadays, many Cloud computing providers compete with each other to maximize their profit. The competition between SaaS providers has emerged as a new challenge in this area. SaaS providers offer software applications and their related services [2]. The competition between providers may cause the market and dynamic prices to change over time; therefore, the pricing strategies of providers must be economically efficient [2]. There is a substantial body of research focusing on pricing and strategic behavior in Internet markets and Cloud computing [1, 3, 4]. Different pricing approaches are studied in [1, 2, 3, 4]. A price-optimization approach in a free competitive market is investigated in [1, 2]; besides, the approach proposed in [3] maximizes users' profit by transferring their applications to other providers or continuing with the current provider. Furthermore, the work in [3] considered dynamic pricing mechanisms of providers with different levels of service; users select a proper provider based on parameters such as response time, security, and storage capacity. The communication of competitive providers and users in the form of a game model is presented in [3]; users tend to choose the services with the best quality (QoS), while several service providers cooperate with each other in an oligopoly market to attract more and more users and increase their profits. However, only a few works exist on competition between SaaS providers aimed at increasing their user base. It is to be noted that users usually prefer the provider with the lowest price. Thus, the offered prices have an essential impact on the number of users. 
Besides, pricing strategy has a significant influence on the profit-maximizing strategies of companies [1]. Pricing of applications is computed by considering properties such as price formation, structure of payment flow, price discrimination, and assessment base [3, 2, 4]. Most studies that have modeled the interactions of providers or users as a game have mainly focused on the optimal allocation of resources by Cloud providers, but none study competition-based pricing models, especially for applications. In the current paper, a competitive pricing model is studied, which addresses the pricing concerns of an application. We design a game in which each player is fully aware of the game and the other players, known as a complete-information game [1]. The players of the game are SaaS providers, who tend to attract Cloud users to maximize their final profit; each tries to compute a proper price for the current request. The unique features of our work are as follows. First, this work offers an analytical insight into Cloud markets and provides a quantitative model of these markets in the form of a game between providers. The goal of the players is to attract users by offering proper prices; based on the game model, the equilibrium is computed using Nash-equilibrium primitives. This paper tries to capture the strategic dynamics of providers as a strategic-form game together with a competition-based pricing model that considers the main parameters of pricing applications. Second, the work covers some novel considerations in the pricing policy of applications, which make pricing more flexible. The rest of this paper is organized as follows. In Section 2, we discuss the preliminaries of SaaS providers' interactions in Cloud computing and introduce game-theoretic concepts. The proposed distributed algorithm, which leads to the optimal solution of the application pricing game, is formulated in Section 3. The experimental results are reported in Section 4. Finally, the paper is summarized with some concluding remarks in Section 5. ## 2 Problem Statement and Notations Cloud computing provides a powerful paradigm for request processing in the delivery of applications through the provisioning of virtualized resources (Buyya, 2009). We suppose a Cloud computing marketplace which delivers applications to users; the overall architecture of the proposed Cloud market is depicted in Fig. 1. Services of Cloud computing are consumed over the Internet; they can be accessed by users either via web browsers, known as direct access, or via application programming interfaces (APIs), known as indirect access (Stanovska, 2009). A unit named _Cloud Committee_, which users can join and through whose API they can demand applications, is considered; it can be viewed as a central coordinator between users and providers. The coordinator receives users' requests and transfers each request to the registered providers. After receiving the request, providers offer an optimal price for provisioning the requested application; then, users are notified of the prices by the committee. Finally, users contact the SaaS provider whose service is the best based on its offered price and performance. As depicted in Fig. 1, requests can be processed directly or indirectly. If a user demands services via the web browser of a SaaS provider (directly), the SaaS provider offers a price to the user without any comparison with other available SaaS providers. 
Otherwise, via indirect access, providers who have registered in the committee perform dynamic comparative pricing. In this case, the request is sent to all registered SaaS providers, and afterwards the providers offer their prices based on a game-theoretic model; then, the best offer is chosen. ### 2.1 User requests As mentioned previously, a user demands applications from SaaS providers via the _Cloud Committee_. A request consists of parameters such as the application identification, the required configuration, and the time period of the request. The configuration is introduced in the form of VM parameters such as type, memory, and price. These properties comprise a VM model named VMM, defined as {_Size_, _Memory_, _Core_, _Storage_, _HostOS_, _HourCost_}. Per-unit prices are determined to charge users for using the resources according to the _Size_ of each virtual resource. The parameters _Memory_, _Core_, _Storage_ and _HostOS_ are used for finding the proper resources to host the demanded application; _HourCost_ is applied for computing the operational cost of resources in a provider. The requests are gathered in a request pool, in the form of a vector named _REQ_ = <_Req\({}_{1}\)_, _Req\({}_{2}\)_, ..., _Req\({}_{q}\)_>, by the _Cloud Committee_, where _Req\({}_{r}\)_ denotes request \(r\) in _REQ_ and is given by {_AppID\({}_{r}\)_, _W\({}_{r}\)_, \(\tau_{r}\), _Pay\({}_{r}\)_, _Prf\({}_{r}\)_}. _AppID\({}_{r}\)_ presents the requested application of _Req\({}_{r}\)_; _W\({}_{r}\)_ is the willingness of the user to pay, i.e., the maximum amount that the user is willing to pay, which serves as the budgetary constraint of the user for _Req\({}_{r}\)_; \(\tau_{r}\) represents the duration of time that the application _AppID\({}_{r}\)_ is required; _Pay\({}_{r}\)_ introduces the payment flow of the user, which may be a single payment or a regularly recurring payment; and _Prf\({}_{r}\)_ shows whether the user requires certain performance-level guarantees for a determined price; it is to be noted that such guarantees increase the price of the request. A more detailed discussion of the parameters is presented in Section 2.3. 
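To make the request model concrete, here is a minimal sketch that encodes the VMM tuple and a request _Req\({}_{r}\)_ as Python data classes. The class and field names are illustrative choices of ours, not identifiers defined in the paper.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VMM:
    """Virtual machine model {Size, Memory, Core, Storage, HostOS, HourCost}."""
    size: str         # VM type/size class
    memory: int       # memory, e.g. in GB
    core: int         # number of cores
    storage: int      # storage, e.g. in GB
    host_os: str      # host operating system
    hour_cost: float  # HourCost: per-hour operational cost of this VM

@dataclass
class Request:
    """User request Req_r = {AppID_r, W_r, tau_r, Pay_r, Prf_r}."""
    app_id: str            # AppID_r: identifier of the requested application
    willingness: float     # W_r: maximum price the user is willing to pay
    tau: int               # tau_r: duration the application is required (hours)
    payment: str           # Pay_r: "single" or "recurring"
    perf_guarantee: bool   # Prf_r: performance-level guarantees required?
```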
**Table 1** System parameters \begin{tabular}{|l l|} \hline Notation & Declaration \\ \hline Req\({}_{r}\) & Request \(r\) of the Cloud market \\ q & Number of requests sent to providers of the _Cloud Committee_ \\ VMM & The considered virtual machine model \\ \(\tau_{r}\) & The duration of time the application is needed in _Req\({}_{r}\)_ \\ R\({}_{i}\) & Number of VMs that provider \(i\) bought from IaaS providers \\ L\({}_{i}\) & Number of applications that provider \(i\) bought from software developers \\ N & Number of SaaS providers, i.e. players of the game DPG \\ \(\boldsymbol{\alpha}_{i}\) & Per-unit benefits of the virtual resources of provider \(i\) \\ \(\boldsymbol{\beta}_{i}\) & Per-unit benefits of the applications of provider \(i\) \\ S & Possible strategies of DPG \\ S\({}_{i}\) & Possible strategies of SaaS provider \(i\) (player \(i\)) \\ s & Strategy profile of the players in DPG \\ s\({}_{i}\) & Selected strategy of player \(i\) \\ \(\mu_{ij}\) & Number of services of application \(j\) in provider \(i\) \\ P\({}_{i}\) & Offered price of SaaS provider \(i\) (player \(i\)) for _Req\({}_{r}\)_ \\ c\({}_{ij}\) & Infrastructural cost that provider \(i\) pays when providing resources for _App\({}_{ij}\)_ \\ C\({}_{i}\) & Cost of providing _Req\({}_{r}\)_ for SaaS provider \(i\) (player \(i\)) \\ \(\theta_{ij}\) & Initial price of application \(j\) in provider \(i\) \\ \(\omega_{i}\) & The pricing parameter predefined by provider \(i\) \\ \(u_{i}(s)\) & Payoff of player \(i\) under strategy profile \(s\) \\ \hline \end{tabular} The notations adopted in this research are summarized in Table 1. ### 2.2 Market state The received requests are delivered simultaneously to the registered SaaS providers by the _Cloud Committee_. Suppose SaaS provider \(i\) has bought \(R_{i}\) VMs of different types; for each VM \(k\) offered by provider \(i\), a per-unit benefit \(\alpha_{ik}\) is defined (Truong-Hu, 2014), collected in \(\boldsymbol{\alpha}_{i}=<\alpha_{i1},\alpha_{i2},\ldots,\alpha_{iR_{i}}>\). Furthermore, provider \(i\) owns \(L_{i}\) instances of applications; each application \(j\) has an individual per-unit benefit for the provider, \(\beta_{ij}\), in \(\boldsymbol{\beta}_{i}=<\beta_{i,1},\beta_{i,2},\ldots,\beta_{i,L_{i}}>\). The applications can be multi-tenant; a single instance of a multi-tenant application serves multiple users. Although multi-tenant applications are more expensive, providers try to increase the number of such applications, as multi-tenancy can be economical; they yield greater benefit for the providers since software development and maintenance costs are shared. The list of application instances owned by provider \(i\) is _App\({}_{i}\)_ = <_App\({}_{i1}\)_, _App\({}_{i2}\)_, ..., _App\({}_{iL_{i}}\)_>; application _App\({}_{ij}\)_ = {_AppID\({}_{ij}\)_, \(\mu_{ij}\), **Srv**\({}_{ij}\), _MT\({}_{ij}\)_, \(\theta_{ij}\)}, where _AppID\({}_{ij}\)_ presents the identification of application \(j\) in provider \(i\); this application is assumed to consist of \(\mu_{ij}\) services; **Srv**\({}_{ij}\) is the list of services of the application, in the form <_VMM\({}_{ij1}\)_, _VMM\({}_{ij2}\)_, ..., _VMM\({}_{ij\mu_{ij}}\)_>. Each service demands an individual VM type. It is to be noted that \(\mu_{ij}\) and **Srv**\({}_{ij}\) depend on the provider that hosts the application; therefore, they are determined by each provider independently. _MT\({}_{ij}\)_ denotes the number of users able to own the application simultaneously; note that _MT\({}_{ij}\)_ > 1 for multi-tenant applications. \(\theta_{ij}\) is the initial price of _App\({}_{ij}\)_, which is determined by its developer. 
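Continuing the illustrative sketch above, a provider-side application record _App\({}_{ij}\)_ = {_AppID\({}_{ij}\)_, \(\mu_{ij}\), **Srv**\({}_{ij}\), _MT\({}_{ij}\)_, \(\theta_{ij}\)} could be modeled as follows; again, all names are hypothetical.

```python
@dataclass
class App:
    """Application App_ij = {AppID_ij, mu_ij, Srv_ij, MT_ij, theta_ij}."""
    app_id: str          # AppID_ij
    services: List[VMM]  # Srv_ij: one VM model per service
    tenants: int         # MT_ij: simultaneous users (> 1 means multi-tenant)
    theta: float         # theta_ij: initial price set by the developer

    @property
    def mu(self) -> int:
        """mu_ij: the number of services of the application."""
        return len(self.services)
```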
Suppose _Req\({}_{r}\)_ demands _AppID\({}_{ij}\)_; the requested application is _App\({}_{ij}\)_. The cost that SaaS provider \(i\) has to pay the IaaS provider for hosting _App\({}_{ij}\)_ is computed as \[c_{ij}=\tau_{r}\times\sum_{k=1}^{\mu_{ij}}VMM_{ijk}.\,HourCost. \tag{1}\] After receiving a request, the provider computes the cost of processing the request. In the competition between SaaS providers, one may win due to its offered price while the others lose. ### 2.3 Pricing models for the applications Software products and requests have different properties which affect the pricing strategies; these properties are extracted (Narahari, 2005; Lehmann, 2009; Mathew, 2010) as follows: * **Initial cost**: the amount of money that the service provider spends for buying the software; this factor consists of the costs of the components of a SaaS-based service. 
In this research, based on the values of these factors, different levels of services are determined; the levels affect the pricing strategy introduced in Section3. For detailed parameters that lead to dynamic pricing, we refer reader to (Narahari, 2005; Lehmann, 2009; Mathew, 2010). Different values of these parameters comprise different states, which are introduced as service levels; Table2 depicts some of the states. Users can view the details of each level while requesting in the web page of _Cloud Committee_. The values of each parameter are assigned as follows. The value of 1, for Utilization parameter, denotes that the resource appropriation of current application is the same as its requirements; false value for Multi-tenancy parameter indicates that the application is not a multi-tenant one, and true value shows a multi-tenant application is available. As mentioned previously, a multi-tenant application has a higher initial price, but as the deployment and maintenance costs are shared, users have a lower final price; Performance parameter is true, when the provider guarantees a certain performance level and determines a penalty of violation for the service; otherwise, it is false. It is to be noted that when Utilization is less than 1, Performance cannot be guaranteed and it is false. Finally, values of Payment flow parameter can be single or recurring. Services have different parameters which provide different levels of service (see Table2). Users can determine which level to access while requesting; for instance, if a user demands a typical application (not multi-tenant) with performance guarantees and single payment, then the level of service is one. Service levels are used in pricing strategies of providers introduced in Section3.3. The initial cost and resource appropriation parameters directly affect the offered price of requests; the remaining parameters affect the price by determining different service levels. Each level individually influences the offered prices. In addition to these factors, a service provider must consider the offered prices of other service providers as well. Competition-based pricing, which sets the prices based on the other competitors' prices, is a potential dynamic pricing model. A dynamic pricing model of applications is proposed within this research by the aim of the game theory. ## 3 Dynamic Pricing of Application Requests in a Competition-based Cloud Marketplace This section firstly, studies the formulation of our proposed approach for SaaS providers' pricing model with the aim of optimizing their profit. Then, in order to establish a competition-based pricing model the setup of a game between SaaS providers is discussed. ### Proposed Architecture The overall architecture of the considered Cloud computing market (Fig.1) is depicted in Fig.2 with several SaaS providers. _Request Interface_, which is placed on top of the architecture, is an interface unit for received requests, _REQ_; it maps request \(r\) into the introduced form in Section2.1 as _Req\({}_{r}\)_. The next unit, _Request Handler_, consists of two modules: _Provisioning_ module and _Pricing_ module. 
These modules play the main role in the request-handling processes of SaaS providers: the _Provisioning_ module allocates the proper available VMs, stored in the _Virtual Resources_ unit of the provider, and the _Pricing_ module determines a dynamic price for the current request, _Req\({}_{r}\)_.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline Service level & Utilization & Multi-tenancy & Performance & Payment flow & \(\omega_{i}\) \\ \hline Level 1 & \(\geq\) 1 & False & True & Single & [0.1,0.15,...,0.4] \\ \hline Level 2 & \(\geq\) 1 & False & True & Recurring & [0.04,0.05,...,0.09] \\ \hline Level 3 & \(<\)1 & True & False & Recurring & [0.006,0.009,..., 0.03] \\ \hline Level 4 & \(<\)1 & True & False & Single & [0.001,0.002,..., 0.005] \\ \hline \end{tabular} \end{table} Table 2: The information of service levels and the corresponding parameters of service

The API sends _Req\({}_{r}\)_ to the SaaS providers who are registered in the considered committee. The registered SaaS providers compete for serving _Req\({}_{r}\)_: they perform the allocation of resources and carry out the pricing process, and finally each SaaS provider \(i\) sends its offer to the _Market Manager_ in the form of \(A_{i}\). The _Market Manager_ receives the offers of the SaaS providers and stores them in a vector named \(\mathbf{s}\). This vector is sent to the user of _Req\({}_{r}\)_ so that the most proper offer can be chosen. The _Market Manager_ then resends the overall information of the offers to the providers to announce the winner of the competition; this information is sent as a vector named _Rep_ = {_winner_id_, \(\mathbf{s}\)}.

Figure 2: The target Cloud computing marketplace model with the considered structure of a SaaS provider

As mentioned previously, application \(j\) of provider \(i\), _App\({}_{ij}\)_, consists of \(\mu_{ij}\) variant services. The requirements of service \(m\) are specified in terms of the parameters of a VM, _VMM\({}_{ijm}\)_ in **Srv**\({}_{ij}\). In our model, the goal of the SaaS providers is to find the most proper price, while satisfying the requirements of the application, so as to increase their profit. In the next section, the formulation by which SaaS providers achieve this goal is studied. ### 3.2 Formulation of providers' strategies optimization It is supposed that SaaS providers face an optimization problem of maximizing their profits while satisfying users. The profit of a SaaS provider is the difference between the revenue earned from processing the requests and the cost paid for providing applications and deploying them on its virtual resources. The optimization problem of SaaS provider \(i\) is formulated as follows:

\[\begin{array}{ll}\max&u_{i}=\max(P_{i}-C_{i})=\max P_{i}-\min C_{i}\\ \text{s.t.}&P_{i}\leq W_{r},\\ &P_{i}\leq c_{ij}+\theta_{ij},\\ &P_{i}>0,\;C_{i}>0,\end{array} \tag{2}\]

where \(u_{i}\) is the profit of SaaS provider \(i\); \(P_{i}\) and \(C_{i}\) are the revenue and the cost of SaaS provider \(i\) while provisioning _Req\({}_{r}\)_, respectively; \(\theta_{ij}\) denotes the initial cost that provider \(i\) has paid for owning the requested application, i.e., the development cost of application \(j\); and \(c_{ij}\) is the resource appropriation cost of the requested application \(j\) in provider \(i\) (Eq. 1). The constraints of Eq. 2 are considered to guarantee the following features. The first constraint is that the offered price (\(P_{i}\)) should not exceed the user's willingness to pay. 
It is supposed that the user's willingness to pay cannot exceed the sum of the initial cost of application \(j\) in provider \(i\) (\(\theta_{ij}\)) and its deployment cost (\(c_{ij}\)) (Narahari, 2005), i.e., \(W_{r}\leq c_{ij}+\theta_{ij}\). This assumption is applied by the committee to prevent users from declaring unrealistic values of \(W_{r}\). The second constraint shows that the offered price does not exceed the sum of \(c_{ij}\) and \(\theta_{ij}\); otherwise, users would prefer to buy the required application of _Req\({}_{r}\)_ and its infrastructural requirements individually. The first two constraints yield \(P_{i}\leq W_{r}\leq c_{ij}+\theta_{ij}\). The last constraint denotes that both the revenue and the cost have positive values. A recommended solution for reaching the goal is to maximize \(P_{i}\) while minimizing \(C_{i}\), in such a way that the constraints are maintained; the ultimate value of \(P_{i}\) satisfying the constraints is determined by some parameters, which will be discussed later. ### 3.3 A game-theoretic setup The interaction of SaaS providers can be modeled in the form of a game. Hereafter, we formulate the game for the application pricing problem in the considered Cloud computing marketplace. Definition 1: Let DPG = (\(N\), \(S\), \(u\)) be a non-cooperative finite dynamic pricing game with complete information. \(N\) is a finite set of \(n\) SaaS providers in the Cloud marketplace indexed by \(i\); \(S=S_{1}\times\cdots\times S_{n}\), where \(S_{i}\) is a finite set of strategies of provider \(i\), representing its pricing policies; \(u=(u_{1},\ldots,u_{n})\), where \(u_{i}\) is the payoff function of provider \(i\). Let \(s=(s_{1},\ldots,s_{n})\in S\) be the strategy profile, where \(s_{i}\in S_{i}\) is the strategy of player \(i\); \(s_{i}\) is chosen so as to maximize \(u_{i}(s)\). The players of DPG (the dynamic pricing game) are the SaaS providers of Cloud computing registered in the _Cloud Committee_. Although the number of SaaS providers grows rapidly, there is a finite number of providers in the Cloud environment (Hurwitz, 2010); thus, we have a finite set of players, which is a necessity in a finite game. Users can easily find the latest list of SaaS providers that offer software solutions in their area of interest. SaaS providers who register in the _Cloud Committee_ share a common database, which makes DPG a complete-information game. A SaaS provider can discriminate prices (Narahari, 2005; Lehmann, 2009) based on the per-unit benefits of each application and VM; price discrimination offers the same application to different users at different prices, based on the factors introduced in Section 2.3. We use a competition-based pricing model, realized by designing the considered non-cooperative game, which also benefits from price discrimination. \(S_{i}\) denotes the strategy set of provider \(i\), with strategies of the form \[S_{i}=\sqrt{\omega_{i}}\big(1+\gamma\sqrt{\omega_{i}}\big)\big(\theta_{ij}+c_{ij}\big), \tag{3}\] where \(\omega_{i}\) is a parameter determined by provider \(i\), and \(\gamma\) is a constant less than 1 determined by the committee. Let \(\rho_{r}\) denote the Utilization parameter for _Req\({}_{r}\)_, which demands _AppID\({}_{ij}\)_, defined as \[\rho_{r}=\frac{R_{r}}{R_{i}}, \tag{4}\] where \(R_{r}\) is the required infrastructure of the requested application in _Req\({}_{r}\)_ and \(R_{i}\) denotes the resources provided for the request by provider \(i\). 
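Under the naming assumptions of the earlier sketches, the provider-side computations of Eqs. (1), (3) and (4) can be written as:

```python
import math

def deployment_cost(req: Request, app: App) -> float:
    """Eq. (1): c_ij = tau_r * sum of the hourly costs of the app's service VMs."""
    return req.tau * sum(vm.hour_cost for vm in app.services)

def utilization(required_vms: int, provided_vms: int) -> float:
    """Eq. (4): rho_r = R_r / R_i."""
    return required_vms / provided_vms

def strategy_price(omega: float, gamma: float, theta: float, c: float) -> float:
    """Eq. (3): S_i = sqrt(omega) * (1 + gamma*sqrt(omega)) * (theta_ij + c_ij)."""
    root = math.sqrt(omega)
    return root * (1 + gamma * root) * (theta + c)
```

For example, with \(\omega_{i}=0.1\), \(\gamma=0.5\) and \(\theta_{ij}+c_{ij}=100\), the offered price would be \(\sqrt{0.1}\,(1+0.5\sqrt{0.1})\cdot 100\approx 36.6\).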
Values of \(\rho_{r}\) of at least 1 guarantee a certain performance level for a determined price; for values of \(\rho_{r}\) less than 1 (i.e., \(R_{r}<R_{i}\)), the offered price is discounted. The other factors that influence \(\omega_{i}\) are the multi-tenancy state of the requested application, the user's interest in performance guarantees, and the payment flow, which are discussed in Section 2.3. Together with utilization, these factors determine the different service levels (see Table 2); \(\omega_{i}\) has a different range of values for each service level. Lower service levels have higher prices; i.e., for initial levels of service, high values of \(\omega_{i}\) (e.g., [0.1, 0.15, ..., 0.4]) are considered in order not to decrease the price of the provisioned request; these levels have smaller discounted prices, and vice versa. The supposed ranges of \(\omega_{i}\) are discussed in Section 5.1.

Service provisioning is a cost-prone process; however, there is a trade-off between revenues and costs. Formally, the payoff function, which captures the profit of a provider, involves both the properties of the user's demand in \(\textit{Req}_{r}\) and the corresponding properties of the provided service.

Definition 2: For the strategy profile \(s\in S\), \(u_{i}:S\rightarrow\mathbb{R}\) is the payoff function, which assigns numerical values to each member of the strategy set \(S\). For all \(x,y\in S\), strategy \(x\) is preferred over strategy \(y\) iff \(u_{i}(x)>u_{i}(y)\).

The payoff function originates from the software pricing principles discussed in Section 2.1 (Narahari, 2005; Lehmann, 2009; Mathew, 2010). The payoff function of player \(i\), for \(\textit{Req}_{r}\), is

\[u_{i}(\boldsymbol{s})=D_{i}(s_{i}-C_{i}),\tag{5}\]

where \(s_{i}\) and \(C_{i}\) are the offered price (Eq. 3) and the cost of provider \(i\) for providing the request, respectively. \(\boldsymbol{D}=\{D_{1},D_{2},\ldots,D_{n}\}\) denotes the demand vector of the SaaS providers; \(D_{i}=1\) if and only if \(\textit{argmin}(\boldsymbol{s})=i\), i.e., \(i\) is the index of the minimum of profile \(\boldsymbol{s}\) and the strategy of player \(i\) has the least value in profile \(\boldsymbol{s}\); in that case, player \(i\) wins the game, otherwise \(D_{i}\) is zero. \(u_{i}(\boldsymbol{s})\) not only depends on the strategy of provider \(i\) but also on all the others', i.e., \(\boldsymbol{s}=(s_{1},s_{2},\ldots,s_{n})\). Users usually choose the least price for a service with a satisfactory performance; therefore, the payoff is zero except for the provider with the least price. \(C_{i}\) is computed as

\[C_{i}=\omega_{i}\big{(}\alpha_{j}c_{ij}+\beta_{ij}\theta_{ij}\big{)}.\tag{6}\]

\(\omega_{i}\) is determined by each provider individually; it is applied to ensure the positivity of \(u_{i}\). \(\beta_{ij}\) and \(\alpha_{j}\) are the per-unit benefits of application \(j\) for provider \(i\) and of the virtual resources that host \(\textit{Req}_{r}\), respectively; \(\alpha_{j}=\sum_{k=1}^{n_{ij}}\alpha_{jk}\), where \(\alpha_{jk}\) denotes the per-unit benefit of VM \(k\) for provider \(i\).
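As a concrete illustration of Eqs. 3 and 6, the following Python sketch computes a provider's price offer and provisioning cost for a single request. This is a minimal model only; the function names and numeric values are ours and not part of the paper's implementation.

```
import math

def price_offer(omega, gamma, theta, c):
    # Strategy / offered price of Eq. (3): sqrt(w)(1 + g*sqrt(w))(theta + c)
    return math.sqrt(omega) * (1 + gamma * math.sqrt(omega)) * (theta + c)

def provisioning_cost(omega, vm_benefits, beta, theta, c):
    # Cost of Eq. (6): C_i = w * (alpha_j * c + beta * theta),
    # where alpha_j is the sum of the per-VM benefits alpha_jk.
    alpha_j = sum(vm_benefits)
    return omega * (alpha_j * c + beta * theta)

# Illustrative values (gamma, theta, c follow Table 5; the rest are made up):
gamma, theta, c = 0.95, 65.0, 295.0
omega = 0.1                       # a service-level-1 value from Table 2
offer = price_offer(omega, gamma, theta, c)
cost = provisioning_cost(omega, [0.05, 0.05], 0.2, theta, c)
print(f"offer = {offer:.2f}$, cost = {cost:.2f}$, margin = {offer - cost:.2f}$")
```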
Finally, the payoff function presented in Eq. 5 is simplified as

\[u_{i}=\ D_{i}\left(\sqrt{\omega_{i}}\big{(}1+\gamma\sqrt{\omega_{i}}\big{)}\big{(}\theta_{ij}+c_{ij}\big{)}-\omega_{i}\big{(}\big{(}\sum_{k=1}^{n_{ij}}\alpha_{jk}\big{)}c_{ij}+\beta_{ij}\theta_{ij}\big{)}\right).\tag{7}\]

The strategy profiles must converge to a desired profile, which is known as the solution concept of the game. This solution is named the Nash equilibrium in a normal form game; the next section investigates this equilibrium.

Algorithm 1 presents the algorithm for DPG. The output is the list of providers' pricing offers in Nash equilibrium. Firstly, the provider receives a request from the _Request Dispatcher_ of the _Cloud Committee_ in line 1. Then, the game starts and proceeds until the equilibrium is achieved. In lines 3 and 4, the provider selects virtual resources for deploying the requested application. Each provider specifies a value for \(\omega\) in line 5 (Table 2); the price offer is computed in line 6 (Eq. 3). The offered values of all providers are saved in _BidList_, which is a distributed memory shared between the providers and the _Market Manager_ of the _Cloud Committee_. In line 7, the winner of the game is found. The payoffs are computed through Eq. 7 in lines 8-11. Finally, achieving the Nash equilibrium is checked in lines 12-13; the process checks whether any of the providers can get more benefit by changing its current offer, while the other providers do not change their strategies.

```
Input:  Request of applications, Req_r; information of SaaS providers: list of VMs
        and list of applications, their benefits α, β; γ; D = 0;
Output: Optimal list of prices, BidList
1   Req_r = CloudCommittee.Dispatch();
2   do
3       foreach Service in the associated AppID of Req_r do
4           SelectedVMList[Service] = select a proper VM for Service;
5       ω = SelectOmega(Req_r);                                          // Table 2
6       BidList[CurrentPrv] = S_CurrentPrv(Req_r, θ_AppID, SelectedVMList, ω, γ);
7       Winner = MarketMgr(BidList);
8       if CurrentPrv matches Winner then
9           Cost = C_CurrentPrv(Req_r, SelectedVMList, α_CurrentPrv, β_CurrentPrv);
10          D_CurrentPrv = 1;
11          u_CurrentPrv = BidList[CurrentPrv] - Cost;
12      if MarketMgr() matches NE then
13          return BidList;
14  while (1)
```

The complexity of this algorithm is \(n\times|S_{i}|\), where \(n\) is the number of available SaaS providers and \(|S_{i}|\) is the size of the strategy set of a provider. The next section investigates the properties of the Nash equilibrium for the game.

## 4 Market Equilibrium

Cloud computing is a complex and heterogeneous distributed environment, in which management of the interactions between entities is a challenging task and needs automated and integrated intelligent strategies. The states of SaaS providers in the Cloud computing environment are unpredictable; therefore, predicting the behavior of providers accurately would be a costly task. For this reason, we apply game theory concepts to simplify the problem of dynamic pricing of applications in the Cloud computing market; in this section, the properties, existence, and uniqueness of the solution concept of DPG are discussed.
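To make the loop of Algorithm 1 and its termination check concrete, the sketch below models a best-response iteration over a discrete \(\omega\) grid in Python. It is a simplified stand-in, not the paper's CloudSim implementation; the tie-breaking rule and parameter values are our assumptions.

```
import math

GAMMA = 0.95
OMEGA_GRID = [0.1, 0.15, 0.2, 0.25, 0.3, 0.35, 0.4]   # service level 1, Table 2

def offer(omega, theta, c):                            # Eq. (3)
    return math.sqrt(omega) * (1 + GAMMA * math.sqrt(omega)) * (theta + c)

def payoff(i, omegas, providers):                      # Eq. (7)
    bids = {j: offer(omegas[j], providers[j][0], providers[j][1])
            for j in providers}
    if min(bids, key=bids.get) != i:                   # D_i = 0: i loses the game
        return 0.0
    theta, c, alpha, beta = providers[i]
    return bids[i] - omegas[i] * (alpha * c + beta * theta)

def best_response(i, omegas, providers):
    # Best omega for provider i while the other offers stay fixed.
    return max(OMEGA_GRID,
               key=lambda w: payoff(i, {**omegas, i: w}, providers))

# providers[i] = (theta_ij, c_ij, alpha_j, beta_ij); values are illustrative.
providers = {1: (65.0, 295.0, 0.10, 0.40), 2: (65.0, 295.0, 0.30, 0.25)}
omegas = {i: OMEGA_GRID[0] for i in providers}
for _ in range(100):                                   # bounded run of the loop
    new = {i: best_response(i, omegas, providers) for i in providers}
    if new == omegas:                                  # Nash: no profitable deviation
        break
    omegas = new
print("equilibrium omegas:", omegas)
```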
### Nash equilibrium conditions for the game

Unfortunately, the problem of finding a Nash equilibrium of a general-sum game with \(n\) players cannot be formulated as a linear program (Shoham, 2008); thus, we cannot state the problem as an optimization problem as presented in Eq. 2. In DPG, providers determine pricing strategies which satisfy them with their expected payoff, known as a Nash equilibrium (Fudenberg, 1996). So, the Nash equilibrium is an optimality criterion for DPG, at which none of the SaaS providers can get more benefit by unilaterally changing its selected strategy; at a Nash equilibrium, the assumption is that the other providers do not change their strategies. As previously mentioned, the strategy of the players, \(s_{i}\), is a linear function of parameters related to the request and the application, e.g., the initial price of the application, the resource appropriation, and their benefit list. Each SaaS provider chooses the value of \(\omega_{i}\) from a predefined range of finite values determined based on the level of the provided service (Table 2). As presented in Algorithm 1, they continue choosing these values until the equilibrium condition is satisfied. Hereafter, the existence of at least one equilibrium and its uniqueness will be studied.

### Nash equilibrium existence and uniqueness

The termination condition of Algorithm 1 in Section 3.3 is to achieve a Nash equilibrium; in Theorem 1 this is proven and discussed.

**Theorem 1** There is at least one Nash equilibrium for DPG.

**Proof** Shoham (Shoham, 2008) has proven that every game with a finite number of players and a finite number of strategies has at least one Nash equilibrium. DPG has a finite number of players, which are the SaaS providers in the Cloud environment (Buyya, 2009). Besides, both the strategy profile and the payoff function of DPG are finite, as their parameters have finite values. Therefore, Shoham's theorem verifies the existence of a Nash equilibrium in DPG.

If the values of these parameters were chosen from a continuous value set, then catching the Nash equilibrium in Algorithm 1 (line 12) would have a complexity of \(n^{n}\); therefore, some other intelligent strategies would be needed. Finally, in order to guarantee the termination of the game, the uniqueness of the Nash equilibrium is discussed as well.

**Theorem 2** DPG has a unique Nash equilibrium.

**Proof** Based on Theorem 1 and the well-known Weierstrass theorem (De Branges, 1959), \(u_{i}\) is a closed function as it is a finite function (Rudin, 1964). The Weierstrass theorem guarantees that every function defined on a closed interval can be uniformly approximated by a polynomial function; this polynomial function can be assumed to be a linear function. \(u(\mathbf{s})\) consists of several polynomial terms, which are linear. The concavity of \(u(\mathbf{s})\) can be proved by studying its linear terms: as \(ax+b\) can be supposed to be a concave function, \(s_{i}\) is concave as well; on the other hand, \(C_{i}\) is affine, and hence also concave. Consequently, \(u_{i}\), which is \(s_{i}-C_{i}\), is a concave function on the convex set of \(\omega_{i}\) values. It is to be noted that \(u(\mathbf{s})\) is a second-order differentiable and concave function of its parameters (Chen, 2011), which guarantees the convergence of DPG. Based on the concavity of \(u(\mathbf{s})\), an equilibrium point \(\mathbf{s}^{o}\) of a game with a concave payoff function can be characterized as follows.
\[u_{i}(\mathbf{s}^{o})=\max_{y_{i}}\{u_{i}(s^{o}_{1},\cdots,y_{i},\cdots,s^{o}_{n})\mid(s^{o}_{1},\cdots,y_{i},\cdots,s^{o}_{n})\in S\}\quad(i=1,\cdots,n)\tag{8}\]

At the point \(\mathbf{s}^{o}\), every provider stays in its best state and never changes its strategy while the other strategies are unchanged. After considering the fact that DPG is a concave game with \(n\) players, the uniqueness of the Nash equilibrium is proved by using standard techniques based on (Rosen, 1965).

Based on Theorems 1 and 2, DPG has a unique Nash equilibrium, which is known as the solution concept of the game. Thus, the game finds a solution for providers to reach the best available profit; in the next section, this solution is discussed for a duopoly and the strategy of the players at the Nash equilibrium is presented in closed form.

### Closed-form expression of the pricing strategy

In this section, the Nash equilibrium of DPG in a duopoly is studied; the proof for a duopoly can be generalized to a scenario having more than two SaaS providers. The Nash equilibrium price can be obtained through the best response function of each player in a non-cooperative game (Chen, 2011); i.e., \(\mathbf{s}^{*}\) is considered a Nash equilibrium if \(s^{*}_{i}\) is the best response of provider \(i\):

\[u_{1}(\mathbf{s}^{*})=u_{1}(s^{*}_{1},s^{*}_{2})\geq u_{1}(s_{1},s^{*}_{2}),\forall s_{1}\in S_{1}\]
\[u_{2}(\mathbf{s}^{*})=u_{2}(s^{*}_{1},s^{*}_{2})\geq u_{2}(s^{*}_{1},s_{2}),\forall s_{2}\in S_{2}\tag{9}\]

The optimal \(\mathbf{s}\) corresponding to the maximal \(u(\mathbf{s})\), which is the best response of provider \(i\), is computed by differentiating \(u_{i}(\mathbf{s})\) with respect to \(\omega_{i}\) (the free parameter of \(s_{i}\)) and setting the derivative to zero:

\[\frac{\partial u_{i}}{\partial\omega_{i}}=D_{i}\left(\frac{1}{2\sqrt{\omega_{i}}}+\gamma\right)\big{(}\theta_{ij}+c_{ij}\big{)}-D_{i}\big{(}(\sum_{k=1}^{\mu_{ij}}\alpha_{jk})c_{ij}+\beta_{ij}\theta_{ij}\big{)}.\tag{10}\]

Jointly solving the expressions \(\frac{\partial u_{i}}{\partial\omega_{i}}=0\), the optimal pricing policy of provider \(i\) in a duopoly can be obtained as a closed-form expression. The parameters of \(s_{i}\) are \(\gamma\), \(\theta_{ij}\), \(c_{ij}\), and \(\omega_{i}\): \(\gamma\) is a constant coefficient determined by the _Cloud Committee_ marketplace, \(\theta_{ij}\) is a value defined by the developer of the application, \(c_{ij}\) is computed based on the resource appropriations, and \(\omega_{i}\), which is determined by provider \(i\), equals the following value at the equilibrium point:

\[{\omega_{i}}^{*}=\left(\frac{\theta_{ij}+c_{ij}}{2\Big{(}\big{(}\sum_{k=1}^{\mu_{ij}}\alpha_{jk}\big{)}c_{ij}+\beta_{ij}\theta_{ij}-\gamma\big{(}\theta_{ij}+c_{ij}\big{)}\Big{)}}\right)^{2}.\tag{12}\]

The best response of each player in the considered duopoly is \(\mathbf{s}=\big{(}\sqrt{{\omega_{1}}^{*}}\big{(}1+\gamma\sqrt{{\omega_{1}}^{*}}\big{)}\big{(}\theta_{1j}+c_{1j}\big{)},\sqrt{{\omega_{2}}^{*}}\big{(}1+\gamma\sqrt{{\omega_{2}}^{*}}\big{)}\big{(}\theta_{2j}+c_{2j}\big{)}\big{)}\). Consequently, the closed-form expression of the pricing policy of provider \(i\) in a duopoly is \(\sqrt{{\omega_{i}}^{*}}\big{(}1+\gamma\sqrt{{\omega_{i}}^{*}}\big{)}\big{(}\theta_{ij}+c_{ij}\big{)}\), which is known as the solution of the duopoly market in DPG.
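A small numeric check of Eq. 12 can be written as follows; this is our own hedged sketch, not the paper's code. Note that the interior optimum only exists when the denominator is positive, and the resulting \(\omega_{i}\) must still be clipped to the finite admissible range of Table 2 and to the price constraints of Eq. 2.

```
import math

def omega_star(theta, c, alpha_sum, beta, gamma):
    # Eq. (12): interior stationary point of the winner's payoff.
    a = alpha_sum * c + beta * theta    # (sum_k alpha_jk) c_ij + beta_ij theta_ij
    denom = 2 * (a - gamma * (theta + c))
    if denom <= 0:
        raise ValueError("no interior optimum; use a boundary value of omega")
    return ((theta + c) / denom) ** 2

def best_response_price(theta, c, alpha_sum, beta, gamma):
    w = omega_star(theta, c, alpha_sum, beta, gamma)
    return math.sqrt(w) * (1 + gamma * math.sqrt(w)) * (theta + c)   # Eq. (3)

# Illustrative parameters for which an interior optimum exists:
print(best_response_price(theta=65.0, c=295.0, alpha_sum=1.2, beta=0.9, gamma=0.95))
```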
## 5 Performance Evaluation

In this section, some experiments for analyzing the proposed model of competition of SaaS providers in a Cloud computing marketplace are developed. Firstly, the parameter settings and performance metrics are studied; then, the simulation configurations are explained, and finally the results are presented.

### Experimental Setup

In this section, the parameters and the configuration of DPG are clarified. The experiments are run on a simulated Cloud computing marketplace, the CloudSim toolkit 3.0.2, as follows.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Instance type & vCPU & Memory (GB) & Storage (GB) & Price per VM \\ \hline t2.small & 1 & 2 & 1x 4 SSD & \$0.026/Hour \\ \hline t2.medium & 2 & 4 & 1x 4 SSD & \$0.052/Hour \\ \hline m3.medium & 1 & 3.75 & 1x 4 SSD & \$0.070/Hour \\ \hline c3.large & 2 & 3.75 & 2x 16 SSD & \$0.105/Hour \\ \hline m3.large & 2 & 7.5 & 1x 32 SSD & \$0.140/Hour \\ \hline r3.large & 2 & 15 & 1x 32 SSD & \$0.175/Hour \\ \hline \end{tabular} \end{table} Table 3: Pricing defined by the IaaS provider

#### 5.1.1 Parameters setting

The considered marketplace consists of multiple SaaS Cloud providers, which initially own a random number of VMs of different types. Three methods are added to implement the process of SaaS providers in the market. The first method is used to determine whether the provider can serve the received request based on its virtual resources or not. This method investigates the properties of the available VMs with the aid of the provider's _Resource Manager_ and the requirements of each request with the aid of the _Request Interface_. For simplicity, a single service is deployed on each VM. The second method finds the most proper VMs for deploying the requested application; for every service in the application, it chooses a VM which is capable of deploying the service at a low per-hour cost. The third method specifies a price for supporting the request.

The parameters of the VMs, such as size, memory, and price, are considered based on what Amazon EC2 had defined (in December 2015). The parameters and the prices of the VMs are listed in Table 3. In the experiments, each VM hosts just one service. The _Vm_ class of the CloudSim toolkit is extended to support the properties of _VMM_ mentioned in Section 2.1, based on Table 3. In the experiments, \(\gamma\) is set to 0.95 (Truong-Huu, 2014); it corresponds to a 0.05 interest rate. The parameter \(\omega_{i}\) is derived from a finite set based on the level of the service. The probability distribution of the values of \(\omega_{i}\) is initialized as a uniform distribution.

#### 5.1.2 Simulation Configuration

The Cloud environment is modeled in the form of one IaaS provider, several SaaS providers, and some users. In the simulations, we assume either two or ten SaaS Cloud providers with a single IaaS provider and different provided VMs. The requests of users are modeled as application demands in the form of \(\textit{Req}_{r}\). These requests include execution-related requirements of applications such as memory, CPU usage, etc. Moreover, each SaaS provider in our supposed Cloud computing marketplace owns multiple applications, and each application may consist of several services; the same list of ERP (Enterprise Resource Planning) applications is considered in each SaaS provider.
Different ERP applications are provided by different SaaS providers; CRM is an ERP application which may have three main instances: Essential, Basic, and Professional. Some instances of Microsoft CRM applications and their potential costs are presented in Table 4. Cloud applications' costs vary based on commercial fees1. Providers are billed monthly per user for online provisioning; for on-premise provisioning, the licensing prices are determined based on the instances. The prices of our applications are derived from ERP providers such as Actionstep, iCIMS, Plex Systems and Host Analytics Inc.; the assumed values of the parameters of the simulation are chosen following (Truong-Huu, 2014)2.

Footnote 1: [https://www.softwareadvice.com/crm/](https://www.softwareadvice.com/crm/)

Footnote 2: [https://aws.amazon.com/getting-started/](https://aws.amazon.com/getting-started/)

Footnote 3: [https://www.softwareadvice.com/crm/](https://www.softwareadvice.com/crm/)

\begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Application \& Type of Provision & On-Premise & Online (per user per month) \\ \hline CRM Server 2013 & \$4922 & \$150 \\ \hline CRM Professional User CAL & \$983 & \$65 \\ \hline CRM Professional Device CAL & \$787 & \$65 \\ \hline CRM Basic User CAL & \$342 & \$30 \\ \hline CRM Basic Device CAL & \$236 & \$30 \\ \hline CRM Essential CAL & \$79 & \$15 \\ \hline \end{tabular} \end{table} Table 4: Considered applications offered by SaaS providers with their costs3

### Equilibrium Efficiency

To validate the correctness of our proposed competition-based pricing approach, we run the experiments in a duopoly market with two SaaS providers. In the following experiments, the unit of profits and prices is \$, and "iteration" denotes the number of repetitions, which has no unit. By such an assumption, we simplify the experiments while retaining the competitive characteristics of the considered market.

Fig. 3(a) shows the profit of the two providers while receiving different requests. As depicted in this figure, while the game proceeds, both providers obtain an almost increasing profit. Then, the experiments are performed for more than two providers to validate the approach in an oligopoly Cloud market. Fig. 3(b) shows the profit of the SaaS providers (\(N\)=10). From Fig. 3(b), it can be observed that, while the game proceeds, the profits mostly increase as well. These experiments assess the performance of the competitive pricing mechanism. As can be observed in Fig. 3, the sum of the profits of the providers in the duopoly and the oligopoly with the same conditions is approximately equal. One reason for the large differences among providers' profits depicted in Fig. 3(b) is that these SaaS providers have different \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\) (per-unit benefits of resources and applications, respectively).

Fig. 4 depicts the evolution of the offered prices of the providers, which are inserted as bids. Provider \(i\) chooses the pricing parameter \(\omega_{i}\) randomly, which turns out different prices. In each iteration, the Nash equilibrium circumstances, introduced in Section 4.1, are checked. As shown in Fig. 4(a), in iteration 17 both providers reach the Nash equilibrium, where their profit is better than with their other offers. As depicted in the figure, the offers of the providers do not change after reaching the Nash equilibrium; this state is called the convergence point of the game. Fig. 4(b) repeats the experiments for \(N\)=10 providers.
In some cases, the game runs for more than 100 iterations to reach the equilibrium. The providers reach the Nash equilibrium in iteration 43, where their profit is better than with their other offers, while the other providers do not change their offers. Comparing Fig. 4(a) and Fig. 4(b), it is to be noted that in a multiple-player game the convergence of the pricing policy in an oligopoly needs a longer run, which is expected given the growth of the strategy profile of the players. The simulation results verify that the considered game always converges to the optimal solution, known as the Nash equilibrium. Actually, the price offering of the providers converges to the optimal price. The optimal price is the least one that satisfies the constraints in Eq. 2. In Fig. 4(a), it can be clearly observed that provider 2 is the winner of the game.

The states of the providers at the Nash equilibrium are depicted in Table 5. It is to be noted that a request demanding service level 1 is supposed; based on the same application price (\(\theta\)) and resource appropriation cost (\(c\)) but different per-unit benefits, different strategies (\(s\)) are generated. The winner is provider \(i\)=9; therefore, the profit of all players except 9 is actually zero; however, as depicted in Table 5, their hypothetical profits, in case they were the winner, are presented to enable the comparison.

\begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline Provider \(i\) & \(\alpha_{i}\) & \(\beta_{i}\) & \(\omega_{i}\) & \(s_{i}\) & Cost & Profit \\ \hline 1 & 0.1 & 0.4 & 0.001 & 11.20504 & 0.0599 & 11.14514 \\ \hline 2 & 0.3 & 0.25 & 0.001 & 11.56334 & 0.0975 & 11.45984 \\ \hline 3 & 0.8 & 0.25 & 0.001 & 11.20504 & 0.23945 & 10.96559 \\ \hline 4 & 0.4 & 0.5 & 0.001 & 11.7262 & 0.1505 & 11.5757 \\ \hline 5 & 0.2 & 0.4 & 0.001 & 11.23761 & 0.081 & 11.5661 \\ \hline 6 & 0.8 & 0.15 & 0.001 & 12.21479 & 0.25125 & 11.96354 \\ \hline 7 & 0.1 & 0.7 & 0.001 & 12.37766 & 0.077 & 12.30066 \\ \hline 8 & 0.4 & 0.5 & 0.001 & 11.7262 & 0.1505 & 11.5757 \\ \hline 9 & 0.2 & 0.15 & 0.001 & 11.0096 & 0.1026 & 10.94549 \\ \hline 10 & 0.3 & 0.7 & 0.001 & 11.56334 & 0.10325 & 11.43084 \\ \hline \end{tabular} \end{table} Table 5: The providers' strategy parameters while \(\gamma\)=0.95, \(\theta\)=65\$, \(c\)=295\$

Figure 3: Payoff of providers in a (a) duopoly, (b) oligopoly Cloud computing market

Finally, some experiments are performed to compare the approach with other methods, to identify the improvement that has been achieved using the DPG model. In (Muzaffar, 2017), a price discovery algorithm for searching the optimal price of a service with price-sensitive demand is studied, in which no information on the reservation price is required. Our approach is compared with (Muzaffar, 2017); the average of the best price and the profit of the providers are considered in the comparison. The same demand rate of the applications is assumed in the experiments. The results are depicted in Table 6.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Approach} & \multicolumn{2}{c|}{Duopoly} & \multicolumn{2}{c|}{Oligopoly} \\ \cline{2-5} & average of best price & average of profit & average of best price & average of profit \\ \hline DPG & 12.43 & 11.21 & 11.2 & 10.48 \\ \hline Price discovery & 15 & 11 & 14.3 & 10.5 \\ \hline \end{tabular} \end{table} Table 6: Comparison of DPG with the price discovery approach of (Muzaffar, 2017)

DPG and the price discovery approach of (Muzaffar, 2017) are performed in both duopoly and oligopoly markets, which have two and 10 SaaS providers, respectively.
They have multiple instances of Microsoft CRM applications, whose costs are presented in Table 4. As depicted in Table 6, in both markets the best prices while using DPG are lower than those of (Muzaffar, 2017). Although the prices offered by our approach are lower, the average profit that providers gain is approximately the same in both approaches. The reason is that the number of requests that each provider may serve increases when it wins the game.

### Validating the scalability of the algorithm

The previous experiments considered at most \(N\)=10 SaaS providers in oligopoly markets; however, the proposed algorithm can scale to a realistic number of players without violating time limits. On a Macbook Core 2 Duo running at 2.40GHz with 4.0GB RAM, the number of players is exponentially increased, with different requests and parameters. It is expected that as the number of players grows, the execution time of the algorithm increases as well; this can also be concluded from Fig. 4.

Figure 4: Evolution of SaaS providers' bids to the equilibrium state in a (a) duopoly, (b) oligopoly (N=10) Cloud computing market

Table 7: The length evolution of the game while the number of players increases

As illustrated in Table 7, the effect that the growth of the game's size has on the length of the algorithm's run is not exponential. Table 7 depicts the average number of iterations and the average elapsed time of the game for reaching the equilibrium over ten runs. The algorithm must check, in each iteration, whether or not the equilibrium has been achieved. As discussed previously, the order of Algorithm 1 is \(O(n^{2})\). The longest time required for reaching the equilibrium, for 1024 SaaS providers, is 834 seconds.

## 6 Conclusion

Recently, Cloud computing has emerged as a new information technology development which has been noted as a services marketplace. This marketplace faces the competition and cooperation of its providers; this research focuses on the competition of Cloud providers. SaaS providers compete with each other by offering suitable resource provisioning at a desirable price. The scenario is modeled as a finite normal form game. The players of the game are SaaS providers; their strategies are considered as competition-based dynamic pricing policies based on different application properties; their preferences are the revenue, which is obtained by providing the request at the offered price. We verified the existence and uniqueness of the Nash equilibrium for the game. In addition, experimental evaluations were performed and the theoretical results were verified. Providers seek equilibria to perform an adaptive pricing strategy; the considered game, which computes the preferred dynamic prices for each provider, converges to a unique Nash equilibrium, at which none of the providers tends to change its strategy. The assumption of having just one IaaS provider can be relaxed when extending the model in the future to focus on resource provisioning techniques of providers. The infinite set of strategies is another issue for our future study.
2308.16744
MS-BioGraphs: Sequence Similarity Graph Datasets
Progress in High-Performance Computing in general, and High-Performance Graph Processing in particular, is highly dependent on the availability of publicly-accessible, relevant, and realistic data sets. To ensure continuation of this progress, we (i) investigate and optimize the process of generating large sequence similarity graphs as an HPC challenge and (ii) demonstrate this process in creating MS-BioGraphs, a new family of publicly available real-world edge-weighted graph datasets with up to $2.5$ trillion edges, that is, $6.6$ times greater than the largest graph published recently. The largest graph is created by matching (i.e., all-to-all similarity aligning) $1.7$ billion protein sequences. The MS-BioGraphs family includes also seven subgraphs with different sizes and direction types. We describe two main challenges we faced in generating large graph datasets and our solutions, that are, (i) optimizing data structures and algorithms for this multi-step process and (ii) WebGraph parallel compression technique. We present a comparative study of structural characteristics of MS-BioGraphs. The datasets are available online on https://blogs.qub.ac.uk/DIPSA/MS-BioGraphs .
Mohsen Koohi Esfahani, Paolo Boldi, Hans Vandierendonck, Peter Kilpatrick, Sebastiano Vigna
2023-08-31T14:04:28Z
http://arxiv.org/abs/2308.16744v1
# MS-BioGraphs: Sequence Similarity Graph Datasets ###### Abstract Progress in High-Performance Computing in general, and High-Performance Graph Processing in particular, is highly dependent on the availability of publicly-accessible, relevant, and realistic data sets. To ensure continuation of this progress, we (i) investigate and optimize the process of generating large sequence similarity graphs as an HPC challenge and (ii) demonstrate this process in creating MS-BioGraphs, a new family of _publicly available real-world edge-weighted graph datasets with up to \(2.5\) trillion edges_, that is \(6.6\) times greater than the largest graph published recently. The largest graph is created by matching (i.e., all-to-all similarity aligning) \(1.7\) billion protein sequences. The MS-BioGraphs family includes also seven subgraphs with different sizes and direction types. We describe two main challenges we faced in generating large graph datasets and our solutions, that are, (i) optimizing data structures and algorithms for this multi-step process and (ii) WebGraph parallel compression technique. We present a comparative study of structural characteristics of MS-BioGraphs. The datasets are available online on [https://blogs.qub.ac.uk/DIPSA/MS-BioGraphs](https://blogs.qub.ac.uk/DIPSA/MS-BioGraphs). Graph Datasets, High-Performance Computing, Biological Networks, Sequence Similarity Graph, Graph Algorithms ## I Introduction Because of the fast increase in the data production rate, and the existence of unstructured connections in these data, High-Performance Graph Processing (HPGP) has to date been widely applied in various fields of science, humanities, and technology. This fact has two main implications for the efficiency of public research and academia that aim to consider the real-world challenges and to design practically-applicable solutions to those challenges. The first effect is the necessity of having **realistic and up-to-date graph datasets** and the second implication is the necessity of considering the effects of new contributions (such as algorithms, processing models, parallelization, and data structures) on **a wide range of input datasets** to cover different application domains. However, as we detail in Section II, the public graph datasets are small, domain-restricted, and not suitable indicators of real-world data which makes them not ideal for progressing HPGP. To confront this problem, we investigate and optimize the HPC process of generating sequence similarity graphs and demonstrate this process in creating and introducing **MS-BioGraphs**, a new family of real-world graphs with up to 2.5 trillion edges that makes them **the largest real-world public graphs**. This family contains different graph sizes and direction types with similar structures that make them suitable for a range of applications with different input size requirements. Moreover, this graph family shows a very different graph structure in comparison to other real-world graphs (such as social networks and web graphs) and so, complements the current graph collection. We faced two major challenges in optimizing (i) creation and (ii) compression of these large graphs. The creation of these large datasets is a multi-step process in which (a) the dependency between steps and (b) the processing requirements (i.e., availability of processing resources, memory, and storage) should be considered in the selection and creation of data structures and algorithms of each step. 
The flow of data between the different steps of a multi-step process has important effects on the processing efficiency of the steps. As such, the whole process and its processing requirements should be considered and optimized by **process-wide engineering and design of data structures and algorithms**. The processing model is one of the main choices in this optimization. The distributed-memory processing model [1, 2] implies two restrictions: (i) fixing the degree of parallelism (i.e., the number of machines/processors involved in the processing) and (ii) limiting the size of the processed data to the total memory of the cluster. On the other hand, the storage-based processing model [3, 4] does not practically limit the size of data but deploys only one machine and increases the processing time. Therefore, we designed the processes as multi-step tasks where each step is performed as a distributed parallel computation but without communication between machines. Machines process the partitions independently from each other and use the cluster's shared storage for loading and storing the (intermediary) data.

The second major problem is efficient compression of graph datasets to facilitate fast transfer of the created datasets. The WebGraph Framework [5] provides graph compression at high scale, but the compression process is sequential, and we **extend the WebGraph framework by parallelizing compression**.

We study some **features of the MS-BioGraphs** showing that (i) while these large biographs follow a skewed degree distribution (similar to other real-world graphs), they expose a different arrangement of edges in comparison to previous graph types by having tight connections between the frequently-occurring high-degree vertices, which makes their graph structure distinct from other real-world graph types, (ii) weights have a skewed distribution with a tail close to a power-law distribution, (iii) the main graph and its large subgraphs exhibit a high degree of connectivity, and (iv) the asymmetric MS-BioGraphs have a close Push and Pull Locality, which is different from social networks and web graphs.

The contributions of this paper are the introduction of:

* the HPC-optimized multi-step process of creating large sequence similarity graphs, * the MS-BioGraphs family as the largest real-world public graphs and publishing them as open datasets, * parallel compression in the WebGraph framework, and * a structural analysis of MS-BioGraphs.

This paper is structured as follows: Section II motivates the discussion by exploring the need for large real-world graphs and considering their effects on progressing HPGP. Section III introduces the processing model and parallel graph compression as our solutions for the major challenges in processing large graphs. Section IV explains the creation process of large graphs and demonstrates it for creating MS-BioGraphs. Section V presents a structural analysis of MS-BioGraphs and compares them with other types of real-world graphs. Section VI discusses related work and Section VII concludes the paper and expresses future directions.

## II Motivation

In this section, we consider (i) the necessity of creating updated and cross-domain datasets, (ii) the impacts of these datasets on the progress of HPGP, and (iii) the features of an ideal graph dataset.

### _Why Do We Need Updated and Real-World Graphs?_

(1) While synthetic graph generators [6, 7] can create large graphs, _the structural features of synthetic graphs do not match the real-world ones_.
E.g., they may expose several gaps in the degree distribution [8], and randomly selected vertices have a large percentage of similar neighbors. As such, the severity of challenges relating to partitioning, locality and load balance in synthetic graphs is often much lower than in real-world datasets. Therefore, the techniques that are sufficient for synthetic graphs may not be applicable to real-world datasets.

(2) Some graph optimizations are dependent on the architecture of machines, and it is the tension between data size and the architecture capacities that forms the challenge context and presents the opportunity to design novel data structures, algorithms and processing models. E.g., the design of locality-optimizing algorithms [9, 10, 11, 12] depends on the fact that the CPU's cache contains a small portion of the data. With the advent of CPUs with cache sizes of multiple GigaBytes, locality-optimizing algorithms play no role for small datasets, as accesses to a large portion of the data are covered by the cache. Similarly, the progress of distributed graph processing [1, 2] may be slowed down by increases in per-machine memory capacity that are enough to host the available datasets. This shows that _without large real-world datasets, it is not possible to progress these architecture-competing HPGP activities_.

(3) Several HPC research fields (such as architecture design, distributed and disk-based processing, and high-performance IO) have tight connections and dependencies on graph algorithms and datasets. _The effectiveness and realism of graph datasets guarantees that the efforts in the dependent fields have real-world impact_.

(4) Creating a real-world graph dataset provides a representation of the data that _acts as a new source for extracting domain-specific information and knowledge_ by deploying graph algorithms. As an example, sequence similarity graphs have several usages in biology, including sequence clustering [13, 14, 15, 16], predicting pseudogene functions [17], effective selection of conotoxins [18], predicting evolution [19] and gene transfer [20]. A comprehensive graph representation of the data is also beneficial (i) to validate previous hypotheses (that have been verified on a small portion of data) in a wider perspective and (ii) to provide new opportunities to make new contributions by considering the new patterns and connections revealed in the graph representation.

### _Why Do We Need Different Types of Real-World Graphs?_

(1) Previous studies have shown that different real-world graph types exhibit contrasting behaviors with graph analytic algorithms and optimizations [12]. Examples include the long execution time of small road networks in Label Propagation Connected Components [21, 22] and the different impact of similarity and locality in web graphs and social networks [23]. This implies that a wider range of graph types will be necessary _to better study and comprehend the structure of graphs and to compare them_. This better understanding of different graph types and their structures will also be helpful to _design synthetic graph generators with greater similarity to real-world graphs_ (Section II-A).

(2) A wide range of real-world datasets facilitates _cross-domain evaluation of the new contributions_ and provides a broad and correct assessment across a variety of use cases (i.e., better pruning of the falsifiable insights [24]). Also, we will have the opportunity to improve several graph algorithms and optimizations that exploit the structure of graphs [2, 10, 25, 26].
### _Creating Real-World Graphs: An HPC Problem_

(1) Creating real-world graphs is a time-consuming process [27, 28, 29] and is periodically repeated. As the size of the input dataset (connections in web graphs, links in social networks, or similarities in sequences) grows, greater amounts of computation and processing resources are required.

(2) Some tasks in creating graphs, such as format conversion, transposition, and symmetrization, are widely used in deploying graph algorithms and are time-consuming [30]. Optimizing these steps directly transfers to graph algorithms.

### _The Current Largest Graph Datasets_

At present, the largest public graph dataset we are aware of is the Software Heritage 2022-04 version-control-history graph1[28], with 376 billion edges, which was published in 2022.

Footnote 1: [https://docs.softwareheritage.org/devel/swh-dataset/graph/dataset.html](https://docs.softwareheritage.org/devel/swh-dataset/graph/dataset.html)

The largest web graph is the Web Data Commons 2012 hyperlink graph2[29], with 128 billion edges, which was published about 9 years ago. The largest social network graph is a snapshot of Twitter in 2010 [31] with 1.5 billion edges.

Footnote 2: [http://webdatacommons.org/hyperlinkgraph/2012-08/download.html](http://webdatacommons.org/hyperlinkgraph/2012-08/download.html)

These graphs are outdated and/or not indicative of the growth in the size of data that is happening in the real world.

### _What Is An Ideal Graph Dataset?_

The discussions in this section show that a new family of graphs should ideally (i) be backed by a real-world phenomenon, (ii) cover a wide range of graph sizes to make it suitable for different applications, (iii) exhibit new structural features that are not seen in other real-world graphs, (iv) contain graphs much larger than existing ones and in line with the exponential growth rate of worldwide datasets3, and (v) be available as open datasets to research communities.

Footnote 3: [https://www.idc.com/getdoc.jsp?containerId=US490189222](https://www.idc.com/getdoc.jsp?containerId=US490189222) and [https://www.statista.com/statistics/871513/worldwide-data-created](https://www.statista.com/statistics/871513/worldwide-data-created)

## III HPC Challenges and Our Solutions

In this section, we present two major challenges we faced in creating large datasets. Section III-A explores how to efficiently utilize a small cluster for processing large datasets. Section III-B explores how to parallelize the compression process of the large weighted graph datasets. We demonstrate our solutions for these two challenges in Section IV, where we detail creating MS-BioGraphs.

### _The Processing Model_

We search for a processing model that _(i) dynamically adjusts the degree of parallelism (i.e., the number of machines/processors involved in the processing) and (ii) does not restrict the size of processed data to the total memory of the cluster_ while machines have access to a shared storage that hosts the datasets and the intermediary data. The distributed-memory processing model [1, 2] sets an upper bound on the size of the dataset based on the total memory of the cluster. This model also makes the waiting time of jobs dependent on the size of the requested resources. If we need a greater number of machines, we may need to wait for a longer time before scheduling the job. Therefore, to optimize cluster utilization it is necessary _to minimize the waiting time_.
The storage-based processing model [3, 4], on the other hand, does not practically limit the size of data, but deploys one machine and increases the processing time. To satisfy the mentioned requirements, we deploy a distributed model in which algorithms are designed as a number of sequential steps with parallel workloads per step. In each step, machines contribute to the total processing independently from each other, and the input and output data for each processing slot is loaded from and stored to the shared storage. So, machines only communicate (a) with the shared storage to retrieve/store data and (b) with the scheduler to receive a partition of a task or to report the completion of a partition. In this way, each machine requires a memory size that is just enough to complete a partition. This facilitates processing datasets whose sizes are greater than the available memory. Moreover, as the machines do not communicate with each other, each step can be started as soon as at least one machine becomes available, and new machines can join/leave a running step. This (i) relaxes the assumption of permanent availability of a fixed number of resources during the whole execution time, (ii) minimizes the waiting time, and (iii) optimizes cluster utilization.

### _Parallelizing Graph Compression_

As MS-BioGraphs have binary sizes of up to 20 TeraBytes, it is necessary to compress them to make their storage, transfer over the network, and processing more efficient. To that end, we used the WebGraph framework4[5], an open-source graph compression framework that has been continuously maintained and updated during the last 20 years. This framework provides graph compression and includes a rich set of graph operations and analytics. Moreover, the users of languages and frameworks with WebGraph support, such as Hadoop, C++, Python, and Matlab, benefit from direct access to MS-BioGraphs.

Footnote 4: [https://webgraph.di.unimi.it/](https://webgraph.di.unimi.it/)

WebGraph provides facilities for storing edge-labelled graphs. Labels are stored contiguously in a bitstream in edge order (i.e., lexicographical source/destination order), together with an offset file containing pointers to the start of the sequence of labels associated with the neighbors of a vertex. The bitstream can be loaded into memory or memory-mapped to support graphs with a size larger than core memory. Moreover, offsets are loaded using the Elias-Fano representation, a quasi-succinct data structure that brings the required storage space for each offset down to a few bits [32].

Historically, the design of the labelled facilities in WebGraph decoupled the compression of the underlying graph and the storage of the labels. This approach has the advantage of implementing a clear separation of concerns and makes it possible to pair compression schemes and label storage schemes arbitrarily. However, in processing MS-BioGraphs, it became clear that the approach is very inefficient in a number of situations, in particular when transposing, symmetrizing or permuting very large labelled graphs. In all of these operations, graph edges are first divided into batches that are sorted in core memory using a parallel sorting algorithm and compressed on disk; then, one can traverse the resulting transposed (or symmetrized, or permuted) graph sequentially.
However, this traversal is quite expensive, as the compressed representation is optimized for space and ease of storage, but not for speed of traversal; ideally, the transformed temporary graph should be traversed exactly once. The previous design was thus at odds with this approach, as two passes were necessary to compress the graph and to store the labels. Moreover, the implementation of labelled graphs did not allow for parallel storage, a fundamental requirement in processing large-scale graphs.

We extended the WebGraph framework in two directions. In the first phase, we extended labelled graphs to support parallel compression of the underlying graph. This first extension significantly decreased the compression time (scaling is linear in the number of cores) but did not solve the problem of multiple passes over the temporary representation. In the second phase, we partially violated the decoupled design of labelled graphs in WebGraph, adding to the compression phase of the main storage format class of WebGraph, BVGraph (which compresses and stores the underlying graph), an option to store the labels at the same time. This created a dependency of BVGraph on a _specific_ labelled graph implementation; that is, the parallel and simultaneous compression of graph and labels can only happen with a specific, bitstream-based label representation. However, since recompressing the underlying graph in a different format can be performed at very low cost, and the bitstream-based label implementation is the only presently-available option, the implementation remains, in practice, highly (albeit not completely) decoupled.

## IV Generating MS-BioGraphs

In Section III, we introduced solutions for the major challenges in processing large graphs. In this section, we demonstrate those solutions to design and implement the algorithms required in the different steps of creating the MS-BioGraphs.

### _Terminology_

A directed graph \(G=(V,E)\) is defined by a set of vertices \(V\) and a set of edges (a.k.a. arcs) \(E\subseteq V\times V\); an edge is an ordered pair \((u,v)\) that indicates an edge from vertex \(u\) to \(v\). In a directed weighted (a.k.a. labelled) graph \(G_{w}=(V,E)\), the set of edges is a subset of \(V\times V\times\mathbb{N}\), where \((u,v,w)\in E\) represents an edge from \(u\) to \(v\) with weight \(w\). The _undirected weighted graph_ \(G_{u}=(V,E)\) is defined as a directed weighted graph where for each \((u,v,w)\in E\), there is an edge \((v,u,w)\in E\).

A protein sequence is a string of letters, each letter representing one of the 20 canonical amino acids. Each of these 20 amino acids is represented by a letter from "ACDEFGHIKLMNPQRSTVWY"5. A _sequence similarity matching_ or sequence aligning algorithm is an algorithm that receives two sequences as inputs and outputs a number that represents the similarity of the input sequences.

Footnote 5: [https://en.wikipedia.org/wiki/Amino_acid](https://en.wikipedia.org/wiki/Amino_acid)

Similarity is calculated by comparing aligned amino acids whose matches are not directional, and match values are derived from a symmetrical matrix (e.g., PAM and BLOSUM). Therefore, similarity using standard approaches (e.g., Smith-Waterman) is undirected, i.e., \(Similarity(S_{1},S_{2})\)=\(Similarity(S_{2},S_{1})\). Also, two sequences may have multiple matches, as the start point of a match is not restricted.
For a set of protein sequences, the _sequence similarity graph_ is a weighted undirected graph whose vertices represent proteins and with an edge \((u,v,w)\) expressing the fact that the similarity between proteins \(u\) and \(v\) (the endpoints of the edge) is \(w\); in other words, the weight of an edge is the similarity score calculated by the sequence aligner algorithm. It is important to note that the sequence similarity graph is _not a clique_, as only edges with a minimum level of similarity are added to the graph (the aligner algorithm only produces an output if the two sequences can be matched).

### _Input Dataset & Environment Setup_

Inspired by HipMCL [13], we use the Metaclust dataset6[33] that contains 1.7 billion protein sequences in FASTA format7. We collected all similarities produced by the LAST sequence alignment algorithm8[34], Version 1293. We selected LAST as the aligner as it shows better single-machine performance [35] and has been widely used and maintained since its publication in 2011.

Footnote 6: [https://metaclust.mmseqs.com/2018_06/metaclust_all.gz](https://metaclust.mmseqs.com/2018_06/metaclust_all.gz)

Footnote 7: [https://en.wikipedia.org/wiki/FASTA_format](https://en.wikipedia.org/wiki/FASTA_format)

Footnote 8: [https://gitlab.com/mcfrith/last](https://gitlab.com/mcfrith/last)

Sequence matching by LAST is performed in two steps: (i) creating a database (DB) from the sequences using a program called lastdb and (ii) aligning the sequences of a file against the created database using lastal (with the PAM30 scoring matrix and default values for the other options), which outputs the matched sequences and their scores.

Table I shows the hardware used in this project for all experiments; all machines have CentOS 7.9 installed. Since the mentioned computers are set up in a job-sharing cluster, not all machines (and not all of their cores and memory capacity) were permanently available in all steps. So, for each step we report the machines that were used and their processing times. The cluster is backed by a 2 PetaBytes Lustre file system that provided up to 8 Gbps bandwidth in our experiments.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline & **SkylakeX** & **SkylakeX-2** & **Haswell** & **Epyc** \\ \hline CPU Model & Intel Xeon Gold 6130 & Intel Xeon Gold 6126 & Intel Xeon E5-4627 & AMD Epyc 7702 \\ CPU Freq. (GHz) & 2.10 - 3.7 & 2.6 - 3.7 & 1.2 - 2.6 & 2.0 - 3.35 \\ CPU Cores/Machine & 32 & 24 & 40 & 128 \\ Memory/Machine & 768 GB & 1,536 GB & 1,024 GB & 2,048 GB \\ Number of Machines & 1 & 2 & 2 & 4 \\ \hline \end{tabular} \end{table} TABLE I: Machines

We have implemented most of our algorithms as extensions to the LaganLighter framework9[36], in the C language with OpenMP parallelization. We also used the libnuma library with the interleaved NUMA memory policy. Our algorithms were compiled with gcc-9.2 using the -O3 optimization flag. To run Java, we used JDK-17 and OpenJDK-19-loom, which provides the incubator.foreign package, to facilitate frequent file mapping using the MemorySegment class.

Our processing model requires a dispatcher to track the steps and to assign partitions of a job in each step to machines. Schedulers that support job dependencies (such as Slurm and OpenPBS) can be used as this dispatcher. We have implemented the dispatcher as a PHP script that is backed by an Apache server and a MySQL database.
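The interaction with the dispatcher can be illustrated with the following minimal worker-side sketch in Python. The endpoint name, query parameters, and responses are hypothetical; the paper's actual dispatcher is the PHP/MySQL service described above.

```
import urllib.request

DISPATCHER = "http://dispatcher.example/jobs.php"      # hypothetical URL

def take_partition(step):
    # Ask the dispatcher for an unassigned partition of the given step.
    with urllib.request.urlopen(f"{DISPATCHER}?action=take&step={step}") as r:
        body = r.read().decode().strip()
    return None if body == "DONE" else body            # e.g., "p3:p42" in Step 2

def report_done(step, partition):
    with urllib.request.urlopen(
            f"{DISPATCHER}?action=done&step={step}&part={partition}"):
        pass

def worker_loop(step, process):
    # Machines join or leave a step freely: each loop iteration is independent.
    while True:
        part = take_partition(step)
        if part is None:
            break
        process(part)                                  # e.g., run lastal on a block
        report_done(step, part)
```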
### _Process-Wide Data Structures and Algorithms Engineering and Design_

In this section, we design the general process of creating MS-BioGraphs as a multi-step process by considering the flow of data and the dependencies of the steps. We explain the detailed algorithms and implementations of each step in Section IV-D.

(1) To create MS-BioGraphs, we compute an all-against-all matching of the sequences. Since sequence similarity is a symmetric relation, instead of matching each pair of sequences twice, we can match each sequence only against sequences with lower IDs. This produces a directed weighted graph whose symmetric version represents all the matches and their scores. This imposes the cost of symmetrization but reduces the alignment computations by 50%. We have the following steps, as depicted also in Figure 1. First, we need to create the LAST database(s) using lastdb and then call lastal to create the similarities, i.e., the asymmetric graph in the coordinate format (COO). The next step is converting the COO graph to the Compressed Sparse Columns (CSC) [37] format, which is followed by symmetrization and compression. We also create some subgraphs to support research studies with different graph size and direction requirements. Therefore, an "Edge Filtering" step is required to create the subgraphs, and we need to remove zero-degree vertices.

(2) We need to consider whether to run lastal in parallel mode on one single machine. Our preliminary evaluation showed that lastal does not continuously engage all processors. The other problem is the long processing time (366 hours, as we report in Section IV-D) as a result of deploying one machine. However, there is a more important implication of running one instance of lastal and that is its output. The output of the "Alignment" step is used as input to the "COO to CSC" step. The CSC format consists of two arrays: the offsets array and the edges array. The offsets array is indexed by a vertex ID to identify the index of the first edge of that vertex in the edges array. In creating the edges array, we need to read edges from the COO graph and to write each edge based on the offset identified by its destination endpoint. This requires random write accesses to the edges array, which requires 8 Bytes per edge (4 Bytes for the ID of the source endpoint and 4 Bytes for the weight), or about 10 TB of memory. As no machine has this size of memory, the other option is to convert the subgraphs of the COO format to CSC subgraphs and then merge the CSC subgraphs to create the CSC graph. While this can be done in a distributed way (Section III-A), it implies one extra reading and one extra writing of all edges. So, we face three problems: (i) load imbalance of lastal in parallel mode, (ii) long execution time in the "Alignment" step, and (iii) storage overhead in the "COO to CSC" step.

Our solution for this cross-step problem is to partition the input dataset, which converts the adjacency matrix of the graph into a number of blocks. The graph construction is now performed by calling concurrent instances of lastal for different blocks (i.e., pairs of partitions), and each instance is run in sequential mode. This optimizes load balance, increases the cluster utilization, and significantly reduces the computation time by concurrently deploying multiple machines (Section III-A).
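As a sanity check of this block decomposition, the following few lines of Python enumerate the block pairs processed in Step 2 below and confirm the job count; the variable names are ours.

```
P = 120                                    # number of ID-sequence partitions
# Similarity is symmetric, so each partition is aligned only against
# partitions with lower or equal IDs, halving the all-to-all work.
blocks = [(i, j) for j in range(P) for i in range(j + 1)]
assert len(blocks) == P * (P + 1) // 2 == 7260   # lastal jobs in Step 2
```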
Each block of the adjacency matrix is stored in a separate file, which allows us to efficiently create the CSC graph in the distributed model by partially creating the CSC graph for each partition, where it is only needed to load the relevant blocks (for partition \(p_{j}\), all edges exist in the \((p_{i},p_{j})\) blocks where \(i\leq j\)) and we do not need to keep the whole edges array in memory. By having a sufficiently large number of partitions, we ensure the memory space required for a slice of the edges array is available on each machine.

(3) The output of "COO 2 CSC" is symmetrized to create the main graph. This is efficiently done in the distributed model by transposing the graph and merging the transposed graph with the CSC graph. It is possible to merge the "COO 2 CSC" and "Symmetrize" steps into one step by transposing each partition while creating the CSC format and then merging the transposed subgraphs and the CSC. However, this results in the concurrency of two write and one read storage operations for all edges, which may overload the storage bandwidth. Our evaluation shows that overloading the storage bandwidth in our cluster (with a per-user bandwidth limit) imposes longer delays. However, merging these steps is beneficial for clusters that provide a greater storage bandwidth limit.

(4) "Edge Filtering" and "Removing Zero-Degrees" are efficiently done in the distributed model. The last step is creating the compressed version in WebGraph format, which deploys a shared-memory model.

Fig. 1: Creation Steps (UC: uncompressed)

### _Processing Steps_

**Step 1: Assigning IDs to Sequences & Creating DBs.** We divide the input dataset (of size 471 GB), which is in the name-sequence format, into 120 ID-sequence partitions by replacing the name of each sequence with its ID so that we can use the output of lastal without converting the names of sequences to their IDs. Then, lastdb is called for each of these 120 ID-sequence files. We write the ID and name of the sequences of each partition in an ID-name file so that the results of analytics on the sequence similarity graphs can be used to identify the name of sequences by their IDs. We used a shared-memory parallel implementation for this step that runs on one Epyc machine. Then, all instances of lastdb are called concurrently to create the databases of the 120 ID-sequence files. While multiple machines could be used for the processing required in lastdb, the small number of partitions (i.e., 120) made it sufficient to deploy one machine.

**Step 2: Sequence Aligning.** We align sequences of each partition against partitions with smaller or equal IDs. This involves running lastal for \(7\,260\) pairs of partitions. We launch lastal in the distributed model (Section III-A) by implementing a CPU-load meter program that continuously monitors the load of the allocated processors. If the processors are not completely busy, a job (i.e., a pair of partitions) is requested from the dispatcher and is passed to a lastal instance. We slightly modified lastal in order (i) to receive two additional arguments that indicate the two partitions that are matched, and (ii) to have an additional output that, for each two matched sequences, writes their IDs and the resulting score (in binary format) to a file. The binary output files are named by the IDs of the aligning partitions (i.e., the additional input arguments). In this way, for each pair of partitions, say \((p_{i},p_{j})\) where \(p_{i}\leq p_{j}\), the output files contain a set of tuples.
Each tuple \((x,y,z)\) indicates that \(x\) and \(y\) are the sequence IDs (\(x\leq y\)) and \(z\) is their matching score. This creates the COO graph in a collection of files, each named based on the ID of the partitions. The total size of the COO files is 15 TB. This step was completed using 8 machines (3 Epyc machines and 5 Intel ones). The total processing is \(366.3\) machine-hours, or \(45.7\) hours per machine on average. **Step 3: Converting COO to CSC.** The CSC format consists of (i) the \(offsets\) array, which is the prefix sum of the in-degrees, and (ii) the \(edges\) array. The \(offsets\) array identifies which section of the \(edges\) array holds the in-neighbors of a vertex. To convert the COO graph (stored in multiple files) to the CSC format, three phases are required: (i) performing one pass over the COO files and counting the degrees of the destinations, (ii) calculating the prefix sum of the degrees of the vertices to specify the offsets array (which is written to the secondary storage in order to protect it from the subsequent changes), and (iii) a second pass over the COO files that writes each edge in the edges array at the index obtained from the offsets array (indexed by the ID of the destination vertex to get the insertion point, which is then atomically incremented) and sorts the neighbors of each vertex (based on their IDs) before writing to the secondary storage. The second pass involves random write accesses to the edges array. However, by the special arrangement of the COO files (explained in Section IV-C), it suffices to have a memory space that is large enough to host the edges of the partition(s) that are processed together, and it is not required to host the whole edges array. In the first pass, the COO files of each partition are read to identify the degrees of the vertices. In the second pass, we grouped the 120 partitions of the vertices into a number of groups, where the vertices in each group require about 1 TB of memory space for their edges. We load the COO files of the partitions in each group and, after processing them in memory, write the processed edges to the relevant offsets of the edges file. As the performance of the algorithm only depends on the storage, and as the parallel threads of one machine are sufficient to saturate the bandwidth of the shared storage in our cluster, we implemented a shared-memory parallel model. However, it is easily integrated into the distributed model (Section III-A) by applying a modification to retrieve the partition group ID from the dispatcher instead of processing all groups sequentially. We used one Epyc machine for the parallel processing of this step, which completed in \(20.4\) hours. To confirm the correctness of the CSC graph, we designed and implemented a validation algorithm in the distributed model. Each machine requests a partition from the dispatcher and loads the CSC edges of the vertices in that partition into memory. Then, the COO files of this partition are read by parallel threads and, for each edge in a COO file, a binary search is performed between the edges of the destination vertex in the CSC format (which has been partially loaded as the edges array of the CSC graph). The validation completed in \(18.6\) machine-hours using 4 Epyc machines. **Step 4: CSC to Compressed WebGraph Format.** In this step, we convert the binary CSC graph to the compressed WebGraph format. We implemented an extension of the ArclabelledImmutableGraph10 class with random accesses to the edges in order to parallelize the compression (Section III-B).
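Returning to Step 3, the three-phase conversion can be illustrated by the following single-machine Python sketch (a toy version under the assumption that all edges fit in memory; the real implementation streams the COO files and processes one partition group at a time):

```python
import numpy as np

def coo_to_csc(src, dst, weight, num_vertices):
    # Phase (i): one pass over the COO edges, counting in-degrees.
    degree = np.zeros(num_vertices, dtype=np.int64)
    np.add.at(degree, dst, 1)
    # Phase (ii): the prefix sum of the degrees gives the offsets array.
    offsets = np.zeros(num_vertices + 1, dtype=np.int64)
    np.cumsum(degree, out=offsets[1:])
    # Phase (iii): scatter each edge to the slot of its destination vertex;
    # `cursor` plays the role of the atomically incremented insertion point.
    cursor = offsets[:-1].copy()
    edges = np.empty(len(src), dtype=np.int64)
    weights = np.empty(len(src), dtype=np.int64)
    for s, d, w in zip(src, dst, weight):
        k = cursor[d]
        edges[k], weights[k] = s, w
        cursor[d] += 1
    # Finally, sort the neighbour list of each vertex by source ID.
    for v in range(num_vertices):
        lo, hi = offsets[v], offsets[v + 1]
        order = np.argsort(edges[lo:hi], kind="stable")
        edges[lo:hi] = edges[lo:hi][order]
        weights[lo:hi] = weights[lo:hi][order]
    return offsets, edges, weights
```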
For this compression, we used one Haswell machine and the task completed in 19 hours. Footnote 10: [https://webgraph.di.unimi.it/docs/it/unimi/dsi/webgraph/labelling/](https://webgraph.di.unimi.it/docs/it/unimi/dsi/webgraph/labelling/) Footnote 11: Exactly 5,035,492,026. It is necessary to mention that two sequences can be matched by lastal with two or more scores. Therefore, the graphs created in Steps 2 and 3 have some edges with the same endpoints but different weights. As these _same-endpoints edges_ were less than 1% of the total edges11, we selected the weight with the largest value for these edges, and the compressed WebGraph format has at most one edge between each ordered pair of vertices. As we explain in Section V, this directed graph (that contains only similarities to neighbours with lower or equal IDs) is called **MSA500**, and its symmetric version (that is created in the next step) is called **MS**. **Step 5: Symmetrizing.** To create the MS graph, we compute the symmetric version of MSA500 in two ways: (1) by using the WebGraph framework and (2) by designing and implementing a distributed algorithm (Section III-A) to compute the symmetric graph in the binary CSC format. The distributed algorithm works in three steps: (i) it divides the vertices of the MSA500 graph into partitions (i.e., subgraphs), transposing each partition and storing the transposed versions on secondary storage in the binary CSC format, (ii) it creates the offsets array of the symmetric graph by calculating the degree of each vertex (that is, the sum of its degrees in the asymmetric graph and in the transposed subgraphs) and computing the prefix sum, and (iii) it creates the edges array of the symmetric graph, which for each vertex of a partition includes the edges from the asymmetric graph (MSA500) and from the transposed subgraphs (the start index for the edges of a vertex is identified by the calculated offsets array). Edges are sorted either in the first step (which is more work-efficient) or in the last step. The algorithm ran on 4 Epyc machines and the total processing was 160 machine-hours. We also validated the two symmetric versions (the WebGraph format and the binary CSC format) by matching the degrees and edges of all vertices. **Step 6: Edge Filtering & Removing Zero-Degrees.** By the end of Step 5, we have the MS graph with 2.5 trillion edges and the MSA500 graph with 1.2 trillion edges. To support a wider range of users with varying processing models/needs and storage/memory limits, we decided to create smaller subgraphs by filtering edges. To create **undirected subgraphs**, we used the cumulative weight distribution of the MS graph (Figure 2(a)) to identify 3 weight borders in order to create subgraphs with 20%, 5%, and 0.1% of the total edges, which are called **MS200**, **MS50**, and **MS1**, respectively. As removing edges by considering weights may remove all edges of vertices that do not have large enough weights, we considered another sampling method based on **vertex-relative weights**. In this method, for each vertex we identify the maximum weight and then remove the edges whose weights are smaller than a fraction of the maximum weight of the vertex. As a result, an edge \((u,v,w)\) may be removed when considering it as an edge of \(v\), but its symmetric version \((v,u,w)\) may remain after filtering as an edge of \(u\) that has a lower maximum weight. As a result of considering the vertex-relative weights, **directed subgraphs** are created; a sketch of the filtering rule follows.
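Under the assumption that the graph is already in CSC form (with `offsets`, `edges`, and `weights` arrays as in Step 3), a minimal Python sketch of the vertex-relative rule reads:

```python
import numpy as np

def vertex_relative_filter(offsets, edges, weights, fraction):
    """Keep an edge of vertex v only if its weight is at least `fraction`
    of the maximum weight among the edges of v. Applied per endpoint, the
    surviving edge set is no longer symmetric, hence a directed subgraph."""
    kept_src, kept_dst, kept_w = [], [], []
    for v in range(len(offsets) - 1):
        lo, hi = offsets[v], offsets[v + 1]
        if lo == hi:
            continue  # v has no edges
        threshold = fraction * weights[lo:hi].max()
        for k in range(lo, hi):
            if weights[k] >= threshold:
                kept_src.append(edges[k])
                kept_dst.append(v)
                kept_w.append(weights[k])
    return np.array(kept_src), np.array(kept_dst), np.array(kept_w)
```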
We used the vertex-relative weight distribution of the MS graph (Figure 2(b)) to identify three borders to create directed subgraphs with 20%, 5%, and 1% of the total edges that are, respectively, called **MSA200**, **MSA50**, and **MSA10**. To create these 6 subgraphs, we designed a distributed algorithm similar to the symmetrizing step. First, the graph is divided into partitions and, for each partition, edges are traversed, filtered, and stored on secondary storage, so for each partition, 6 sub-partitions (3 directed and 3 undirected ones) are created. In the second step, for each of the 6 target subgraphs, the related stored sub-partitions are merged to create the subgraph. In this way, by making one pass over the edges of the MS graph, all 6 subgraphs are created. As a result of the weight-based filtering in creating the undirected subgraphs, the zero-degree vertices increased to 19%-97%. To **remove the zero-degree vertices**, we designed a shared-memory parallel algorithm that first identifies the zero-degree vertices and creates the vertex-renumbering array12. By removing the zero-degree vertices from the offsets array, the new offsets array is created. The new edges array is created by assigning the new neighbour IDs using the renumbering array. We used 4 Epyc machines for the filtering step, and the total processing required 31.1 machine-hours. The validation also finished in 27.6 machine-hours. The execution of the zero-degree removal step for the three undirected subgraphs (MS200, MS50, and MS1) using one Epyc machine completed in 2.4 hours. The validation process completed in 2.3 hours on one Epyc machine. Footnote 12: The renumbering array is indexed by the old ID of a vertex and returns its new ID (in the graph with removed zero-degrees). We publish the reverse array (new-ID to old-ID) so that the names of vertices (sequences) can be identified.

Fig. 2: MS graph weight distributions

## V Characteristics of MS-BioGraphs In this section, we investigate the characteristics of these graphs and compare them to other real-world graphs. We offer five views for the data presented in this section: * The _Frequency_ plot, which, for a value indicated by the x-axis (such as a degree, weight, or component size), shows the number of times that value occurs, based on the log-scaled left y-axis, * The _Fibonacci Binned Frequency_ plot, based on the log-scaled left y-axis (which connects averaged values of frequency over intervals whose lengths are Fibonacci numbers [38]), to help better visual interpretation of the "cloud of points" that is seen in the tail of frequency plots, * The _Complementary Cumulative Frequency_ plot [39], which is the numerosity-based equivalent of the complementary cumulative distribution function and, for a value on the x-axis, shows the number of larger or equal values based on the log-scaled left y-axis, * The _Cumulative Frequency_ plot, which, for a value on the x-axis, shows the number of smaller or equal values as a percentage on the linear-scaled _right_ y-axis, and * The _Cumulative Edges_ plot, which, for a degree indicated by the x-axis, shows the total edges of the vertices with degrees less than or equal to that degree, as a percentage of the total edges and based on the linear-scaled _right_ y-axis. In both binned plots and complementary cumulative frequency plots in log-log scale, data approximately distributed as a power law is displayed on a straight line; they are more reliable than frequency plots for visual inspection [38, 39].
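As an illustration of the second view, here is a small Python sketch of a Fibonacci binned frequency computation (a plausible reading of [38], with hypothetical function names; bin lengths follow the Fibonacci sequence and each bin reports the average frequency over its interval):

```python
import numpy as np

def fibonacci_binned_frequency(values):
    """Average the frequency of integer `values` over bins whose lengths
    are consecutive Fibonacci numbers: 1, 1, 2, 3, 5, 8, ..."""
    values = np.asarray(values)
    counts = np.bincount(values)          # counts[d] = frequency of value d
    bins, start, a, b = [], 1, 1, 1
    while start < len(counts):
        end = min(start + a, len(counts))
        bins.append((start, end - 1, counts[start:end].mean()))
        start, a, b = end, b, a + b       # next Fibonacci-length interval
    return bins

# Synthetic skewed "degrees", for demonstration only.
degrees = np.minimum(np.random.zipf(2.0, 100_000), 10**6)
for lo, hi, avg in fibonacci_binned_frequency(degrees)[:6]:
    print(f"values {lo}-{hi}: average frequency {avg:.1f}")
```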
Please note that all the functions shown are discrete, and the lines connecting their points are only visual aids. ### _General Statistics_ **Naming.** The name of each graph starts with the two characters \(\mathbf{M}\) and \(\mathbf{S}\), as initials of Metaclust (the source dataset) and Sequence similarity (the real-world domain of the graph), respectively. The name of the directed subgraphs has a third character \(\mathbf{A}\) that indicates the graph is asymmetric. The name of a subgraph is followed by \(\mathbf{3}\) **digits** that show the relative size of the subgraph in comparison to the MS graph [40], multiplied by a thousand. Column 5 of Table II summarizes the naming scheme. For the undirected subgraphs MS200 [41], MS50 [42], and MS1 [43], the weight of edges (shown as W in the table) has been considered as the filtering metric. MSA500 [44] is the asymmetric graph of MS. For the directed subgraphs MSA200 [45], MSA50 [46], and MSA10 [47], the vertex-relative weight (shown as VRW in the table) has been used as the sampling metric (as explained in Section IV-D, Step 6). **Statistics.** Table II shows statistics of the MS-BioGraphs: the number of vertices and edges, maximum (in-/out-) degree, minimum and maximum values of weights, number of zero (in-/out-) degrees, and average degree. Table II also includes information about the connectivity of the MS-BioGraphs: the number of connected components and the relative size of the largest component in comparison to the graph size. We detail the connectivity distributions and their computing process in Section V-D. The last columns of Table II show the size of the compressed graphs on secondary storage. As we explained in Subsection III-B, a weighted graph is stored as two compressed graphs: the baseline (or underlying) graph (that includes the degrees of vertices and the endpoints of edges) and the labels graph (that contains the weights of edges). ### _Degree Distribution_ Figure 3 compares the degree distribution of the MS graph with the symmetric versions of Twitter MPI13 [31, 48] (as a social network) and SK-Domain14 (as a web graph). Footnote 13: [http://networkrepository.com/soc-twitter-mpi-sws.php](http://networkrepository.com/soc-twitter-mpi-sws.php) Footnote 14: [https://law.di.unimi.it/webdata/sk-2005/](https://law.di.unimi.it/webdata/sk-2005/) The Frequency degree distribution plot shows that the **MS graph has a skewed degree distribution**. The Fibonacci Binned plot shows that the degree distribution does not follow a particular known degree distribution, especially given that two changes of concavity are observed.
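As a small companion to Table II (below), aggregate statistics of this kind can be read off the CSC arrays in a single pass; a minimal Python sketch, with hypothetical array names rather than the published tooling:

```python
import numpy as np

def graph_statistics(offsets, weights):
    """Basic statistics of a CSC graph: `offsets` has num_vertices + 1
    entries and `weights` has one entry per edge."""
    degrees = np.diff(offsets)
    return {
        "vertices": int(len(degrees)),
        "edges": int(offsets[-1]),
        "max_degree": int(degrees.max()),
        "zero_degrees": int((degrees == 0).sum()),
        "avg_degree": float(degrees.mean()),
        "min_weight": int(weights.min()),
        "max_weight": int(weights.max()),
    }
```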
TABLE II: Statistics of the MS-BioGraphs. For each graph (MS, MSA500, and the subgraphs MS200, MS50, MS1, MSA200, MSA50, MSA10), the table reports whether it is directed, the numbers of vertices and edges, the filtering metric (W or VRW), the maximum degree, the minimum and maximum weights, the number of zero-degree vertices, the average degree, the number of connected components, the relative size of the largest component, and the sizes of the compressed baseline and labels graphs on storage.
By comparing the MS graph to the two other types, we identify that the MS graph has a steep slope on the Cumulative Edges plot, which indicates that more than 98% of the edges are incident to vertices with degrees 100 to 50K. For the Twitter and SK graphs, the vertices with degrees between 100-50K contain about 60% and 40% of the total edges, respectively. Unlike the two other types, the low-degree vertices (degrees \(\leq\) 100) and very high-degree vertices (degrees \(\geq\) 50K) hardly contribute to the total edges in MS. To identify the connection between vertices, we use the **degree decomposition** plots [12] in Figure 4. Vertices are divided into vertex classes based on their degrees: vertices with degrees 1-10, 10-100, .... For each vertex class, we consider the edges with destination endpoints in this vertex class. For these edges, we identify and aggregate the vertex classes of the source endpoints. This shows how vertices of different vertex classes contribute (as sources of edges) to a vertex class. As an example, in Figure 4(a), the first vertex class is 1-10 and has 7 bars. The second bar, with yellow color, indicates a 26% contribution from the vertices with degrees 10-100. In other words, vertices with degrees 10-100 are the source endpoints of 26% of the edges to the vertices with degrees 1-10. The degree decomposition figures show that, unlike the social network and web graphs, in the MS graph the low-degree vertices (vertices with degree less than 100) are the main sources of edges to the low-degree vertices and do not contribute to the higher vertex classes. Moreover, the MS graph has similarities to the social network graph, as high-degree vertices (vertices with degrees in the 100-100K classes) are tightly connected to each other. The tight connection between high-degree vertices in the MS graph becomes more important by comparing the Cumulative Frequency of these vertices in the MS graph to the social network in Figure 3, which shows that more than 60% of the vertices of the MS graph are vertices with degrees in the range 100-50K (this also explains the steep slope on the Cumulative Edges plot). In contrast, these vertex classes include only a few percent of the total vertices in the social network. This **tight connection between high-degree vertices and its coincidence with their high cumulative frequency introduces a new structure of real-world skewed graphs**, with obvious differences to the previously studied ones such as web graphs and social networks [12]. We see similar trends in the degree distributions of the MS subgraphs.

Fig. 3: Degree distribution

Fig. 4: Degree decomposition
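A compact Python sketch of the decomposition just described (a hypothetical helper; in a symmetric graph the in-degree of a vertex equals its degree):

```python
import numpy as np

def degree_decomposition(src, dst, num_vertices):
    """Fraction of edges arriving at each destination degree-class that
    originate from each source degree-class (classes 1-10, 10-100, ...)."""
    degree = np.bincount(dst, minlength=num_vertices)
    vclass = np.floor(np.log10(np.maximum(degree, 1))).astype(int)
    n = int(vclass.max()) + 1
    table = np.zeros((n, n), dtype=np.int64)   # rows: dst class, cols: src class
    np.add.at(table, (vclass[dst], vclass[src]), 1)
    totals = np.maximum(table.sum(axis=1, keepdims=True), 1)
    return table / totals                      # row-normalized contributions
```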
Figure 5 shows the in-degree distribution of MSA50, the out-degree distribution of MSA50, and the degree distribution of MS50. We see that the slope of the Cumulative Edges curve reduces and the increase in the curve starts from vertices with lower degrees (degree 2 in MSA50 and degree 10 in MS50, rather than degree 100 in MS), which is a result of the filtering methods.

Fig. 5: Degree distribution of MSA50 and MS50

### _Weight Distribution_ Figure 2 shows the weight and vertex-relative weight distributions of the MS graph and their Cumulative Frequency plots. The figure also includes the Fibonacci Binned plot of the weight frequencies. The plots indicate that **weights do not have a random distribution and follow a skewed distribution with a tail close to a power-law distribution**.

Fig. 7: MS - Avg. weight binned scatter plot

### _Weakly-Connected Components_ The component size distribution plots for the symmetric and asymmetric MS-BioGraphs indicate **a power-law size distribution and a very high degree of connectivity in MS and also in the large subgraphs**. Table II illustrates the number of components in the graphs and also the size of the largest component. The table shows that filtering has generally increased the number of components and has reduced the size of the largest component. Moreover, we observe that (as is expected) using vertex-relative weight sampling (Section IV-D, Step 6) has resulted in better preservation of connectivity in the asymmetric subgraphs. ### _Push vs. Pull Locality_ The Push vs. Pull Locality metric [12] considers the cumulative effectiveness of the in-hubs in comparison to the out-hubs in an asymmetric graph. Figure 8 illustrates it for Twitter MPI (as a social network), for SK Domain (as a web graph), and for MSA200 (as an MS-BioGraph). The push locality curve is created by sorting vertices by their in-degrees in descending order and computing the cumulative number of edges. The x-axis shows the number of sorted vertices and the y-axis shows the cumulative edges (as a percentage of the total edges). The push locality curve illustrates how many of the edges are covered by filling the CPU cache with the data of the vertices with the largest in-degrees (i.e., the in-hubs). Similarly, the pull locality curve is created by using the out-degrees of the vertices and illustrates the cumulative edges covered by the out-hubs. Figure 8 shows that for Twitter MPI, the pull locality curve has continuously greater values than the push locality curve. In other words, if we fill the cache with the data of out-hubs, more reuse is expected in comparison to filling the cache with the data of the same number of in-hubs. On the other hand, for SK Domain, we observe that in-hubs are more powerful than out-hubs, and for the same number of hubs, a greater number of edges (i.e., more reuse of vertex data) is covered by the in-hubs in comparison to the out-hubs. For MSA200 (as shown by Figure 8, and also for the other asymmetric MS-BioGraphs), the push locality curve is very close to the pull locality curve. This shows that **MS-BioGraphs, in contrast to social networks and web graphs, demonstrate the same Push and Pull Locality**. Table II shows that hubs (including in-hubs and out-hubs) in MS-BioGraphs have degrees lower than one million, and we explained in Subsection V-B that high-degree vertices are very frequent. We have a greater number of high-degree vertices with a lower contribution per vertex, which results in a smoother slope of the push and pull locality curves (Figure 8) for MSA200 in comparison to Twitter and SK.
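The two curves can be computed with a few lines of Python (a sketch under the definitions above; `src`/`dst` are the edge endpoint arrays of the asymmetric graph, and the names are illustrative):

```python
import numpy as np

def locality_curve(degrees):
    """Cumulative fraction of edges covered by the top-k hubs, k = 1, 2, ..."""
    hubs = np.sort(degrees)[::-1]             # hub degrees, descending
    return np.cumsum(hubs) / max(degrees.sum(), 1)

def push_pull_curves(src, dst, num_vertices):
    in_deg = np.bincount(dst, minlength=num_vertices)    # push: in-hubs
    out_deg = np.bincount(src, minlength=num_vertices)   # pull: out-hubs
    return locality_curve(in_deg), locality_curve(out_deg)
```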
## VI Related Work _Impacts of Creating Datasets on Progressing Research Fields._ The progress of scientific fields depends on the existence of real-world challenges. To encourage further research in HPC, challenges such as DIMACS15 and HPEC16 have been created. Creating updated datasets has the same effect by keeping the research fields motivated and challenging. As an example, image databases such as MOT17 have been presented in Computer Vision, and real-world graph datasets [29, 27, 28, 31] are actively used in HPGP. Footnote 15: [http://archive.dimacs.rutgers.edu/Challenges/](http://archive.dimacs.rutgers.edu/Challenges/) Footnote 16: [https://www.omgwiki.org/hpec/files/hpec-challenge/](https://www.omgwiki.org/hpec/files/hpec-challenge/) Footnote 17: [https://motchallenge.net](https://motchallenge.net) _Sequence Alignment Algorithms._ We used the LAST aligner, which provides better performance and reliability (Section IV-B). However, the solutions for constructing large graphs apply equally to other aligners [49, 34, 35, 50, 51]. _Generating and Processing Graphs._ Unlike the storage-based processing model [4, 3, 52], the distributed-storage processing model [53] divides the total storage between multiple machines, which makes the machines dependent on each other for accessing the storage. The progress of parallel and distributed file systems has provided larger bandwidth, which requires new processing models. As explained in Section II-C, creating and analyzing graphs deals with graph algorithms such as graph transposition [54, 55], symmetrization, and sorting [56, 26], which require further investigation. _Analyzing Graph Structure._ The study of different graph types and their structures has been performed in [29, 57, 58, 59, 12], which present different topological metrics and tools to analyze the differences between graph types.

Fig. 8: Push vs Pull Locality

## VII Conclusion & Future Work To provide a more effective HPGP research environment, with access to realistic and updated datasets that better cover various application domains, this paper presents solutions for the challenges in creating and compressing large graphs. We explained the process of creating large graphs as a multi-step HPC process that requires process-wide, model-specific engineering and design of data structures and algorithms. We introduced parallel compression in the WebGraph framework, which facilitates efficient compression of large graphs. We demonstrated the effectiveness of our solutions in generating the **MS-BioGraphs**, a family of sequence similarity graphs with up to \(2.5\) trillion edges, which is \(6.6\) times greater than the previous largest real-world graph. In addition to HPGP benchmarking and network studies, these graphs have several uses in biology. We presented a comparative study of the characteristics of these graphs, which shows a skewed degree distribution and a particular graph structure, exposing a tight connection between frequent high-degree vertices that makes their structure very different from web graphs and social networks. Further investigations for optimizing the whole process of creating large graphs are necessary. Is it possible to shorten the flow, parallelize/merge the steps, and increase the reuse of data? Which data structures and algorithms are needed? What impacts are made by different distributed models?
## Acknowledgements This work was partially supported by (i) the High Performance Computing center of Queen's University Belfast and the Kelvin-2 supercomputer (UKRI EPSRC grant EP/T022175/1) and (ii) the SERICS project (PE00000014) under the NRRP MUR program funded by the EU - NGEU. The first author was also supported by a scholarship from the Department for the Economy, Northern Ireland, and Queen's University Belfast.
2309.11302
Extending a result of Chen, Erchenko and Gogolev
In a recent paper, Chen, Erchenko and Gogolev have proven that if a Riemannian manifold with boundary has hyperbolic geodesic trapped set, then it can be embedded into a compact manifold whose geodesic flow is Anosov. They have to introduce some assumptions that we discuss here. We explain how some can be removed, obtaining in particular a result applicable to all reasonable 3 dimensional examples.
Yannick Guedes Bonthonneau
2023-09-20T13:30:03Z
http://arxiv.org/abs/2309.11302v2
# Extending a result of Chen, Erchenko and Gogolev. ###### Abstract. In a recent paper [3], Chen, Erchenko and Gogolev have proven that if a Riemannian manifold with boundary has hyperbolic geodesic trapped set, then it can be embedded into a compact manifold whose geodesic flow is Anosov. They have to introduce some assumptions that we discuss here. We explain how some can be removed, obtaining in particular a result applicable to all reasonable 3 dimensional examples. In all that follows, \((M,g)\) will denote a smooth compact Riemannian manifold with smooth boundary \(\partial M\), of dimension \(n\). The geodesic flow is defined on a subset of \(SM\times\mathbb{R}\), and we will always assume that the set of points \(v\in SM\) whose geodesic trajectory never encounters \(\partial SM\) is a hyperbolic set. As in [3], we consider **Problem A**.: _Can \(M\) be isometrically embedded as an open set of a compact manifold \((N,g^{\prime})\) without boundary, so that the geodesic flow of \(N\) is Anosov?_ Since metrics whose geodesic flow is Anosov form the \(C^{2}\) interior of the metrics that do not have conjugate points [10], certainly \(M\) must not have any conjugate point, so we will make this assumption from now on. It is a standard assumption in the context of inverse problems for manifolds with boundary that the boundary is strictly convex, i.e., that the second fundamental form \[II_{\partial M}>0. \tag{1}\] This ensures that the Riemannian distance function for points that are close to each other on the boundary is realized by small geodesics that only touch the boundary at their endpoints. Thus, the set of points whose geodesic trajectory remains inside \(M\) for all times does not meet \(\partial M\). In practice, this simplifies many computations, so we will assume (1) holds. **Definition 1**.: _Let \(M\) be a compact manifold with non-empty boundary, with hyperbolic trapped set, no conjugate points, and strictly convex boundary. We say that it is an Anosov manifold with boundary._ Let us recall the result of [1] **Theorem 1** (Theorem A, [1]).: _Let \(M\) be an Anosov manifold with boundary, and furthermore assume that the boundary components of \(M\) are topological spheres; then Problem A has a positive solution._ Since the only connected compact manifold in one dimension is a sphere (the circle), this theorem in particular solves completely the case of surfaces. In higher dimensions, the assumption that the boundary components are spheres seemed very stringent, the question being whether there exists any example where \(M\) is not a topological ball. In a second version of their paper, the authors in [1] claim that their argument also applies to the case \(\partial M\simeq\mathbb{S}^{1}\times\mathbb{S}^{n-2}\). In this paper, we will prove the following results. First, we forgo the topology question, and observe that the metric extension of [1] yields almost directly **Theorem 2**.: _Let \(M\) be an Anosov manifold with boundary. Then one can embed isometrically \(M\subset N\) into a conformally compact complete \(n\)-manifold \(N\) whose interior has uniformly hyperbolic geodesic flow, negative curvature at infinity, asymptotically constant, and no conjugate points.
Also, \(N\) is diffeomorphic to \(M\)._ Next, we consider the case of the boundary having a very simple topology, and obtain some partial results about the topology of \(M\). **Theorem 3**.: _Let \(M\) be an Anosov manifold with boundary of dimension at least \(3\)_ * _If at least one boundary component is diffeomorphic to a sphere, then_ \(M\) _is diffeomorphic to a ball, and has no trapped set._ * _If at least one boundary component is diffeomorphic to_ \(\mathbb{S}^{1}\times\mathbb{S}^{n-2}\)_, then_ \(M\) _is diffeomorphic to a solid torus. It is a convex neighbourhood of a single closed geodesic._ As observed by the authors of [1], in the second case, one can use the residual finiteness of the \(\pi_{1}\) of hyperbolic manifolds to embed \(M\) in a finite cover of any given compact hyperbolic \(n\)-manifold. If one component of the boundary is an \(\mathbb{S}^{n-p-1}\) bundle over a \(p\)-dimensional Anosov manifold, one could imagine that \(M\) is diffeomorphic to the corresponding ball bundle; apart from some partial results, whether this is true or not has eluded the author. Finally, we show that for 3-manifolds, the problem can also be completely solved. **Theorem 4**.: _Let \(M\) be an oriented Anosov 3-manifold. Then Problem A has a positive answer._ It seems that current knowledge of the topology of Riemannian manifolds of dimension \(\geq 4\) is not sufficient to settle the question in higher dimensions. **Structure of the arguments.** We will rely on a good part of the argument in [1]. Let us recall its gist. They first construct an extension of the metric near the boundary, whose curvature is bounded above uniformly, but becomes arbitrarily negative at an arbitrarily small distance from the boundary, and constant at a fixed distance. The second part is to study the dynamics of Jacobi fields, and prove that the passage of the geodesics through the patch of possibly positive curvature is compensated by the large negative curvature in the rest of the extension, so that the extension does not create conjugate points, and preserves the Axiom A property. The last part is to find a manifold of constant sectional curvature that can be glued to the extended manifold. We point out that in the arguments of the second part, it is not really important that the curvature becomes constant far from the boundary. What is crucial is that it is arbitrarily negative on the complement of an arbitrarily small patch near \(\partial M\), and uniformly bounded above. If one were to build an extension satisfying only these two properties, the second part of the arguments would apply. Then if one glues to the extension a manifold with concave boundary and negative curvature, the rest of the arguments also apply. The remaining topological question is thus whether there exist concave manifolds with negative curvature and prescribed metric near the boundary components. **Organization of the paper.** We will first give a slightly different presentation of the construction of the metric extension, so that it applies to the most general case, and prove Theorem 2. Next, we will discuss some topological results about the general problem, and finish the proof of Theorem 3. Finally, we will concentrate on the 3 dimensional case, and prove Theorem 4. We happily thank C. Lecuire for providing the key argument for Theorem 4. We also warmly thank B. Petri and F. Paulin for many explanations.

Figure 1. The extension of the metric near the boundary
The author of this paper is not an expert in the topology of hyperbolic 3-manifolds, or higher dimensional topology for that matter. If such an expert were to take interest in these questions, maybe significant further progress could be made. ## 1. The extension problem We will here present the constructive arguments of [1] a little differently. We start by giving formulae for the curvatures of a metric in coordinates adapted to a hypersurface. ### Study of the curvatures in a slice situation In this section, we consider \(S\) a compact manifold endowed with a family of Riemannian metrics \((g_{t})_{t}\), and a function \(f\) of the parameter \(t\), that we will assume to lie in an interval \(I\subset\mathbb{R}\). Both are assumed to be smooth, and we are chiefly interested in the sectional curvatures of the metrics on \(I\times S\) given by \[h=dt^{2}+g_{t},\qquad\tilde{h}=dt^{2}+f(t)^{2}g_{t}.\] In what follows, we will think of \(g_{t}\) as fixed, and will let \(f\) vary to obtain some interesting properties. Quantities related to \(\tilde{h}\) will be denoted with a \(\ {}^{\sim}\). In [1], some very classical formulae are recalled, but some computations have only been done in local coordinates using Christoffel coefficients; we will here try to give an intrinsic presentation, simplifying the proof of the first part of their statement. On the manifold \(I\times S\), we denote by \(T\) the vector field \(\partial/\partial t\), and the letters \(X,Y\) will denote vector fields tangent to \(S\), that do not depend on \(t\). Recall that the second fundamental form is \(II=(1/2)\partial_{t}h\), and the shape operator \(A\) is defined by \[h(AX,Y)=II(X,Y).\] Here, \[g_{t}(\tilde{A}X,Y)=\frac{f^{\prime}}{f}g_{t}(X,Y)+\frac{1}{2}\partial_{t}g_{t}(X,Y),\] so that \[\tilde{A}=\frac{f^{\prime}}{f}+A.\] We will assume that the slices \(S_{t}=\{t\}\times S\) are strictly convex, i.e., that \(\tilde{A}>0\). More precisely, we will assume that \(A\geq 0\), and that \(f^{\prime}/f>0\). We say that the slices are _uniformly_ strictly convex if we have \(\tilde{A}>C>0\) for some global constant \(C>0\). By \(\sigma_{U,V}\), we denote the plane generated by vectors \(U,V\). **Proposition 1**.: _We have the following formulae for the sectional curvatures of \(\tilde{h}\), where \(a\neq 0\), and \(X,Y\) are orthonormal for \(g_{t}\) at the point of computation_ \[\tilde{K}(\sigma_{X,T}) =-\frac{f^{\prime\prime}}{f}+K(\sigma_{X,T})-2\frac{f^{\prime}}{f}g_{t}(AX,X).\] \[\tilde{K}(\sigma_{X,Y}) =\frac{1}{f^{2}}K^{int}(\sigma_{X,Y})+\left[g_{t}(AX,Y)^{2}-g_{t}(AX,X)g_{t}(AY,Y)\right]\] \[\qquad-\left(\frac{f^{\prime}}{f}\right)^{2}-2\frac{f^{\prime}}{f}\left[g_{t}(AX,X)+g_{t}(AY,Y)\right]\] \[\tilde{K}(\sigma_{X+aT,Y}) =\frac{f(t)^{2}\tilde{K}(\sigma_{X,Y})+a^{2}\tilde{K}(\sigma_{T,Y})+2ah(R(X,Y)Y,T)}{f(t)^{2}+a^{2}}.\] Proof.: This essentially follows from the Gauss and Codazzi equations. Let us start with the sectional curvature of the plane \(\sigma_{X,T}\) generated by \(T\) and \(X\neq 0\). It is given by \[K(\sigma_{X,T})=-\frac{g_{t}((A^{\prime}+A^{2})X,X)}{g_{t}(X,X)},\] so that \[\tilde{K}(\sigma_{X,T})=-\frac{f^{\prime\prime}}{f}+K(\sigma_{X,T})-2\frac{f^{\prime}}{f}\frac{g_{t}(AX,X)}{g_{t}(X,X)}.\] Let us now turn to the curvature of the plane \(\sigma_{X,Y}\) generated by \(X\) and \(Y\). The Gauss equation gives \[K(\sigma_{X,Y})=K^{int}(\sigma_{X,Y})-\frac{g_{t}(AX,X)g_{t}(AY,Y)-g_{t}(AX,Y)^{2}}{g_{t}(X,X)g_{t}(Y,Y)-g_{t}(X,Y)^{2}},\] where \(K^{int}\) is the sectional curvature of \(g_{t}\).
Since \(f\) is constant on each slice, we find directly \[\tilde{K}^{int}(\sigma_{X,Y})=\frac{1}{f^{2}}K^{int}(\sigma_{X,Y}).\] We deduce that taking \(X\) and \(Y\) orthonormal (for \(g_{t}\)) at the point of interest, we get \[\tilde{K}(\sigma_{X,Y}) =\frac{1}{f^{2}}K^{int}(\sigma_{X,Y})-\left[g_{t}(AX,X)g_{t}(AY,Y)-g_{t}(AX,Y)^{2}\right]\] \[\quad-\left(\frac{f^{\prime}}{f}\right)^{2}-2\frac{f^{\prime}}{f}(g_{t}(AX,X)+g_{t}(AY,Y)).\] Let us now turn to the curvature of the plane \(\sigma_{X+aT,Y}\), generated by \(X+aT\) and \(Y\), for some \(a\neq 0\). Without changing the plane (but changing \(a\)), we can assume that \(X,Y\) are orthogonal (for \(g_{t}\)) at the point of interest. Then we have \[K(\sigma_{X+aT,Y})=\frac{h(R(X+aT,Y)Y,X+aT)}{(a^{2}+g_{t}(X,X))g_{t}(Y,Y)}.\] This gives (using the symmetries of the curvature tensor) \[K(\sigma_{X+aT,Y})= \frac{1}{a^{2}+g_{t}(X,X)}\left(g_{t}(X,X)K(\sigma_{X,Y})+a^{2}K(\sigma_{T,Y})\right)\] \[+\frac{2a}{(a^{2}+g_{t}(X,X))g_{t}(Y,Y)}\langle R(X,Y)Y,T\rangle.\] It is this last mixed term in the RHS that was estimated using coordinates and Christoffel coefficients in [5]. We are seeking to compute \[F(X,Y):=\langle\nabla_{X}\nabla_{Y}Y-\nabla_{Y}\nabla_{X}Y-\nabla_{[X,Y]}Y,T\rangle.\] According to the Codazzi equation, we have \[F(X,Y)=\nabla_{X}^{int}II(Y,Y)-\nabla_{Y}^{int}II(X,Y).\] Now, we observe that \[\tilde{F}(X,Y)=f(t)^{2}F(X,Y).\] This \(f(t)^{2}\) term will be compensated by \(\tilde{h}(Y,Y)\), so we conclude that \[\tilde{K}(\sigma_{X+aT,Y})=\frac{f(t)^{2}\tilde{K}(\sigma_{X,Y})+a^{2}\tilde{K}(\sigma_{T,Y})+2aF(X,Y)}{f(t)^{2}+a^{2}}.\] ### Extension of the metric We assume that we are given \((M,g)\) as in the introduction. Near the boundary, we can always add cylinders, to define \[N:=M\cup(\partial M)_{x}\times]0,+\infty]_{t},\] which is a smooth manifold with boundary, diffeomorphic to \(M\). Theorem 2 follows from the following **Lemma 1**.: _We can build a metric \(\tilde{h}\) on the interior of \(N\) that extends the metric of \(M\), and so that this metric is conformal to a smooth metric on \(N\). Additionally, there exists a constant \(C_{0}>0\) so that for \(\kappa>0\) large enough, we can ensure that_ 1. _In_ \(\mathring{N}\setminus M\)_, the sets_ \(\{t=t_{0}\}\) _are equidistant sets from the boundary_ \(\partial M\)_. They are uniformly strictly convex._ 2. _The sectional curvature of_ \(\tilde{h}\) _tends to_ \(-\kappa^{2}\)_, with equality for_ \(t\geq 1\) _if_ \(\partial M\) _admits a constant curvature metric._ 3. _The sectional curvature of_ \(\tilde{h}\) _is globally bounded above by_ \(C_{0}\)_._ 4. _For_ \(t\geq 1/\sqrt{\kappa}\)_, the sectional curvature of_ \(\tilde{h}\) _is less than_ \(-\kappa^{2}/2\)_._ Proof.: We will work near a fixed boundary component \(P\subset\partial M\). Near \(P\), we can write the metric in the form \[h=dt^{2}+g_{t}(x,dx),\] where \(g_{t}\) is a family of metrics on \(P\), \(-\epsilon<t\leq 0\), and \(t\) is the geodesic distance to \(P\). We can extend the family \(g_{t}\) smoothly to \(]-\epsilon,+\infty[\), and since \(\partial_{t}g_{t}>0\) near \(t=0\), we can ensure that * \(\partial_{t}g_{t}>\frac{1}{2}\partial_{t}g_{t}|_{t=0}\) for \(t\in]-\epsilon,1/2]\) * \(\partial_{t}g_{t}\geq 0\) for all \(t\)'s * \(\partial_{t}g_{t}=0\) for \(t\geq 1\). Additionally, if we are given a metric \(g^{\prime}\) on \(P\), we can ensure that \(g_{t}=Cg^{\prime}\) for \(t\geq 1\) and some constant \(C>0\).
When \(P\) supports a metric of constant sectional curvature, this is the choice that we make. Next, we want to look for \(\tilde{h}=dt^{2}+f(t)^{2}g_{t}(x,dx)\) in the form given by the previous §. We impose \(f=1\) on \(]-\epsilon,0]\). Let us analyze the conditions of the theorem and their consequences for \(f\). First, to ensure uniform strict convexity, it suffices to assume that \(f^{\prime\prime}\geq 0\) globally, and that \(f^{\prime}/f>C>0\) on \([1/2,+\infty[\). Next, \(f^{\prime\prime}\geq 0\) also ensures1 that \(\tilde{K}\leq K\), so that (3) is satisfied with \(C_{0}\geq\sup K\). Footnote 1: One needs that \(f^{\prime\prime}\geq 0\), \(f^{\prime}\geq 0\) and \(f\geq 1\). Since \(f=1\) on \(]-\epsilon,0]\) and is smooth, it suffices to assume that \(f^{\prime\prime}\geq 0\). Let us now consider property (2). We observe that since \(\partial_{t}g_{t}=0\) for \(t\geq 1\), we get the formulae (\(t\geq 1\)) \[\tilde{K}(\sigma_{X,T}) =-\frac{f^{\prime\prime}}{f}\] \[\tilde{K}(\sigma_{X,Y}) =\frac{1}{f^{2}}K^{int}(\sigma_{X,Y})-\left(\frac{f^{\prime}}{f}\right)^{2}\] \[\tilde{K}(\sigma_{X+aT,Y}) =\frac{f(t)^{2}\tilde{K}(\sigma_{X,Y})+a^{2}\tilde{K}(\sigma_{Y,T})}{f(t)^{2}+a^{2}}.\] This suggests taking, for \(t\geq 1\), \[f(t)=f_{cc}(t):=\begin{cases}\frac{C}{\kappa}\cosh(\kappa t)&\text{if }\inf K_{t\geq 1}^{int}=-C^{2}<0,\\ e^{\kappa t}&\text{if }K_{t\geq 1}^{int}=0,\\ \frac{C}{\kappa}\sinh(\kappa t)&\text{if }K_{t\geq 1}^{int}\geq 0\text{ and }\sup K_{t\geq 1}^{int}=C^{2}>0.\end{cases} \tag{2}\] This ensures that the curvature tends to \(-\kappa^{2}\) as \(t\to+\infty\) (exponentially fast in \(t\)). When \(g_{t\geq 1}\) has constant curvature, we have \(\tilde{K}=-\kappa^{2}\) for \(t\geq 1\). Now, for item (4), we observe that for \(C_{0}\) large enough, \[\tilde{K}(\sigma_{X,T})\leq-\frac{f^{\prime\prime}}{f}+C_{0},\quad\tilde{K}(\sigma_{X,Y})\leq\frac{C_{0}}{f^{2}}-(f^{\prime}/f)^{2},\] \[\tilde{K}(\sigma_{X+aT,Y})\leq\max(\tilde{K}(\sigma_{X,T}),\tilde{K}(\sigma_{X,Y}))+\frac{C_{0}}{f}.\] Let \(t_{0}\in(0,1)\) be such that the function \(f_{cc}\) defined in (2) satisfies \(f_{cc}\geq 1\) for \(t\geq t_{0}\). Then taking this suggestion for \(f\), not only for \(t\geq 1\) but for \(t\geq t_{0}\), we get \[\tilde{K}\leq 2C_{0}-\kappa^{2}.\] We observe that, for example, \(t_{0}=1/\sqrt{\kappa}\) satisfies the condition for \(\kappa\) large. We also observe that \(f_{cc}^{\prime\prime}\geq 0\) for \(t\geq 0\), and \(f_{cc}^{\prime}/f_{cc}>C>0\) for \(t>1/2\), uniformly as \(\kappa\) becomes large. It remains thus to define \(f\) on the interval \([0,1/\sqrt{\kappa}]\) so that \(f^{\prime\prime}\geq 0\), to obtain a smooth function. The only condition for this is that \((0,f(0))=(0,1)\) is strictly above the tangent to the graph of \(f_{cc}\) at \(t=t_{0}=1/\sqrt{\kappa}\). Accordingly, we require that \[f_{cc}(t_{0})-f_{cc}^{\prime}(t_{0})t_{0}<1.\] We can then check case by case that this holds for \(t_{0}=1/\sqrt{\kappa}\) and \(\kappa\) large enough. For instance, in the flat case \(f_{cc}=e^{\kappa t}\), one computes \(f_{cc}(t_{0})-f_{cc}^{\prime}(t_{0})t_{0}=e^{\sqrt{\kappa}}(1-\sqrt{\kappa})\), which is negative, hence less than \(1\), as soon as \(\kappa>1\). Let us finally check the smooth conformal compactness. For this, it suffices to set \(y=e^{-\kappa t}\), to write the metric in the cylinder in the form \[\tilde{h}=\frac{1}{\kappa^{2}y^{2}}\left(dy^{2}+m(y)g_{t\geq 1}(x,dx)\right).\] Here, the function \(m\) is a smooth function of \(y^{2}\), and \(y\) is a boundary defining function. ## 2. General observations about the topology of the problem Let us collect some general information about universal covers of manifolds with hyperbolic geodesic flow.
This is classical material. We will denote by \(\pi:\widetilde{M}\to M\) the universal cover of \(M\), and by \(\widetilde{N}\) that of \(N\). Since \(M\) is geodesically convex, and does not have conjugate points, we can pick \(x_{0}\in M\) and identify \(\widetilde{M}\) with the set of \(u\in T_{x_{0}}M\) such that \(\exp_{x_{0}}(u)\in M\), with \(T_{x_{0}}M\simeq\mathbb{R}^{n}\). By this construction, we identify \(\widetilde{M}\) as a geodesically convex subset of \(\widetilde{N}\simeq\mathbb{R}^{n}\). The fundamental group \(\pi_{1}(M)\) is realized by isometries of \(\widetilde{N}\), which preserve \(\partial\widetilde{M}\). In particular, \(\partial\widetilde{M}\) is a Galois cover of \(\partial M\). The embedding \(\imath:\partial M\hookrightarrow M\) induces a map \(\imath_{*}:\pi_{1}(\partial M)\to\pi_{1}(M)\). If \(P\) is a connected component of \(\partial M\), and \(\widetilde{P}\) a connected component of \(\pi^{-1}(P)\), then we have \(\pi_{1}(\widetilde{P})=\{\gamma\in\pi_{1}(P)\ |\ \imath_{*}(\gamma)=1\}\), and \(P=\widetilde{P}/\imath_{*}(\pi_{1}(P))\). The elements of \(\pi_{1}(M)\) (except \(1\)) are represented by a special kind of isometries, called _loxodromic_. They have a continuous extension to (Hölder) homeomorphisms of \(B^{n}=\widetilde{N}\cup\mathbb{S}^{n-1}\) (the _visual compactification_ of \(\widetilde{M}\)). They have exactly two fixed points, which lie in \(\mathbb{S}^{n-1}\). They preserve the corresponding geodesic, along which they are a translation. In particular, if two isometries \(\gamma\), \(\mu\) commute, there must exist another \(\eta\), and \(k,\ell\in\mathbb{N}\) so that \(\gamma=\eta^{k}\), \(\mu=\eta^{\ell}\). This implies that there is no copy of \(\mathbb{Z}^{2}\) inside \(\pi_{1}(M)\). Projecting \(\partial\widetilde{M}\) along rays from \(x_{0}\) onto \(\mathbb{S}^{n-1}\), we find that \(\partial\widetilde{M}\) is (Hölder) homeomorphic to an open set of \(\mathbb{S}^{n-1}\). Its complement \(\Lambda\) is called the _limit set_. It is the set of endpoints of geodesics that remain inside \(M\) for all times. From these general facts, we deduce our Theorem 3. Proof.: Let us start by observing that the following are equivalent 1. At least one connected component of \(\partial\widetilde{M}\) is a sphere 2. At least one connected component of \(\partial\widetilde{M}\) is compact 3. \(\partial\widetilde{M}\simeq\mathbb{S}^{n-1}\) 4. \(\widetilde{M}\simeq B^{n}\) 5. \(\pi_{1}(M)=\{1\}\) 6. \(M\) is diffeomorphic to a closed ball. Certainly, (1) implies (2), and since \(\partial\widetilde{M}\) is an open set of \(\mathbb{S}^{n-1}\), if it is compact, it must be the whole sphere, so (2) implies (3). Using rays starting from \(x_{0}\), we can then build a diffeomorphism between \(\widetilde{M}\) and \(B^{n}\), so that (3) implies (4). In that case, every element of \(\pi_{1}(M)\) preserves a compact set of \(\mathbb{R}^{n}\), so must be trivial, and (4) implies (5). If (5) holds, then \(M=\widetilde{M}\) must be compact, and using again rays from \(x_{0}\), we find that \(M\) is a closed ball. Finally, if \(M\) is a closed ball, its boundary is a sphere. This takes care of item (1) of Theorem 3. Now, we also have equivalence between (a) \(\Lambda\) has exactly two points (b) The boundary of \(M\) is diffeomorphic to a bundle: \(\mathbb{S}^{n-2}\to\partial M\to\mathbb{S}^{1}\). (c) One connected component of \(\partial M\) is diffeomorphic to a bundle \(\mathbb{S}^{n-2}\to P\to\mathbb{S}^{1}\). (d)
One connected component \(P\) of \(\partial M\) satisfies \(\imath_{*}(\pi_{1}(P))\simeq\mathbb{Z}\). Certainly, (a) implies that \(\pi_{1}(M)\) is generated by a non trivial loxodromic isometry \(\gamma\). In particular, the boundary of \(M\) is the quotient of \(\mathbb{R}\times\mathbb{S}^{n-2}\) by the group generated by \(\gamma\). We can use Fermi coordinates along the geodesic preserved by \(\gamma\) to write \(\widetilde{M}\simeq\mathbb{R}\times\mathbb{R}^{n-1}\), a decomposition that is preserved by \(\gamma\), so that \(\widetilde{M}\) intersected with every slice \(t\times\mathbb{R}^{n-1}\) is star shaped with smooth boundary (and thus diffeomorphic to a ball in \(\mathbb{R}^{n-1}\)). This implies that \(M\) is a ball bundle above the circle, and its boundary a sphere bundle above the circle. This implies (b) (which implies (c)). Now assume (c), and let \(P\) be the corresponding connected component, and \(\widetilde{P}\) a connected component of \(\pi^{-1}(P)\). From the arguments in the ball case above, we know that \(\widetilde{P}\) cannot be compact. If \(n>3\), this implies \(\widetilde{P}=\mathbb{R}\times\mathbb{S}^{n-2}\), and \(\imath_{*}(\pi_{1}(P))\simeq\mathbb{Z}\) is generated by one non trivial isometry. In the case \(n=3\), we have to consider the case \(\widetilde{P}\simeq\mathbb{R}^{2}\). However, since \(\pi_{1}(M)\) cannot contain \(\mathbb{Z}^{2}\), this case is ruled out, and so it is the same as \(n>3\), and we have (d). Now, assuming (d), let \(\imath_{*}(\pi_{1}(P))=<\gamma>\). Then let \(c(t)\) be the geodesic preserved by \(\gamma\). We find that \(\widetilde{P}\) is at bounded distance from \(\{c(t)|t\in\mathbb{R}\}\). This implies that \(\partial\widetilde{P}\) (seen as a subset of \(\mathbb{S}^{n-1}\)) can only contain the endpoints of \(c(t)\), and so this implies (a). In the case that the bundle is trivial, this gives item (b) of Theorem 3. We have identified two types of particularly simple boundaries. Let us now discuss some more partial results in this direction. **Lemma 2**.: _Let us assume that a boundary component \(P\) satisfies \(\Gamma:=\imath_{*}(\pi_{1}(P))=\pi_{1}(\Sigma)\), where \(\Sigma\) is a compact Anosov manifold of dimension \(p\leq n-2\). Then \(P\) is the only boundary component._ Proof.: In that case, the visual boundary \(\partial\Gamma\) is homeomorphic to a sphere \(\mathbb{S}^{p-1}\), and since \(\Gamma\) is a hyperbolic subgroup of \(\pi_{1}(M)\), the limit set of \(\Gamma\), which must also be \(\partial\widetilde{P}\), is also homeomorphic to \(\mathbb{S}^{p-1}\). Now, since \(\mathbb{S}^{n-1}\setminus\mathbb{S}^{p-1}\) must be connected, we deduce that \(\mathbb{S}^{n-1}\setminus\mathbb{S}^{p-1}=\widetilde{P}\). This means that \(P\) is the whole boundary of \(M\). If the embedding of \(\mathbb{S}^{p-1}\) into \(\mathbb{S}^{n-1}\) is the standard one, we get that \(\widetilde{P}\simeq\mathbb{S}^{n-p-1}\times\mathbb{R}^{p}\), and \(M\) turns out to be a tubular neighbourhood of \(\Sigma\). However, there is more than one way to embed a sphere in another, except in the case that \(n>2p\). **Corollary 1**.: _If \(\Sigma\) is a surface, and \(M\) has dimension at least \(5\), then \(M\) is diffeomorphic to a ball bundle over \(\Sigma\)._ In \(4\) dimensions, understanding exactly which \(B_{2}\) bundles over a compact surface can be endowed with a complete convex hyperbolic structure is not completely solved. This question was investigated by several authors; see [1], [10].
Let us close this section with a modicum of information regarding the general case for the possible shape of the boundary. We specialize to the case of \(M\) having \(4\) dimensions, to be able to use the geometrization theorem. For this, we will rely on several results from the theory of the topology of \(3\)-manifolds. For example, the reader can consult [1], in particular its §3. Since it is not our main focus, we will be very cursory. **Lemma 3**.: _Let \(M\) be an Anosov \(4\)-manifold with boundary, with some boundary component \(P\neq\emptyset\). Assume that \(P\subset M\) is \(\pi_{1}\)-injective. Then either \(P=\mathbb{S}^{3}\) or it decomposes as a connected sum_ \[A_{1}\#...\#A_{p}\#B_{1}\#\ldots\#B_{\ell},\] _where each \(A_{j}\) is a copy of \(\mathbb{S}^{2}\times\mathbb{S}^{1}\), and each \(B_{j}\) is a compact hyperbolic 3-manifold._ Proof.: According to the prime decomposition theorem, we can decompose \[P=P_{1}\#\ldots\#P_{m},\] where no \(P_{j}\) can be decomposed as a non-trivial connected sum, and (unless \(P=\mathbb{S}^{3}\)) each \(P_{j}\) is either \(\mathbb{S}^{1}\times\mathbb{S}^{2}\) or is irreducible (i.e., every embedding of \(\mathbb{S}^{2}\) bounds a ball). Now, according to the geometrization theorem, we can decompose each \(P_{j}\) as \[P_{j}=\cup Q_{j,\ell},\] where each \(Q_{j,\ell}\) is a manifold whose boundary components are tori, and whose interior admits a geometric structure, in the list of Thurston's \(8\) geometries. Also, each torus boundary is \(\pi_{1}\) embedded. Now, since we assumed that \(P\) is \(\pi_{1}\)-injective, and \(\pi_{1}(M)\) cannot contain a \(\mathbb{Z}^{2}\) subgroup, we deduce that there cannot exist any incompressible torus in the boundary, and, in particular, the decomposition of the \(P_{j}\) must be trivial: each of them supports a Thurston geometry. Now, among the Thurston geometries, we observe that for either a compact Euclidean, Nil or Sol manifold, the fundamental group must be a semi-direct product \(\mathbb{Z}^{2}\rtimes\mathbb{Z}\), which contains a \(\mathbb{Z}^{2}\). In the \(\mathbb{H}^{2}\times\mathbb{R}\) or \(\widetilde{SL(2,\mathbb{R})}\) case, the fundamental group must contain a cyclic normal subgroup, i.e., a \(\mathbb{Z}\) center. Since we cannot have torsion, it must also contain a \(\mathbb{Z}^{2}\), and this is also ruled out. Next, we observe that in the spherical case, elements of the \(\pi_{1}\) must have finite order, which is not possible because there are no elliptic elements in \(\pi_{1}(M)\). In particular, if \(P_{j}\) has spherical geometry, \(P_{j}=\mathbb{S}^{3}\). We deduce that each \(P_{j}\) has geometry either \(\mathbb{S}^{2}\times\mathbb{R}\), or \(\mathbb{H}^{3}\), or is a sphere. ## 3. The case of 3-manifolds In this section, let us concentrate on the case that \(M\) is 3 dimensional and oriented. Then the boundary components are compact oriented surfaces, so either a sphere, a torus, or surfaces of genus \(g>1\). As we have seen, if \(M\) is neither a solid torus nor a ball, all the components must be of the latter variety. The authors of [3] have already commented on how to embed the torus or the ball, so we will concentrate on the remaining case. As noted in §1, we can assume that the boundary components have constant curvature. **Theorem 5**.: _Let \(M\) be a 3-manifold whose boundary is strictly convex, with curvature \(-\kappa^{2}\) constant near the boundary. Assume further that all connected components of the boundary are hyperbolic surfaces of genus \(g>1\).
Provided the hyperbolic metric is well chosen on the boundary, \(M\) can be embedded into a compact manifold \(N^{\prime}\) without boundary, such that the curvature is \(-\kappa^{2}\) in \(N^{\prime}\setminus M\)._ Proof.: We start by recalling this result (see Theorem 3.3 in [10]). **Lemma 4**.: _For \(g>1\), there exists \(N_{g}\) a compact hyperbolic 3-manifold whose boundary is a totally geodesic surface \(S_{g}\) of genus \(g\)._ (As explained in [10], not all hyperbolic structures on surfaces can be realized as totally geodesic boundaries, essentially because of Mostow rigidity). Let us come back to our problem. We will from now on assume that the boundary of \(M\) is connected. If there are several components, one can work component-wise. Let us hence denote \(\Sigma=\partial M\), and endow \(\Sigma\) with a hyperbolic metric \(g_{\partial N_{g}}\) so that Lemma 4 applies, and we also have \(\Sigma=\partial N_{g}\) as a totally geodesic boundary. Near the boundary of \((N_{g},h_{0})\), we have \[N_{g}\simeq\Sigma_{x}\times[0,\delta[_{\tau},\quad h_{0}=d\tau^{2}+\cosh(\tau)^{2}g_{\partial N_{g}}(x,dx). \tag{3}\] Let us apply the construction of §1. For \(C_{0}>0\) large enough, we can ensure that \(C_{0}g_{\partial N_{g}}>g_{\partial M}\), where \(g_{\partial M}\) is the metric on \(\Sigma\simeq\partial M\). We can thus build an extension \(M\subset N\) according to Lemma 1. In \(N\setminus M\), for \(1\leq t\leq 2\), the metric takes the form (for some \(\kappa>0\)) \[dt^{2}+\frac{1}{\kappa^{2}}\cosh(\kappa t)^{2}g_{\partial N_{g}}(x,dx).\] In the formula above, we recognize the expression in coordinates of the metric \(h_{0}/\kappa^{2}\) (here, \(\tau=\kappa t\)). If the local coordinates in (3) extend as far as \(\tau\leq 2\kappa\), we can thus glue \(N_{\{t<2\}}\) with \(N_{g}\setminus\Sigma\times[0,2\kappa[\), and this will conclude the proof of Theorem 5, setting \[N^{\prime}=N_{\{t<2\}}\cup(N_{g}\setminus\Sigma\times[0,2\kappa[).\] The difficulty here is that for the dynamical arguments of [2] to apply, we need to be able to take \(\kappa\) arbitrarily large. We will thus be done if we can prove **Lemma 5**.: _For any \(\kappa>0\), we can choose \(N_{g}\) such that a \(2\kappa\) neighbourhood of \(\partial N_{g}\) is diffeomorphic to \(\partial N_{g}\times[0,2\kappa[\)._ The argument of the proof was communicated by C. Lecuire. It relies on the following very fine statement from the topology of hyperbolic manifolds. **Theorem 6** (Theorem 9.2, [1]).: _Fundamental groups of compact 3 dimensional hyperbolic manifolds are LERF (locally extended residually finite)._ This means that whenever \(H\subset\pi_{1}(M)\) is finitely generated, and \(\gamma\in\pi_{1}(M)\setminus H\), there exists a finite index subgroup \(\Gamma\subset\pi_{1}(M)\) such that \(H\subset\Gamma\) and \(\gamma\notin\Gamma\). Let us come back to the proof. Proof of Lemma 5.: We will build \(N_{g}\) by induction, so we start by denoting \(N_{g}^{0}\) the one provided by Lemma 4. Let us now consider geodesics of \(N_{g}^{0}\) with endpoints in its boundary. We will say that two such geodesics are boundary-homotopic if there is a homotopy between them so that, at each time, the endpoints remain in the boundary. Working on the universal cover, it is elementary to observe that any such geodesic is either boundary-homotopic to a point in the boundary, or to a geodesic that intersects the boundary orthogonally. Additionally, such a boundary-orthogonal geodesic uniquely minimizes the length among its boundary-homotopy class.
Let us set \[\mathfrak{G}=\{ka\ |\ k\in\mathbb{N},\ \text{some boundary-orthogonal geodesic has length $a$}\}.\] Certainly, \(\mathfrak{G}\) is a discrete set, and its only accumulation point is \(+\infty\). We can enumerate it as \[\mathfrak{G}=\{2\ell_{j}\ |\ j\geq 0\}.\] With a bit more elementary Riemannian geometry, we find that \(\ell_{0}\), the largest \(\ell\) such that the \(\ell\)-neighbourhood of \(\Sigma\) is a product, is exactly half the length of a shortest boundary-orthogonal geodesic (there may be several non-homotopic orthogonal geodesics with the same length). Let us denote by \(m_{0}>0\) the number of boundary-orthogonal geodesics with length \(2\ell_{0}\), and by \(\gamma_{0}\) one of them. Let us now consider the double \(A_{0}=DN_{g}^{0}\) of \(N_{g}^{0}\) along \(\Sigma\), and \(D\gamma_{0}\) the double of \(\gamma_{0}\), a closed geodesic of length \(4\ell_{0}\). Using the LERF theorem, we find a finite index subgroup \(\Gamma\subset\pi_{1}(A_{0})\) so that \(\pi_{1}(\Sigma)\subset\Gamma\), and \([D\gamma_{0}]\notin\Gamma\). Let us denote \(B_{0}=\mathbb{H}^{3}/\Gamma\) the corresponding cover of \(A_{0}\). The geodesic \(D\gamma_{0}\) is lifted to a closed geodesic \(\gamma_{0}^{\prime}\) of length at least \(2\times 4\ell_{0}\). Let us pick a lift \(\Sigma\subset B_{0}\), intersecting \(\gamma_{0}^{\prime}\). Actually, \(\gamma_{0}^{\prime}\) only cuts \(\Sigma\) twice. We cut \(B_{0}\) along \(\Sigma\), which cuts \(\gamma_{0}^{\prime}\) into two pieces. There are two sides to this cut, and we call "side A" the side corresponding to the longest piece of \(\gamma_{0}^{\prime}\). To the other side, we glue a copy of the original \(N_{g}^{0}\). We thus obtain a manifold with \(\Sigma\) as boundary, which we rename \(N_{g}^{0,1}\). (Possibly, it may be non-connected, so we only keep the connected component that has a boundary.) It is a cover of \(N_{g}^{0}\). Let us now consider boundary-orthogonal geodesics of \(N_{g}^{0,1}\). They cover boundary-orthogonal geodesics of \(N_{g}^{0}\). On the other hand, we can lift those of \(N_{g}^{0}\) uniquely to ones of \(N_{g}^{0,1}\), so that there is a bijection between the two sets. Let us now consider the ones of length \(2\ell_{0}\). We have ensured that the geodesic above \(\gamma_{0}\) has length \(\geq 4\ell_{0}\), so that there are at most \(m_{0}-1\) such geodesics. We can thus apply the same procedure to \(N_{g}^{0,1}\), to obtain \(N_{g}^{0,2}\), etc., until there are no boundary-orthogonal geodesics of length \(2\ell_{0}\) left; we are sure this will end at step \(m_{0}\) at the latest. We denote by \(N_{g}^{1}\) the corresponding manifold. \(N_{g}^{1}\) is a connected hyperbolic 3-manifold with boundary, whose boundary is a totally geodesic copy of \(\Sigma\). Additionally, the lengths of its boundary-orthogonal geodesics are contained in \(\mathfrak{G}\), and they are strictly larger than \(2\ell_{0}\). So they must be at least \(2\ell_{1}\). Proceeding by induction, we can thus construct a manifold whose boundary is a totally geodesic copy of \(\Sigma\), and whose shortest boundary-orthogonal geodesic has length as large as desired, for example strictly larger than \(4\kappa\). This implies that a \(2\kappa\) neighbourhood of \(\Sigma\) is a product. This closes the proof of Theorem 5. Observe that there are examples of compact hyperbolic \(4\)-manifolds whose \(\pi_{1}\) is not LERF. This is one of the incarnations of the fact that our problem here becomes much more difficult when the dimension increases. 
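For the reader's convenience, the covering-space bookkeeping behind the length bound used above can be made explicit; the following is a standard computation, spelled out under the assumption that \(\gamma_{0}^{\prime}\) is the closed geodesic of \(B_{0}\) lying above \(D\gamma_{0}\). Since \(\Gamma\) has finite index in \(\pi_{1}(A_{0})\), the integer \[k_{0}:=\min\{k\geq 1\ :\ [D\gamma_{0}]^{k}\in\Gamma\}\] is well defined, and \(\gamma_{0}^{\prime}\) traverses \(D\gamma_{0}\) exactly \(k_{0}\) times, so that \[\operatorname{len}(\gamma_{0}^{\prime})=k_{0}\cdot\operatorname{len}(D\gamma_{0})=4k_{0}\ell_{0}\geq 8\ell_{0},\] because \([D\gamma_{0}]\notin\Gamma\) forces \(k_{0}\geq 2\). Cutting along \(\Sigma\) then leaves a longest piece of length at least \(4\ell_{0}\), which is the bound used in the construction of \(N_{g}^{0,1}\).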
Let us conclude the proof of the main Theorem 4. Proof of Theorem 4.: We have already isometrically embedded \(M\subset N^{\prime}\), so that the curvature of \(N^{\prime}\) satisfies 1. \(K\leq C_{0}\) globally, 2. \(K\leq-\kappa^{2}/2\) on \(N^{\prime}\setminus V\), where \(V\) is the \(1/\sqrt{\kappa}\) neighbourhood of \(M\). We have seen that \(C_{0}\) is fixed, and \(\kappa\) can be taken arbitrarily large. We have also ensured that the slices are uniformly strictly convex. In the arguments below, \(C>0\) will denote a constant that does not depend on the choice of \(\kappa\), and that may change at every line. Apart from this, we will use the notations for Jacobi fields introduced in [1]. Observe that their \(1/C_{1}\) corresponds to our \(1/\sqrt{\kappa}\). The key technical tool is the comparison lemma 2.8 in [1]. In particular, it tells us that in \(N^{\prime}\setminus V\), in the region \(t>1/\sqrt{\kappa}\), where the curvature is below \(-\kappa^{2}/2\), a Jacobi field with \(\mu_{J}(0)>-Q\) satisfies \(\mu_{J}(t)>(1-\epsilon)\kappa/\sqrt{2}\) for \(t\gtrsim 1\), when \(\kappa\) is large and \(Q\) remains fixed. We will rely on their arguments of §8, trying to give some detail. Let us first sketch the proof of **Lemma 6**.: _Let \(v\in\partial_{-}SM\) and \(J\) be a perpendicular Jacobi field along the corresponding geodesic. If \(\mu_{J}(0)>Q_{M}\), then \(J\) does not vanish in positive time._ Proof.: According to Lemma 8.5 in [1], the travel time in the region \(t\in[0,1/\sqrt{\kappa}]\) is bounded above by \(C/\sqrt{\kappa}\). Let \(J\) be a perpendicular Jacobi field along a geodesic \(\gamma_{v}\) starting at a point \(v\in SN^{\prime}\), and let us assume that \(v\in\partial SM\) is entering. Then according to Proposition 4.1 of [1], if \(\mu_{J}(0)>Q_{M}\), either \(v\in\Gamma_{-}\) and we are done, or \(v\notin\Gamma_{-}\). Then, as \(\gamma_{v}\) leaves \(SM\), \(\mu_{J}(\ell(v))>-Q_{M}\). Since the travel time in the collar is small and the curvature bounded above, we deduce that \(\mu_{J}\) is at least \(-2Q_{M}\) when \(\gamma_{v}\) enters \(\{t>1/\sqrt{\kappa}\}\). Now, in this region, the curvature is bounded above by \(-\kappa^{2}/2\), so that applying the comparison lemma 2.8 of [1], and taking into account that the travel time inside \(t>1/\sqrt{\kappa}\) must be at least \(1\), if \(\kappa\) is large enough, \(\mu_{J}\) must be at least \(Q_{M}\) again when \(\gamma_{v}\) enters \(SM\) again (if it ever does). We can conclude by induction on the times. **Lemma 7**.: \(N^{\prime}\) _has no conjugate points._ Proof.: Let us consider a nonzero Jacobi field \(J\) that vanishes at some point above \(N^{\prime}\setminus M\). Then, using again the comparison lemma, we deduce that \(\mu_{J}>Q_{M}\) as long as the geodesic remains in \(SN^{\prime}\setminus SM\), and thus \(J\) cannot vanish again according to our lemma. Let us now consider a nonzero Jacobi field vanishing at some point. If the corresponding geodesic remains in \(SM\), then we are done. Otherwise, consider the first time that it exits \(SM\). Then, using the argument of the proof of Lemma 8.11 of [10], we must have \(\mu_{J}>-Q_{M}\); otherwise we could apply Proposition 4.1 in reverse time and obtain a contradiction. Again, by the smallness of the collar and the very negative curvature in \(\{t>1/\sqrt{\kappa}\}\), if the geodesic comes back into \(SM\), we must have \(\mu_{J}>Q_{M}\) at the point of entry. Then proceeding by induction as above enables us to conclude. 
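To make the mechanism of the comparison lemma explicit, recall the standard Riccati picture (stated here as background; the normalizations of [1] may differ slightly). For a perpendicular Jacobi field \(J\), set \(\mu_{J}=\langle J^{\prime},J\rangle/\|J\|^{2}\). Differentiating and using the Jacobi equation together with the Cauchy-Schwarz inequality gives \[\mu_{J}^{\prime}\geq-K-\mu_{J}^{2}\geq\frac{\kappa^{2}}{2}-\mu_{J}^{2}\qquad\text{wherever }K\leq-\frac{\kappa^{2}}{2}.\] The comparison ODE \(u^{\prime}=\kappa^{2}/2-u^{2}\) has equilibria \(\pm\kappa/\sqrt{2}\), the positive one stable and the negative one unstable, so any solution starting above \(-\kappa/\sqrt{2}\) is driven towards \(+\kappa/\sqrt{2}\). For \(\kappa\) large and \(\mu_{J}(0)>-Q\) with \(Q\) fixed, this happens well within time \(1\), which is exactly the bound \(\mu_{J}(t)>(1-\epsilon)\kappa/\sqrt{2}\) for \(t\gtrsim 1\) quoted above.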
Finally, we have to prove that the geodesic flow is Anosov. For this it suffices, according to Eberlein's theorem, to prove that no nonzero Jacobi field can be globally bounded. Since \(M\) has an Axiom A geodesic flow and the curvature is very negative in \(\{t>1/\sqrt{\kappa}\}\), this is a given for any Jacobi field along a geodesic that remains either in \(SM\) or in \(SN^{\prime}\setminus SM\) for all times. Consider a geodesic \(\gamma\) entering \(SM\) at a point \(v\), with a perpendicular Jacobi field \(J\) satisfying \(\mu_{J}>Q_{M}\) at that point. Then according to Proposition 4.1 of [10], we have either \(\|J\|\to+\infty\), or \(\int_{SM}\mu_{J}>-C_{0}\) before \(\gamma\) exits \(SM\). Now, before the geodesic enters \(SM\) again (if it ever does), since the curvature is so negative, using the comparison lemma again, we must have \[\int_{SN\setminus SM}\mu_{J}\geq(1-\epsilon)\frac{\kappa}{\sqrt{2}}-\epsilon,\] where \(\epsilon\) tends to \(0\) as \(\kappa\) grows large. Here we have taken into account that the travel time must be at least \(1\) above \(N^{\prime}\setminus M\). We also used Proposition 4.1, ensuring that \(\mu_{J}>-Q_{M}\) when entering \(SN^{\prime}\setminus SM\). Now, either the sequence of times when the geodesic changes component is finite, and we know directly by the comparison lemma and Proposition 4.1 that \(\|J\|\to+\infty\), or the sequence is infinite. In that case, we observe that between two consecutive times the geodesic entered \(SM\), we have \[\int\mu_{J}>\frac{1}{10}\kappa-C_{0}>1,\] so that \(\|J\|\to+\infty\) also in positive time. Let us now consider the case that, as \(\gamma\) enters \(SM\), \(\mu_{J}\leq Q_{M}\). Then we reverse time, and see this as a geodesic entering \(SN^{\prime}\setminus SM\) with \(\mu_{J}\geq-Q_{M}\). Then the same arguments as above apply, only in negative time.
2309.03706
Constraining light dark matter and mediator with $B^+ \rightarrow K^+ ν\bar ν$ data
We study the decay of the $B^+$ meson into $K^+$ plus a light mediator $\phi$, which subsequently decays into a dark matter pair, $\bar \chi \chi$. Integrating constraints from DM relic density, direct detection, collider and $B$-physics data, alongside the recently reported results from the Belle II experiment, we analyze the couplings between the mediator, standard model fermions, and the dark matter particles. Our results indicate that if the decay process $\phi \rightarrow \bar \chi \chi$ is kinematically allowed, i.e. $m_\phi > 2m_\chi$, then the mediator mass must be constrained within 0.35 GeV $\lesssim m_\phi \lesssim$ 3 GeV. Conversely, if $m_\phi < 2m_\chi$, the mediator $\phi$ is long-lived relative to the detector size, and the only allowed decay channel is $\phi \rightarrow e^+ e^-$.
Murat Abdughani, Yakefu Reyimuaji
2023-09-07T13:33:04Z
http://arxiv.org/abs/2309.03706v2
# Constraining light dark matter and mediator with \(B^{+}\to K^{+}\nu\bar{\nu}\) data ###### Abstract We study the decay of the \(B^{+}\) meson into \(K^{+}\) plus a light mediator \(\phi\), followed by the production of a dark matter (DM) pair \(\bar{\chi}\chi\), which can mimic the flavor changing neutral current (FCNC) process \(B^{+}\to K^{+}\nu\bar{\nu}\) in the standard model (SM). Combining constraints from DM relic density, direct detection, collider and \(B\)-physics data with the recently reported \(B^{+}\to K^{+}\nu\bar{\nu}\) branching ratio measured at the Belle II experiment, we analyse the couplings between the mediator and the SM fermions as well as the dark matter particle. We obtain that: if the process \(\phi\to\bar{\chi}\chi\) is kinematically allowed, i.e. \(m_{\phi}>2m_{\chi}\), then \(m_{\phi}\) should be larger than the muon-pair mass; otherwise, \(\phi\) should play the role of missing energy, and the only allowed decay channel is \(\phi\to e^{+}e^{-}\). Introduction Rare \(B\) meson decay with a kaon and a neutrino pair in the final state, i.e. \(B^{+}\to K^{+}\nu\bar{\nu}\), has a clear theoretical prediction among the flavour changing neutral current (FCNC) processes, and it is severely suppressed due to the loop effect and the Glashow-Iliopoulos-Maiani mechanism [1]. Thus it plays an important role in searching for new physics and testing the Standard Model (SM). While its SM prediction is \((5.06\pm 0.14\pm 0.28)\times 10^{-6}\)[2], a recent experiment at the SuperKEKB asymmetric-energy electron-positron collider reported the value of \((2.4\pm 0.7)\times 10^{-5}\)[3], with a significance of about \(3\sigma\) above the SM. This indeed calls for new physics beyond the SM (BSM). The weak interaction of neutrinos makes them invisible at the detector, and any process with missing energy \(\not{E}\) in the final state, i.e. \(B^{+}\to K^{+}+\not{E}\), contributes to the measured branching ratio of the decay \(B^{+}\to K^{+}\nu\bar{\nu}\). At the quark level it is just the \(b\to s+\not{E}\) transition. Dark matter (DM) or any other light particle with sufficiently weak interaction strength involved in this transition can be considered as the missing energy. Therefore, precision experiments on \(B\) meson decay are crucial in searching for or constraining DM or other light BSM particles. DM, on the other hand, is an indispensable ingredient of the evolution of the universe. Thermal DM is known to have been in equilibrium with SM particles at early times and to freeze out later, when its annihilation rate can no longer keep up with the expansion rate of the universe [4]. After the thermal freeze-out, the DM comoving number density remains constant and leads to the observed DM relic abundance. This, on the other hand, determines the DM annihilation rate at freeze-out. Interestingly, the annihilation cross section of this process is of the same order of magnitude as the weak interaction. Thus, this weakly-interacting massive particle (WIMP) [5] has become the most well-studied and promising DM candidate. The lower limit for the mass of the WIMP is a few GeV [6], which is model dependent. However, the null results from various DM search experiments have pushed the lower limit to the MeV scale with proper parameter or model selection [7]. The combined study of DM and new-physics hints from \(B\) meson decay is an intriguing window towards the construction of BSM physics. In this work, an extension of the SM with a light scalar mediator and a Majorana fermion is studied. 
Relying on the constraints from various experiments, we find the optimal ranges of the model parameters. The paper is structured as follows. In Sec. II the new physics model is presented. Sec. III discusses the \(B\) meson decay into a kaon and SM fermions as well as extra fermionic dark matter pairs. Experimental constraints and parameter spaces are studied in Sec. IV, followed by the conclusion of the paper drawn in Sec. V. ## II A simple DM model In this paper, we study a generic model with a light Majorana DM \(\chi\) and a light scalar \(\phi\) that couples to the SM fermions; the dimension-4 interaction Lagrangian can be simply written as 1 Footnote 1: The pseudo-scalar current \(\bar{f}\gamma^{5}f\) vanishes at the loop order for \(0^{-}\to 0^{-}\) meson decay processes [8; 9]. \[\mathcal{L}_{\rm int}=-\frac{ym_{f}}{v}\phi\bar{f}f-\frac{1}{2}\kappa\phi\bar {\chi}\chi\, \tag{1}\] where \(m_{f}\) represents the masses of the SM fermions, \(y\) is the weight of the Yukawa coupling, and \(v\simeq 246\) GeV is the electroweak vacuum expectation value (vev). The Feynman diagrams at leading order for DM and fermion pair production in the final state are shown in FIG. 1. We have to note that the simple Lagrangian of Eq. (1) does not start from a gauge symmetry, but it is phenomenologically viable, arising after electroweak symmetry breaking from higher-dimensional effective operators [10]. The factor \(\frac{m_{f}}{v}\) in the coupling indicates that this operator is induced by integrating out the Higgs portal [11]. Effective field theory approaches to the decay processes of \(B\) with a single invisible scalar or a fermion pair in the final state, via dimension-5 or dimension-6 operators, were studied in Refs. [8; 9; 12; 13; 14; 15] Figure 1: The leading-order Feynman diagrams for \(b\to s\chi\bar{\chi}\) and \(b\to sf\bar{f}\) processes. The \(q\) stands for the \(u,c,t\) quarks; \(f\) includes all quarks and massive leptons. ## III \(B\) meson decay When the scalar mediator mass \(m_{\phi}\) is smaller than the \(B\) and \(K\) meson mass difference, \(m_{\phi}<m_{B}-m_{K}\), the effective Lagrangian for the \(b\to s\phi\) process can be obtained by integrating out the heavy particles running in the loop in FIG. 1, which is [16; 17] \[\mathcal{L}_{b\to s\phi}=\frac{ym_{b}}{v}\frac{3\sqrt{2}G_{F}m_{q}^{2}V_{qs}^{ *}V_{qb}}{16\pi^{2}}\times\phi\bar{s}_{L}b_{R}+\text{h.c.}\, \tag{2}\] where \(q\) denotes the \(u,c,t\) quarks in the loop, \(G_{F}\) is the Fermi constant and \(V_{qs,qb}\) are CKM matrix elements. It is obvious that the contributions from the quarks other than the top are negligible. The decay width of the process \(B\to K+\phi\) takes the form \[\Gamma_{B\to K\phi}= \left(\frac{ym_{b}}{v}\frac{3\sqrt{2}G_{F}m_{t}^{2}V_{ts}^{*}V_{ tb}}{16\pi^{2}}\right)^{2}\frac{\sqrt{(m_{B}^{2}-(m_{K}+m_{\phi})^{2})(m_{B}^{2 }-(m_{K}-m_{\phi})^{2})}}{16\pi m_{B}^{3}} \tag{3}\] \[\times|\langle K|\bar{s}_{L}b_{R}|B\rangle|^{2}\,\] where the hadronic transition matrix element is \[\langle K|\bar{s}_{L}b_{R}|B\rangle=\frac{m_{B}^{2}-m_{K}^{2}}{m_{b}-m_{s}}f_{ 0}(q^{2})\, \tag{4}\] and \(m_{b,s}\) are the bottom and strange quark masses. The form factor is [18; 19] \[f_{0}(q^{2})=\frac{0.33}{1-q^{2}/37.46}\, \tag{5}\] with \(q^{2}=m_{\phi}^{2}\) for the final-state \(\phi\) particle. Subsequently, if kinematically allowed, the scalar \(\phi\) decays to DM or SM fermion pairs. 
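As a quick numerical sanity check of Eqs. (3)-(5), the following minimal Python sketch evaluates the \(B\to K\phi\) branching ratio; the numerical inputs (CKM elements, quark and meson masses, and the total \(B^{+}\) width) are indicative PDG-like values chosen by us, not necessarily the exact inputs used in the paper.

```python
import numpy as np

# Indicative PDG-like inputs (assumptions, not the authors' exact choices).
GF, v = 1.1664e-5, 246.0            # Fermi constant [GeV^-2], Higgs vev [GeV]
mB, mK = 5.279, 0.494               # meson masses [GeV]
mb, ms, mt = 4.18, 0.093, 172.5     # quark masses [GeV]
Vts, Vtb = 0.0404, 0.999            # CKM elements (magnitudes)
Gamma_B = 4.0e-13                   # total B+ width [GeV], from tau ~ 1.6 ps

def f0(q2):
    """Scalar form factor of Eq. (5)."""
    return 0.33 / (1.0 - q2 / 37.46)

def gamma_B_to_K_phi(mphi, y):
    """Decay width of B -> K + phi, Eq. (3) combined with Eq. (4)."""
    coup = (y * mb / v) * 3.0 * np.sqrt(2.0) * GF * mt**2 * Vts * Vtb / (16.0 * np.pi**2)
    kin = np.sqrt((mB**2 - (mK + mphi)**2) * (mB**2 - (mK - mphi)**2))
    me = (mB**2 - mK**2) / (mb - ms) * f0(mphi**2)   # <K| s_L b_R |B>
    return coup**2 * kin / (16.0 * np.pi * mB**3) * me**2

# With these inputs, y ~ 3e-3 gives Br of order 1e-5, in line with FIG. 2 (left).
print(gamma_B_to_K_phi(2.0, 3e-3) / Gamma_B)
```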
The decay width of \(\phi\) to a fermion pair (including DM) is \[\Gamma_{\phi\to ff}=\frac{C_{\phi\bar{f}f}^{2}}{8\pi}m_{\phi}\left(1-\frac{4m _{f}^{2}}{m_{\phi}^{2}}\right)^{3/2}\Theta(m_{\phi}-2m_{f})\, \tag{6}\] where \(C_{\phi\bar{f}f}\) is the coefficient of the scalar-fermion-fermion interaction term and \(\Theta\) is the Heaviside step function. For the decay widths to hadronic final states, we refer to Refs. [12; 16; 20]. In the left panel of FIG. 2, we show the branching ratio of the \(B\to K+\phi\) process satisfying the result of the Belle II experiment [3] in the \((m_{\phi},y)\) space (assuming that the branching ratio of \(\phi\) decaying into \(\not\!\!E\) is 100%). For \(m_{\phi}\lesssim 4.5\) GeV, \(y\) is around 0.003 to match the observed \(B\to K+\nu\bar{\nu}\) value. When \(m_{\phi}\) approaches \(m_{B}-m_{K}\), we need a larger coupling \(y\) due to the shrinking phase space. In the right panel, fixing \(m_{\chi}\) to 0.1 GeV, we draw, for different \(\kappa\) values, the curves which satisfy the same condition as the red line in the left panel. For fixed \(m_{\chi}\) and \(m_{\phi}\), as the value of \(\kappa\) decreases, the branching fraction \(Br(\phi\to\bar{\chi}\chi)\) also decreases. To obtain the value \((2.4\pm 0.7)\times 10^{-5}\), one then needs a larger \(y\) to increase \(Br(B\to K+\phi)\), as can be seen from the right panel of FIG. 2. The three bumps near \(m_{\phi}\sim 1,3.4,3.9\) GeV are due to the peaks in the decay width to SM particles shown in FIG. 4 of Ref. [12], which are caused by pion-pair and charmonium resonance final states. ## IV DM relic density and constraints When the decay process \(\phi\to\bar{\chi}\chi\) is kinematically allowed, i.e. \(m_{\chi}<m_{\phi}/2\), in the vicinity of freeze-out DM can only annihilate to the lighter SM fermion-pair final states through the \(s\)-channel heavy mediator \(\phi\). The cross section which determines the DM relic density for Figure 2: Left: The branching ratio of the \(B\to K+\phi\) process as a function of \(m_{\phi}\) and \(y\). The red solid line corresponds to the central value from [3]; the blue dashed and green dotted lines are the \(1\sigma\) lower and upper bounds, respectively. Right: The branching ratios of the process \(B\to K+\phi\) followed by \(\phi\to\bar{\chi}\chi\) matching the central value from [3] for different \(\kappa\) values, with the DM mass fixed to 0.1 GeV. the process \(\bar{\chi}\chi\to\bar{f}f\) is given as \[\sigma_{\bar{\chi}\chi\to\bar{f}f}=\frac{1}{512\pi}\left(\frac{\kappa m_{f}}{v} \right)^{2}\sqrt{1-\frac{4m_{f}^{2}}{s}}\left(1-\frac{2m_{f}^{2}}{s}\right) \left(1-\frac{2m_{\chi}^{2}}{s}\right)\frac{s}{(s-m_{\phi}^{2})^{2}} \tag{7}\] where \(s\equiv E_{\rm cm}^{2}\) is the center-of-mass energy squared. Note that this equation holds for non-resonant annihilation. The thermally averaged cross section is defined as \[\langle\sigma v\rangle_{\bar{\chi}\chi\to\bar{f}f}=\int_{4m_{\chi}^{2}}^{ \infty}ds\ \sigma_{\bar{\chi}\chi\to\bar{f}f}v_{\rm lab}\frac{\sqrt{s-4m_{\chi}^{2}}(s-2 m_{\chi}^{2})K_{1}(\sqrt{s}/T)}{8m_{\chi}^{4}TK_{2}^{2}(m_{\chi}/T)}\, \tag{8}\] where \(K_{i}\) are the modified Bessel functions of order \(i\). If DM does not resonantly annihilate, then even for light fermions with \(m_{f}\ll m_{\chi}\), \(\langle\sigma v\rangle\) is at least three orders of magnitude smaller than the standard thermal cross section of \(10^{-26}\ {\rm cm}^{3}/s\). Therefore, we have to resort to a resonantly enhanced DM annihilation cross section. 
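To illustrate how Eq. (6) drives the branching fractions discussed around FIG. 2, here is a small leptonic-only toy in Python; the hadronic channels of Refs. [12; 16; 20] are omitted, the couplings are purely illustrative, and writing the DM coupling as \(\kappa/2\) (absorbing any Majorana symmetry factor into \(C_{\phi\bar{f}f}\)) is our assumption.

```python
import numpy as np

def gamma_phi_ff(mphi, mf, C):
    """Partial width of phi -> f fbar, Eq. (6); C plays the role of C_{phi ff}."""
    if mphi <= 2.0 * mf:                  # Heaviside step function Theta
        return 0.0
    return C**2 / (8.0 * np.pi) * mphi * (1.0 - 4.0 * mf**2 / mphi**2) ** 1.5

v = 246.0
y, kappa = 3e-3, 0.5                      # illustrative couplings (assumptions)
mphi, mchi = 1.0, 0.1                     # GeV
leptons = {"e": 0.000511, "mu": 0.10566}  # charged-lepton masses [GeV]

# SM coupling is y * m_f / v from Eq. (1); hadronic widths are ignored here.
G_sm = sum(gamma_phi_ff(mphi, m, y * m / v) for m in leptons.values())
G_chi = gamma_phi_ff(mphi, mchi, kappa / 2.0)  # DM coupling kappa/2 assumed
print("Br(phi -> chi chi) ~", G_chi / (G_chi + G_sm))
```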
We use FeynRules [21] to implement the model, and then import it into micrOMEGAs-5.3.41 [22] for the DM relic density and DM-nucleus cross section calculations. To explore the full parameter space, we scan over the ranges in TABLE 1, where the upper limit of 0.1 for \(y\) is taken from LEP [23]. We consider likelihoods of the DM relic density from PLANCK and of the direct detection cross sections from various experiments. The total \(\chi_{\rm tot}^{2}\) is defined as the sum of the individual \(\chi^{2}\) values of the DM relic density and the DD cross section, \[\chi_{\rm tot}^{2}=\chi_{\Omega h^{2}}^{2}+\chi_{\rm DD}^{2}. \tag{9}\] We employ emcee [24], based on the Markov Chain Monte Carlo (MCMC) method, to undertake the task of sampling the parameter space with the likelihood \(\propto\exp(-\chi_{\rm tot}^{2}/2)\). The \(\chi^{2}\) of the DM relic density is described by a Gaussian distribution \[\chi_{\Omega h^{2}}^{2}=\left(\frac{\mu_{t}-\mu_{0}}{\sqrt{\sigma_{\rm theo}^{2} +\sigma_{\rm exp}^{2}}}\right)^{2} \tag{10}\] \begin{table} \begin{tabular}{c c c c} \hline \hline Parameters & Minimum & Maximum & Prior \\ \hline \(m_{\phi}\) & 0.01 GeV & \(m_{B}-m_{K}\) & flat/log \\ \(m_{\chi}\) & 0.01 GeV & \(m_{\phi}/2\) & flat/log \\ \(\kappa\) & \(10^{-6}\) & 4 & log \\ \(y\) & \(10^{-6}\) & 0.1 & log \\ \hline \hline \end{tabular} \end{table} Table 1: Ranges and priors for the input parameters adopted in the scans. where \(\mu_{t}\) is the predicted theoretical value we obtain, \(\mu_{0}\) is the experimental central value, and we introduce a 10% theoretical uncertainty \(\sigma_{\rm theo}=0.1\mu_{t}\) due to uncertainties from the Boltzmann equation solver and the entropy table in the early universe. We use PLANCK 2018 data [25] to constrain our predicted relic density \(\Omega h^{2}\). The reported central value with statistical error is \(\Omega h^{2}=0.118\pm 0.002\). The estimate of \(\chi^{2}\) for the DM-nucleus spin-independent and spin-dependent cross sections is \[\chi^{2}_{\rm DD}=\left(\frac{\sigma_{\rm DD}}{\sigma_{\rm DD}^{0}/1.64}\right) ^{2} \tag{11}\] where \(\sigma_{\rm DD}\) and \(\sigma_{\rm DD}^{0}\) are, respectively, the predicted theoretical value obtained from micrOMEGAs-5.3.41 and the 90% confidence-level upper limit of the cross section for a given DM mass from the experiments DarkSide-50 [26], CRESST-III [27], PICO-60 [28], PandaX-4T [29], and Xenon1T [30]. The salient point of this paper compared with previous works is that the branching fraction of \(\phi\) decays to all possible SM particles, \(Br(\phi\to\text{SM SM})\), is not unity. Thus, we have to rescale the limits from the experiments LEP [23], Belle [31], BESIII [32], LHCb [33; 34], NA62 [35; 36; 37], KTeV [38; 39; 40], CHARM [41; 42; 43; 39] and PS191 [45]. If we assume \(Br(\phi\to\text{SM SM})=f_{\text{SM}}\) for a sample point \(y=y_{0}\) with the other parameters fixed, then \(y\) is equivalent to \[y^{\prime}=(y_{0}^{4}/f_{\text{SM}})^{1/4} \tag{12}\] in the case of SM-only decay products. Therefore, the exclusion limit becomes more robust if the branching ratio to SM final states is less than one. Note that pure SM final states cannot explain the \(B\) meson decay anomaly unless the mediator is long-lived compared to the detector size. In FIG. 3, we show the sample points with \(\chi^{2}_{\rm tot}<6\) (the minimum value of \(\chi^{2}_{\rm tot}\) is \(\simeq 0\)) and require that \(Br(B\to K+\not{E})\) lies in the \(2\sigma\) range. The relevant constraints from LEP [23] and LHCb [33; 34] are also shown. 
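For concreteness, the following is a minimal emcee-based sketch of the scan described above, assuming log priors over the TABLE 1 ranges; the relic-density and direct-detection predictors are hypothetical stand-ins for the micrOMEGAs outputs, and the numbers inside them are placeholders only.

```python
import numpy as np
import emcee

# Hypothetical stand-ins for the micrOMEGAs predictions; in the actual scan
# these would be computed by the solver for each parameter point.
def predict_relic(theta):
    return 0.118 * (1.0 + 0.3 * np.tanh(theta.sum()))   # dummy placeholder

def predict_dd_ratio(theta):
    return 0.5                    # dummy: sigma_DD / (sigma_DD^0 / 1.64)

LO = np.array([-2.0, -2.0, -6.0, -6.0])                 # log10 lower bounds
HI = np.array([np.log10(4.79), np.log10(4.79 / 2), np.log10(4.0), -1.0])

def log_prob(theta):
    # theta = log10(m_phi, m_chi, kappa, y); enforce TABLE 1 and m_chi < m_phi/2
    if np.any(theta < LO) or np.any(theta > HI) or theta[1] > theta[0] - np.log10(2):
        return -np.inf
    mu_t = predict_relic(theta)
    chi2_relic = ((mu_t - 0.118) / np.hypot(0.1 * mu_t, 0.002)) ** 2   # Eq. (10)
    chi2_dd = predict_dd_ratio(theta) ** 2                             # Eq. (11)
    return -0.5 * (chi2_relic + chi2_dd)        # likelihood ~ exp(-chi2_tot/2)

ndim, nwalkers = 4, 32
rng = np.random.default_rng(0)
p0 = np.column_stack([
    rng.uniform(-1.0, 0.0, nwalkers),          # log10 m_phi
    rng.uniform(-2.0, -1.5, nwalkers),         # log10 m_chi (< m_phi/2 here)
    rng.uniform(-3.0, -1.0, nwalkers),         # log10 kappa
    rng.uniform(-4.0, -2.0, nwalkers),         # log10 y
])
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, 1000, progress=True)
```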
From the figure we see that, in the area above the experimental upper limits, the branching ratio to the DM pair must equal one for a point to survive. SM-pair final states can be part of the \(\phi\) decay channels below the experimental exclusion lines; the closer a point is to the upper limit, the larger \(Br(\phi\to SM\ SM)\) gets, as follows from Eq. (12). Generally, in order to obtain a definite value of \(Br(B\to K+\not{E})\), \(Br(B\to K+\phi)\) should increase as the \(\phi\) decay fraction to SM particles becomes larger; consequently, we need a larger \(y\), as shown in FIG. 3. The surviving samples are located in the charmonium resonance area or at large \(m_{\phi}\) near \(m_{B}-m_{K}\). No sample points survive when \(m_{\phi}\lesssim 2m_{\mu}\), where only the \(\bar{\chi}\chi\to e^{+}e^{-}\) channel is allowed and the corresponding annihilation cross section is too small even if one introduces the resonance enhancement. When the annihilation process \(\bar{\chi}\chi\to\bar{c}c\) is closed, the DM relic density is on the high side, and that is the reason for the gap near \(m_{\phi}\simeq 2.5\) GeV. Note that if the lifetime of the \(\phi\) particle is long enough for it to escape the detector before decaying, \(\phi\) constitutes missing energy regardless of its decay products. However, to match the DM relic density and \(Br(B\to K+\not{E})\), neither \(\kappa\) nor \(y\) can be too small. In our samples in FIG. 3, the maximum lifetime of the \(\phi\) particle is of the order of \(\sim 10^{-14}\) s, so it decays before escaping from the detector. In the other case, if \(m_{\chi}>m_{\phi}/2\), although the decay products of \(\phi\) are only SM particles, a \(\phi\) that is long-lived relative to the detector size behaves like missing energy. When \(y\simeq 3\times 10^{-3}\) and \(m_{\phi}<2m_{\mu}\), \(\Gamma(\phi\to e^{+}e^{-})\) can be less than \(10^{-15}\) GeV, making \(\phi\) effectively invisible, and \(m_{\phi}\) should be heavier than the electron-pair mass so that \(\phi\) decays before BBN. Heavy DM in this case easily satisfies the relic density constraint [43; 12]. ## V Conclusion In this paper, we study a light mediator \(\phi\) and a light DM particle \(\chi\) in a simple model which can lead to the signal in the recent Belle II \(B\to K+\bar{\nu}\nu\) measurement. Combined with the DM relic density and direct detection constraints, when \(2m_{\chi}<m_{\phi}\), i.e. when \(\phi\to\bar{\chi}\chi\) is open, we obtain that \(m_{\phi}\) should be larger than the muon-pair mass. In addition, the branching fraction of \(\phi\) to the DM pair equals one, except in three narrow areas where \(\phi\) can partially decay to SM particles. When \(m_{\chi}>m_{\phi}/2\), DM easily attains the correct relic density, \(\phi\) should play the role of missing energy in the Belle II detector, and the small width of \(\phi\) requires \(m_{\phi}<2m_{\mu}\); in other words, \(\phi\) can only decay to an electron pair. ## Acknowledgment MA is supported by the Tianchi Yingcai Young Doctoral Project. YR is supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region of China under grant No. 2022D01C52.
2309.13515
Learning-based Inverse Perception Contracts and Applications
Perception modules are integral to many modern autonomous systems, but their accuracy can be subject to the vagaries of the environment. In this paper, we propose a learning-based approach that can automatically characterize the error of a perception module from data and use this for safe control. The proposed approach constructs an inverse perception contract (IPC) which generates a set that contains the ground-truth value that is being estimated by the perception module, with high probability. We apply the proposed approach to study a vision pipeline deployed on a quadcopter. With the proposed approach, we successfully constructed an IPC for the vision pipeline. We then designed a control algorithm that utilizes the learned IPC, with the goal of landing the quadcopter safely on a landing pad. Experiments show that with the learned IPC, the control algorithm safely landed the quadcopter despite the error from the perception module, while the baseline algorithm without using the learned IPC failed to do so.
Dawei Sun, Benjamin C. Yang, Sayan Mitra
2023-09-24T00:29:45Z
http://arxiv.org/abs/2309.13515v2
# Learning-based Perception Contracts and Applications ###### Abstract Perception modules are integral to many modern autonomous systems, but their accuracy can be subject to the vagaries of the environment. In this paper, we propose a learning-based approach that can automatically characterize the error of a perception module from data and use this for safe control. The proposed approach constructs a _perception contract (PC)_ which generates a set that contains the ground-truth value that is being estimated by the perception module, with high probability. We apply the proposed approach to study a vision pipeline deployed on a quadcopter. With the proposed approach, we successfully constructed a PC for the vision pipeline. We then designed a control algorithm that utilizes the learned PC, with the goal of landing the quadcopter safely on a landing pad. Experiments show that with the learned PC, the control algorithm safely landed the quadcopter despite the error from the perception module, while the baseline algorithm without using the learned PC failed to do so. ## I Introduction Perception plays a crucial role in making modern autonomous systems react and adapt in unstructured environments. In the past decade, we have seen perception algorithms invented to perform a variety of tasks such as detection and classification of obstacles [1], pose estimation [2], and localization [3]. However, any interpretation of signals from the real world has to deal with noise, which is sometimes poorly understood and characterized. As the output of the perception module drives downstream decisions or control modules in an autonomous system, the errors in perception may result in unwanted and even catastrophic actions. Therefore, the problem of designing reliable perception-based control algorithms for safety-critical systems has become an active research topic, and attempts have been made in recent works, for example [4, 5]. One way to design reliable perception-based controllers is to first characterize the perception error and then design controllers that are robust to the perception error. In a series of recent works [6, 7], the notion of _perception contracts (PC)_ has been proposed to address this problem. A PC characterizes the error of perception modules in a way that can be useful for proving system-level invariants. The authors have shown that such perception contracts can be automatically constructed from data, and they used PCs to verify several vision-based lane-keeping systems [6, 7]. In this paper, we aim to solve the controller synthesis problem inspired by the idea of PCs. Specifically, the perception contract used in this paper maps a perceived value to a perception error, which characterizes the uncertainty of the perception result1. Such a perception contract can then be used by the downstream controller and decision-making modules to compensate for the uncertainty of the perception, which contributes to enhancing the safety and robustness of the autonomous system. Footnote 1: In the earlier works, the PC mapped the ground truth values to perception error. In this paper, we consider perception modules that aim to estimate the value of a specific quantity, for example, the position of an obstacle. Let \(y\) be the ground-truth value of this quantity. Most perception algorithms only provide a single-point estimate \(\hat{y}\), which can be seen as a noisy version of \(y\). However, in order to design robust controllers, the uncertainty of the estimate \(\hat{y}\) is also needed. 
To this end, we construct a perception contract (PC) for the perception module, which is simply a mapping from the perceived value \(\hat{y}\) to a set that contains the ground-truth value \(y\) with a high probability. This containment should hold even under environmental variations. We model the PC using a neural network and carefully design loss functions that balance the error and conservativeness of the PC. The only requirement for applying the proposed approach is a data set that consists of pairs of the estimate \(\hat{y}\) and the ground truth \(y\). Once trained on the data set, the PC can be executed online and quickly compute the uncertainty of any estimate \(\hat{y}\). The computed uncertainty can then be used by downstream modules to compute safe control signals. We evaluated the proposed approach on a quadcopter equipped with a camera. The perception module of interest is a program that aims to estimate the pose of a landing pad from camera images. By applying the proposed approach, we successfully constructed an accurate PC for the perception module. Experimental results show that the error of the learned PC was below \(0.2\%\) on the testing set. In order to study the conservativeness of the learned PC, we tuned some parameters of the perception module to change the characteristics of the perception error. We measured and reported the error of the PC under these unseen perception parameters. Results show that the output of the learned PC is tight and adheres to the perception parameter under which it was trained. Then, in order to show the PC's capability of being used by downstream modules, we design a robust control algorithm that utilizes the learned PC and aims to safely land the quadcopter on the landing pad. Experiments show that with the learned PC, the quadcopter can safely complete the landing task despite the significant perception error. In summary, our contribution is threefold. * We proposed an approach that can automatically learn a perception contract from data; * We demonstrated that the learned perception contract can be used by downstream modules to compensate for the error of the upstream perception module; * We evaluated the proposed approach in a real-world application. ## II Related work _Perception-based control._ As new types of sensors emerge, the problem of integrating these sensors and the corresponding perception modules into the control pipeline has attracted interest. In recent works [8, 9, 10], perception-based controllers have been studied to enable aggressive control for quadcopters. Further, data-driven approaches have been developed by the machine learning community. For example, imitation learning [11] and reinforcement learning [12] have been used to learn vision-based control policies. However, those approaches cannot provide any guarantees for the synthesized controller, which limits their use in safety-critical applications. As autonomous systems enter more and more safety-critical domains, designing perception-based control with formal safety guarantees has become a key problem in robotics research. In a series of recent works by Dean et al., the authors studied the robustness guarantees of perception-based control algorithms under simple perception noises. In [13], the authors proposed a perception-based controller synthesis approach for linear systems and provided a theoretical analysis. 
In [4], the authors proposed robust barrier functions that provide an approach for synthesizing safety-critical controllers under uncertainty of the state. More complex perception modules have also been studied. In [5], the authors studied the perception-based control synthesis problem for a vehicle with a Lidar. The authors use neural networks to learn a control Lyapunov function (CLF) and a control barrier function (CBF) in the observation space, which enable safe navigation in an environment with unknown obstacles. In [14], a perception module is learned with neural networks from data. Instead of a single state, the perception module outputs a set of states. Then, the authors apply contraction theory and robust motion planning algorithms to synthesize control that is robust to the perception error. Our approach is similar to the one proposed in [14] in the sense that both introduce a high-confidence set for the perception results. In contrast, our approach can be applied to construct such high-confidence sets for an existing perception module, while in [14], the authors have to construct these sets while designing the perception module. _Analysis of systems with black-box modules._ The proposed approach is also related to works concerning the analysis of systems with black-box modules. As the complexity of autonomous systems increases, it becomes impossible to analyze them exactly. Many modules in modern autonomous systems are black boxes, for example, modules based on neural networks. Data-driven approaches have been applied to analyze those systems instead. The approach of constructing a perception contract from data is mainly adapted from the reachability analysis approach proposed in [15], where machine learning is used to analyze the reachability of black-box systems. VerifAI [16] provides a complete framework for analyzing autonomous systems with ML modules in the loop. In [6], the authors study safe abstractions of ML-based perception modules, which inspires the approach proposed in this paper. Reference [6] is focused on verifying properties of an existing system and thus concerned with predicting the estimate \(\hat{y}\) given the ground truth \(y\). In contrast, this paper is focused on the synthesis problem and thus aims to predict the ground truth \(y\) given the estimate \(\hat{y}\). ## III Learning perception contracts In this paper, we consider general control systems that entail perception modules as shown in Figure 1. The state of the system is denoted by \(x\in\mathcal{X}\), where \(\mathcal{X}\) is the state space of the system. For example, if the system is a quadcopter, the state vector \(x\) may include the position, velocity, and attitude of the quadcopter. For a control system that interacts with the real world, access to only the state \(x\) of the system itself is not enough. It is very common that the value of an external quantity \(y\in\mathcal{Y}\subseteq\mathbb{R}^{n}\) is also needed in order to complete a task. For example, for a quadcopter that is asked to land, \(y\) can be the position of the landing pad. However, access to the ground-truth value of \(y\) is often unavailable. To this end, a _perception module_ is used to estimate the value of \(y\). As shown in Figure 1, we assume that a perception module is given _a priori_. The perception module aims at estimating the value of \(y\). We denote the output of the perception module by \(\hat{y}\) and call it the perceived value. 
Due to the complexity and uncertainty of the real world, the perceived value does not always coincide with the ground-truth value, and there is a non-zero perception error \(e=\hat{y}-y\). (Fig. 1: An autonomous system that entails a perception module and the proposed perception contract.) ### _Perception contracts_ In order to design a robust controller that utilizes the perception results, one has to analyze the perception module and characterize the perception error. That is, given the state \(x\) and the perceived value \(\hat{y}\), one has to be able to know some characteristics of the perception error \(e\), for example, the maximum of \(\|e\|\). However, this is a challenging task. Although there is a relationship between the state \(x\), the ground-truth value \(y\), and the perceived value \(\hat{y}\), it is usually not possible to find a closed-form expression of this relationship. Inspired by [17], we propose to use _perception contracts_ as a unified way of characterizing perception errors. Perception contracts then serve as an interface between the perception module and the controller synthesis algorithm. Perception modules and controller synthesis algorithms that follow the same interface can be freely paired. In general, the relationship between the state \(x\), the ground-truth value \(y\), and the perceived value \(\hat{y}\) is not deterministic. Thus, we assume that \(x\), \(y\) and \(\hat{y}\) conform to a joint distribution \(\mathcal{D}\), i.e., \(x,y,\hat{y}\sim\mathcal{D}\). We aim at finding a way to recover \(y\) from \(x\) and \(\hat{y}\). To this end, we define a _perception contract_ as a mapping \(\mathcal{A}:\mathcal{X}\times\mathcal{Y}\to 2^{\mathcal{Y}}\), \(x,\hat{y}\mapsto\mathcal{A}(x,\hat{y})\), where \(2^{\mathcal{Y}}\) is the power set of \(\mathcal{Y}\). Ideally, given the state \(x\) and the perceived value \(\hat{y}\), the output of the perception contract \(\mathcal{A}(x,\hat{y})\) should contain the ground-truth value \(y\), i.e., \(y\in\mathcal{A}(x,\hat{y})\). However, such a requirement for the perception contract is too strong, given the fact that the relationship between \(x\), \(y\), and \(\hat{y}\) is stochastic. Instead, we allow approximately correct perception contracts, and the error of a perception contract \(\mathcal{A}\) is defined as follows. **Definition 1** (Error of a perception contract).: _The error of a perception contract \(\mathcal{A}:\mathcal{X}\times\mathcal{Y}\to 2^{\mathcal{Y}}\) is_ \[\Pr_{x,y,\hat{y}\sim\mathcal{D}}\left(y\notin\mathcal{A}(x,\hat{y})\right). \tag{1}\] In this paper, we study the problem of constructing a perception contract of a given perception module. Next, we will propose an approach to this problem, where we model the perception contract using a neural network. ### _Learning perception contracts from data_ In this section, we propose an algorithm for learning a perception contract from data. The perception contract is modeled with a neural network. By definition, the output of the perception contract is a set. In order to use a neural network to model a perception contract, we need to assume that the output of the perception contract is from a finite-dimensional domain, i.e., the elements of this domain can be represented with finite-dimensional vectors. In this paper, we adopt ellipsoids as the domain. That is, the output of the perception contract is always an ellipsoid. 
In theory, any finite-dimensional domain such as hyper-rectangles or zonotopes could be used as the output of the perception contract. We adopt ellipsoids due to their smoothness and simple representations. An ellipsoid in \(\mathbb{R}^{n}\) is defined by a center \(c\in\mathbb{R}^{n}\) and a non-singular matrix \(C\in\mathbb{R}^{n\times n}\) and is denoted by \(\mathcal{E}\left(c,C\right):=\{x\in\mathbb{R}^{n}:\|C(x-c)\|_{2}\leq 1\}\). Thus, in order to model a perception contract whose output is an ellipsoid, we only need two parametric functions \(c_{\theta}:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}^{n}\) and \(C_{\theta}:\mathcal{X}\times\mathcal{Y}\to\mathbb{R}^{n\times n}\) with parameters \(\theta\in\mathcal{W}\). Given a state \(x\) and a perceived value \(\hat{y}\), \(c_{\theta}\) and \(C_{\theta}\) define an ellipsoid with center \(c_{\theta}(x,\hat{y})\) and shape \(C_{\theta}(x,\hat{y})\). Therefore, the parametric perception contract is as follows \[\mathcal{A}_{\theta}(x,\hat{y}):=\mathcal{E}\left(c_{\theta}(x,\hat{y}),C_{ \theta}(x,\hat{y})\right). \tag{2}\] As will be shown in Section IV-B, \(c_{\theta}\) and \(C_{\theta}\) are actually modeled as two heads of the same neural network. For the sake of simplicity, we denote \(X=(x,y,\hat{y})\) and define a helper function \[g_{\theta}(X):=\left\|C_{\theta}(x,\hat{y})\left(y-c_{\theta}(x,\hat{y}) \right)\right\|_{2}. \tag{3}\] Clearly, \(g_{\theta}(X)\leq 1\) if and only if \(y\in\mathcal{A}_{\theta}(x,\hat{y})\). We aim at designing a learning algorithm to find a \(\theta\) that minimizes the error of the parametric perception contract \(\mathcal{A}_{\theta}\). In other words, we want to minimize the following loss function. \[L(\theta):=\mathop{\mathbb{E}}_{X\sim\mathcal{D}}\left[\mathbb{I}_{\mathbb{R} ^{\geq 0}}\left(g_{\theta}(X)-1\right)\right], \tag{4}\] where \(\mathbb{I}_{\mathbb{R}^{\geq 0}}\left(\cdot\right)\) is the indicator function, i.e., \(\mathbb{I}_{\mathbb{R}^{\geq 0}}\left(x\right)=1\) if \(x>0\), and \(\mathbb{I}_{\mathbb{R}^{\geq 0}}\left(x\right)=0\) otherwise. In practice, the closed-form expression of the underlying distribution \(\mathcal{D}\) is not available, and one can only sample from \(\mathcal{D}\). Thus, the above loss function cannot be minimized directly since the expectation over \(\mathcal{D}\) cannot be computed exactly. Therefore, we resort to empirical risk minimization as follows. First, a training set \(S=\{X_{i}\}_{i=1}^{N}\) is constructed, where the samples \(X_{i}=(x^{(i)},y^{(i)},\hat{y}^{(i)})\) are independently drawn from the data distribution \(\mathcal{D}\). Then, the _empirical loss_ on \(S\) is defined as follows. \[L_{ERM}(\theta)=\frac{1}{N}\sum_{i=1}^{N}\ell\left(g_{\theta}(X_{i})-1\right), \tag{5}\] where \(\ell(x):=\max\{0,\frac{x}{\alpha}+1\}\) is the hinge loss function with the hyper-parameter \(\alpha>0\), which can be seen as a soft proxy for the indicator function. In addition to minimizing the error of the perception contract, we also have to penalize its conservativeness. Otherwise, we might get trivial perception contracts with extremely low error. For example, the error of the perception contract \(\mathcal{A}(x,\hat{y})\equiv\mathcal{Y}\) is \(0\), but such a perception contract is useless in the design of robust controllers. Thus, the volume of the ellipsoid must be penalized. 
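A minimal PyTorch sketch of the parametric contract \(\mathcal{A}_{\theta}\) of Eq. (2) and of the empirical objective built from Eqs. (3) and (5), together with the volume penalty introduced in the next paragraph, could look as follows; the input dimension, the hyper-parameters \(\alpha\) and \(\lambda\), and the small jitter added before the log-determinant are our assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class PerceptionContract(nn.Module):
    """A_theta(x, y_hat) = E(c, C); c and C are two heads of one network."""
    def __init__(self, in_dim=15, n=3, hidden=(64, 128)):
        super().__init__()
        self.n = n
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden[0]), nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]), nn.ReLU(),
            nn.Linear(hidden[1], n + n * n),   # 3 center + 9 shape = 12 outputs
        )

    def forward(self, x, y_hat):
        out = self.net(torch.cat([x, y_hat], dim=-1))
        c = out[..., :self.n]                              # ellipsoid center
        C = out[..., self.n:].reshape(-1, self.n, self.n)  # ellipsoid shape
        return c, C

def pc_loss(c, C, y, alpha=0.1, lam=1e-3):
    # g_theta(X) = || C (y - c) ||_2, Eq. (3)
    g = torch.linalg.vector_norm(torch.einsum('bij,bj->bi', C, y - c), dim=-1)
    l_erm = torch.clamp((g - 1.0) / alpha + 1.0, min=0.0).mean()  # hinge, Eq. (5)
    CtC = torch.einsum('bij,bik->bjk', C, C)                      # C^T C
    eye = 1e-6 * torch.eye(C.shape[-1], device=C.device)          # jitter for logdet
    l_reg = -torch.logdet(CtC + eye).mean()                       # volume proxy
    return l_erm + lam * l_reg                                    # objective, Eq. (7)
```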
Inspired by [18], we use \(-\log(|C^{\intercal}C|)\) as a proxy of the volume of an ellipsoid \(\mathcal{E}\left(c,C\right)\), and the following regularization term is added to penalize conservative outputs. \[L_{REG}(\theta)=-\frac{1}{N}\sum_{i=1}^{N}\log\Big{(}\left|C_{\theta}(x^{(i)}, \hat{y}^{(i)})^{\intercal}C_{\theta}(x^{(i)},\hat{y}^{(i)})\right|\Big{)}. \tag{6}\] In practice, we solve the following optimization problem with stochastic gradient descent. \[\hat{\theta}=\arg\min_{\theta}L_{ERM}(\theta)+\lambda L_{REG}(\theta), \tag{7}\] where \(\lambda\) is a hyper-parameter balancing the error and conservativeness of the perception contract. ### _Probabilistic correctness of the perception contract_ With standard techniques from statistical learning theory, we can derive the following theorem, which gives an upper bound on the error of the learned perception contract. **Theorem 1**.: _For any \(\epsilon>0\), and a random training set \(S\) with \(N\) i.i.d. samples, with probability at least \(1-2\exp(-2N\epsilon^{2})\), the following inequality holds,_ \[\mathop{\mathbb{E}}_{X\sim\mathcal{D}}\big{[}\mathbb{I}_{\mathbb{R}^{\geq 0}}\left(g_{\hat{\theta}}(X)-1\right)\big{]}\\ \leq\frac{1}{N}\sum_{i=1}^{N}\tilde{\ell}(g_{\hat{\theta}}(X_{i}) -1)+\frac{12}{\alpha}L_{g}\sqrt{\frac{p}{N}}+\epsilon, \tag{8}\] _where \(p\) is the number of scalar parameters of the neural network, \(\tilde{\ell}(\cdot)=\min\{1,\ell(\cdot)\}\) is the truncated hinge loss, and \(L_{g}\) is the Lipschitz constant of \(g_{\theta}\) w.r.t. \(\theta\)._ **Remark**.: _Theorem 1 shows that by controlling \(\epsilon\) and \(N\), the actual error \(\mathop{\mathbb{E}}_{X\sim\mathcal{D}}\big{[}\mathbb{I}_{\mathbb{R}^{\geq 0}} \left(g_{\hat{\theta}}(X)-1\right)\big{]}\) can be made arbitrarily close to the empirical loss \(\frac{1}{N}\sum_{i=1}^{N}\tilde{\ell}(g_{\hat{\theta}}(X_{i})-1)\), with arbitrarily high probability. Furthermore, the empirical loss can be made very small in practice due to the high capacity of the neural network. Of course, there is no free lunch in general. In order to drive the empirical loss to \(0\), we might have to increase the number of parameters, which in turn increases the term \(\frac{12}{\alpha}L_{g}\sqrt{\frac{p}{N}}\)._ ## IV Application study: safe quadcopter landing In this section, we evaluate the approach proposed in the last section in a real-world application, namely, safe quadcopter landing. ### _The safe landing problem_ _The quadcopter._ We consider the problem of designing an algorithm for safely landing the quadcopter shown in Figure 2(a). The quadcopter is built based on a DJI\({}^{\text{\textregistered}}\) F450 frame. The size of the quadcopter is \(36\text{cm}\times 36\text{cm}\times 10\text{cm}\) measured without propellers. A Raspberry Pi 3\({}^{\text{\textregistered}}\) computer and a Navio2\({}^{\text{\textregistered}}\) board2 are mounted on the quadcopter. The Navio2\({}^{\text{\textregistered}}\) board provides essential sensors for quadcopter control such as inertial measurement units (IMU) and a barometric pressure sensor. A camera is connected to the Raspberry Pi computer. The camera can produce \(320\times 240\) images at a rate of \(60\) frames per second. 
Footnote 2: [https://navio2.hipi.io/](https://navio2.hipi.io/) The state of the quadcopter is \[x:=[p_{x},p_{y},p_{z},v_{x},v_{y},v_{z},\phi_{x},\phi_{y},\phi_{z},\omega_{x},\omega_{y},\omega_{z}], \tag{9}\] where \(p_{x}\), \(p_{y}\), and \(p_{z}\) are the 3D position of the quadcopter and, similarly, \(v\), \(\phi\) and \(\omega\) are the velocity, attitude and angular velocity, respectively. _The workspace._ The quadcopter operates in a \(6\text{m}\times 6\text{m}\times 3\text{m}\) workspace. Furthermore, a motion capture system (Vicon\({}^{\text{\textregistered}}\)) is used for low-latency and high-accuracy localization of the quadcopter. The state \(x\) estimated by the Vicon\({}^{\text{\textregistered}}\) system is viewed as the ground-truth state of the quadcopter. As shown in Figure 2(b), a landing pad is placed on the ground of the workspace. The goal of the quadcopter is to land on the landing pad. Although the quadcopter has access to its state \(x\) estimated by the Vicon\({}^{\text{\textregistered}}\) system, the position of the landing pad is _unknown_ to the quadcopter. We denote the ground-truth position of the landing pad by \(y\in\mathbb{R}^{3}\). _Coordinate frames._ There are several coordinate frames that will be used in the experiments. They are the world frame \(W\), the quadcopter frame \(Q\), and the camera frame \(C\). For the same quantity, we use subscripts to distinguish among its values under different coordinate frames. For example, \(y_{W}\) is the coordinate of the landing pad in the world frame, while \(y_{C}\) is that in the camera frame. When the subscript is omitted, it denotes the value in the world frame. _The vision-based perception module._ In order to land on the landing pad, the quadcopter uses the camera to estimate the position \(y_{W}\). To this end, we attach an ArUco marker [19] to the landing pad. An ArUco marker is basically a QR-code-like image, which enables robust and fast pose estimation from images. By calling functions implemented in the OpenCV library [20], we can estimate the position of the landing pad in the camera frame, and such an estimate is denoted by \(\hat{y}_{C}\). Then, we transform \(\hat{y}_{C}\) into the world frame as \(\hat{y}_{W}\) using the extrinsic parameters between the camera and the quadcopter, and the quadcopter's pose \(x_{W}\) in the world frame. However, \(\hat{y}_{C}\) and \(x_{W}\) are measured by two different sensors (camera and Vicon) without synchronization, and this lack of synchronization is the main source of the perception error. _The task specification._ The goal of the quadcopter is to navigate to a box around the ground-truth position \(y\) of the landing pad and then turn off the motors. The box is defined as \(\{y\}\oplus G\), where \(G\) is a \(0.1m\times 0.1m\times 0.05m\) box defined as \(G:=[-0.05,0.05]\times[-0.05,0.05]\times[0,0.05]\), and \(\oplus\) is the Minkowski addition. The quadcopter is said to violate the safety requirement if it stops the motors outside \(\{y\}\oplus G\). (Fig. 2: The quadcopter and the landing pad.) ### _Learning a perception contract_ Next, we apply the proposed approach to learn an abstraction of the perception module. The most straightforward idea is to directly apply the proposed approach to learn a perception contract as a function of \(x\) and \(\hat{y}\) in the world frame. However, by doing this, we bind the learned perception contract to a specific transformation between the quadcopter and the camera. 
Such a perception contract will not be usable after the transformation between the quadcopter and the camera is changed, for example, when the camera direction is switched from forward to downward. To tackle this issue, we instead learn a perception contract in the camera frame, i.e., \(x_{C}\), \(y_{C}\), and \(\hat{y}_{C}\) will be used in learning. As the first step, we construct a data set. _Construction of the data set._ During the process of collecting data, we fix the position of the landing pad and measure its ground-truth position \(y\) beforehand. Then, we program the quadcopter to follow some predefined paths and record the state \(x\) and the perceived value \(\hat{y}\) on the fly. The predefined paths are designed such that, by following them, the quadcopter explores diverse positions with diverse velocities in the workspace. We then transform all the quantities into the camera frame and obtain the data set \(\{(x_{C}^{(i)},y_{C}^{(i)},\hat{y}_{C}^{(i)})\}_{i=1}^{N}\) with \(N=5000\). _Machine learning setup._ We model the perception contract using a three-layer neural network of which the hidden layers contain \(64\) and \(128\) neurons. The input of the neural network is the concatenation of the state \(x_{C}\) and the perceived value \(\hat{y}_{C}\). The output size of the neural network is \(12\). The first three elements of the neural network's output are used as the center of the ellipsoid, and the remaining nine elements are reshaped into a \(3\times 3\) matrix that describes the shape of the ellipsoid. We train the neural network for \(20\) epochs using the Adam [21] optimizer. The learning rate is set to \(0.001\). _Test of the learned perception contract._ After training, we evaluate the learned perception contract using real data and estimate its error. To do that, we manually control the quadcopter to fly in the workspace until the quadcopter finishes \(\sim 300\) measurements of the landing pad. For each measurement, we check whether \(y\in\mathcal{A}(x,\hat{y})\) and compute an empirical error rate. As mentioned before, the perception error mainly comes from the lack of synchronization between the camera and the Vicon system. By tuning the camera buffer size in the OpenCV library, we can change the extent to which the camera image lags behind the Vicon measurements. For each buffer size configuration, we conducted the same test and report the error of the learned perception contract in Table I. The training data was collected at buffer size \(=1\). As can be seen from Table I, the error of the learned perception contract is as low as \(0.19\%\) at buffer size \(=1\). On the other hand, the learned perception contract adheres to the buffer size configuration under which it was trained. As the buffer size increases, the error of the learned perception contract drastically increases, which suggests that the output of the perception contract is not conservative. Moreover, it takes only \(\sim 3\) milliseconds on the Raspberry Pi computer to compute \(\mathcal{A}(x,\hat{y})\) for each query \(x\) and \(\hat{y}\), which enables it to be used in online control pipelines. ### _Utilizing the learned perception contract_ In order to demonstrate the PC's capability of being used by downstream modules, we design a simple algorithm that utilizes the output of the perception contract to solve the safe landing problem. The proposed algorithm is a state machine as shown in Figure 3. An overview of the state machine is as follows. 
In the beginning, the quadcopter estimates the position of the landing pad using its perception module. Then, it runs the perception contract to compute the ellipsoid that contains the ground-truth position. If the size of the ellipsoid is above a threshold, the quadcopter navigates to a new waypoint determined by the ellipsoid and returns to the initial state to do another measurement. The quadcopter keeps doing this until the size of the ellipsoid is below the threshold. In that case, the quadcopter will navigate to the center of the ellipsoid and then turn off the motors. We illustrated the algorithm in Figure 4. Next, we elaborate on each state in the state machine. \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Buffer size & 1 & 2 & 3 & 4 & 5 \\ \hline Error & \(0.19\%\) & \(0.93\%\) & \(2.05\%\) & \(4.07\%\) & \(11.05\%\) \\ \hline \end{tabular} \end{table} TABLE I: Error rate of the learned perception contract under different camera buffer size configurations. Fig. 3: The state machine of the safe landing algorithm. _Measuring._ In this state, the quadcopter obtains an image from the camera and feeds it to the perception module. The perception module estimates the position of the landing pad as \(\hat{y}\). Then, the quadcopter calls the perception contract with its current state \(x\) and the perceived value \(\hat{y}\). The perception contract outputs an ellipsoid \(\mathcal{A}(x,\hat{y})\). Intuitively, the size of the ellipsoid reflects the uncertainty in the measurement. If the ellipsoid is small, then the quadcopter can trust the measurement. Otherwise, it has to take another measurement. This is formalized as follows. Let \(p_{xmin}\), \(p_{xmax}\), \(p_{ymin}\), \(p_{ymax}\), \(p_{zmin}\), and \(p_{zmax}\) be the minimum and maximum coordinates along each dimension of the ellipsoid \(\mathcal{A}(x,\hat{y})\). If \(p_{xmax}-p_{xmin}\leq 0.1\), \(p_{ymax}-p_{ymin}\leq 0.1\), and \(p_{zmax}-p_{zmin}\leq 0.05\), i.e., the ellipsoid can be completely contained in the box \(G\), then the quadcopter trusts the measurement and switches to the Landing state. Otherwise, it enters the New Waypoint state and navigates elsewhere in order to take another measurement. _New Waypoint._ The quadcopter enters this state because the size of the ellipsoid \(\mathcal{A}(x,\hat{y})\) is beyond the threshold. In this state, the quadcopter will navigate to another waypoint to take a new measurement. The new waypoint is selected as follows. Since the ground-truth position of the landing pad can be at any point in the ellipsoid, to avoid hitting the landing pad, the quadcopter sets the next waypoint as the intersection point between the surface of the ellipsoid and the line segment connecting its current position \([p_{x},p_{y},p_{z}]\) and the center of \(\mathcal{A}(x,\hat{y})\). _Landing._ In this state, the quadcopter will navigate to the point \(P_{turnoff}:=\left[\frac{p_{xmin}+p_{xmax}}{2},\frac{p_{ymin}+p_{ymax}}{2},p_{zmax}\right]\) and shut off the motors. Recall that the quadcopter enters this state because the ellipsoid \(\mathcal{A}(x,\hat{y})\) can be completely contained in \(G\). Therefore, we immediately have the following proposition. **Proposition 1**.: _If the ground truth \(y\) is in the ellipsoid \(\mathcal{A}(x,\hat{y})\), then \(P_{turnoff}\in\{y\}\oplus G\)._ Combining Theorem 1 and Proposition 1, it follows that, with a high probability, the quadcopter will shut off its motors only inside the target region defined by \(\{y\}\oplus G\). 
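The geometric tests used by the Measuring and Landing states reduce to a few lines of linear algebra; the following is a minimal Python sketch under the convention \(\mathcal{E}(c,C)=\{c+C^{-1}u:\|u\|_{2}\leq 1\}\), with hypothetical function names and the thresholds of \(G\) hard-coded.

```python
import numpy as np

def bbox(c, C):
    """Axis-aligned bounding box of E(c, C) = {c + C^{-1} u : ||u|| <= 1}:
    the half-width along axis i is the Euclidean norm of row i of C^{-1}."""
    half = np.linalg.norm(np.linalg.inv(C), axis=1)
    return c - half, c + half                    # (p_min, p_max) per axis

def step(c, C):
    """One decision of the state machine given the PC output E(c, C)."""
    p_min, p_max = bbox(c, C)
    w = p_max - p_min
    if w[0] <= 0.1 and w[1] <= 0.1 and w[2] <= 0.05:   # ellipsoid fits in G
        p_turnoff = np.array([(p_min[0] + p_max[0]) / 2,
                              (p_min[1] + p_max[1]) / 2,
                              p_max[2]])               # as in the Landing state
        return "Landing", p_turnoff
    return "New Waypoint", None
```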
In order to evaluate the above algorithm, we put the landing pad at \(10\) randomly selected locations and ran the above algorithm to land the quadcopter. Furthermore, in order to show the difficulty of the landing problem and emphasize the advantage of the perception contract, we also compare it with a simple baseline approach, where we adopt exactly the same state machine as in Figure 3 but replace the learned perception contract with a trivial one, namely, \(\mathcal{A}(x,\hat{y})=\mathcal{E}\left(\hat{y},10000I_{3}\right)\), i.e., it always outputs a very small ellipsoid around \(\hat{y}\). The results show that, with the learned PC, the quadcopter safely landed on the landing pad in all of the \(10\) runs, while the baseline approach using the trivial PC failed in \(9\) of them. In Figure 5, we visualize the \(10\) runs. For each run, we translated the ground-truth position of the landing pad to the origin. In order to make a clear visualization without occlusion, we also rotated the data of some runs. As can be seen from the figure, due to the existence of the perception error, the baseline approach using the trivial PC landed the quadcopter at locations far off the origin, while with the learned PC, the control algorithm refused to trust the first measurement and conducted another, more accurate measurement, which led the quadcopter to the precise location of the landing pad. A video of the runs is available at [https://youtu.be/Jep_6u_AxD4](https://youtu.be/Jep_6u_AxD4).

## V Conclusion

We demonstrated that learned perception contracts can be easily used to achieve reliable and safe control. However, limitations and research opportunities still remain. In this paper, the control algorithm is straightforward and mainly used to demonstrate the perception contract. It does not have any convergence guarantee, i.e., new measurements are not necessarily more accurate. In future work, we will explore more sophisticated algorithms to jointly synthesize the perception contract and the downstream controller.
2309.04069
Inferring physical laws by artificial intelligence based causal models
The advances in Artificial Intelligence (AI) and Machine Learning (ML) have opened up many avenues for scientific research, and are adding new dimensions to the process of knowledge creation. However, even the most powerful and versatile of ML applications till date are primarily in the domain of analysis of associations and boil down to complex data fitting. Judea Pearl has pointed out that Artificial General Intelligence must involve interventions involving the acts of doing and imagining. Any machine assisted scientific discovery thus must include causal analysis and interventions. In this context, we propose a causal learning model of physical principles, which not only recognizes correlations but also brings out causal relationships. We use the principles of causal inference and interventions to study the cause-and-effect relationships in the context of some well-known physical phenomena. We show that this technique can not only figure out associations among data, but is also able to correctly ascertain the cause-and-effect relations amongst the variables, thereby strengthening (or weakening) our confidence in the proposed model of the underlying physical process.
Jorawar Singh, Kishor Bharti, Arvind
2023-09-08T01:50:32Z
http://arxiv.org/abs/2309.04069v2
# Inferring physical laws by artificial intelligence based causal models

###### Abstract

The advances in Artificial Intelligence (AI) and Machine Learning (ML) have opened up many avenues for scientific research, and are adding new dimensions to the process of knowledge creation. However, even the most powerful and versatile of ML applications till date are primarily in the domain of analysis of associations and boil down to complex data fitting. Judea Pearl has pointed out that Artificial General Intelligence must involve interventions involving the acts of doing and imagining. Any machine assisted scientific discovery thus must include causal analysis and interventions. In this context, we propose a causal learning model of physical principles, which not only recognizes correlations but also brings out causal relationships. We use the principles of causal inference and interventions to study the cause-and-effect relationships in the context of some well-known physical phenomena. We show that this technique can not only figure out associations among data, but is also able to correctly ascertain the cause-and-effect relations amongst the variables, thereby strengthening (or weakening) our confidence in the proposed model of the underlying physical process.

## I Introduction

Artificial Intelligence (AI), specifically through its Machine Learning (ML) form, has been successfully applied to a wide range of fields including agriculture, social media, gaming, and robotics [1; 2]. ML plays a significant role in autonomous driving, natural language processing, finance, health care, understanding the human genome, manufacturing, energy harvesting, and much more [2; 3]. ML has also lent a hand to the scientific community and has found quite a few applications in scientific research. In physics specifically, ML has been used to explore many-body physics [4; 5], glassy dynamics [6], learning phases of matter [7; 8; 9], designing new experiments [10; 11; 12; 13], to interpret nature [14; 15], quantum foundations [16], quantum state tomography [17; 18], phase transition [19; 20], quantum matter [21], Monte Carlo simulation [22], polymer states [23], topological codes [24], the study of black hole detection [25], quantum circuit optimization and control [26; 27], anti-de Sitter/conformal field theory (AdS/CFT) correspondence [28], quantum state preparation [29; 30], thermodynamics [31], gravitational lenses [32], characterizing the landscape of string theories [33], and wave analysis [34], to name a few. An important aim for machine-assisted scientific discovery was proposed in the seminal work by Iten _et al._[35], where they propose a neural network architecture modeled after the human physical reasoning process. The currently prevalent ML architectures primarily identify correlations and associations in data and thus only uncover direct connections in the data. Going beyond the associations, one must learn the causal model, and general AI systems should be able to uncover the underlying causal structures. Therefore, to fully realize the potential of artificial general intelligence, one needs to incorporate the essence of cognition within the scope of ML. Judea Pearl [36] divides this cognitive ability into three distinct levels, as depicted in Fig. 1, distinguished by the type of query being answered; the levels are termed Association, Intervention, and Counterfactuals. The first level (association) of the ladder described in Fig.
1 involves predictions based on passive observations of the data, _i.e._, a data-centric search for correlations, associations, regularities, and patterns. This level answers queries pertaining to observations, i.e., what can be found in the data. The second level (intervention) involves analysis of the response to a change in variables. Rather than just observing the data, one queries the effect of an induced change, and thus one is looking at a cause-and-effect relationship among the variables of the data. The final level utilizes the causal structure to estimate portions of data that do not exist or cannot be observed. It answers queries related to the hypothetical questions that one may imagine - the "what if" questions. Therefore, this level involves Counterfactuals. One thus sees that most applications of ML in science are basically at the first level of the ladder. For example, in the context of the spring-mass vibrating system, ML can find the relationship between the length of the spring and the weight attached to it. However, ML models cannot answer the question, "is the change in spring length caused by the change in weight or vice-versa?". Causal inference takes us a step above on the ladder of causation and lets us answer such questions. Once armed with the knowledge of causal relations, one can begin exploring the counterfactuals, leading to a framework which then becomes a motif for formulating the laws of nature. Posed a bit differently: "Had the weight on this spring doubled, its length would have doubled as well" (Hooke's law). - Judea Pearl [36].

We begin by studying the basics of causal discovery and causal inference in Section-II. In Section-III we analyze the causality relations of some physical phenomena. The examples that we consider include tide height, Ohm's law, light dependent resistance (LDR) characteristics, and quantum measurement correlations. Finally, we close with a discussion on the results and possible paths ahead, in Section-IV.

## II Causal discovery and inference

Causal inference refers to the process of answering questions based on the underlying causal model of the cause-and-effect relationship between different variables of the data. As seen from the ladder of causation (Fig. 1), causality relates to the response to interventions. We do a certain action and observe a certain response. The limitations of correlations and the importance of causal relations can be easily understood from a simple experiment on the atmospheric pressure reading of a barometer [36]. While there is a direct correlation between the barometer reading and pressure, this correlation cannot in itself establish the causal relationship. Is it the barometer reading that causes the atmospheric pressure to change, or is it the atmospheric pressure that causes the barometer reading to change? One requires the knowledge of causal relations to conclude that it is the pressure that causes the change in reading, leading to the observed correlation, and not the other way around. Statistical algorithms are used to infer the causal structure from observational data. The model is assumed to be **acyclic**, so that a Directed Acyclic Graph (DAG) can be used to depict the causal relationships, as shown in Fig. 2. The nodes represent the variables and the arrows depict the cause-and-effect relations. The model is considered to be **Markovian**, where a given node is conditioned on its immediate parents only.
The model is assumed to satisfy the conditions of **Sufficiency** and **Faithfulness**, which respectively mean that there exists no external common cause to any pair of nodes in the graph and that all conditional independences (from the underlying distribution) are completely represented in the graph. Most algorithms for causal discovery work with the assumption that statistical independence implies the absence of causal relation [37].

Figure 1: The Ladder of Causation depicting the 3 levels of cognitive ability. Present day ML is at ‘association’, the lowest level. A machine capable of understanding causal structures would be placed at the ‘intervention’ level, while the more sophisticated AI will also operate at the level of counterfactuals.

Figure 2: A directed acyclic graph with nodes representing variables and arrows showing the cause-and-effect relationship between variables.

Specifically, the Peter Spirtes and Clark Glymour (PC) algorithm uses the conditional independence testing criterion to generate a DAG from a fully connected graph [38], while the Greedy Equivalence Search (GES) algorithm applies a greedy search in the graph space to fill an empty graph while maximizing a fitness measure [39]. Exploiting the asymmetries in models, LiNGAM (Linear Non-Gaussian Acyclic Models) prioritizes the models that better fit a linear non-Gaussian relation among the variables [40]. The final goal of the causal discovery process is to arrive at the DAG from the given data set. Standard statistics works with correlations, which means working with the probability of \(Y\) given \(X\), denoted by \(P(Y|X)\). Causal inference, on the other hand, works with the probability of \(Y\) given that \(X\) is done, denoted by \(P(Y|do(X))\) - the do-calculus [36]. This 'do', though a small change from statistics, is the representation of an intervention. The difference from standard ML predictions is that here we are approximating the effect of treatment \(X\) on the outcome \(Y\) based on data that does not exist in the data-set. The basic idea behind causal inference is to estimate the effect of treatment \(X\) on the outcome \(Y\) while eliminating the dependence on any variable \(Z\) that has a direct influence on both the treatment and the outcome (confounding variables). This is schematically explained in Fig. 3. Many methods exist for the estimation of the effect that is produced by doing \(X\). These include observational studies (conducting and simulating randomized experiments), simple natural experiments, instrument variables (specific causal effect estimation criteria), and refutations. These methods are explained in detail in Reference [41]. We use the Causal Discovery Toolkit (CDT) [42] to obtain causal models directly from the data and DoWhy, "An end-to-end library for causal inference" [43], to carry out the causal analysis. The basic analysis involves the following steps:

**1. Creating a causal model:** We create an initial model of the phenomena that we are studying as a Directed Acyclic Graph. The DAG is input into the DoWhy library as a dot graph (a textual representation of the graph using the DOT language) [44]. This initial model is either extracted from the data using CDT or built from domain knowledge.
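As an illustration of this first step, extracting a candidate DAG with CDT can be as short as the sketch below; the file name is hypothetical, and CDT's PC wrapper assumes its R dependencies (e.g., pcalg) are installed.

```python
import pandas as pd
from cdt.causality.graph import PC

df = pd.read_csv("observations.csv")  # hypothetical observational data
pc = PC()                             # Peter-Clark constraint-based search
dag = pc.predict(df)                  # returns a networkx DiGraph
print(list(dag.edges()))              # candidate cause-and-effect arrows
```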
**2. Causal effect identification:** Based on the causal model, we identify the causal effects to be estimated using a suitable criterion among the following:

**a: Back-door:** Controlling for the set of variables that block all the back-door paths between the treatment and the outcome. A back-door path is any path connecting treatment to outcome via an arrow pointing into the treatment. In Fig. 4, \(X\gets Z\to Y\) is a back-door path from treatment \(X\) to outcome \(Y\). Adjusting for the variable \(Z\) gives the back-door criterion: \[P(Y|do(X))=\sum_{z}P(Y|X,z)P(z)\]

**b: Front-door:** Controlling for variables in the forward path from the treatment to the outcome. In Fig. 4, \(X\to W\to Y\) is the front-door path from treatment to outcome. Adjusting for variables \(X\) and \(W\) gives the front-door criterion: \[P(Y|do(X))=\sum_{w}P(w|X)\sum_{x}P(Y|x,w)P(x)\]

**c: Instrumental variables [41]:** A special case of the front-door criterion, this method helps in identifying the direct causal estimate from \(X\) to \(Y\) when the back-door criterion fails (e.g., obtaining data on \(Z\) is not possible, and hence \(Z\) cannot be controlled for). This method can only be applied if there exists a variable which is independent of the confounders of treatment and outcome, has a direct relation with the treatment, and has no direct effect on the outcome, as depicted in Fig. 5.

**d: Mediation:** This method is applied when the treatment has multiple causal pathways to the outcome, as shown in Fig. 6. It enables us to separate the total effect on \(Y\) into direct (\(X\to Y\)) and indirect (\(X\to W\to Y\)) causal estimates.

Figure 3: The basic aim of causal inference is to estimate the effect of a treatment \(X\) on the outcome \(Y\) while controlling for the confounding variables \(Z\).

Figure 4: A sample causal model as a DAG. \(X\) is the treatment, \(Y\) the outcome, \(Z\) a confounder, and \(W\) a mediator. \(X\gets Z\to Y\) constitutes the backward path while \(X\to W\to Y\) is the forward path from treatment to outcome.

**3. Estimate the target estimand:** Many statistical methods exist for estimating the identified causal effect. Depending on the identification criterion, one can use linear regression, distance matching, or propensity score stratification [45] for back-door; the Wald estimator [46] or regression discontinuity [47] for instrumental variables; two-stage linear regression for front-door; and so on. The estimate is obtained in units of Average Treatment Effect (ATE), Average Treatment effect for the Treated (ATT), or Average Treatment effect for the Controls (ATC).

**4. Refute the obtained estimate using multiple robustness checks:** Causal models are not absolute, as they cannot be proven to be correct or incorrect. One can, however, increase faith in a model by checking the validity of the assumptions behind the model against various robustness checks, which include:

**a: Random Common Cause:** Check the variation of the estimate upon addition of an independent random common cause. The lesser the variation, the higher our faith in the model.

**b: Placebo Treatment Refuter:** Rerunning the analysis with an independent random variable as the treatment variable. If the initial treatment is in fact the cause, the new estimate should go to zero.

**c: Data Subset Refuter:** How much is the variation in the estimate when only a subset of the data is used? The variation is small for a strong causal relation.

## III Examples

This section describes our main work, where we have chosen four different examples to build causal models. For each case we consider different possible causal models and evaluate their relative efficacy by employing the methods described above. The examples are chosen from diverse fields.
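Before turning to the individual examples, the following minimal DoWhy sketch illustrates the four-step workflow just described. The file and column names are hypothetical stand-ins for the tides example below, and parsing the DOT graph string assumes pydot is available.

```python
import pandas as pd
from dowhy import CausalModel

df = pd.read_csv("tides.csv")  # hypothetical file with d_EM, d_ES, h columns

# Step 1: causal model from data plus a DAG in DOT format.
model = CausalModel(
    data=df,
    treatment="d_EM",
    outcome="h",
    graph="digraph { d_EM -> h; d_ES -> h; }",
)

# Step 2: identify the causal effect (back-door criterion here).
estimand = model.identify_effect()

# Step 3: estimate the target estimand in ATE units.
estimate = model.estimate_effect(
    estimand, method_name="backdoor.linear_regression"
)

# Step 4: refute the estimate with a robustness check.
refutation = model.refute_estimate(
    estimand, estimate, method_name="placebo_treatment_refuter"
)
print(estimate.value, refutation)
```

Refuter names such as `random_common_cause` and `data_subset_refuter` can be swapped in for the other robustness checks listed above. The four examples below instantiate this workflow on documented, synthetic, experimental, and simulated data.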
The first example of tides and the cause of their varying height over the year is about a natural phenomenon where data is taken from documented sources. The second example is about a physics model involving Ohm's law and the direct and indirect dependence of current on various possible parameters. The third example is about an actual experiment where we collect data for a light dependent resistance (LDR) and consider various possible causal models for it, which we evaluate and compare using data and domain knowledge. In the last example we consider quantum correlations in the two-party two-value setting, and ask the question as to what is the most plausible cause of these non-trivial quantum correlations.

### Height of Tides

It is a well known fact that tides, the rise and fall of sea levels, are a cumulative effect of the Sun's and Moon's gravitational force on Earth, among other minor factors. We ask a ML model which of the two - **Sun** or **Moon** - plays a bigger role in determining the maximum height of the tide on a given day. To that end, we prepare a data-set with the daily Earth-Sun distance in astronomical units (AU), the Earth-Moon distance (in AU), and the maximum height of the tide at four different locations: Honolulu (Hawaii), Mumbai (India), Liverpool (England), and Halifax (Canada). We collected the year-round data of Earth-Moon distance, Earth-Sun distance, and tide height from documented sources. The **Earth-Sun** distance for a given day of the year is obtained from the csv file available on the USGS webpage. A sample of the dataset is shown in Fig. 7. The **Earth-Moon** distance is extracted from the IMCCE VIRTUAL OBSERVATORY. Fig. 8 shows a sample of the table generated at the webpage. The data of **tide height** is available on the NOAA website. Fig. 9 shows a sample of the PDF. On any given day, the tide heights were recorded 3-4 times. We used the maximum value of height (in ft) for a given day.

Figure 5: Causal model for Instrumental Variable. Since \(W\) is independent of confounder \(Z\), has a direct effect on treatment \(X\), and has no direct effect on the outcome \(Y\), it can be used to estimate the effect of \(X\) on \(Y\) (given that controlling for \(Z\) is not possible).

Figure 6: Mediation causal model. The treatment \(X\) has two causal pathways to the outcome \(Y\), direct (\(X\to Y\)) and indirect (\(X\to W\to Y\)) via the mediator \(W\).

We prepared the models as described in Fig. 10 and computed two causal estimates with tide height as the outcome. We used the Earth-Moon distance as the treatment for the first estimate and the Earth-Sun distance for the second. The causal diagram predicted from data only gives us the \(d_{EM}\to h\) causal relation, which is in fact the most significant one. The estimates (in ATE) from the predicted and ground-truth models differ marginally: -2964.45 and -2913.16, respectively (for Halifax). The causal estimates for Earth-Moon and Earth-Sun distance (in ATE) for the ground-truth model are listed in Table 1. It is clearly visible from the estimates and from the causal diagram obtained from data that the Earth-Moon distance is the primary cause for the tide height.

### Ohm's law

In this example, we look for the driving forces (cause) of the current \(I\) in a wire of length \(L\), cross-sectional area \(A\), resistivity \(\rho\), at a temperature \(T\), with a potential \(V\) applied across its ends. Using causal analysis one can test the validity of a given cause-and-effect relation.
To that end, we consider and check the validity of a model with a direct \(T\to I\) arm added in addition to the known dependence of \(I\) on \(T\) via \(R\). The different causal models that we evaluate are depicted in Fig. 12.

\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Causal Relation} & \multicolumn{4}{c|}{Estimate (ATE)} \\ \cline{2-5} & Halifax & Liverpool & Honolulu & Mumbai \\ \hline \(d_{EM}\rightarrow\) h & -2913.16 & -10045.83 & -1205.91 & -7232.59 \\ \(d_{ES}\rightarrow\) h & -2.20 & -8.62 & -3.34 & -22.15 \\ \hline \end{tabular} \end{table} Table 1: Causal estimates for \(d_{EM}\to h\) and \(d_{ES}\to h\) at the four locations. The ATE values are shown for Earth-Moon and Earth-Sun for all four locations.

Figure 7: Sample Earth-Sun distance data. The data includes the day of the year (**DOY**) and the Earth-Sun distance **d** (in AU).

Figure 8: Sample Earth-Moon distance data, 2019. The website generates the ephemeris data for the Moon.

Figure 9: Sample tidal data for Liverpool, England, 2019. Each box of a given date records the time (in hours and minutes) and the height of high and low tides (in feet and cm).

Using the known relations (Eq. (1)) between current and voltage, and the temperature dependence of resistance, we generate the required data. We use platinum as the material for our constants \((\alpha,\rho_{0})\). Fig. 11 shows a sample of the input data.

\[V=IR\] \[R=\frac{\rho_{t}L}{A}\] \[\rho_{t}=\rho_{0}(1+\alpha\Delta T) \tag{1}\]

The candidate causal models depicted in Fig. 12 are evaluated and estimates in terms of ATE values are computed, which are tabulated in Table 2. We see that the major driving force is the potential \(V\), with resistance showing an inverse relation, as expected. We observe that the effect of \(T\) on \(I\) is not only non-zero, but equivalent to that of \(R\). The fact that this effect follows not from the direct \(T\to I\) path, but from the \(T\rightarrow\rho\to R\to I\) path, is confirmed by estimating the same effect of \(T\) on \(I\) using a causal model which does not have the \(T\to I\) path (Fig. 12c). We get the same ATE value of 0.218. One can also check the effect by removing the other branch, \(T\rightarrow\rho\) (Fig. 12b). This results in an estimate (ATE) of 1.35, but during the placebo treatment refutation, the new estimated effect in terms of ATE values, which should be 0, comes out to be -10.54, which shows that this model is less trustworthy.
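For reproducibility, the synthetic data behind this analysis (Eq. (1)) can be generated along the following lines; the platinum constants and the sampling ranges shown are illustrative assumptions of ours.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Platinum constants (illustrative): reference resistivity (ohm*m)
# and its temperature coefficient (1/K).
rho_0, alpha = 1.06e-7, 3.92e-3

# Independently sampled wire geometry, temperature change, and potential.
L = rng.uniform(0.5, 5.0, n)        # length (m)
A = rng.uniform(1e-7, 1e-6, n)      # cross-section area (m^2)
dT = rng.uniform(0.0, 100.0, n)     # temperature change (K)
V = rng.uniform(0.0, 10.0, n)       # potential (V)

rho_t = rho_0 * (1 + alpha * dT)    # Eq. (1): resistivity at temperature T
R = rho_t * L / A                   # Eq. (1): resistance
I = V / R                           # Eq. (1): Ohm's law

df = pd.DataFrame({"V": V, "L": L, "A": A, "T": dT,
                   "rho": rho_t, "R": R, "I": I})
```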
### Power and LDR Resistance

Next we perform causal analysis of real data obtained from an experiment. A light emitting diode (LED) light source runs on a battery at voltage \(V\) and draws current \(I\). The light emitted by the LED shines on a light dependent resistance (LDR), and this provides power \(P\) to the LDR, thereby changing its resistance \(R\). The circuit is described in Fig. 13.

_Experiment:_ The LED and LDR are placed in a closed box at a fixed distance from each other. The LED is supplied with a variable voltage. The voltmeter measures the voltage across the LED, the ammeter measures the current through the LED, and the ohmmeter measures the resistance of the LDR. The experiment is repeated with a flux meter in place of the LDR to obtain power readings.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Causal Relation} & \multicolumn{3}{c|}{Estimate (ATE)} \\ \cline{2-4} & Model A & Model B & Model C \\ \hline V \(\rightarrow\) I & 1.735 & 1.735 & 1.735 \\ R \(\rightarrow\) I & -0.205 & -0.225 & -0.205 \\ T \(\rightarrow\) I & 0.218 & 1.35 & 0.218 \\ \hline \end{tabular} \end{table} Table 2: Causal estimates for different causal relations in the three models of Ohm’s Law.

Figure 11: Sample of the data-set used in the analysis. Current \(I\) resulting from potential \(V\) applied across a wire of length \(L\), resistance \(R\) (resistivity \(\rho\), cross-section area \(A\)) at temperature \(T\).

Figure 12: Causal diagrams for Ohm’s law. For a wire with potential \(V\) across its length \(L\), resistivity \(\rho\), cross-section area \(A\) at temperature \(T\).

Figure 13: Circuit diagram for the LDR experiment. The LED and LDR are placed in a closed box at a fixed distance from each other. The LED is supplied with a variable voltage. The voltmeter measures the voltage across the LED, the ammeter measures the current through the LED, and the ohmmeter measures the resistance of the LDR. The experiment is repeated with a flux meter in place of the LDR to obtain power readings.

The model obtained from data (Fig. 14(a)) suggests potential \(V\) as the cause for both power \(P\) and current \(I\), and finds no direct causal relation between power \(P\) and the LDR resistance \(R\). The model, as expected, provides only the most significant cause-effect relations (Table 4). We know, as depicted in the domain knowledge model (Fig. 14(b)), that current \(I\) acts as a mediator for \(V\)'s effect on \(P\). This becomes clear when we compare refutations of models with and without the \(I\to P\) arm. Refutations suggest that we put more faith in the model with the \(I\to P\) arm (p-value 0.912) than in the ones without this arm (p-value 0.882) (Table 5). The analysis also suggests that we put more faith in the model which includes the \(P\to R\) arm over the one that does not contain this arm.

### Measurement correlation and quantum entanglement

The last example we choose is from the domain of quantum mechanics. Quantum states of composite systems can show peculiar kinds of correlations. We analyze these correlations from the point of view of constructing a causal model. A quantum spin half particle is a two level quantum system with its state space consisting of normalized densities over a two dimensional complex linear vector space [48]. The measurables for each particle are spin components along any direction; when measured, a spin component takes one of two values, 'up' (1) or 'down' (0), which are the eigenvalues of the corresponding Hermitian operator. For example, if we are measuring the \(z\) component of spin, the corresponding observable is the Pauli matrix \(\sigma_{z}\). The scenario that we consider consists of two spin half particles which are in a joint quantum state \(\rho\). Alice and Bob are two observers with the capability of measuring spin components; the first particle is accessible to Alice while the second is accessible to Bob. The scenario is schematically depicted in Fig. 15. Consider the case where both Alice and Bob measure the spin of their respective particle along the \(z\)-axis, which corresponds to measuring the operator \(\sigma_{z}\) in the appropriate state space. For each, the possible outcomes are 0 or 1. Therefore, the joint measurement outcome for the composite system will be in the set {00,01,10,11}.
One can compare this situation to the one of tossing two coins in the classical domain, where the outcome set is the same and, if the coins are unbiased, the probability of each outcome will be equal. Quantum states have a property called quantum entanglement [49], which is considered to be responsible for the unusual correlation properties of composite quantum systems. The entanglement can be mathematically computed from the given density operator and can be quantified via a measure called log-negativity [50].

\begin{table} \begin{tabular}{|l|c|c|} \hline & \(P\to R\) present & \(P\to R\) absent \\ \hline \(I\to P\) present & 0.928 & 0.896 \\ \hline \(I\to P\) absent & 0.897 & 0.867 \\ \hline \end{tabular} \end{table} Table 5: Confidence levels of the \(V\to P\) causal estimate for different causal models in the LDR experiment.

Figure 14: Causal diagrams for the LDR experiment.

\begin{table} \begin{tabular}{|l|c|c|c|} \hline \multirow{2}{*}{Model} & \multicolumn{3}{c|}{Estimates (ATE)} \\ \cline{2-4} & \(V\to P\) & \(V\to I\) & \(P\to R\) \\ \hline Data & 251.533 & 15.42 & - \\ \hline Domain Knowledge & 251.533 & 15.41 & -0.008 \\ \hline \end{tabular} \end{table} Table 4: Causal estimates of three causal relations for the data and the domain knowledge models in the LDR experiment.

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Voltage & Current & Power & Resistance \\ (V) & (mA) & (lux) & (k\(\Omega\)) \\ \hline 2.67 & 100.3 & 5 & 37.000 \\ 2.90 & 104.7 & 9 & 25.400 \\ 3.17 & 109.9 & 15 & 17.800 \\ 3.68 & 119.2 & 36 & 6.800 \\ 3.84 & 122.2 & 47 & 5.790 \\ ⋮ & ⋮ & ⋮ & ⋮ \\ 7.06 & 172.0 & 847 & 0.591 \\ 7.22 & 174.1 & 923 & 0.558 \\ 7.70 & 179.2 & 1158 & 0.459 \\ 7.86 & 181.5 & 1266 & 0.435 \\ 8.00 & 183.6 & 1386 & 0.413 \\ \hline \end{tabular} \end{table} Table 3: Sample data from the experiment. At each voltage setting, the current through the LED is measured in mA and the LDR’s resistance is measured in k\(\Omega\). The experiment is repeated with the LDR replaced by a flux meter to measure power.

For certain maximally entangled states the outcomes can be such that they always fall either in the set {01,10} or in the set {00,11}, _i.e._, the outcomes are always (anti)correlated. This scenario is schematically described in Fig. 15. The data set that we analyze is generated by simulating the measurement setup between Alice and Bob. A state of the composite quantum system \(\rho\) is generated randomly. On this randomly generated state, both Alice and Bob perform a \(\sigma_{z}\) measurement. They repeat these measurements on the state 100 times, and these 100 measured values are used to compute the correlation. The entanglement is computed mathematically from the state density matrix by computing the log-negativity. This process is repeated for another randomly generated composite state of the two spins. Twenty such random states are chosen, and thus a data-set with 2000 rows is generated, with 100 rows corresponding to a given random state \(\rho\). The data set is schematically described in Table 6. As can be seen, for each \(\rho\) we have 100 rows, which are used to calculate the correlations, and the mathematically computed log-negativities are documented in the second column.
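A minimal numpy sketch of this simulation might look as follows; the Ginibre recipe for sampling random states and the \(\pm 1\) outcome encoding (matching Table 6) are our illustrative choices, not the exact code used.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_density_matrix(dim: int = 4) -> np.ndarray:
    """Random mixed state via the Ginibre ensemble (one common recipe)."""
    g = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def measure_zz(rho: np.ndarray, shots: int = 100):
    """Sample joint sigma_z outcomes (+1/-1 per observer) for both spins."""
    probs = np.real(np.diag(rho))          # populations of |00>,|01>,|10>,|11>
    outcomes = rng.choice(4, size=shots, p=probs / probs.sum())
    m_a = 1 - 2 * (outcomes // 2)          # Alice: +1 ('up') or -1 ('down')
    m_b = 1 - 2 * (outcomes % 2)           # Bob
    return m_a, m_b

def log_negativity(rho: np.ndarray) -> float:
    """Log-negativity via the partial transpose on the second spin."""
    rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(np.log2(np.abs(np.linalg.eigvalsh(rho_pt)).sum()))

rows = []
for _ in range(20):                        # 20 random states, 100 rows each
    rho = random_density_matrix()
    e = log_negativity(rho)
    m_a, m_b = measure_zz(rho)
    c = np.corrcoef(m_a, m_b)[0, 1]
    rows += [(e, a, b, c) for a, b in zip(m_a, m_b)]
```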
Where is the cause of the correlation between the measured values of Alice and Bob? The initial causal discovery attempts failed to reveal any relations between the variables. Upon further investigation into the data, we find that the present scenario is a special case where the variables of interest, though causally linked (as we know from domain knowledge), have zero correlation between them. Entanglement ranges from 0 to 1 while correlation ranges from -1 to +1, which creates a case where the average correlation between these two variables is (very close to) zero. Tackling such cases involves looking at the causal relation among functions of the involved variables. We take the absolute value of the correlation as the second variable of interest and continue with the analysis.

Similar to the tide-height example, the predicted causal diagram only shows the most significant cause-effect relation. With \(M_{A}\) as the treatment and the correlation (\(C\)) as the outcome (while accounting for entanglement (\(E\)) as a common cause), we get a causal estimate of -0.0002 ATE. Similarly, we get -0.0024 ATE for \(M_{B}\) (Table 7). The estimate for entanglement \(E\) as the treatment is 0.3733 ATE. This shows that the machine puts more faith in the model that has entanglement (\(E\)) as the underlying cause of the correlation (\(C\)) between \(M_{A}\) and \(M_{B}\) than in the model that assumes either \(M_{A}\) or \(M_{B}\) as the cause of \(C\). Therefore, the ML analysis confirms the fact that we know from domain knowledge.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline State & Entanglement & \(M_{A}\) & \(M_{B}\) & Correlation & \\ \hline \(\rho_{1}\) & \(E_{1}\) & +1 & -1 & \(C_{1}\) & \multirow{2}{*}{instance 1 (100 rows)} \\ ⋮ & ⋮ & ⋮ & ⋮ & ⋮ & \\ \(\rho_{1}\) & \(E_{1}\) & -1 & -1 & \(C_{1}\) & \\ \hline \(\rho_{2}\) & \(E_{2}\) & +1 & +1 & \(C_{2}\) & \multirow{2}{*}{instance 2 (100 rows)} \\ ⋮ & ⋮ & ⋮ & ⋮ & ⋮ & \\ \(\rho_{2}\) & \(E_{2}\) & -1 & +1 & \(C_{2}\) & \\ \hline ⋮ & ⋮ & ⋮ & ⋮ & ⋮ & \\ \hline \(\rho_{n}\) & \(E_{n}\) & -1 & +1 & \(C_{n}\) & \multirow{2}{*}{instance \(n\) (100 rows)} \\ ⋮ & ⋮ & ⋮ & ⋮ & ⋮ & \\ \(\rho_{n}\) & \(E_{n}\) & +1 & +1 & \(C_{n}\) & \\ \hline \end{tabular} \end{table} Table 6: Structural setup of the data generated in the simulation of Alice and Bob’s \(\sigma_{z}\) measurements on two entangled spin half particles. A single instance is 100 samples of measurements performed by Alice (\(M_{A}\)) and Bob (\(M_{B}\)) on the shared state \(\rho\). The correlation value \(C\) is evaluated from these 100 samples.

Figure 15: Alice and Bob with a shared quantum state \(\rho\) of two spin half particles. Each measures the spin of their particle along the \(z\)-axis and gets one of two possible outcomes (0 or 1).

Figure 16: Causal diagram for correlation between Alice and Bob’s measurement outcomes (\(M_{A}\) and \(M_{B}\), respectively).

This statement is further strengthened by the results of refuting the causal model in both the above scenarios. For example, the placebo treatment refutation gives a confidence of 94% (p-value: 0.94) in
Section III.1 shows that using causal analysis framework one can infer that while both Sun and Moon's gravitational pull affects the tides on Earth, the Earth-Moon distance is the major cause for the height of tides. The advantage of causal models over simple associative models is seen in Section III.2. Not only does the machine estimates potential to be the primary cause for current, it refutes the incorrect assumption of temperature _directly_ affecting the current. The LDR experiment analysis shows that, while the conclusions are not 100% accurate, using causal discovery to infer causal relations from experimental data can hint at where the focus in the experiment should be. In the problem related to cause of correlations between quantum measurements, we observe that the machine is able to figure out the underlying cause of the correlation between the measurement outcomes of Alice and Bob being the quantum entanglement. Causal Theory is still in its initial stages of development and therefore is in no way foolproof. There exist quite a few different algorithms for causal discovery and there is no guarantee of the outcomes of one agreeing with the outcomes of the other. The approach is data-centric and does not always yield relations that make sense. Nonetheless, having an initial estimate of a causal model helps speed up the process. One can always fine-tuned the estimates and relations using domain knowledge. Adding the layer of causal analysis can deepen the understanding of the phenomena and processes involved. The authors in [51] present a physics-inspired symbolic regression ML algorithm for discovering expression/equations from data alone. One can explore the advantage of incorporating causal inference to such ML applications. ###### Acknowledgements. J.S. would like to thank Amitoj Kaur Chandi (@nick_naysayer) for Fig. 1 and Dr. Paramdeep Singh for help with the experimental setup for Section III.3. J.S. acknowledges IISER Mohali for financial support.
2309.11464
Budget-Aware Pruning: Handling Multiple Domains with Less Parameters
Deep learning has achieved state-of-the-art performance on several computer vision tasks and domains. Nevertheless, it still has a high computational cost and demands a significant amount of parameters. Such requirements hinder the use in resource-limited environments and demand both software and hardware optimization. Another limitation is that deep models are usually specialized into a single domain or task, requiring them to learn and store new parameters for each new one. Multi-Domain Learning (MDL) attempts to solve this problem by learning a single model capable of performing well in multiple domains. Nevertheless, the models are usually larger than the baseline for a single domain. This work tackles both of these problems: our objective is to prune models capable of handling multiple domains according to a user-defined budget, making them more computationally affordable while keeping a similar classification performance. We achieve this by encouraging all domains to use a similar subset of filters from the baseline model, up to the amount defined by the user's budget. Then, filters that are not used by any domain are pruned from the network. The proposed approach innovates by better adapting to resource-limited devices while being one of the few works that handles multiple domains at test time with fewer parameters and lower computational complexity than the baseline model for a single domain.
Samuel Felipe dos Santos, Rodrigo Berriel, Thiago Oliveira-Santos, Nicu Sebe, Jurandy Almeida
2023-09-20T17:00:31Z
http://arxiv.org/abs/2309.11464v2
# Budget-Aware Pruning: Handling Multiple Domains with Less Parameters

###### Abstract

Deep learning has achieved state-of-the-art performance on several computer vision tasks and domains. Nevertheless, it still has a high computational cost and demands a significant amount of parameters. Such requirements hinder the use in resource-limited environments and demand both software and hardware optimization. Another limitation is that deep models are usually specialized into a single domain or task, requiring them to learn and store new parameters for each new one. Multi-Domain Learning (MDL) attempts to solve this problem by learning a single model that is capable of performing well in multiple domains. Nevertheless, the models are usually larger than the baseline for a single domain. This work tackles both of these problems: our objective is to prune models capable of handling multiple domains according to a user-defined budget, making them more computationally affordable while keeping a similar classification performance. We achieve this by encouraging all domains to use a similar subset of filters from the baseline model, up to the amount defined by the user's budget. Then, filters that are not used by any domain are pruned from the network. The proposed approach innovates by better adapting to resource-limited devices while, to our knowledge, being the only work that handles multiple domains at test time with fewer parameters and lower computational complexity than the baseline model for a single domain.

_Index Terms_—Pruning, Multi-Domain Learning, Parameter Sharing, User-Defined Budget, Neural Networks.

## I Introduction

Deep learning has brought astonishing advances to computer vision, being used in several application domains, such as medical imaging [1], autonomous driving [2], road surveillance [3], and many others. However, to increase the performance of such methods, increasingly deeper architectures have been used [4], leading to models with a high computational cost. Also, for each new domain (or task to be addressed), a new model is usually needed [5]. The significant amount of model parameters to be stored and the high GPU processing power required for using such models can prevent their deployment in computationally limited devices, like mobile phones and embedded devices [6]. Therefore, specialized optimizations at both software and hardware levels are imperative for developing efficient and effective deep learning-based solutions [7]. For these reasons, there has been a growing interest in the Multi-Domain Learning (MDL) problem. The basis of this approach is the observation that, although the domains can be very different, it is still possible that they share a significant amount of low and mid-level visual patterns [8]. Therefore, to tackle this problem, a common goal is to learn a single compact model that performs well in several domains while sharing the majority of the parameters among them with only a few domain-specific ones. This reduces the cost of having to store and learn a whole new model for each new domain. Berriel et al. [5] point out that one limitation of those methods is that, when handling multiple domains, their computational complexity is at best equal to the backbone model for a single domain. Therefore, they are not capable of adapting their amount of parameters to custom hardware constraints or user-defined budgets.
To address this issue, they proposed the modules named Budget-Aware Adapters (_BA\({}^{2}\)_), which are designed to be added to a pre-trained model to allow it to handle new domains and to limit the network complexity according to a user-defined budget. They act as switches, selecting the convolutional channels that will be used in each domain. However, as mentioned in [5], although this approach reduces the number of parameters required for each domain, the entire model is still required at test time if it aims to handle all the domains. The main reason is that they share few parameters among the domains, which forces loading all potentially needed parameters for all the domains of interest.

This work builds upon _BA\({}^{2}\)_[5] by encouraging multiple domains to share convolutional filters, enabling us to prune weights not used by any of the domains at test time. Therefore, it is possible to create a single model with lower computational complexity and fewer parameters than the baseline model for a single domain. Such a model can better fit the budget of users who have access to limited computational resources. Figure 1 shows an overview of the problem addressed by our method, comparing it to previous MDL solutions and emphasizing their limitations. As can be seen, standard adapters use the entire model, while _BA\({}^{2}\)_[5] reduces the number of parameters used in each domain but requires a different set of parameters per domain. Therefore, the entire model is needed for handling all the domains together and nothing can be effectively pruned. On the other hand, our approach increases the probability of using a similar set of parameters for all the domains. In this way, the parameters that are not used for any of the domains can be pruned at test time. These compact models have a lower number of parameters and computational complexity than the original backbone model, which facilitates their use in resource-limited environments. To enable the generation of the compact models, we propose three novel loss functions that encourage the sharing of convolutional features among distinct domains.

Our proposed approach was evaluated on two well-known benchmarks: the Visual Decathlon Challenge [8], comprising 10 different image domains, and the ImageNet-to-Sketch setting, with 6 diverse image domains. Results show that our proposed loss function is essential to encourage parameter sharing among domains, since without direct encouragement, the sharing of parameters tends to be low. In addition, results also show that our approach is comparable to the state-of-the-art methods in terms of classification accuracy, with the advantage of having considerably lower computational complexity and number of parameters than the backbone.

A preliminary version of this work was accepted for publication at the 22\({}^{nd}\) International Conference on Image Analysis and Processing (ICIAP 2023), where we presented our intersection parameter-sharing loss function and manually selected the importance weight of the loss function. In this work, we add several innovations. We introduce two new parameter-sharing loss functions and use a strategy that enables learning the importance weight of these losses as parameters of the model. These new strategies were crucial to increasing the performance of our method. We also added more detailed experiments in the form of ablation studies and additional analysis. Finally, we included additional comparisons with more state-of-the-art methods.
## II Related Work

Previous approaches to adapt an existing model to a new domain used strategies like finetuning and pre-training, but faced the problem of catastrophic forgetting, in which the new domain is learned but the old one is forgotten [9]. More recent MDL approaches usually leverage a pre-trained model as a backbone. A considerable amount of the parameters from the backbone are usually frozen and shared for all the domains, while attempting to learn a limited and much lower amount of new domain-specific parameters [5]. Approaches mostly differ from each other according to the manner in which the domain-specific parameters are designed, for example, domain-specific residual blocks and binary masks [5].

For methods that use residual blocks to learn new domains, an example is the work of Rebuffi et al. [8], which adds domain-specific parameters to the ResNet network in the form of serial residual adapter modules; another is the extended version presented in [10], which proposes switching to parallel residual adapters. These modifications lead to an increase in accuracy and also a reduction in domain-specific parameters.

Following a different path, some works use binary masks to prune different convolutional filters of the network for each domain, like the Piggyback method proposed by Mallya et al. [11]. The mask is initially learned with real values during training and then is thresholded to obtain binary values. At test time, the learned binary mask is multiplied by the weights of the convolutional layer, keeping the value of some of them and setting the others to zero, generating a selection of different weights for each domain learned. Expanding on this idea, Mancini et al. [12] also make use of masks. However, unlike Mallya et al. [11], who perform a multiplication, their approach learns an affine transformation of the weights through the use of the mask and some extra parameters. This work is further extended in [13] by using a more general affine transformation and combining it with the strategy used by Mallya et al. [11]. Focusing on increasing the accuracy with masks, Chattopadhyay et al. [14] propose a soft-overlap loss to encourage the masks to be domain-specific by minimizing the overlap between them. They were motivated by the fact that most domain generalization methods focus mainly on domain-invariant features, but domains usually have unique characteristics.

Fig. 1: The multi-domain learning (MDL) problem, where a pre-trained model is adapted to handle multiple new domains. In standard adapters, the amount of parameters from the domain-specific models (indicated in colored \(C\)) is equal to or greater than the backbone model (due to the mask represented in black). Budget-Aware Adapters can reduce the number of parameters required for each domain (unused parameters are denoted in gray). However, the whole model is needed at test time if handling distinct domains (colored areas share few parameters). Our model encourages different domains to use the same parameters (colored areas share most of the parameters). Thus, when handling multiple domains at test time, the unused parameters can be pruned without affecting the domains.

The works mentioned so far mainly focused on improving accuracy while attempting to add a small number of new parameters to the model, but they do not take into consideration the computational cost and memory consumption, making their utilization on resource-limited devices difficult [15].
Trying to address that, recent works have attempted to tackle the multi-domain learning problem while taking into account resource constraints. Yang et al. [15] proposed the Dynamic Additive Attention Adaption (DA\({}^{3}\)) with the objective of improving the training time on edge devices by reducing the activation memory used by the network. The authors achieve this goal by only updating the trainable parameters that have an additive relationship with other weights or masks. Calculations for additive relationships during backpropagation can be done independently of the activation feature maps from previous layers, which is not true for multiplicative relationships [15]. Therefore, it is not necessary to store the activations of the entire network during training.

Regarding parameter sharing, Wallingford et al. [16] proposed the Task Adaptive Parameter Sharing (TAPS), which learns to share layers of the network for multiple tasks while having a sparsity hyperparameter \(\lambda\) defined by the user. This method learns which layers should be task-specific during the training of the model. If a layer is defined as task-specific, a residual is learned, that is, a perturbation to be added to the weights of the layer. The sparsity hyperparameter \(\lambda\) controls the number of shared layers, where higher values encourage more layers to be shared among tasks. Although this method lessens the amount of additional domain-specific parameters, it still always has considerably more parameters than the backbone model for a single domain.

Berriel et al. [5] proposed Budget-Aware Adapters (BA\({}^{2}\)), which are added to a backbone model, enabling it to learn new domains while limiting the computational complexity according to a user's budget. The BA\({}^{2}\) modules are similar to the approach from Mallya et al. [11], that is, masks are applied to the convolutional layers of the network, selecting a subset of filters to be used in each domain. The masks are made of real values that are binarized during the forward pass but are used as real values during backpropagation. The network is encouraged to use fewer filters per convolutional layer than a user-defined budget; this is implemented as a constraint on the loss function, which is optimized by constructing a generalized Lagrange function. Also, the parameters of the batch normalization layers are domain-specific, since they perform poorly when shared.

This method and other continual learning strategies can reduce the number of parameters for a single domain. However, these methods usually load the relevant parameters for the desired domain at test time. In order to load them for each domain of interest, it would be necessary to keep all the parameters stored in the device so that the desired ones are available. This way, the model does not fit the user's needs, consuming more memory and taking more time to load, which might make it difficult to use in environments with limited computational resources. With this motivation, we propose our method, which encourages the sharing of parameters and is able to effectively prune the model, reducing both the computational complexity and the amount of parameters while handling all the domains.

## III Pruning a Multi-Domain Model

This work was built upon the BA\({}^{2}\) modules from Berriel et al. [5] and proposes a new version that allows the pruning of unused weights at test time.
As a result, the proposed method is able to obtain a pruned model that handles multiple domains while having lower computational complexity and fewer parameters than even the backbone model for a single domain. The pruned version is able to keep a similar classification performance while considering optimizations that are paramount for devices with limited resources. Our user-defined budget allows the model to fit the available resources in a wider range of environments. To achieve our goals, we added an extra loss function to BA\({}^{2}\) in order to encourage parameter sharing among domains and prune the weights that are not used by any domain. It was also necessary to train simultaneously on all the domains to be able to handle them all together at test time (see Figure 2 for an overview).

Fig. 2: Overview of our strategy for sharing parameters among domains. Our parameter-sharing loss function is calculated over a combination of the masks from all the domains and is used to encourage the sharing of parameters between them. The parameters that are not used by any domain (white squares) can be pruned, reducing the number of parameters and computational cost of the model. \(\circ\) represents the element-wise multiplication between a binary mask of a domain and the kernel weights, and \(\oplus\) represents the union of the weights used by each domain. Colors represent data (i.e., weights, masks, etc.); therefore, the colored squares denote both the input data for each operation as well as its resulting output.

### _Problem Formulation_

The main goal of MDL is to learn a single model that can be used in different domains. One possible approach is to have a fixed pre-trained backbone model with frozen weights that are shared among all domains, while learning only a few new domain-specific parameters. Equation 1 describes this approach, where \(\Psi_{0}\) is the pre-trained backbone model that, when given input data \(x_{0}\) from the domain \(X_{0}\), returns a class from the domain \(Y_{0}\), considering \(\theta_{0}\) as the model's weights. Our goal is to have a model \(\Psi_{d}\) for each domain \(d\) that attributes classes from the domain \(Y_{d}\) to inputs \(x_{d}\) from the domain \(X_{d}\) while keeping the \(\theta_{0}\) weights from the backbone model and learning as few domain-specific parameters \(\theta_{d}\) as possible.

\[\Psi_{0}(x_{0};\theta_{0}):X_{0}\to Y_{0} \tag{1}\] \[\Psi_{d}(x_{d};\theta_{0},\theta_{d}):X_{d}\to Y_{d}\]

Our starting point was the BA\({}^{2}\)[5] modules, which are associated with the convolutional layers of the network, enabling them to reduce their complexity according to a user-defined budget. Equation 2 describes one channel of the output feature map \(m\) at the location \((i,j)\) of a convolutional layer, where \(g\) is the activation function, \(K\in\mathbb{R}^{(2K_{H}+1)\times(2K_{W}+1)\times C}\) is the kernel weights with height \(2K_{H}+1\), width \(2K_{W}+1\), and \(C\) input channels, and \(I\in\mathbb{R}^{H\times W\times C}\) is the input feature map with height \(H\), width \(W\), and \(C\) channels.

\[m(i,j)=g(\sum_{c=1}^{C}\phi_{c}(i,j)) \tag{2}\] \[\phi_{c}(i,j)=\sum_{h=-K_{h}}^{K_{h}}\sum_{w=-K_{w}}^{K_{w}}K(h,w,c)I(i-h,j-w,c)\]

Berriel et al. [5] proposed to add a domain-specific mask composed of \(C\) switches \(s_{c}\), one for each input channel, as shown in Equation 3. At training time, \(s_{c}\in\mathbb{R}\), while at test time they are thresholded to binary values.
When \(s_{c}=0\), the weights \(K_{c}\) (i.e., the filters for the \(c\)-th input channel for a given output channel) can be removed from the computational graph, effectively reducing the computational complexity of the convolutional layers.

\[m(i,j)=g(\sum_{c=1}^{C}s_{c}\phi_{c}(i,j)) \tag{3}\]

The model is trained by minimizing the total loss \(L_{total}\), which is composed of the cross-entropy loss \(L\) and a budget loss \(L_{B}\), as shown in Equation 4, where \(\beta\in[0,1]\) is a user-defined budget hyperparameter that limits the amount of weights in each domain individually, \(\theta_{d}^{\beta}\) are the domain-specific parameters for the budget \(\beta\) and domain \(d\), \(\bar{\theta}_{d}^{\beta}\) is the mean value of the switches over all convolutional layers, and \(\lambda\) is the Karush-Kuhn-Tucker (KKT) multiplier.

\[L_{total}=L(\theta_{0},\theta_{d}^{\beta})+L_{B}(\theta_{d}^{\beta},\beta) \tag{4}\]

The budget loss is given by \(L_{B}(\theta_{d}^{\beta},\beta)=\max(0,\lambda(\bar{\theta}_{d}^{\beta}-\beta))\). When the constraint \(\bar{\theta}_{d}^{\beta}\leq\beta\) is satisfied, \(\lambda=0\); otherwise, the optimizer increases the value of \(\lambda\) to boost the impact of the budget.

### _Sharing Parameters and Pruning Unused Ones_

Although BA\({}^{2}\) can reduce the computational complexity of the model, it cannot reduce the number of parameters necessary to handle all the domains together. Switches \(s_{c}\) can only be pruned at test time when they are zero for _all_ domains, but they, in fact, assume different values if not forced to do so. For this reason, we introduced an additional parameter-sharing loss \(L_{PS}\) into \(L_{total}\), as shown in Equation 5, where \(N\) is the number of domains and \(\theta_{k}^{\beta}\) for \(k\in[1,...,N]\) are the domain-specific parameters (switches or mask) for each domain.

\[L_{total}=L(\theta_{0},\theta_{d}^{\beta})+L_{B}(\theta_{d}^{\beta},\beta)+L_{PS}(\theta_{1}^{\beta},...,\theta_{N}^{\beta},\beta) \tag{5}\]

The parameter-sharing loss \(L_{PS}\) calculates the intersection of all the domains' masks and encourages it to increase up to the specified budget. Since the domain-specific weights from all the domains are required by this loss component, it is necessary to train on all of them simultaneously. Finally, the switches \(s_{c}\) and the associated kernel weights \(K_{c}\) can be pruned. It must be noted that the average sparsity over all domains may be higher than \(1-\beta\), since not all parameters are shared over all domains. We proposed and tested three different parameter-sharing loss functions \(L_{PS}\): \(L_{PS}^{Int}\), which is calculated using the intersection between the masks of different domains; \(L_{PS}^{Union}\), which uses the union of the masks; and \(L_{PS}^{Jaccard}\), which uses the Jaccard similarity coefficient. In order to make these loss functions differentiable, the threshold operations are replaced by identity functions on backpropagation, following [11, 5].
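A minimal PyTorch sketch of such a switched convolution with straight-through binarization is shown below; the zero threshold, the switch initialization, and the sharing of switches across output channels are simplifying assumptions of ours, not the exact BA\({}^{2}\) implementation.

```python
import torch
import torch.nn as nn

class BudgetAwareConv2d(nn.Module):
    """Conv2d whose input channels are gated by per-domain switches,
    implementing the gating of Eq. (3) as conv(x * s)."""

    def __init__(self, in_ch: int, out_ch: int, k: int, num_domains: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        # One real-valued switch per input channel and domain.
        self.switches = nn.Parameter(torch.full((num_domains, in_ch), 0.01))

    def forward(self, x: torch.Tensor, domain: int) -> torch.Tensor:
        s = self.switches[domain]
        # Straight-through: hard {0,1} values forward, identity gradient back.
        s_bin = (s > 0).float() + s - s.detach()
        return self.conv(x * s_bin.view(1, -1, 1, 1))
```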
### _Sharing Parameters and Pruning Unused Ones_

Although BA\({}^{2}\) can reduce the computational complexity of the model, it cannot reduce the number of parameters necessary to handle all the domains together. Switches \(s_{c}\) can only be pruned at test time when they are zero for _all_ domains, but they, in fact, assume different values if not forced to do so. For this reason, we introduced an additional parameter-sharing loss \(L_{PS}\) into \(L_{total}\), as shown in Equation 5, where \(N\) is the number of domains and \(\theta_{k}^{\beta}\) for \(k\in[1,...,N]\) are the domain-specific parameters (switches or mask) for each domain. \[L_{total}=L(\theta_{0},\theta_{d}^{\beta})+L_{B}(\theta_{d}^{\beta},\beta)+L_{PS}(\theta_{1}^{\beta},...,\theta_{N}^{\beta},\beta) \tag{5}\] The parameter-sharing loss \(L_{PS}\) computes the intersection of all the domains' masks and encourages it to increase up to the specified budget. Since the domain-specific weights of all the domains are required by this loss component, it is necessary to train on all of them simultaneously. Finally, the switches \(s_{c}\) and the associated kernel weights \(K_{c}\) can be pruned. It must be noted that the average sparsity over all domains may be higher than \(1-\beta\), since not all parameters are shared over all domains. We propose and test three different parameter-sharing loss functions: \(L_{PS}^{Int}\), calculated using the intersection between the masks of different domains; \(L_{PS}^{Union}\), which uses the masks' union; and \(L_{PS}^{Jaccard}\), which uses the Jaccard similarity coefficient. In order to make these loss functions differentiable, the threshold operations are replaced by identity functions on backpropagation, following [11, 5].

#### III-B1 Parameter-Sharing Intersection Loss

In this loss function, we penalize the model when the intersection of the masks \(\theta_{k}^{\beta}\) from all the domains \(k\in[1,...,N]\) has fewer switches than the maximum amount allowed by the budget, discouraging the use of different parameters by the domains. The intersection is calculated using element-wise multiplication between the binary masks of all the domains. Equation 6 describes this loss function, where \(M\) is the total number of switches, \(\lambda_{PS}^{\beta}\) is a weight that defines the importance of this loss component for the budget \(\beta\), and \([x]_{+}=\max\{0,x\}\). \[L_{PS}^{Int.}(\theta_{1}^{\beta},...,\theta_{N}^{\beta},\beta)=\Bigg{[}\lambda_{PS}^{\beta}(1-\frac{|\theta_{1}^{\beta}\cap\theta_{2}^{\beta}\cap...\cap\theta_{N}^{\beta}|}{M\beta})\Bigg{]}_{+} \tag{6}\]

#### III-B2 Parameter-Sharing Union Loss

In this loss function, the model is penalized when the union of the masks \(\theta_{k}^{\beta}\) from all the domains \(k\in[1,...,N]\) has more switches than the amount allowed by the budget \(\beta\). This way, the loss function directly discourages all the domains jointly from using more parameters than the budget allows. Equation 7 describes our parameter-sharing union loss \(L_{PS}^{Union}\). In order to calculate the union (Equation 8) between two binary masks \(\theta_{i}^{\beta}\) and \(\theta_{j}^{\beta}\) and make it differentiable, we followed Rahman et al. [17], where \(\times\), \(+\) and \(-\) denote element-wise multiplication, addition and subtraction. \[L_{PS}^{Union}(\theta_{1}^{\beta},...,\theta_{N}^{\beta},\beta)=\Bigg{[}\lambda_{PS}^{\beta}(\frac{|\theta_{1}^{\beta}\cup\theta_{2}^{\beta}\cup...\cup\theta_{N}^{\beta}|}{M}-\beta)\Bigg{]}_{+} \tag{7}\] \[\theta_{i}^{\beta}\cup\theta_{j}^{\beta}=\theta_{i}^{\beta}+\theta_{j}^{\beta}-\theta_{i}^{\beta}\times\theta_{j}^{\beta} \tag{8}\]

#### III-B3 Parameter-Sharing Jaccard Loss

Chattopadhyay et al. [14] proposed to use the Jaccard similarity coefficient [18] to encourage masks to be domain-specific, sharing fewer parameters, which is the opposite of our objective. Motivated by this, we propose a parameter-sharing Jaccard loss function that uses the complement of the Jaccard index in order to encourage the sharing of parameters. This loss function does not take the budget \(\beta\) as a parameter directly; thus, the budget loss \(L_{B}\) needs to control the budget by itself. \[L_{PS}^{Jaccard}(\theta_{1}^{\beta},...,\theta_{N}^{\beta})=\Bigg{[}\lambda_{PS}^{\beta}(1-\frac{|\theta_{1}^{\beta}\cap\theta_{2}^{\beta}\cap...\cap\theta_{N}^{\beta}|}{|\theta_{1}^{\beta}\cup\theta_{2}^{\beta}\cup...\cup\theta_{N}^{\beta}|})\Bigg{]}_{+} \tag{9}\]
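As a reference, a minimal sketch (ours, not the released codebase) could compute Equations 6-9 directly from the thresholded binary masks, with gradients flowing through the straight-through estimator described above:

```python
import torch


def mask_intersection(masks):
    """Element-wise AND of a list of binary masks."""
    return torch.stack(masks).prod(dim=0)


def mask_union(masks):
    """Differentiable union (Eq. 8): a U b = a + b - a*b, folded over all masks."""
    union = masks[0]
    for m in masks[1:]:
        union = union + m - union * m
    return union


def ps_intersection_loss(masks, beta, lam_ps):
    """Eq. 6: penalize when |intersection| falls below M * beta."""
    M = masks[0].numel()
    return torch.clamp(lam_ps * (1 - mask_intersection(masks).sum() / (M * beta)), min=0.0)


def ps_union_loss(masks, beta, lam_ps):
    """Eq. 7: penalize when |union| / M exceeds the budget beta."""
    M = masks[0].numel()
    return torch.clamp(lam_ps * (mask_union(masks).sum() / M - beta), min=0.0)


def ps_jaccard_loss(masks, lam_ps):
    """Eq. 9: one minus the Jaccard index of all the domain masks."""
    jaccard = mask_intersection(masks).sum() / mask_union(masks).sum()
    return torch.clamp(lam_ps * (1 - jaccard), min=0.0)
```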
## IV Experiments and Results

In this section, we present the experiments that were carried out and their results. First, we describe the experimental setup in detail. Then, we present the ablation studies we made in order to train simultaneously in multiple domains, tune hyperparameters, and check the sparsity obtained for all the domains together and individually. Next, the results on the two well-known benchmarks, the Visual Decathlon Challenge and the ImageNet-to-Sketch setting, are provided together with discussions. Finally, we perform and report the results of an additional analysis verifying the distribution of pruned weights per layer, the shared sparsity between different budgets, and the mean accuracy and standard deviation over 5 runs of our best model.

### _Experimental Setup_

Our approach was validated on two well-known MDL benchmarks, the Visual Decathlon Challenge [8] and the ImageNet-to-Sketch setting. The Visual Decathlon Challenge comprises classification tasks on ten diverse, well-known image datasets from different visual domains: ImageNet, Aircraft, CIFAR-100, Daimler Pedestrian (DPed), Describable Textures (DTD), German Traffic Signs (GTSR), VGG-Flowers, Omniglot, SVHN, and UCF-101. Such visual domains are very different from each other, ranging from people, objects, and plants to textural images. The ImageNet-to-Sketch setting has been used in several prior works, being the union of six datasets: ImageNet, VGG-Flowers, Stanford Cars, Caltech-UCSD Birds (CUBS), Sketches, and WikiArt [11]. These domains are also very heterogeneous, having a wide range of different categories, from birds to cars, or art paintings to sketches [5]. In order to evaluate the classification performance, we use the accuracy on each domain and the S-score [8] metric. Proposed by Rebuffi et al. [8], the S-score metric rewards methods that have good performance over all the domains compared to a baseline, and it is given by Equation 10: \[S=\sum_{d=1}^{N}\alpha_{d}\max\{0,Err_{d}^{max}-Err_{d}\}^{\gamma_{d}} \tag{10}\] where \(Err_{d}\) is the classification error obtained on the dataset \(d\), \(Err_{d}^{max}\) is the maximum allowed error, from which points are no longer added to the score, and \(\gamma_{d}\) is a coefficient to ensure that the maximum possible \(S\) score is 10,000 [8]. In addition to the S-score, we also report the classification accuracy on each domain and the mean accuracy over all of them. To assess the computational cost of a model, we consider its number of parameters and its computational complexity. For the number of parameters, we measure their memory usage, excluding the classifier, encoding float numbers in 32 bits and the mask switches in 1 bit. For the computational complexity, we used the THOP library to calculate the amount of multiply-accumulate operations (MACs) for our approach (Footnote 1). For BA\({}^{2}\) we reported the values from [5], and we also contacted the authors to confirm that the same measurement methods were used. Footnote 1: We follow Berriel et al. [5] and report results in FLOPs (1 MAC = 2 FLOPs). Footnote 2: TAPS implementation available at: [https://github.com/MattWallingford/TAPs](https://github.com/MattWallingford/TAPs), as of August 2023. For TAPS [16], we used the implementation made available by the authors (Footnote 2). For the ImageNet-to-Sketch setting, we trained the available models and reported the results. For the Visual Domain Decathlon, we needed to adapt the code to run on the benchmark, given that the original code did not make it possible. In order to do so, we followed what is described in [16] and contacted the authors for more details. We tested both the Wide-ResNet-28 and the ResNet-26, since Wallingford et al. [16] comment that they are equivalent and do not specify which one was used. For the final experiments, the pre-trained ResNet-26 from Rebuffi et al. [8] was used as the backbone, since it is the one with results most similar to what was reported in the original TAPS paper. For these reasons, there are some differences between the results reported in [16] and what we obtained, but this process was necessary to guarantee that the same methods were used to measure the number of parameters and the computational complexity.
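For reference, the S-score of Equation 10 reduces to a few lines of code; the per-domain coefficients \(\alpha_{d}\), \(\gamma_{d}\), and maximum errors \(Err_{d}^{max}\) are fixed by the benchmark definition [8], so they are plain inputs here:

```python
def s_score(err, err_max, gamma, alpha):
    """Decathlon S-score (Eq. 10): rewards beating a per-domain baseline error.

    All arguments are same-length sequences indexed by domain; errors lie in [0, 1].
    """
    return sum(a * max(0.0, e_max - e) ** g
               for e, e_max, g, a in zip(err, err_max, gamma, alpha))
```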
In the ablation studies, we also report the sparsity, which represents the mean percentage of filters that are not used in each convolutional layer by each domain. We also report the sparsity for all the domains together, representing the percentage of filters that are not used by any domain and can be pruned. Similar to [5], in order to assess the trade-off between effectiveness on the MDL problem and computational efficiency, we consider two variations of the S-score: \(\mathrm{S}_{O}\), the S-score per operation, and \(\mathrm{S}_{P}\), the S-score per parameter. For our methods, we adopted the same experimental protocol of Berriel et al. [5], making the necessary adjustments for our objective of pruning the model. We used the SGD optimizer with momentum of 0.9 for the classifier and the Adam optimizer for the masks. All weights from the backbone are kept frozen; we only train the domain-specific parameters (i.e., classifiers, masks, and batch normalization layers), and the mask switches were initialized with the value of \(10^{-3}\). Data augmentation with random crop and horizontal mirroring with a probability of 50% was used in the training phase, except for DTD, GTSR, Omniglot, and SVHN, where mirroring did not improve results or was harmful. For testing, we used one crop for datasets with images already cropped (Stanford Cars and CUBS), five crops (center and 4 corners) for the datasets without mirroring, and ten crops for the ones with mirroring (5 crops and their mirrors). For the Visual Domain Decathlon, we used the Wide ResNet-28 [19] as the backbone, training it for 60 epochs with a batch size of 32 and learning rates of \(10^{-3}\) for the classifier and \(10^{-4}\) for the masks. Both learning rates are decreased by a factor of 10 at epoch 45. For the ImageNet-to-Sketch setting, the ResNet-50 was used as the backbone, training for a total of 45 epochs with a batch size of 12 and learning rates of 5\(\times 10^{-4}\) for the classifier and 5\(\times 10^{-5}\) for the masks, dividing the learning rates by 10 at epochs 15 and 30. Unlike Berriel et al. [5], we needed to train all the domains simultaneously, since we want to encourage the sharing of weights among them. In order to do so, we run one epoch of each dataset in a round-robin fashion and repeat this process until the desired number of epochs is reached for each dataset; a sketch of this loop is given below. Experiments were run using V100 and GTX 1080 TI NVIDIA GPUs, Ubuntu 20.04, CUDA 11.6, and PyTorch 1.12.
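The simultaneous training loop can be summarized as follows. This is an illustrative sketch rather than our exact training script: `model.masks()` (returning the list of per-domain switch tensors), the loss helpers from the previous sketches, and the fixed loss weights are simplified stand-ins.

```python
import random
import torch.nn.functional as F


def train_round_robin(model, opt_classifier, opt_masks, loaders,
                      epochs, beta, lam, lam_ps):
    """One epoch per domain per round, in random order, for `epochs` rounds."""
    for _ in range(epochs):
        order = list(range(len(loaders)))
        random.shuffle(order)
        for d in order:                       # round-robin over the domains
            for images, labels in loaders[d]:
                opt_classifier.zero_grad()
                opt_masks.zero_grad()
                logits = model(images, domain=d)
                loss = F.cross_entropy(logits, labels)                    # L
                loss = loss + budget_loss(model.masks()[d], beta, lam)    # L_B
                loss = loss + ps_union_loss(model.masks(), beta, lam_ps)  # L_PS
                loss.backward()
                opt_classifier.step()         # SGD for the classifier
                opt_masks.step()              # Adam for the mask switches
```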
### _Ablation Studies_

In this section, we include ablation studies made while developing our method, providing more insights into the impact and robustness of some components of our proposal. Section IV-B1 discusses the results of training our approach simultaneously on all the domains without our loss function, comparing with the original BA\({}^{2}\), which trains independently on each domain. In Section IV-B2, we add our parameter-sharing loss to the model and test different values for the hyperparameter \(\lambda_{PS}^{\beta}\), which weighs the contribution of our loss term. Finally, Section IV-B3 presents the sparsity values for each domain individually and for all the domains together. The results of our ablation studies are reported on the validation set of the Visual Domain Decathlon.

#### IV-B1 Training Simultaneously in Multiple Domains

Table I presents the results from our run of the original BA\({}^{2}\) method [5], where each domain is trained independently, and from an initial version of our approach, in which all domains are trained simultaneously but without the parameter-sharing loss. As can be seen, there was a small decrease in the mean accuracy, of up to 2.4 percent, after changing the training procedure to be simultaneous in all the domains. This indicates that domains can affect each other during the training process and that small changes in the training procedure can have effects on the results. Nevertheless, it is necessary to keep this procedure in order to be able to use our loss function to share parameters among different domains, since it demands information from all the domains to work. We also tested freezing the weights of all domains except for the one from the input data, but our loss function was not able to encourage the sharing of parameters in this scenario. Another aspect we tested was different strategies for performing the simultaneous training, such as having one batch of each dataset, mixed batches with data from all domains, and a round-robin approach where we run one epoch of each dataset in a random order. We chose to use and report only the latter, as it was slightly faster and the accuracy of all three was similar.

#### IV-B2 Defining the Weight of the Parameter-Sharing Loss Function

After evaluating the training of our method on all the domains simultaneously, we added our parameter-sharing loss function to it. In order to do so, we needed to define an appropriate value for the weight \(\lambda_{PS}^{\beta}\), since it defines the impact of our parameter-sharing loss on the total loss. We tested two strategies to achieve this objective. Initially, we performed a grid search with the fixed values of 1.0, 0.5, 0.25, and 0.125. Since some of our proposed loss functions did not achieve good results with this method, and manually testing a wider array of values would have a huge computational cost, we also tested setting \(\lambda_{PS}^{\beta}\) as a parameter of the model, learning it jointly with the classification task. Table II shows the mean accuracy and sparsity over all domains obtained by testing our three parameter-sharing loss functions while manually selecting different values for \(\lambda_{PS}^{\beta}\). As can be seen, for most values of \(\lambda_{PS}^{\beta}\) and \(\beta\), the Jaccard loss (\(L_{PS}^{Jaccard}\)) obtained the best accuracy, followed by the Union loss and the Intersection loss. Although the Jaccard and Union losses obtained better accuracy than the Intersection loss, their sparsity over all the domains was lower. The Union loss obtained at best 1.8\(\%\), a very low sparsity for every configuration tested; this way, it would not be possible to prune the model and meaningfully reduce its cost. The Jaccard loss function obtained better results, but they were still low when compared to the Intersection loss, which obtained the best trade-off between accuracy and sparsity. Comparing the different values of \(\lambda_{PS}^{\beta}\), the accuracy was similar. For this reason, we selected as the best model the one that obtained the highest sparsity. For the Intersection loss, higher values of \(\lambda_{PS}^{\beta}\) obtained slightly higher sparsity, with the best one being \(\lambda_{PS}^{\beta}\)=1.0.
For the Union loss, as mentioned before, the sparsity was low for every manually selected value, but the best one was \(\lambda_{PS}^{\beta}\)=0.250. For the Jaccard loss, sparsity seems to increase as \(\lambda_{PS}^{\beta}\) decreases, with the best model having \(\lambda_{PS}^{\beta}\)=0.125. In Table III, we compare the results of the best manually selected value of \(\lambda_{PS}^{\beta}\) for each loss function with our approach of learning \(\lambda_{PS}^{\beta}\) as a parameter of the model. When learning \(\lambda_{PS}^{\beta}\) with the Jaccard loss, the models reach lower sparsity values compared to the best manually selected \(\lambda_{PS}^{\beta}\), obtaining overall low sparsity for all the budgets \(\beta\). While for the Intersection loss there was an increase in sparsity with the learned values of \(\lambda_{PS}^{\beta}\), a decrease in accuracy can also be observed. Finally, for the Union loss, the accuracy was similar between the best manually selected \(\lambda_{PS}^{\beta}\) and the learned one, but the sparsity of the learned \(\lambda_{PS}^{\beta}\) was substantially higher. In this way, the Union loss function with learned \(\lambda_{PS}^{\beta}\) obtained the best ratio between accuracy and sparsity and, for this reason, we selected it as our best configuration for the next experiments presented in this work.

#### IV-B3 Sparsity on the Visual Domain Decathlon

Finally, we conducted a deeper analysis of the sparsity obtained by our method, comparing it to that obtained by the original BA\({}^{2}\), as shown in Table IV. The individual sparsity values for each domain indicate the mean amount of parameters per layer that are not used when predicting for that domain, which is the main reason for the reduction in the computational complexity of the model. The sparsity over all domains together indicates the mean amount of parameters per layer that are not used by any domain. These parameters can be pruned from the model and are the reason for the parameter reduction achieved by our approach. The values for ImageNet are not reported since the pre-trained backbone model is used. Observing the values of the individual sparsity per domain, it can be seen that our method obtained sparsity higher than the complement of the budget for every domain and budget. Compared to the original BA\({}^{2}\), our strategy obtained higher sparsity for most domains and budgets, showing that our loss function leads to a decrease in computational cost for each domain. The sparsity over all the domains shows the key advantage of our method: it is very small for most budgets in the original BA\({}^{2}\), while being considerably higher in our method, allowing for the pruning of the model and the reduction of the number of parameters.

### _Results on the Visual Decathlon Challenge_

After selecting the best loss function and hyperparameter configuration, the model was trained on both the training and validation sets and evaluated on the test set of the Visual Domain Decathlon. The comparison of our results with state-of-the-art methods is shown in Table V. It must be noted that TAPS's sparsity hyperparameter \(\lambda\) decreases the amount of parameters as its value increases, which is the inverse of our budget hyperparameter \(\beta\), which decreases the amount of parameters as its value is reduced.
Also, as mentioned in Section IV-A, the results have some differences from the ones reported in [16] because we needed to replicate the experiments on this benchmark, since the implementation was not made available by the authors. Compared to the baseline strategies, our method was able to vastly outperform the feature-extractor-only baseline and, compared to finetuning, it obtained a higher score for the budgets of \(\beta=\)1.0 and 0.50 and a similar score for \(\beta=\)0.75, but with almost 10 times fewer parameters. TAPS with \(\lambda\) of 0.25 was the method with the second best S-score, losing only to BA\({}^{2}\) with \(\beta=1.0\), but it also used more parameters than BA\({}^{2}\) and our method by a considerable margin, ranging from 3.78\(\times\) to 5.68\(\times\) the size of the backbone model. TAPS always uses considerably more parameters than the backbone model since it needs to learn the residual values that are added to the weights of some convolutional layers for each domain. Moreover, it also has a computational cost slightly higher than the backbone because of that. For these reasons, TAPS only has higher \(S_{O}\) and \(S_{P}\) metrics than the feature-extractor and finetuning baselines, achieving values considerably lower than BA\({}^{2}\) and our strategy. This shows that the main focus of TAPS is different from that of our method and BA\({}^{2}\). The main objective of TAPS is to keep good accuracy while adding a small amount of new parameters per new domain, whereas our work aims to prune the backbone model, reducing its computational complexity and number of parameters to fit a user-defined budget with as small a reduction in accuracy as possible. BA\({}^{2}\) has a focus more similar to ours, since it tries to reduce the computational complexity of the model according to the user budget, but it is not capable of pruning parameters: since all domains together use almost all of them, it always has slightly more parameters than the backbone model. Compared to it, we obtained similar accuracy in most domains, facing drops in accuracy only for some of them. We believe the main reason for this drop in accuracy is the simultaneous training procedure, as we observed a similar drop when switching from individual to simultaneous training without the addition of our loss function (as discussed in Section IV-B1), but we kept it since it is necessary to enable parameter sharing. The domains with the biggest accuracy drops were the smaller ones: Aircraft, DTD, VGG-Flowers, and UCF-101. Other works, like Rebuffi et al. [8, 10], also mention subpar performance on these datasets, identifying the problem of overfitting. The S-score also dropped by up to 453 points for the same reasons. The drop is harsher since the metric was designed to reward good performance across all datasets, and the small datasets we mentioned had subpar performance. Despite facing small drops in accuracy and S-score, our method offers a good trade-off between classification performance and computational cost. When comparing computational complexity (FLOPs in Table V), our method obtained lower complexity than BA\({}^{2}\) for every budget. This happens due to the addition of our loss function, which further incentivizes weights to be discarded. It must also be noted that all our models obtained lower complexity than the value defined by the budget, showing that the budget is a great tool to adapt a backbone model to the resources available to the user.
By comparing the S\({}_{O}\) metric, we can observe that both methods have a good trade-off between computational complexity and S-score, as this metric greatly increases as the budget is reduced, showing that the reduction in computational complexity is considerably greater than the loss in S-score. As expected, our method had a better S\({}_{O}\) for every single budget \(\beta\), since it also has lower computational complexity. The main advantage of our proposed method is the reduction in the number of parameters of the model, as it is, to our knowledge, one of the few methods capable of tackling the multiple-domain learning problem while also reducing the number of parameters in relation to the backbone model. Other methods can reduce the amount of parameters for a single domain, but since the parameters are not shared, the entire model must be kept to handle all domains at test time. As we can see (column Params of Table V), the original BA\({}^{2}\) had an amount of parameters similar to the backbone model, being 3% larger for all budgets. For the budget of \(\beta=\) 1.00 we obtained the same result, while for \(\beta=\) 0.75 we reduced the amount of parameters by 16% compared to the backbone model; for \(\beta\) = 0.50 the reduction was 27%, and for \(\beta\) = 0.25 there were 59% fewer parameters. These results show that our method was successful in encouraging the sharing of parameters among domains and that this approach can lead to considerable reductions in the amount of parameters of the network. The S\({}_{P}\) metric provides additional evidence for this finding, as for the budgets of \(\beta\) = 0.75, 0.50, and 0.25 our method was able to outperform all configurations of BA\({}^{2}\) due to the considerable reduction in the amount of parameters.

### _Results on the ImageNet-to-Sketch Setting_

Table VI compares our best method with state-of-the-art works on the ImageNet-to-Sketch setting. In the comparison with the baseline methods, as can be seen, TAPS, BA\({}^{2}\), and our method were all able to vastly outperform the "feature extractor only" model in all measured metrics, while no method was able to obtain a better S-score than finetuning individual models for each domain, showing that ImageNet-to-Sketch is a challenging benchmark for multi-domain learning. TAPS was the method with the best S-score and overall accuracy, but once again at the cost of using considerably more parameters than the backbone (ranging from 2.91\(\times\) to 4.32\(\times\) according to \(\lambda\)). This way, its \(S_{O}\) and \(S_{P}\) metrics are again considerably worse than those of BA\({}^{2}\) and our method, reinforcing that although TAPS also takes the amount of parameters into consideration in multi-domain/task learning, its objective is very different from ours, since it always needs to increase the amount of parameters in relation to the backbone. Compared to BA\({}^{2}\), our method achieves a lower S-score for the same \(\beta\), but in most cases our model with budget \(\beta\) achieves a slightly higher S-score than BA\({}^{2}\) with the next lower \(\beta\). For example, our method with \(\beta\)=1.0 achieves an S-score 108 points lower than BA\({}^{2}\) with \(\beta\)=1.0, but 151 points higher than BA\({}^{2}\) with \(\beta\)=0.75.
For our method with \(\beta\)=0.50, the same pattern is repeated: it achieves an S-score 254 points lower than BA\({}^{2}\) with \(\beta\)=0.50, but 3 points higher than BA\({}^{2}\) with \(\beta\)=0.25. The only exception is our method with \(\beta\)=0.75, where the S-score achieved was lower than BA\({}^{2}\) with \(\beta\) of 0.75 and 0.50. These results show that although our method obtained slightly lower classification performance, it is still competitive, with the advantage of also being able to prune the model, reducing the number of parameters according to the user budget, something that \(BA^{2}\) is not capable of doing. With regard to the S\({}_{O}\) metric, our method obtained the best scores. For \(\beta\)=1.00, our method achieves a higher S\({}_{O}\) than BA\({}^{2}\) with \(\beta\) of 1.0, 0.75, and 0.25, and for \(\beta\)=0.75, our method achieves an S\({}_{O}\) higher than every configuration of BA\({}^{2}\). This shows that our method is capable of obtaining a better trade-off between S-score and computational complexity than BA\({}^{2}\) in most cases. Comparing the S\({}_{P}\) metric, since BA\({}^{2}\) cannot prune the model, S\({}_{P}\) is also reduced as \(\beta\) decreases. In most cases for our method, as the budget decreases, S\({}_{P}\) increases, and for \(\beta\) of 0.50 and 0.25, we outperform all configurations of BA\({}^{2}\). The results obtained in this benchmark show that our method is capable of reducing both the number of parameters and the computational complexity to fit a user budget, while keeping classification performance competitive with other state-of-the-art methods. This offers a good trade-off, as evidenced by the S\({}_{O}\) and S\({}_{P}\) metrics.

### _Additional Analysis_

Figure 3 shows the amount of pruned parameters per layer. Taking into consideration the percentage of weights from each layer that are discarded, a similar percentage of parameters was pruned for most layers. However, since later layers have a considerably greater number of parameters, more weights are pruned there. Table VII shows the shared sparsity among different budgets of our best model. For reference, the upper bounds for such values are also indicated. Notice that the budgets of \(\beta=0.25\) and \(0.50\) prune many of the same weights, around 26.1% out of a possible 50%. As the budgets increase, the shared sparsity decreases, being 11.9\(\%\) for \(\beta=0.50\) and 0.75, and 10.7\(\%\) for all budgets together. Motivated by these results, we also tested running our experiment incrementally from one budget to the next, starting with \(\beta=1.00\), then transferring the weights and training the model with a restriction of \(\beta=0.75\), and repeating the process sequentially for the budgets of \(\beta=0.50\) and \(\beta=0.25\). However, none of the models was able to increase the sparsity of the network. We believe this occurs because the initial budgets, with smaller restrictions, found local minima, and the next, stricter budget was not able to optimize the sparsity while forced to prune the same weights. To verify the robustness of our method to training stochasticity, we performed 5 runs of our best method. The results can be seen in Table VIII. Overall, the standard deviation was small for both mean accuracy and sparsity, being at most 0.3\(\%\) and 1.0\(\%\), respectively. These results show that our method is robust to variance in the training phase.
## V Conclusions

In this paper, we addressed the multi-domain learning problem while taking into account a user-defined budget for computational resources, a scenario addressed by few works but of vital importance for devices with limited resources. We proposed to prune a single model for multiple domains, making it more compact and efficient. To do so, we encourage the sharing of parameters among domains, allowing us to prune the weights that are not used in any of them, reducing both the computational complexity and the number of parameters to values lower than the original baseline for a single domain. Performance-wise, our results were competitive with other state-of-the-art methods while offering good trade-offs between classification performance and computational cost according to the user's needs. In future work, we intend to evaluate different strategies for encouraging parameter sharing and to test our method on different network models and benchmarks.

Fig. 3: Representation of the sparsity on each convolutional layer for our best method with the budget of \(\beta=0.5\). The bar represents the total amount of parameters of each convolutional layer, while the part colored in red represents the pruned parameters.

## Acknowledgment

This work was supported by the FAPESP-Microsoft Research Institute (2017/25908-6), by the Brazilian National Council for Scientific and Technological Development - CNPq (310330/2020-3, 314868/2020-8), by LNCC via resources of the SDumont supercomputer of the IDeepS project, by the MUR PNRR project FAIR (PE00000013) funded by the NextGenerationEU, and by the EU H2020 project AI4Media (No. 951911).
2306.17848
Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing
Vision transformers (ViTs) have significantly changed the computer vision landscape and have periodically exhibited superior performance in vision tasks compared to convolutional neural networks (CNNs). Although the jury is still out on which model type is superior, each has unique inductive biases that shape their learning and generalization performance. For example, ViTs have interesting properties with respect to early layer non-local feature dependence, as well as self-attention mechanisms which enhance learning flexibility, enabling them to ignore out-of-context image information more effectively. We hypothesize that this power to ignore out-of-context information (which we name $\textit{patch selectivity}$), while integrating in-context information in a non-local manner in early layers, allows ViTs to more easily handle occlusion. In this study, our aim is to see whether we can have CNNs $\textit{simulate}$ this ability of patch selectivity by effectively hardwiring this inductive bias using Patch Mixing data augmentation, which consists of inserting patches from another image onto a training image and interpolating labels between the two image classes. Specifically, we use Patch Mixing to train state-of-the-art ViTs and CNNs, assessing its impact on their ability to ignore out-of-context patches and handle natural occlusions. We find that ViTs do not improve nor degrade when trained using Patch Mixing, but CNNs acquire new capabilities to ignore out-of-context information and improve on occlusion benchmarks, leaving us to conclude that this training method is a way of simulating in CNNs the abilities that ViTs already possess. We will release our Patch Mixing implementation and proposed datasets for public use. Project page: https://arielnlee.github.io/PatchMixing/
Ariel N. Lee, Sarah Adel Bargal, Janavi Kasera, Stan Sclaroff, Kate Saenko, Nataniel Ruiz
2023-06-30T17:59:53Z
http://arxiv.org/abs/2306.17848v1
# Hardwiring ViT Patch Selectivity into CNNs using Patch Mixing

###### Abstract

Vision transformers (ViTs) have significantly changed the computer vision landscape and have periodically exhibited superior performance in vision tasks compared to convolutional neural networks (CNNs). Although the jury is still out on which model type is superior, each has unique inductive biases that shape their learning and generalization performance. For example, ViTs have interesting properties with respect to early layer non-local feature dependence, as well as self-attention mechanisms which enhance learning flexibility, enabling them to ignore out-of-context image information more effectively. We hypothesize that this power to ignore out-of-context information (which we name _patch selectivity_), while integrating in-context information in a non-local manner in early layers, allows ViTs to more easily handle occlusion. In this study, our aim is to see whether we can have CNNs _simulate_ this ability of patch selectivity by effectively hardwiring this inductive bias using Patch Mixing data augmentation, which consists of inserting patches from another image onto a training image and interpolating labels between the two image classes. Specifically, we use Patch Mixing to train state-of-the-art ViTs and CNNs, assessing its impact on their ability to ignore out-of-context patches and handle natural occlusions. We find that ViTs do not improve nor degrade when trained using Patch Mixing, but CNNs acquire new capabilities to ignore out-of-context information and improve on occlusion benchmarks, leaving us to conclude that this training method is a way of simulating in CNNs the abilities that ViTs already possess. We will release our Patch Mixing implementation and proposed datasets for public use. Project page: [https://arielnlee.github.io/PatchMixing/](https://arielnlee.github.io/PatchMixing/)

## 1 Introduction

Convolutional neural networks (CNNs) and Vision Transformers (ViTs) are two dominant deep learning models for computer vision tasks. Although CNNs have established themselves as the go-to approach for many years, the introduction of ViTs has significantly changed the landscape, and they have consistently achieved comparable or superior performance compared to CNNs for key computer vision tasks such as object recognition, object detection, semantic segmentation, and many others. In recent years, a relatively robust literature has developed comparing CNNs and Vision Transformers in terms of overall performance on standard benchmarks, robustness to OOD inputs, robustness to adversarial attacks, and other evaluations [18; 30; 2; 19; 1; 25; 17; 23], as well as analysis work that compares the way both architecture types understand images and how they ultimately arrive at their predictions [25; 23; 20; 22]. We note that one important research topic remains under-explored: how these architectures handle occlusion. There exists work that compares both architectures using simple simulations of occlusion such as patch dropping [20], or occlusion in a simulated environment [25]. Additionally, in work by Pinto et al. [22], they found no clear winner between modern CNNs and ViTs for different robustness tests. In this work, we dive deeply into this specific area and present four main contributions: * We find a previously undiscovered **incontrovertible difference** in performance between modern ViTs and CNNs. ViTs are naturally more robust when _out-of-context information_ is added to an image compared to CNNs.
We call this ability to ignore out-of-context patches: _patch selectivity_. * We revisit **Patch Mixing**, a data augmentation method where patches from other images are introduced into training images and ground-truth labels are interpolated. We show that by training CNNs using Patch Mixing, we _simulate_ the natural ability of ViTs to **ignore out-of-context information**. * We show that models with better patch selectivity tend to be **more robust to natural occlusion**. Specifically, we introduce two new challenging datasets to evaluate the performance of image classifiers under occlusion: the Superimposed Masked Dataset (SMD) and the Realistic Occlusion Dataset (ROD). Moreover, our CNN models trained using Patch Mixing become more robust to occlusion in these, and other, datasets. * We propose **c-RISE** - a contrastive version of the RISE [21] explainability method that allows for agnostic analysis of input sensitivity under occlusion for both CNNs and Transformers. Using c-RISE we are able to measure patch selectivity and show that augmentation using Patch Mixing improves CNN patch selectivity.

## 2 Deep Dive Into Patch Selectivity

Modern CNN and ViT Inductive Biases

Convolutional neural networks (CNNs) are traditionally composed of a series of trainable convolutional layers. Modern CNN architectures such as ConvNeXt [17] differ in many respects, yet still follow a purely convolutional approach. A particularly important change is the use of a patchify stem - this change can both increase the overall receptive field in early layers in modern convnets as opposed to traditional convnets, as well as decrease the strong local dependencies that are created in the early layers of the network, since the patches are non-overlapping. Nevertheless, this and other changes do not completely change the inductive bias of the architecture: the network remains a purely convolutional network that uses square conv filters, has a propensity to more strongly weight proximal evidence, and has relatively small effective receptive fields in early layers.

Figure 1: Patch Mixing augmentation with label smoothing improves the ability of CNNs to handle a multitude of alterations and occlusions, bridging the gap with ViTs.

The Vision Transformer (ViT) [6] is a neural network architecture for image recognition that uses self-attention based Transformer layers. An image is first divided into non-overlapping patches, which are then transformed into embeddings. These embeddings are used as inputs for the Transformer layers. ViTs possess distinct properties and inductive biases when compared to CNNs, some of which are particularly important to highlight.

ViT Early Layer Long-Range Dependence

In CNNs, the receptive field at a specific layer is fully determined by the size of the convolution kernel and the stride of that layer, as well as by the layers that precede the layer of interest. For this reason, given limits on the convolutional kernel size and the stride of the kernel, the receptive field for early CNN layers does not encompass the full image input. In contrast, early layers of ViTs have a large receptive field because they use self-attention, which allows them to attend to any part of the input image beginning at the first layer of the architecture. As a result, ViTs can learn relationships between pixels that are far apart in the input image [23], while CNNs are limited to learning relationships between proximal pixels.
In this way, ViTs have the property of early-layer long-range dependency that is not possible to structurally mimic in CNNs, even with modernized CNN architectures that include patchify stems. In this work we pose the following: **Hypothesis 1**: _Hierarchical attention in ViT-style networks allows them to more easily discount signal from out-of-context information in an image when compared to CNNs, which, due to their structure and inherent inductive biases, have a harder time discounting signal from out-of-context patches._ Specifically, in this work we evaluate this hypothesis using empirical means. This hypothesis has been discussed in the prominent work of Naseer et al. [20], which compares ViT and CNN performance when faced with occlusion. They study occlusion by simulating it using either random or saliency-guided patch dropping in images. In particular, the main conclusion is that ViTs were vastly better at dealing with out-of-context patches. Nevertheless, this study focused on older convnet architectures such as ResNet50, DenseNet121, and VGG19. Modern convnets such as ConvNeXt, proposed in the influential work of Liu et al. [17], possess very different architectures while remaining fully convolutional. There is a relative scarcity of study of these new architectures with respect to occlusion, although recent work [25] proposes to study occlusion for Swin Transformers and ConvNeXt CNNs. Interestingly, they find that new innovations in architecture and training regime make these new convnets much stronger than older convnets such as ResNet50 at ignoring dropped patches, yet still lagging behind ViTs at higher levels of information loss. One important issue to raise is that patch drop is a poor approximation of real-world occlusion, where occluders are usually other objects that have their own shape and texture, which adds another dimension to the problem. The question then remains: _Are ViTs truly better at handling occlusion and discounting signal from out-of-context patches than CNNs?_ We find that **the answer is a resounding yes**. Specifically, when comparing ViTs and modern convnets that have identical parameter counts, FLOPs, and very close ImageNet validation performance, ViTs degrade much less when out-of-context patches are introduced into an image. In Figure 2, we show the accuracy of comparable ConvNeXt and Swin models when out-of-context patches are introduced into test images. We see a much larger decrease in accuracy in ConvNeXt compared to Swin, with a widening gap as information loss increases. This finding is particularly interesting in the context of recent work by Pinto et al. [22], which finds no clear winner in a contest between ConvNeXt and Swin models of different sizes for different robustness tests such as simplicity bias, background bias, texture bias, OOD detection, and other tasks. To the best of our knowledge, we are the first to find an incontrovertible difference between these two classes of models. This experiment is a rough approximation of natural occlusions, where objects or surfaces occlude the main object in an image.
We do, however, hypothesize that networks that can more easily discount signal from out-of-context patches will tend to perform better under naturalistic occlusion: **Hypothesis 2**: _A model with better patch selectivity will tend to perform better under naturalistic occlusion._ In order to test this, we first evaluate the patch selectivity of our trained models, and then extensively test them on four different benchmarks, including two datasets that we propose as contributions: the Superimposed Masked Dataset (SMD) and the Realistic Occlusion Dataset (ROD), which will be described further below. We find that there is indeed a positive correlation between patch selectivity and performance under occlusion, and supply the details in the Experiments section. Finally, we pose the following final hypothesis: **Hypothesis 3**: _A model that is explicitly trained to deal with out-of-context patches using data augmentation will tend to improve at ignoring out-of-context information at test-time._ In our experiments we evaluate this hypothesis and show that using Patch Mixing at training time improves CNN patch selectivity, but, surprisingly, does not improve ViT patch selectivity. We believe this is due to the fact that patch selectivity is already a natural capability of ViTs, whereas CNNs have lesser patch selectivity and attention _bleeds out_ from in-context patches to neighbouring out-of-context patches. By combining verified Hypotheses 2 and 3, we can conclude that CNNs trained using Patch Mixing are more robust to natural occlusions in the real world. We indeed confirm this experimentally.

### Augmentation by Patch Mixing

Previous work has introduced the notion of inserting parts of different images into training images in different manners. CutMix [33] proposes to cut and paste one contiguous rectangle from another image into a training image, and mix the ground truth labels proportionally to the area of each image. Cascante-Bonilla et al. [3] propose random and evolutionary search-guided replacement of training image square patches with patches from another training image, also mixing ground truth labels in proportional fashion. [32] proposes replacing rectangular patches in an image with patches from many other training images, in order to augment small datasets in few-shot learning. Our proposed augmentation is named Patch Mixing. Let \(x\in\mathbb{R}^{H\times W\times C}\) and \(y\) denote the image and its label respectively. We seek to generate an image/label pair \((\tilde{x},\tilde{y})\) by mixing patches from images \(x_{A}\) and \(x_{B}\) while appropriately mixing labels \(y_{A}\) and \(y_{B}\). For this we generate a mask composed of patches \(M\in\{0,1\}^{N\times P^{2}\times C}\), where \((H,W)\) is the resolution of the original image, \(C\) is the number of channels, \((P,P)\) is the resolution of each image patch, and \(N=\frac{HW}{P^{2}}\) is the resulting number of patches. We initialize the elements of this mask to \(0\). We then select \(N_{1}\) patches from this mask, following uniform random sampling, and set the elements of those patches to \(1\). These are the patches that will be replaced in image \(x_{A}\). We select \(N_{1}\) based on a proportion hyperparameter \(r=N_{1}/N\), which represents the proportion of patches that are replaced. Finally, we generate \(\tilde{x}\): \[\tilde{x}=(1-M)\odot x_{A}+M\odot x_{B}. \tag{1}\] Labels \(y_{A}\) and \(y_{B}\) are mixed to generate label \(\tilde{y}\), using the proportion \(r\). The resulting vector is smoothed using label smoothing [27].
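To make the procedure concrete, below is a minimal sketch of Equation (1) with proportional label mixing. This is an illustrative reconstruction rather than our released implementation: the function name and the fixed grid via `patch=32` (giving a 7x7 grid for 224x224 inputs) are assumptions, and at training time the ratio \(r\) is sampled from a beta distribution per batch.

```python
import torch
import torch.nn.functional as F


def patch_mixing(x_a, y_a, x_b, y_b, r, patch=32, num_classes=1000, smoothing=0.1):
    """Replace a fraction r of patches of x_a with patches of x_b and mix labels."""
    B, C, H, W = x_a.shape
    gh, gw = H // patch, W // patch
    n_replace = int(r * gh * gw)
    r_eff = n_replace / (gh * gw)                       # exact proportion used
    mask = torch.zeros(B, 1, gh * gw, device=x_a.device)
    for i in range(B):                                  # uniform random patch choice
        idx = torch.randperm(gh * gw, device=x_a.device)[:n_replace]
        mask[i, 0, idx] = 1.0
    mask = mask.view(B, 1, gh, gw)
    mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    x = (1 - mask) * x_a + mask * x_b                   # Equation (1)
    y = (1 - r_eff) * F.one_hot(y_a, num_classes).float() \
        + r_eff * F.one_hot(y_b, num_classes).float()
    y = y * (1 - smoothing) + smoothing / num_classes   # label smoothing [27]
    return x, y
```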
Our proposed Patch Mixing most resembles one method mentioned in [3], with some important differences in both application scenario and implementation. For the application scenario, their work does not study the effects of Patch Mixing on Transformers, doing so only on CNNs. Moreover, they solely study ResNet and MobileNet architectures, and the method was not applied to modern convnets given the concurrency of [17] and their work. Finally, most evaluations in their work are based on the CIFAR-10 dataset [16], while we evaluate improved networks on four datasets that present different types of occlusion simulations and real-world occlusions. Our Patch Mixing implementation has important differences from [3]. First, we find that in order to recover the strong performance exhibited by modern CNNs on ImageNet, it is imperative to _disable_ random erasing when using Patch Mixing. When both are used simultaneously, information loss is too high, resulting in lower overall performance. Next, our version uses label smoothing [27], which increases performance.

Figure 2: ConvNeXt performance severely decreases as more out-of-context patches are inserted into test images, with Swin proving to be more resilient to this type of occlusion.

We also find that using a more granular grid for patch replacement improves results for modern CNNs - thus we use a 7x7 grid instead of a 4x4 grid. Their work focuses on a guided version of mixing patches using evolutionary search. We find that random patch mixing is less computationally expensive and suffices to evaluate the hypotheses of this work.

### Contrastive RISE (c-RISE) and Patch Selectivity

Petsiuk et al. [21] proposed Randomized Input Sampling for Explanation of Black-box Models (RISE), a method that generates an image heatmap that highlights the importance of pixel evidence in that image for a specific prediction \(y_{\text{pred}}\). This method is a perfect fit for our problem since it is an empirical method that is model agnostic and can be applied to both modern CNNs and ViTs. Specifically, it uses iterative random masking of an image using Monte Carlo sampling, and evaluates the predictions of the model on the masked images to generate an importance map. Unfortunately, RISE is not a contrastive method that generates evidence maps for a specific class, and only that class. This is a direly needed property for us, since occluders can be in the label space of the model, which can cause them to be highlighted as non-specific evidence using traditional RISE. We propose a grey-box modification of RISE called contrastive RISE (c-RISE), where the Monte Carlo equation becomes: \[S_{x,f}(\lambda)\overset{\mathrm{MC}}{\approx}\frac{1}{\mathbb{E}[B]\cdot N_{B}}\sum_{i=1}^{N_{B}}[f(x\odot B_{i})-f^{\prime}(x\odot B_{i})]\cdot B_{i}(\lambda). \tag{2}\] Where \(B_{i}\) is the sampled binary mask, and \(f^{\prime}\) is the classifier \(f\) with the weights of the last fc layer flipped (multiplied by \(-1\)) following the trick proposed in [35]. For more information on c-RISE please refer to the supplementary material. Finally, we present an empirical approximation of patch selectivity using c-RISE, which corresponds to the contrastive importance of the in-context areas of the image. Simply, we sum the parts of the c-RISE importance heatmap that overlap with image patches that are from the original image (and not from the occluder image): \[\mathcal{P}_{f}(x)=\frac{1}{N}\sum S_{x,f}\odot(1-M). \tag{3}\]
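A compact sketch of one way Equations (2) and (3) could be computed is given below; this is our illustrative reading, with \(f\) taken as the scalar score of the predicted class and the patch mask \(M\) upsampled to pixel resolution (the helper names are hypothetical, not released code):

```python
import torch


def c_rise(f, f_flipped, x, rand_masks):
    """Eq. (2): contrastive saliency map via Monte Carlo masking.

    f / f_flipped map a masked image to the scalar score of the target class;
    f_flipped is f with the last fc layer's weights multiplied by -1 [35].
    """
    B = torch.stack(rand_masks)                       # (N_B, H, W) binary masks
    scores = torch.stack([f(x * b) - f_flipped(x * b) for b in rand_masks])
    S = (scores.view(-1, 1, 1) * B).sum(dim=0)
    return S / (B.float().mean() * len(rand_masks))   # 1 / (E[B] * N_B)


def patch_selectivity(S, M, num_patches):
    """Eq. (3): saliency mass on in-context pixels, where M marks occluder patches."""
    return (S * (1 - M)).sum() / num_patches
```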
## 3 Datasets

Realistic Occlusion Dataset (ROD)

The Realistic Occlusion Dataset is the product of a meticulous object collection protocol aimed at collecting and capturing 40+ distinct objects from 16 classes: _banana_, _baseball_, _cowboy hat_, _cup_, _dumbbell_, _hammer_, _laptop_, _microwave_, _mouse_, _orange_, _pillow_, _plate_, _screwdriver_, _skillet_, _spatula_, and _vase_. Images are taken in a bright room with soft, natural light. All objects are captured on a brown wooden table against a solid colored wall. An iPhone 13 Pro ultra-wide camera with a tripod is used to capture images at an elevation of approximately 90\({}^{\circ}\) and a distance of 1 meter from the object. Occluder objects are wooden blocks or square pieces of cardboard, painted red or blue. The occluder object is added between the camera and the main object, and its x-axis position is varied such that it begins at the left of the frame and ends at the right. In total, 1 clean image and 12 occluded images are captured for each object. Each object is measured and the occluder step size is broken up into equal intervals.

Superimposed Masked Dataset (SMD)

We generate three versions of SMD, an occluded ImageNet-1K validation set, as an additional way to evaluate the impact of occlusion on model performance. This experiment used a variety of occluder objects that are not in the ImageNet-1K label space and are unambiguous in relationship to objects that reside in the label space. Two occluder objects for each of the following classes were segmented using Meta's Segment Anything [12]: _airpods_, _virtual reality headset_, _drone_, _graduation cap_, _anatomical heart_, _origami heart_, _skateboard_, _diamonds_ (stones, not in a setting), _Grogu_ (baby yoda), _person_, _popcorn_, _coronavirus_, _bacteriophage_, and _bacteria_. Figure 3 shows examples of images from the SMD datasets with varying levels of occlusion.

## 4 Experiments

Models and Training

The Patch Mixing models are trained from scratch using the original training scripts. The only hyperparameter change made is the removal of random erasing. When augmenting, we set an equal probability of using Mixup, CutMix, or Patch Mixing. For each batch of images, the patching ratio is randomly sampled from a beta distribution. If not specified, experiments are conducted on the ImageNet validation set. Tiny networks were trained on 4 RTX8000 GPUs and Small networks on 4 A6000 GPUs.
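The patch-replacement evaluations reported in the next subsection can be summarized by a sketch like the following. It is ours and simplified: it occludes the first \(n\) grid cells deterministically, whereas positions may be sampled in practice, and `occluder_batch` is a hypothetical tensor of out-of-context images.

```python
import torch


@torch.no_grad()
def patch_attack_accuracy(model, loader, occluder_batch, info_loss, patch=16):
    """Top-1 accuracy when a fixed fraction of patches is replaced out-of-context."""
    correct, total = 0, 0
    for x, y in loader:
        gh, gw = x.shape[2] // patch, x.shape[3] // patch
        n = int(info_loss * gh * gw)              # deterministic information loss
        mask = torch.zeros(gh * gw)
        mask[:n] = 1.0
        mask = mask.view(1, 1, gh, gw)
        mask = mask.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
        x_attacked = (1 - mask) * x + mask * occluder_batch[: x.size(0)]
        correct += (model(x_attacked).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```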
To add further evidence to this, Swin networks do not improve much on average using patch mixing, which suggests that we are supplying an inductive bias that is already present in the architecture. Figure 4: Patch Mixing experiments on tiny and small networks on ImageNet-1K val. _ViTs natively have better patch selectivity than CNNs, yet when we use Patch Mixing augmentation, CNNs have similar patch selectivity to ViTs._ Figure 3: Random examples from our proposed challenging occlusion datasets: SMD (left 3 images) and ROD (right 3 images) datasets. ### Spatial structure invariance Patch Mixing bestows better spatial structure invariance to CNNsThe fundamental architecture of ViTs offers inherent, "out-of-the-box" permutation invariance. We re-implement the patch permutation experiments conducted in [20] and find that, surprisingly, Patch Mixing reduces modern CNNs reliance on spatial structure, resulting in context-independence and robustness to permutations on par with ViT models. In Figure 5 we see that the performance gap between original and Patch Mixing trained ConvNeXt models increases with the shuffle grid size. Conversely, the performance gap between ConvNeXt-T trained with Patch Mixing and the original Swin-T network remains small even as the shuffle grid size increases. The accuracy of ConvNeXt-S patch is nearly identical to the original Swin-S network. Interestingly, this is the only experiment where Swin trained with Patch Mixing shows a consistent improvement over its original counterpart. ### Robustness to occlusion Patch Mixing improves robustness to occlusion for CNNs but not for ViTsTable 1 presents a summary of the results for different network architectures tested on three datasets: ImageNet-1K val (IN) top-1, SMD top-1 (avg. over 10-30% occlusion), NVD [25] simulated occlusion validation top-5, and ROD top-5. The ConvNeXt and Swin networks are compared in their standard and Patch versions, both in Tiny (T) and Small (S) configurations. In the Tiny category, ConvNeXt-T and ConvNeXt-T Patch Mixing both achieved an IN top-1 score of 82.1%, but the Patch Mixing version performed better in the NVD occlusion set (26.1% vs. 25.4%), SMD (48.9% vs. 47.6%), and ROD (42.6% vs. 40.4%). For the Swin-T versions, the Patch Mixing model showed minor improvements over the original in the IN and NVD occlusion datasets but slightly under-performed on ROD. The trend is mirrored for Small models. Overall, the table suggests that the Patch variants of CNNs generally showed improved performance on occluded datasets compared to their original counterparts, whereas ViTs do not substantially improve. Random Patch DropFigure 6 illustrates that for tiny and small networks with grid size (14, 14) ConvNeXt trained with Patch Mixing outperforms its counterpart, and in some cases achieves the best result with increasing information loss. We also see that Swin performance either stays static or slightly increases, but not by the same magnitude as ConvNeXt performance. c-RISEWe obtain c-RISE maps from images that are attacked using patch mixing for both original and improved ConvNeXt and Swin models. We normalize the importance map using a Softmax function and calculate the inverse of our defined patch selectivity metric in Equation 3 by summing the importance values in out-of-context patches. To obtain granular heatmaps we increase the number of RISE masks to 14,000 and use a stride of 14. 
Figure 5: **Better patch selectivity means greater resistance to abnormal spatial structure:** Top-1 accuracy on the IN-1k val set is plotted against shuffle grid size for the patch permutation experiments on Tiny (a) and Small (b) networks. Examples of patch permutations can be seen in (c) and (d).

CNNs trained with Patch Mixing exhibit increased patch selectivity, rivaling that of ViTs

We show the quantitative results of inverse patch selectivity in Table 2 for Tiny networks using grid sizes of (7, 7) and (14, 14). We also illustrate the differences between the models' heatmap appearances in Figure 7. Specifically, we can see how ConvNeXt Original's importance map _spills_ from in-context to out-of-context patches due to the convolutional architecture, a phenomenon that is addressed in ConvNeXt w/ Patch Mixing. ConvNeXt Patch Mixing and Swin Original both correctly classify the airplane carrier in Figure 7, but ConvNeXt Original incorrectly classifies the image as a carousel. This shows that ConvNeXt Patch Mixing more effectively ignores occluders that are out-of-context in general, with importance maps that mirror those of Swin.

\begin{table} \begin{tabular}{l c c} \hline \hline \multirow{2}{*}{Model} & \multicolumn{2}{c}{Inverse Patch Selectivity} \\ \cline{2-3} & (7, 7) & (14, 14) \\ \hline ConvNeXt-T Original & 0.0201 & 0.0198 \\ ConvNeXt-T Patch Mixing & **0.0194** & **0.0196** \\ \hline Swin-T Original & 0.0196 & 0.0197 \\ Swin-T Patch Mixing & 0.0197 & 0.0198 \\ \hline \hline \end{tabular} \end{table} Table 2: Inverse patch selectivity (**lower** is better) using c-RISE and patch attack grid sizes of (7, 7) and (14, 14). We evaluate 5 images per class for 100 classes using Softmax normalized saliency maps.

\begin{table} \begin{tabular}{l c c c c} \hline \hline Model & IN & SMD & NVD & ROD \\ \hline ConvNeXt-T Original & 82.1 & 47.6 & 25.4 & 40.4 \\ ConvNeXt-T Patch Mixing & 82.1 & **48.9** & **26.1** & **42.6** \\ \hline ConvNeXt-S Original & 83.1 & 49.4 & 21.9 & 48.4 \\ ConvNeXt-S Patch Mixing & **83.2** & **50.1** & **25.8** & 48.4 \\ \hline \hline Swin-T Original & 81.2 & 56.5 & 18.4 & **41.9** \\ Swin-T Patch Mixing & **81.3** & **57.2** & **18.9** & 40.2 \\ \hline Swin-S Original & **83.2** & **60.4** & **20.5** & 44.3 \\ Swin-S Patch Mixing & 82.9 & 60.2 & 18.2 & **48.2** \\ \hline \hline \end{tabular} \end{table} Table 1: Mean accuracy results for IN, SMD, NVD, and ROD test sets (%).

Figure 6: Random patch drop: Tiny and Small networks.

## 5 Related Work

Data Augmentation

There are many data augmentations that attempt to address the issue of occlusion, from stochastic elimination of regions within training images to regional dropout [37; 5; 31]. To effectively address the limitations of traditional empirical risk minimization approaches in training deep neural networks, Zhang et al. [34] introduced Mixup. A now widely utilized data augmentation technique, Mixup synthesizes new training instances by linearly interpolating between random image
As noted by Yun et al. [33], Mixup samples are locally ambiguous and unnatural, often confusing the model. To address this, Yun et al. presented CutMix, a regularization strategy for training robust classifiers with localizable features. CutMix combines the benefits of previous data augmentation techniques, such as Mixup and Cutout [5], by overlaying patches of one image onto another and adjusting the corresponding labels proportionally.

Occlusion. Current related works on occlusion in object detection and image classification indicate that while systems have evolved to be more robust, they still fail to accurately classify and detect objects under severe occlusion. Existing approaches like Region Proposal Networks [8], which are used to learn fast detection approaches [9], perform well for object detection tasks but fail when the bounding box of an object is occluded. Recent works have shown that traditional approaches like Deep Convolutional Neural Networks (DCNNs) such as ResNet [10] or VGG [26] display little robustness to occlusion [38; 15]. Addressing this issue with data augmentations simulating partial occlusion has had limited success [5]. Conversely, generative compositional models have been shown to be robust to partial object occlusion with the ability to still detect object features [11; 7; 4; 29]. Recently, CompositionalNets, which incorporate DCNN architecture, have been proven to be far more robust to occlusion than their traditional counterparts [14; 13]. Building off this work, context-aware CompositionalNets were introduced to control the influence of the object's context on the classification result, increasing accuracy when confronted with largely occluded objects [28]. Other deep learning approaches require detailed part-level annotations to reconstruct occluded objects, which is costly [36; 24].

## 6 Conclusion

In this paper, we investigated the difference between CNNs and ViTs in terms of their ability to handle occlusion and ignore out-of-context information. In particular, we introduced the concept of _patch selectivity_ as a measure of this ability and showed that ViTs naturally possess higher patch selectivity than CNNs. We also proposed Patch Mixing, a data augmentation method that simulates patch selectivity in CNNs by inserting patches from other images onto training images. We demonstrated that Patch Mixing improves the performance of CNNs on various occlusion benchmarks, including two new datasets that we created: SMD and ROD. Furthermore, we developed c-RISE, a contrastive explainability method that allows us to visualize and quantify patch selectivity for both CNNs and ViTs. Our results suggest that patch selectivity is an important element of occlusion robustness and that Patch Mixing is an effective method to amplify this characteristic within CNNs, bridging the gap with respect to ViTs, which are naturally stronger in this area.

Figure 7: Saliency maps of spider monkey (top) and airplane carrier (bottom). _ConvNeXt w/ Patch Mixing shows a strongly improved ability to ignore out-of-context patches._
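For illustration, the patch-insertion idea behind Patch Mixing can be sketched as follows (a minimal reading of the description above, with an illustrative grid size and label-mixing rule; not the authors' released implementation):

```python
import numpy as np

def patch_mixing(x1, y1, x2, y2, grid=7, frac=0.3):
    """Paste a random fraction of (grid x grid) patches from image x2
    onto image x1; labels are mixed in proportion to the pasted area.
    Images are HxWxC numpy arrays with H, W divisible by `grid`."""
    x = x1.copy()
    ph, pw = x1.shape[0] // grid, x1.shape[1] // grid
    n_paste = int(frac * grid * grid)
    cells = np.random.choice(grid * grid, size=n_paste, replace=False)
    for c in cells:
        r, s = divmod(c, grid)
        x[r*ph:(r+1)*ph, s*pw:(s+1)*pw] = x2[r*ph:(r+1)*ph, s*pw:(s+1)*pw]
    lam = 1 - n_paste / (grid * grid)
    return x, lam * y1 + (1 - lam) * y2
```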
2309.07206
Reversibility of quantum resources through probabilistic protocols
Among the most fundamental questions in the manipulation of quantum resources such as entanglement is the possibility of reversibly transforming all resource states. The key consequence of this would be the identification of a unique entropic resource measure that exactly quantifies the limits of achievable transformation rates. Remarkably, previous results claimed that such asymptotic reversibility holds true in very general settings; however, recently those findings have been found to be incomplete, casting doubt on the conjecture. Here we show that it is indeed possible to reversibly interconvert all states in general quantum resource theories, as long as one allows protocols that may only succeed probabilistically. Although such transformations have some chance of failure, we show that their success probability can be ensured to be bounded away from zero, even in the asymptotic limit of infinitely many manipulated copies. As in previously conjectured approaches, the achievability here is realised through operations that are asymptotically resource non-generating, and we show that this choice is optimal: smaller sets of transformations cannot lead to reversibility. Our methods are based on connecting the transformation rates under probabilistic protocols with strong converse rates for deterministic transformations, which we strengthen into an exact equivalence in the case of entanglement distillation.
Bartosz Regula, Ludovico Lami
2023-09-13T18:00:00Z
http://arxiv.org/abs/2309.07206v3
# Reversibility of quantum resources through probabilistic protocols ###### Abstract Among the most fundamental questions in the manipulation of quantum resources such as entanglement is the possibility of reversibly transforming all resource states. The most important consequence of this would be the identification of a unique entropic resource measure that exactly quantifies the limits of achievable transformation rates. Remarkably, previous results claimed that such asymptotic reversibility holds true in very general settings; however, recently those findings have been found to be incomplete, casting doubt on the conjecture. Here we show that it is indeed possible to reversibly interconvert all states in general quantum resource theories, as long as one allows protocols that may only succeed probabilistically. Although such transformations have some chance of failure, we show that their success probability can be ensured to be bounded away from zero, even in the asymptotic limit of infinitely many manipulated copies. As in previously conjectured approaches, the achievability here is realised through operations that are asymptotically resource non-generating. Our methods are based on connecting the transformation rates under probabilistic protocols with strong converse rates for deterministic transformations. We strengthen this connection into an exact equivalence in the case of entanglement distillation. How to measure and compare quantum resources? As evidenced by the plethora of commonly used quantifiers of resources such as entanglement [1; 2], this seemingly basic question has many possible answers, and it may appear as though there is no unambiguous way to resolve it. However, it is important to keep in mind that, instead of simply assigning some numerical value to a given quantum state, one often wishes to compare quantum resources operationally: how difficult is it to convert one resource state into another? Following this pathway is reminiscent of the operational approach used to study thermodynamics, where indeed a unique resource measure -- the entropy -- emerges naturally from basic axioms [3; 4]. A phenomenon which underlies this possibility is reversibility: two comparable states of equal entropy can always be connected by a reversible adiabatic transformation [4]. Reversibility was observed also in the asymptotic manipulation of quantum entanglement of pure states [5], prompting several conjectures about the connections between entanglement and thermodynamics, and in particular about the existence of a unique operational measure of entanglement [6; 7; 8; 9]. Here, the restrictions on resource manipulation are typically understood in terms of asymptotic transformation rates \(r(\rho\rightarrow\omega)\): given many copies of a state \(\rho\), how many copies of another state \(\omega\) can we obtain per copy of \(\rho\)? The question of resource reversibility then asks whether \(r(\rho\rightarrow\omega)=r(\omega\rightarrow\rho)^{-1}\), meaning that exactly as many copies of \(\omega\) can be obtained in the transformation as is needed to transform them back into \(\rho\). Although entanglement of noisy states may exhibit irreversibility in many contexts [10; 11; 12; 13], hopes persisted that an operational approach allowing for universal reversibility could be constructed.
A remarkable axiomatic framework emerged, first for entanglement [8; 9] and later for more general quantum resources [14], which claimed that reversibility can indeed always be achieved under suitable assumptions. Such a striking property would not only establish a unique entropic measure of quantum resources, but also connect the broad variety of different resources in a common operational formalism. However, issues have transpired in parts of the proof of these results [15], putting this general reversibility into question. As of now, there is no known framework that can establish the reversibility of general quantum resource theories -- and in particular quantum entanglement -- even under weaker assumptions. What is more, recent results demonstrated an exceptionally strong type of irreversibility of entanglement [13], casting doubt on the very possibility of recovering reversible manipulation whatsoever. Here we show that the reversibility of general quantum resource transformations can be established in a setting that very closely resembles the original assumptions of the reversibility conjectures [8; 9; 14], with only one change: we allow probabilistic conversion protocols. That is, we study transformations which allow for some probability of failure, and we demonstrate that in this setting the conversion rates are exactly given by the entropic resource measure known as the regularised relative entropy, identifying it as the unique operational resource quantifier. To ensure that the protocols are not unphysically difficult to realise, we constrain the probability of failure so that it does not become prohibitively large: there always remains a constant non-zero chance of successful conversion, even when manipulating an unbounded number of quantum states. We thus construct the first complete reversible framework for general quantum resources, including entanglement. We stress that, although conceptually similar, our approach does not exactly recover the reversibility conjectured in [8; 9; 14], since only strictly deterministic transformations were employed there. However, we consider it to be strong supporting evidence in favour of reversibility being an achievable phenomenon in general quantum resource theories. On the technical side, the way we avoid the issues associated with the generalised quantum Stein's lemma [16] that undermined the original reversibility claims is to use only the _strong converse_ part of the lemma, which is still valid [15]. Strong converse rates are typically understood as general no-go limitations on resource transformations, but here we turn them into achievable rates precisely by employing probabilistic protocols. For the special case of entanglement distillation, we show that these two concepts -- strong converse rates in deterministic transformations on one side, and probabilistic conversion rates on the other -- are exactly equivalent, which holds true also in the most practically relevant settings of entanglement manipulation such as under local operations and classical communication (LOCC). _Resource transformation rates.--_ Quantum resource theories represent various settings of restricted quantum information processing [2]. Let us denote by \(\mathbb{O}\) the set of operations which are freely allowed within the physical setting of the given resource theory. Our discussion of reversibility will require a specific choice of \(\mathbb{O}\), but for now it may be understood as a general set of permitted operations.
The _deterministic transformation rate_ \(r_{p=1}(\rho\to\omega)\) is defined as the supremum of real numbers \(r\) such that \(n\) copies of the state \(\rho\) can be converted into \(\lfloor rn\rfloor\) copies of the target state \(\omega\) using the free operations \(\mathbb{O}\). The conversion here is assumed to be deterministic, i.e. all transformations are realised by completely positive and trace-preserving maps (quantum channels). However, the process is only required to be approximate, in the sense that some error \(\varepsilon_{n}\) is allowed in the transformation, as long as it vanishes in the limit as \(n\to\infty\). In many practical contexts, one may be willing to relax the assumption that the error must tend to zero -- it may, for instance, be appealing to tolerate some manageable error in the transformation if it could lead to increased transformation rates. The ultimate upper bound that constrains the improvements that can be gained through such trade-offs is represented by the _strong converse rate_ \(r_{p=1}^{\dagger}(\rho\to\omega)\). It is defined as the least rate \(r\) such that, if we attempt the conversion \(\rho^{\otimes n}\to\omega^{\otimes\lfloor r^{\prime}n\rfloor}\) at any larger rate \(r^{\prime}>r\), then even approximate transformations with large error become impossible. Another common way to increase the capabilities in resource manipulation is to allow for probabilistic transformations [17; 18; 19; 20; 21; 22; 23]. Probabilistic protocols in quantum information theory are represented by a collection of completely positive, but not necessarily trace-preserving maps \(\{\mathcal{E}^{(i)}\}_{i}\), such that the total transformation \(\sum_{i}\mathcal{E}^{(i)}\) preserves trace. We say that \(\rho\) can be converted to \(\omega\) if there exists a free probabilistic operation \(\mathcal{E}^{(i)}\in\mathbb{O}\) such that \(\frac{\mathcal{E}^{(i)}(\rho)}{\operatorname{Tr}\mathcal{E}^{(i)}(\rho)}=\omega\); the probability of this transformation is \(p=\operatorname{Tr}\mathcal{E}^{(i)}(\rho)\). One can then define asymptotic transformation rates analogously as before, by considering sequences of probabilistic operations \((\mathcal{E}_{n})_{n}\) such that each \(\mathcal{E}_{n}\in\mathbb{O}\) converts \(\rho^{\otimes n}\) to a state which is \(\varepsilon_{n}\)-close to the target state \(\omega^{\otimes\lfloor rn\rfloor}\) and so that the error vanishes asymptotically. In our definition of a probabilistic rate, the probability \(p\) itself is not counted as part of the rate, and we instead focus solely on the number of copies of states that are being manipulated. There is a potential issue with such an approach, as leaving the probability of the transformation unconstrained effectively allows for postselecting on exponentially unlikely events, which then makes possible transformations which are conventionally known to be unachievable [23] -- but only at the expense of the overall probability of success \(p_{n}=\operatorname{Tr}\mathcal{E}_{n}(\rho^{\otimes n})\) becoming vanishingly small. To ensure that the protocol remains practically realisable, here we forbid such a possibility. We thus consider _probabilistic transformation rates with non-vanishing probability_, \(r_{p>0}(\rho\to\omega)\), where the probability is constrained to be bounded away from zero even in the limit \(n\to\infty\).
Explicitly, \[r_{p>0}(\rho\to\omega)\coloneqq\sup_{(\mathcal{E}_{n})_{n}}\Bigg{\{}\,r\,\Bigg{|}\,\mathcal{E}_{n}\in\mathbb{O},\ \lim_{n\to\infty}F\Bigg{(}\frac{\mathcal{E}_{n}(\rho^{\otimes n})}{\operatorname{Tr}\mathcal{E}_{n}(\rho^{\otimes n})},\,\omega^{\otimes\lfloor rn\rfloor}\Bigg{)}=1,\ \liminf_{n\to\infty}\operatorname{Tr}\mathcal{E}_{n}(\rho^{\otimes n})>0\Bigg{\}}, \tag{1}\] where \(F\) denotes the fidelity. The fact that the deterministic transformation rate \(r_{p=1}\) is the smallest of the three types introduced above is clear from the definition. However, there is no obvious relation between the strong converse rate and the probabilistic one. We can nevertheless show that the rates actually form a hierarchy: \[r_{p=1}(\rho\to\omega)\,\leq\,r_{p>0}(\rho\to\omega)\,\leq\,r_{p=1}^{\dagger}(\rho\to\omega). \tag{2}\] This demonstrates in particular that the probabilistic rate \(r_{p>0}\) is well behaved, as it does not exceed conventional limitations imposed by strong converse rates. It may also provide a tighter restriction on deterministic transformation rates than those coming from strong converse bounds. _Free operations and reversibility.--_ The asymptotic transformation rates depend heavily on the choice of the free operations \(\mathbb{O}\). Typically, practically relevant choices of free operations are subsets of _resource-non-generating (RNG) operations_, defined as those maps \(\mathcal{E}\) (possibly probabilistic ones) that satisfy \(\frac{\mathcal{E}(\sigma)}{\operatorname{Tr}\mathcal{E}(\sigma)}\in\mathbb{F}\) for all \(\sigma\in\mathbb{F}\). Here, \(\mathbb{F}\) stands for the set of free (resourceless) states of the given theory. The definition of RNG operations then means that these maps are not allowed to generate any resources for free, which is a very basic and undemanding assumption to make. The framework of [8; 9; 14] studied the manipulation of quantum resources under transformations which slightly relax the above constraint, imposing instead that small amounts of resources may be generated, as long as they vanish asymptotically. Specifically, let us consider the resource measure known as the _generalised (global) robustness_ \(R_{\mathbb{F}}\), defined as [24] \[R_{\mathbb{F}}(\rho)\coloneqq\inf\left\{\lambda\geq 0\ \left|\ \frac{\rho+\lambda\omega}{1+\lambda}\in\mathbb{F},\ \omega\in\mathbb{D}\right.\right\}, \tag{3}\] where \(\mathbb{D}\) denotes the set of all states. The \(\delta\)-approximately resource-non-generating operations \(\mathbb{O}_{\text{RNG}_{\delta}}\) are then all maps \(\mathcal{E}\) such that \[R_{\mathbb{F}}\left(\frac{\mathcal{E}(\sigma)}{\operatorname{Tr}\mathcal{E}(\sigma)}\right)\leq\delta\quad\forall\sigma\in\mathbb{F}. \tag{4}\] Finally, the transformation rates under _asymptotically resource-non-generating maps_ \(\mathbb{O}_{\text{ARNG}}\), whether deterministic or probabilistic, are defined as those where each transformation \(\rho^{\otimes n}\to\omega^{\otimes\lfloor rn\rfloor}\) is realised by a \(\delta_{n}\)-approximately RNG operation, with \(\delta_{n}\to 0\) in the limit as \(n\to\infty\). We will denote deterministic rates under such operations as \(r_{p=1}(\rho\xrightarrow{\text{ARNG}}\omega)\), and analogously for the probabilistic rates \(r_{p>0}\).
The main reason to study asymptotically RNG operations is their conjectured reversibility [8; 9; 14], combined with the fact that reversibility of entanglement has been ruled out under essentially all sets of operations smaller than \(\mathbb{O}_{\text{ARNG}}\) [13]. Specifically, the claim is that the deterministic rates always equal \[r_{p=1}(\rho\xrightarrow{\text{ARNG}}\omega)\stackrel{?}{=}\frac{D_{\mathbb{F}}^{\infty}(\rho)}{D_{\mathbb{F}}^{\infty}(\omega)}, \tag{5}\] where \(D_{\mathbb{F}}^{\infty}\) denotes the _regularised relative entropy of a resource_, \[D_{\mathbb{F}}^{\infty}(\rho)\coloneqq\lim_{n\to\infty}\frac{1}{n}\,\inf_{\sigma_{n}\in\mathbb{F}}D(\rho^{\otimes n}\|\sigma_{n}) \tag{6}\] with \(D(\rho\|\sigma)=\operatorname{Tr}\rho(\log\rho-\log\sigma)\) being the quantum relative entropy. This would precisely identify \(D_{\mathbb{F}}^{\infty}\) as the unique resource measure in the asymptotic setting. However, this conjecture relied crucially on the generalised quantum Stein's lemma [16], in whose proof a gap was recently discovered [15]. Hence, the statement in Eq. (5) is not known to be true [15]. _Probabilistic reversibility.--_ The conjectured resource reversibility in Eq. (5) is remarkably general. The original claim applied not only to entanglement, but also to more general quantum resources, as long as the set \(\mathbb{F}\) satisfies a number of mild assumptions -- notably, it must be convex, and it must be such that the tensor product of any two free states remains free, as does their partial trace [14; 16]. These are weak assumptions obeyed by the vast majority of theories of practical interest. Our main result is a general probabilistic reversibility of quantum resources under the exact same assumptions. **Theorem 1**.: _For all quantum states \(\rho\) and \(\omega\), the transformation rate with non-vanishing probability of success under asymptotically resource-non-generating operations satisfies_ \[r_{p>0}(\rho\xrightarrow{\text{ARNG}}\omega)=\frac{D_{\mathbb{F}}^{\infty}(\rho)}{D_{\mathbb{F}}^{\infty}(\omega)}. \tag{7}\] _This implies in particular a general reversibility of state transformations: \(r_{p>0}(\rho\xrightarrow{\text{ARNG}}\omega)=r_{p>0}(\omega\xrightarrow{\text{ARNG}}\rho)^{-1}\) for any pair of states._ Both the converse and the achievability parts of this result make use of the asymptotic equipartition property for the generalised robustness, which was shown by Brandao and Plenio [16, Proposition II.1] and independently by Datta [25, Theorem 1]. This property says that, under a suitable 'smoothing', the generalised robustness \(R_{\mathbb{F}}\) converges asymptotically to the regularised relative entropy of the resource: \[\lim_{\varepsilon\to 0}\limsup_{n\to\infty}\frac{1}{n}\min_{\frac{1}{2}\|\omega_{n}-\omega^{\otimes\lfloor rn\rfloor}\|_{1}\leq\varepsilon}\log\Bigl{(}1+R_{\mathbb{F}}(\omega_{n})\Bigr{)}=r\,D_{\mathbb{F}}^{\infty}(\omega). \tag{8}\] Importantly, this finding directly leads to the strong converse of the generalised quantum Stein's lemma [16, Corollary III.3], but it does not appear to be enough to deduce the main achievability part of the lemma [15]. Our main contribution here is to show that the strong converse part is sufficient to show the reversibility of quantum resources, as long as probabilistic protocols are allowed.
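For intuition, the single-copy (\(n=1\)) term in Eq. (6) is straightforward to evaluate numerically for fixed states; below is a minimal numpy sketch (ours, in natural-log units) of \(D(\rho\|\sigma)\). The regularisation over \(n\) and the minimisation over \(\mathbb{F}\) are the genuinely hard parts and are not attempted here.

```python
import numpy as np

def relative_entropy(rho, sigma, tol=1e-12):
    """D(rho||sigma) = Tr[rho (log rho - log sigma)], assuming the
    support of rho is contained in the support of sigma."""
    p = np.linalg.eigvalsh(rho)
    s_vals, s_vecs = np.linalg.eigh(sigma)
    log_sigma = s_vecs @ np.diag(np.log(np.clip(s_vals, tol, None))) @ s_vecs.conj().T
    tr_rho_log_rho = sum(x * np.log(x) for x in p if x > tol)  # 0 log 0 = 0
    return float(np.real(tr_rho_log_rho - np.trace(rho @ log_sigma)))
```

For example, a maximally entangled two-qubit state against the maximally mixed state gives \(\log 4\approx 1.386\).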
Proof sketch of Theorem 1.: The converse direction relies on the strong monotonicity properties of the generalised robustness \(R_{\mathbb{F}}\) as well as the aforementioned asymptotic equipartition property (8). This follows a related approach that was recently used to study postselected probabilistic transformation rates [23], and here we extend it to asymptotically resource-non-generating (ARNG) transformations. A point of note is that standard techniques based on the asymptotic continuity of the relative entropy [1; 26; 27] do not seem to be sufficient to establish a converse bound on probabilistic rates [23, Appendix H], requiring the use of a different toolset that explicitly makes use of the features of \(R_{\mathbb{F}}\). For the achievability part of the theorem, we use the exact calculation of the strong converse exponent in the generalised quantum Stein's lemma [16]. The lemma is concerned with the distinguishability of many copies of a quantum state \(\rho^{\otimes n}\) against all states in the set of the free states \(\mathbb{F}\). The result of [16] then says that, for every resource theory, there exists a sequence of measurement operators \((A_{n})_{n}\) such that \(\operatorname{Tr}\left(A_{n}\rho^{\otimes n}\right)\geq 1-\delta_{n}\) and \[-\frac{1}{n}\log\sup_{\sigma\in\mathbb{F}}\operatorname{Tr}(A_{n}\sigma)\xrightarrow{n\to\infty}D_{\mathbb{F}}^{\infty}(\rho). \tag{9}\] Here, \(\delta_{n}\) denotes the probability of incorrectly guessing that \(\rho^{\otimes n}\) is a free state, while the quantity in Eq. (9) characterises the opposite error of incorrectly guessing that a free state is \(\rho^{\otimes n}\). The issue here is that (9) is only valid in the strong converse regime, which means that the error \(\delta_{n}\) is not guaranteed to vanish: it may actually tend to a constant arbitrarily close to \(1\). This prevents a direct application of previous deterministic results [14]. What we do instead is define probabilistic operations of the form \[\mathcal{E}_{n}(\tau)\coloneqq\operatorname{Tr}(A_{n}\tau)\,\omega_{n}+\mu_{n}\operatorname{Tr}[(1-A_{n})\tau]\,\pi_{n}, \tag{10}\] where \(\omega_{n}\) are states appearing in (8) which are \(\varepsilon_{n}\)-close to the target states \(\omega^{\otimes\left\lfloor n\,D_{\mathbb{F}}^{\infty}(\rho)/D_{\mathbb{F}}^{\infty}(\omega)\right\rfloor}\), \(\pi_{n}\) are some suitably chosen states, and \(\mu_{n}\in[0,1]\) are parameters to be fixed. The basic idea is then that by decreasing \(\mu_{n}\), we can make the output of this operation closer to \(\omega_{n}\), even when \(\operatorname{Tr}(A_{n}\rho^{\otimes n})\neq 1\). However, one cannot just decrease \(\mu_{n}\) arbitrarily, as the maps \(\mathcal{E}_{n}\) must be ensured to be free operations. Our crucial finding is that \(\mu_{n}\) can always be chosen so that \(\mu_{n}\xrightarrow{n\to\infty}0\) while the operations \(\mathcal{E}_{n}\) generate asymptotically vanishing amounts of resources and the overall probability of success does not vanish. This means precisely that the sequence \((\mathcal{E}_{n})_{n}\) is an ARNG protocol that realises the desired conversion. The complete proof of Theorem 1 can be found in Section II of the Appendix. _Entanglement distillation._ -- Two of the most important problems in the understanding of asymptotic entanglement manipulation concern the tasks of extracting 'entanglement bits' (ebits), i.e.
copies of the maximally entangled two-qubit singlet state \(\Phi_{+}\), and the reverse task of converting ebits into general noisy states. The rates of these two tasks are known as, respectively, the _distillable entanglement_ \(E_{d,\mathbb{O}}^{x}(\rho)\coloneqq r_{x}(\rho\to\Phi_{+})\) and the _entanglement cost_ \(E_{c,\mathbb{O}}^{x}(\rho)\coloneqq r_{x}(\Phi_{+}\to\rho)^{-1}\), where \(x\) stands for either \(p=1\), \(p>0\), or the strong converse rate \(p=1,\dagger\). Although exact expressions can be obtained for the entanglement cost in various settings [28; 9], the understanding of distillable entanglement appears to be an extremely difficult problem that has so far resisted most attempts at a conclusive solution, except in some special cases [30; 31]. Of note is the conjectured result that [9; 15] \[E_{d,\operatorname{NE}}^{p=1}(\rho)\stackrel{?}{=}D_{\operatorname{SEP}}^{\infty}(\rho), \tag{11}\] where NE stands for the class of non-entangling operations (equivalent to RNG maps in this theory) and \(D_{\operatorname{SEP}}^{\infty}\) is the regularised relative entropy of entanglement. Establishing this result would recover the deterministic reversibility of entanglement theory (that is, Eq. (5)) [9; 15]. We note that distillation rates under NE operations are equal to rates under asymptotically non-entangling operations (ANE) [13], which correspond to \(\mathbb{O}_{\text{ARNG}}\) in the notation of this work [9]. We now introduce a close relation that connects entanglement distillation transformations in the probabilistic and strong converse regimes. Namely, we show that one can always improve on the transformation error of a distillation protocol by sacrificing some success probability, and vice versa. What this means in particular is that every rate that can be achieved in the deterministic strong converse regime (i.e. with a possibly large error \(\varepsilon<1\)) can also be achieved probabilistically with error going to zero. Crucially, to construct the new, modified protocol from the original one we only need to employ local operations and classical communication (LOCC), which are the standard class of free operations in entanglement theory, meaning that the result applies to essentially all different types of operations that extend LOCC. **Theorem 2**.: _Let \(\mathbb{O}\) be any class of operations which is closed under composition with LOCC, i.e. such that \(\mathcal{E}\in\mathbb{O}\), \(\mathcal{F}\in\text{LOCC}\Rightarrow\mathcal{F}\circ\mathcal{E}\in\mathbb{O}\). This includes in particular the set LOCC itself. Then, for all states \(\rho\),_ \[E_{d,\mathbb{O}}^{p=1,\dagger}(\rho)=E_{d,\mathbb{O}}^{p>0}(\rho). \tag{12}\] _For the case of (asymptotically) non-entangling operations, we have that_ \[E_{d,\text{(A)NE}}^{p=1,\dagger}(\rho)=E_{d,\text{(A)NE}}^{p>0}(\rho)=D_{\operatorname{SEP}}^{\infty}(\rho). \tag{13}\] Proof sketch.: Assume that two spatially separated parties, conventionally called Alice and Bob, share \(n\) copies of an entangled state \(\rho_{AB}\). Consider any sequence of protocols which allows them to distill entanglement from such states at a rate \(r\) with error \(\varepsilon_{n}\xrightarrow{n\to\infty}\varepsilon\) and probability \(p_{n}\xrightarrow{n\to\infty}p\). (This includes the deterministic strong converse case where \(p_{n}=1\).) After performing the considered distillation protocol, they share a many-copy state \(\tau_{A^{\prime}B^{\prime}}\) which approximates \(\Phi_{+}^{\otimes\lfloor rn\rfloor}\).
What they can do now is to sacrifice a fixed number \(k\) of their qubit pairs in order to perform a state discrimination protocol: by testing whether the \(k\)-copy subsystem is in the state \(\Phi_{+}^{\otimes k}\) or not, they can probabilistically bring the state of the rest of their shared system closer to \(\Phi_{+}^{\otimes\lfloor rn\rfloor-k}\). We show that this can be done by a simple LOCC protocol wherein Alice and Bob perform measurements in the computational basis and compare their outcomes. Since \(k\) is arbitrary here, Alice and Bob can perform the modified protocol without sacrificing the transformation rate. Conversely, in a similar manner they can also increase their probability of success by sacrificing some transformation fidelity. By deriving the exact conditions for when distillation protocols can be refashioned in such a way, we observe that another sequence of entanglement distillation protocols with error \(\varepsilon_{n}^{\prime}\xrightarrow{n\to\infty}\varepsilon^{\prime}\) and probability \(p_{n}^{\prime}\xrightarrow{n\to\infty}p^{\prime}\) can exist _if and only if_ \[p\left(1-\varepsilon\right)=p^{\prime}\left(1-\varepsilon^{\prime}\right). \tag{14}\] This directly implies Eq. (12). To see Eq. (13), it suffices to combine Theorem 1 with the known result that \(D_{\text{SEP}}^{\infty}(\rho)\) is a strong converse rate for distillation under ANE [9; 32]. We have already shown the probabilistic reversibility of entanglement theory in Theorem 1, so let us now discuss the deterministic case. Here, reversibility is fully equivalent to the question of whether \(E_{d,\mathbb{O}}^{p=1}(\rho)=E_{c,\mathbb{O}}^{p=1}(\rho)\) holds for all quantum states, which has been conjectured to be true for the class of asymptotically non-entangling operations [9]. Combining our results with the known findings of Brandao, Plenio, and Datta [9; 25], we have that \[E_{d,\text{ANE}}^{p>0}(\rho)\overset{\text{Thm.~2}}{=}E_{d,\text{ANE}}^{p=1,\dagger}(\rho)\overset{\text{[9, 15]}}{=}E_{c,\text{ANE}}^{p=1}(\rho)\overset{\text{[9, 25]}}{=}D_{\text{SEP}}^{\infty}(\rho)\overset{\text{Thm.~1}}{=}E_{c,\text{ANE}}^{p>0}(\rho). \tag{15}\] The missing link is thus the question of whether \(E_{d,\text{ANE}}^{p=1}(\rho)\stackrel{?}{=}E_{d,\text{ANE}}^{p>0}(\rho)\), or the equivalent [15] question of whether \(E_{c,\text{ANE}}^{p=1,\dagger}(\rho)\stackrel{?}{=}E_{c,\text{ANE}}^{p>0}(\rho)\). Showing either of these statements would complete the proof of the deterministic reversibility of quantum entanglement under asymptotically non-entangling operations. An interesting consequence of the above is that establishing the equivalent of Theorem 2 for entanglement _dilution_ would be sufficient to recover a fully reversible entanglement theory. We remark that other quantum resource theories may not be amenable to a characterisation in terms of distillation and dilution because they may not possess a suitably well-behaved unit of a resource resembling the maximally entangled state [14; 33; 34]. Nonetheless, reversibility in all resource theories can be understood as in our Theorem 1.
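As a quick numerical illustration of the trade-off in Eq. (14) (our own example, not taken from the text): a deterministic protocol (\(p=1\)) with asymptotic error \(\varepsilon=0.3\) can be converted into one with error \(\varepsilon^{\prime}=0.01\) at the same rate, at the cost of the success probability dropping to

```python
# Eq. (14): p(1 - eps) is invariant, so p' = p(1 - eps)/(1 - eps').
p, eps, eps_new = 1.0, 0.3, 0.01
print(p * (1 - eps) / (1 - eps_new))  # ~0.707, still bounded away from zero
```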
_Discussion.--_ We have shown that the conjectured reversibility as well as the unique entropic measure of general quantum resources can be recovered, provided one considers asymptotic transformation rates under asymptotically resource non-generating operations realised as probabilistic protocols with non-vanishing probability of success. Although the setting departs slightly from the original conjectures of [9; 14], in view of the close relations between probabilistic and deterministic rates (Eq. (2)) we regard our results as evidence that reversibility could indeed be recovered also in the deterministic setting. This is further motivated by the fact that, in many quantum information processing tasks, strong converse rates actually coincide with the optimal achievable rates [35; 36; 37; 38; 39; 40; 41], meaning that \(r_{p=1}=r_{p=1}^{\dagger}\) and the hierarchy in Eq. (2) collapses. However, a complete proof of this fact in the setting of resource manipulation remains elusive, and it is still possible that one of the inequalities in Eq. (2) may be strict for some states, thus ruling out deterministic reversibility. We hope that our results stimulate further research in this direction, leading to an eventual resolution of the open questions that cloud the understanding of asymptotic resource manipulation and quantum hypothesis testing. _Acknowledgments.--_ We acknowledge discussions with Tulja Varun Kondra and Alexander Streltsov. We are grateful to the Freie Universitat Berlin for hospitality.
2309.08135
Entanglement dynamics in $κ$-deformed spacetime
We treat two identical and mutually independent two-level atoms that are coupled to a quantum field as an open quantum system. The master equation that governs their evolution is derived by tracing over the degrees of freedom of the field. With this, we compare the entanglement dynamics of the two atoms moving along different trajectories in $\kappa$-deformed and Minkowski spacetimes. Notably, when the environment-induced interatomic interaction does not exist, the entanglement dynamics of two static atoms in $\kappa$-deformed spacetime reduce to those in Minkowski spacetime when the spacetime deformation parameter $\kappa$ is sufficiently large, as theoretically predicted. However, if the atoms undergo relativistic motion, regardless of whether it is inertial or non-inertial, their entanglement dynamics in $\kappa$-deformed spacetime behave differently from those in Minkowski spacetime even when $\kappa$ is large. We investigate various types of entanglement behavior, such as decay and generation, and discuss how different relativistic motions, such as uniform motion in a straight line and circular motion, amplify the differences in the entanglement dynamics between the $\kappa$-deformed and Minkowski spacetime cases. In addition, when the environment-induced interatomic interaction is considered, we find that it may also enhance the differences in the entanglement dynamics between these two spacetimes. Thus, in principle, one can tell whether one is in $\kappa$-deformed or Minkowski spacetime by checking the entanglement behavior between two atoms in certain circumstances.
Xiaobao Liu, Zehua Tian, Jiliang Jing
2023-09-15T04:06:53Z
http://arxiv.org/abs/2309.08135v2
# Entanglement dynamics in \(\kappa\)-deformed spacetime ###### Abstract: We treat two identical and mutually independent two-level atoms that are coupled to a quantum field as an open quantum system. The master equation that governs their evolution is derived by tracing over the degrees of freedom of the field. With this, we comparatively study the entanglement dynamics of the two atoms moving along different trajectories in \(\kappa\)-deformed spacetime and Minkowski spacetime. It is found that when there is no environment-induced interatomic interaction, the entanglement dynamics of two static atoms in \(\kappa\)-deformed spacetime reduce to those in Minkowski spacetime when the spacetime deformation parameter \(\kappa\) is sufficiently large, as theoretically predicted. However, if the atoms undergo relativistic motion, whether inertial or non-inertial, their entanglement dynamics in \(\kappa\)-deformed spacetime behave quite differently from those in Minkowski spacetime even when \(\kappa\) is large. We investigate various entanglement behaviors, such as decay and generation, and discuss how different relativistic motions, such as uniform motion in a straight line and circular motion, amplify the difference in entanglement dynamics between the \(\kappa\)-deformed spacetime case and the Minkowski spacetime case. Besides, when the environment-induced interatomic interaction is considered, we find that it may also enhance the difference in entanglement dynamics between these two spacetimes. So, in principle, one can tell whether one is in \(\kappa\)-deformed spacetime or in Minkowski spacetime by checking the entanglement behavior of two atoms in certain circumstances. ###### Contents * 1 Introduction * 2 Master equation and Scalar field propagator * 2.1 Master equation of two-atom system * 2.2 Scalar field propagator in \(\kappa\)-deformed spacetime and Minkowski spacetime * 3 Entanglement dynamics for two atoms without the environment-induced interatomic interaction * 3.1 Entanglement dynamics of two static atoms * 3.1.1 Two static atoms initially prepared in a separable state \(|E\rangle\) * 3.1.2 Two static atoms initially prepared in entangled states \(|A\rangle\) and \(|S\rangle\) * 3.2 Entanglement dynamics of two uniformly moving atoms * 3.2.1 Two uniformly moving atoms initially prepared in a separable state \(|E\rangle\) * 3.2.2 Two uniformly moving atoms initially prepared in entangled states \(|A\rangle\) and \(|S\rangle\) * 3.3 Entanglement dynamics of two circularly accelerated atoms * 3.3.1 Two circularly accelerated atoms initially prepared in a separable state \(|E\rangle\) * 3.3.2 Two circularly accelerated atoms initially prepared in entangled states \(|A\rangle\) and \(|S\rangle\) * 4 Entanglement dynamics for two atoms with the environment-induced interatomic interaction * 4.1 Entanglement dynamics of two static atoms * 4.1.1 Two static atoms initially prepared in a separable state \(|10\rangle\) * 4.1.2 Two static atoms initially prepared in entangled state \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\) * 4.2 Entanglement dynamics of two uniformly moving atoms * 4.2.1 Two uniformly moving atoms initially prepared in a separable state \(|10\rangle\) * 4.2.2 Two uniformly moving atoms initially prepared in entangled state \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\) * 4.3 Entanglement dynamics of two circularly accelerated atoms * 4.3.1 Two circularly accelerated atoms initially prepared in a separable state \(|10\rangle\)
* 4.3.2 Two circularly accelerated atoms initially prepared in entangled state \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\) * 5 Conclusions

## 1 Introduction

Quantum entanglement, as a crucial physical resource in technologies based on quantum effects, is the essential feature underlying quantum information, cryptography, quantum computation [1, 2, 3], and so on. Recently, quantum entanglement has been investigated together with the theories of relativity and quantum fields. For example, entanglement dynamics [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20] and entanglement harvesting [21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31] have been studied in a variety of different relativistic scenarios and settings. The aim of this research is to explore the effects of relativistic motion and gravity on quantum entanglement; in turn, entanglement could also be used to probe the structure of spacetime and to analyze the gravitational-matter interaction. On the one hand, the unavoidable coupling between a quantum system and the external environment usually leads to decoherence and dissipation of the quantum system. It may also cause disentanglement or even entanglement sudden death [32, 33, 34], and has thus been regarded as one of the major obstacles to the realization of quantum information technologies. On the other hand, a common bath may induce indirect interactions between otherwise independent atoms immersed in it, as a consequence of environment correlations. Thus entanglement can be created between atoms even if they are initially in separable states [35, 36, 37, 38], and destroyed entanglement may even revive [39]. What is more, entanglement between two atoms with vanishing separation can persist even in the asymptotic equilibrium regime [4, 5]. In Ref. [40] it has been shown that the environment-induced interatomic interaction can assist entanglement generation. In particular, the entanglement character of two particle detectors in two different yet quite similar spacetimes, e.g., de Sitter spacetime and thermal Minkowski spacetime [41, 42, 43, 44], has recently been discussed. It is found that the different entangling power of these spacetimes could in principle be used to distinguish such universes [13, 41, 42, 43, 44, 45, 46, 47]. The investigations mentioned above suggest that the study of entanglement dynamics within a relativistic framework can tell us about the nature of spacetime, which motivates us to explore this field. Minkowski spacetime has a continuous structure whose coordinates are commutative, and scalar field theory is well constructed in this commutative spacetime. However, at the microscopic level, one of the significant topics in quantum gravity concerns the modification of the notion of spacetime and the quantization of spacetime coordinates. This may require the notions of spacetime symmetry to be modified. The symmetry algebra of a certain quantum gravity model is known to be the \(\kappa\)-Poincare algebra, and the corresponding "Lie algebra"-type noncommutative spacetime is named the \(\kappa\)-Minkowski spacetime [48; 49]. In this respect, the exploration of this noncommutative spacetime can help deepen our understanding of the structure and properties of spacetime at microscopic scales.
There exists a substantial body of related work on the \(\kappa\) spacetime, as well as on the construction and investigation of field theory models on this spacetime [50; 51; 52; 53; 54; 55; 56; 57; 58] (and references cited therein). Usually, quantum field theory in \(\kappa\)-deformed spacetime is quite complicated as a result of the noncommutativity of the spacetime coordinates. Recently, however, an interesting model was introduced in Ref. [59] that starts with a \(\kappa\)-deformed Klein-Gordon theory which is invariant under the \(\kappa\)-Poincare algebra and written in commutative spacetime. Inspired by this approach, in Ref. [60] we investigated the quantum Fisher information in \(\kappa\)-deformed spacetime and found that relativistic motion can effectively improve the precision in the estimation of the spacetime deformation. In this regard, we note that the possible and expected \(\kappa\) deformation of spacetime is quite weak, so that the physical theory is usually consistent with that in Minkowski spacetime. Hence, it is worth asking whether it is possible to distinguish the \(\kappa\)-deformed spacetime from the Minkowski spacetime. In this paper we investigate the entanglement dynamics of a two-atom system coupled to a scalar field in \(\kappa\)-deformed spacetime, and compare the results with those in Minkowski spacetime. Firstly, the master equation that governs the system evolution is derived, and the standard field theory techniques developed for commutative spacetime are used to explore this \(\kappa\)-deformed scalar theory [59; 60]. Then we consider the evolution of entanglement between the two atoms moving along different trajectories in the \(\kappa\)-deformed and Minkowski backgrounds. Our results demonstrate that the relativistic motion of the atoms can help us to distinguish the entanglement dynamics in \(\kappa\)-deformed spacetime from that in Minkowski spacetime. Besides, when special initial states are chosen which introduce the environment-induced interatomic interaction, the atomic entanglement dynamics can also behave quite differently in these two universes. Thus, in principle, this difference in the entanglement dynamics of two atoms can be used to distinguish the \(\kappa\)-deformed spacetime from the Minkowski spacetime. The paper is organized as follows. In section 2, basic formulas of the master equation for a two-atom system interacting with a scalar field are reviewed. We also review the \(\kappa\)-deformed scalar theory written in commutative spacetime and its propagator. In section 3, we consider the dynamics of entanglement for the two-atom system in \(\kappa\)-deformed spacetime and Minkowski spacetime without the environment-induced interatomic interaction. In section 4, the influence of the environment-induced interatomic interaction on the entanglement dynamics is explored in detail. Finally, we conclude in section 5. Throughout the whole paper we employ natural units \(c=\hbar=1\); relevant constants are restored when needed for the sake of clarity. ## 2 Master equation and Scalar field propagator We briefly review the master equation for two atoms interacting with a scalar field in its vacuum state, and introduce the corresponding scalar field propagator in \(\kappa\)-deformed spacetime within the commutative theory. ### Master equation of two-atom system Let us consider two identical and mutually independent atoms weakly coupled to a bath of fluctuating scalar field in its vacuum state.
This model has also been applied recently in relativistic scenarios [61, 62, 63, 64, 65, 66, 67]. The total Hamiltonian \(H\) for the complete system, i.e., the two atoms together with the external scalar field, reads \[H=H_{S}+H_{E}+H_{I}. \tag{1}\] Here the free two-atom Hamiltonian \(H_{S}\) is given by \[H_{S}=\frac{1}{2}\omega_{0}\sigma_{3}^{(1)}+\frac{1}{2}\omega_{0}\sigma_{3}^{(2)}, \tag{2}\] where \(\sigma_{i}^{(1)}=\sigma_{i}\otimes\mathbf{I}\), \(\sigma_{i}^{(2)}=\mathbf{I}\otimes\sigma_{i}\), \(\sigma_{i}\) with \(i\in\{1,2,3\}\) are Pauli matrices and \(\mathbf{I}\) is the \(2\times 2\) unit matrix, and \(\omega_{0}\) denotes the energy level spacing of an individual atom. Note that \(H_{E}\) is the free Hamiltonian of the scalar field, whose explicit expression is not required here, and \(H_{I}\) represents the interaction Hamiltonian between atoms and field. We assume that the coupling between the two atoms and the scalar field takes the following form, in analogy to the electric dipole interaction [68]: \[H_{I}=\mu[\sigma_{2}^{(1)}\phi(\mathbf{x}_{1}(\tau))+\sigma_{2}^{(2)}\phi(\mathbf{x}_{2}(\tau))]. \tag{3}\] Here \(\mu\) is the coupling constant, which we assume to be small, and \(\phi(\mathbf{x}_{\alpha}(\tau))\) with \(\alpha\in\{1,2\}\) corresponds to the scalar field operator with \(\tau\) being the proper time of the atoms. In the frame of the atoms, the time evolution of the total system is governed by the von Neumann equation \[\frac{\partial\rho_{\rm tot}(\tau)}{\partial\tau}=-i[H,\rho_{\rm tot}(\tau)]. \tag{4}\] We assume the initial density matrix of the atoms-field system to be \(\rho_{\rm tot}=\rho(0)\otimes|0\rangle\langle 0|\), in which \(\rho(0)\) is the initial reduced density matrix of the two-atom system and \(|0\rangle\) is the vacuum state of the scalar field. Since we are interested in the time evolution of the two-atom system, we trace over the field degrees of freedom, i.e., \(\rho(\tau)={\rm Tr}_{E}[\rho_{\rm tot}(\tau)]\). Under the Born-Markov approximation [69], the reduced dynamics of the two-atom system can be described by the Kossakowski-Lindblad form [70; 71; 72] in the limit of weak coupling, \[\frac{\partial\rho(\tau)}{\partial\tau}=-i[H_{\rm eff},\rho(\tau)]+{\cal D}[\rho(\tau)], \tag{5}\] where the effective Hamiltonian \(H_{\rm eff}\) is \[H_{\rm eff}=H_{S}-\frac{i}{2}\sum_{\alpha,\beta=1}^{2}\sum_{i,j=1}^{3}H_{ij}^{(\alpha\beta)}\sigma_{i}^{(\alpha)}\sigma_{j}^{(\beta)}, \tag{6}\] and the dissipator \({\cal D}[\rho(\tau)]\) is \[{\cal D}[\rho(\tau)]=\frac{1}{2}\sum_{\alpha,\beta=1}^{2}\sum_{i,j=1}^{3}C_{ij}^{(\alpha\beta)}[2\sigma_{j}^{(\beta)}\rho\sigma_{i}^{(\alpha)}-\sigma_{i}^{(\alpha)}\sigma_{j}^{(\beta)}\rho-\rho\sigma_{i}^{(\alpha)}\sigma_{j}^{(\beta)}]. \tag{7}\] In the master equation (5), the dissipator \({\cal D}[\rho(\tau)]\) describes the environment-induced decoherence and dissipation, which means that the evolution of the quantum system is nonunitary. There is also a modification of the free Hamiltonian of the two-atom system, which is encoded in the effective Hamiltonian \(H_{\rm eff}\). The coefficients of the matrix \(C_{ij}^{(\alpha\beta)}\) in Eq.
(7) read \[C_{ij}^{(\alpha\beta)}=A^{(\alpha\beta)}\delta_{ij}-iB^{(\alpha\beta)}\epsilon_{ijk}\delta_{3k}-A^{(\alpha\beta)}\delta_{3i}\delta_{3j}, \tag{8}\] where \[A^{(\alpha\beta)} = \frac{\mu^{2}}{4}[{\cal G}^{(\alpha\beta)}(\omega)+{\cal G}^{(\alpha\beta)}(-\omega)],\] \[B^{(\alpha\beta)} = \frac{\mu^{2}}{4}[{\cal G}^{(\alpha\beta)}(\omega)-{\cal G}^{(\alpha\beta)}(-\omega)]. \tag{9}\] In the above, we have defined \[\mathcal{G}^{(\alpha\beta)}(\lambda)=\int_{-\infty}^{+\infty}d\Delta\tau\,e^{i\lambda\Delta\tau}\,G^{(\alpha\beta)}(\Delta\tau), \tag{10}\] which is the Fourier transform of the field correlation functions \[G^{(\alpha\beta)}(\Delta\tau)=\langle\phi(\mathbf{x}_{\alpha}(\tau))\phi(\mathbf{x}_{\beta}(\tau^{\prime}))\rangle, \tag{11}\] with \(\Delta\tau=\tau-\tau^{\prime}\). Similarly, \(H^{(\alpha\beta)}_{ij}\) in the above expressions can be derived by replacing the Fourier transform \(\mathcal{G}^{(\alpha\beta)}(\lambda)\) with the Hilbert transform \(\mathcal{K}^{(\alpha\beta)}(\lambda)\), which is \[\mathcal{K}^{(\alpha\beta)}(\lambda)=\frac{P}{\pi i}\int_{-\infty}^{+\infty}d\omega\frac{\mathcal{G}^{(\alpha\beta)}(\omega)}{\omega-\lambda}, \tag{12}\] with \(P\) denoting the principal value. It was shown in Refs. [4; 5] that the effective Hamiltonian \(H_{\rm eff}=\tilde{H}_{S}+H_{\rm eff}^{(12)}\) includes two pieces. The first term is the renormalization of the transition frequencies, i.e., the Lamb shift of each individual atom, and is derived by replacing the energy level spacing \(\omega_{0}\) (hereafter denoted \(\omega\)) in the atoms' Hamiltonian \(H_{S}\) (2) with a renormalized energy level spacing \[\tilde{\omega}=\omega+\frac{i\mu^{2}}{2}[\mathcal{K}^{(11)}(-\omega)-\mathcal{K}^{(11)}(\omega)]. \tag{13}\] Since this term can be regarded as a rescaling of the energy level gap, we shall not consider it any further. The second term is an environment-induced coupling between the atoms, which is \[H_{\rm eff}^{(12)}=-\sum_{i,j=1}^{3}\Omega_{ij}^{(12)}\sigma_{i}\otimes\sigma_{j}, \tag{14}\] where \[\Omega_{ij}^{(12)}=D\delta_{ij}-D\delta_{3i}\delta_{3j}, \tag{15}\] with \[D=\frac{i\mu^{2}}{4}[\mathcal{K}^{(12)}(-\omega)+\mathcal{K}^{(12)}(\omega)]. \tag{16}\] As a result, we can rewrite the master equation (5) as \[\frac{\partial\rho(\tau)}{\partial\tau} = -i\tilde{\omega}\sum_{\alpha=1}^{2}[\sigma_{3}^{(\alpha)},\rho(\tau)]+i\sum_{i,j=1}^{3}\Omega_{ij}^{(12)}[\sigma_{i}\otimes\sigma_{j},\rho(\tau)]+\frac{1}{2}\sum_{\alpha,\beta=1}^{2}\sum_{i,j=1}^{3}C_{ij}^{(\alpha\beta)}[2\sigma_{j}^{(\beta)}\rho\sigma_{i}^{(\alpha)}-\sigma_{i}^{(\alpha)}\sigma_{j}^{(\beta)}\rho-\rho\sigma_{i}^{(\alpha)}\sigma_{j}^{(\beta)}]. \tag{17}\] To study the dynamics of the two-atom system, it is convenient to solve the master equation (5) in the coupled basis, i.e., in the set \(\{|G\rangle=|00\rangle,|A\rangle=\frac{1}{\sqrt{2}}(|10\rangle-|01\rangle),|S\rangle=\frac{1}{\sqrt{2}}(|10\rangle+|01\rangle),|E\rangle=|11\rangle\}\). Moreover, with the help of Eqs. (8)-(15), we can write the coefficients of the dissipator in the master equation (17) as \[C_{ij}^{(11)}=A_{1}\delta_{ij}-iB_{1}\epsilon_{ijk}\delta_{3k}-A_{1}\delta_{3i}\delta_{3j},\] \[C_{ij}^{(22)}=A_{2}\delta_{ij}-iB_{2}\epsilon_{ijk}\delta_{3k}-A_{2}\delta_{3i}\delta_{3j},\] \[C_{ij}^{(12)}=A_{3}\delta_{ij}-iB_{3}\epsilon_{ijk}\delta_{3k}-A_{3}\delta_{3i}\delta_{3j},\] \[C_{ij}^{(21)}=A_{4}\delta_{ij}-iB_{4}\epsilon_{ijk}\delta_{3k}-A_{4}\delta_{3i}\delta_{3j},\] \[\Omega_{ij}^{(12)}=D\delta_{ij}-D\delta_{3i}\delta_{3j}. \tag{18}\]
Then, the master equation (17) in terms of the coupled basis can be rewritten as [73] \[\dot{\rho}_{GG} = -2(A_{1}+A_{2}-B_{1}-B_{2})\rho_{GG}+(A_{1}+A_{2}-A_{3}-A_{4}+B_{1}+B_{2}-B_{3}-B_{4})\rho_{AA}+(A_{1}+A_{2}+A_{3}+A_{4}+B_{1}+B_{2}+B_{3}+B_{4})\rho_{SS}+(A_{1}-A_{2}-A_{3}+A_{4}+B_{1}-B_{2}-B_{3}+B_{4})\rho_{AS}+(A_{1}-A_{2}+A_{3}-A_{4}+B_{1}-B_{2}+B_{3}-B_{4})\rho_{SA},\] \[\dot{\rho}_{EE} = -2(A_{1}+A_{2}+B_{1}+B_{2})\rho_{EE}+(A_{1}+A_{2}-A_{3}-A_{4}-B_{1}-B_{2}+B_{3}+B_{4})\rho_{AA}+(A_{1}+A_{2}+A_{3}+A_{4}-B_{1}-B_{2}-B_{3}-B_{4})\rho_{SS}+(-A_{1}+A_{2}+A_{3}-A_{4}+B_{1}-B_{2}-B_{3}+B_{4})\rho_{AS}+(-A_{1}+A_{2}-A_{3}+A_{4}+B_{1}-B_{2}+B_{3}-B_{4})\rho_{SA},\] \[\dot{\rho}_{AA} = -2(A_{1}+A_{2}-A_{3}-A_{4})\rho_{AA}+(A_{1}+A_{2}-A_{3}-A_{4}-B_{1}-B_{2}+B_{3}+B_{4})\rho_{GG}+(A_{1}+A_{2}-A_{3}-A_{4}+B_{1}+B_{2}-B_{3}-B_{4})\rho_{EE}+(-B_{1}+B_{2}+B_{3}-B_{4})\rho_{AS}+(-B_{1}+B_{2}+B_{3}-B_{4})\rho_{SA},\] \[\dot{\rho}_{SS} = -2(A_{1}+A_{2}+A_{3}+A_{4})\rho_{SS}+(A_{1}+A_{2}+A_{3}+A_{4}-B_{1}-B_{2}-B_{3}-B_{4})\rho_{GG}+(A_{1}+A_{2}+A_{3}+A_{4}+B_{1}+B_{2}+B_{3}+B_{4})\rho_{EE}+(-B_{1}+B_{2}+B_{3}-B_{4})\rho_{AS}+(-B_{1}+B_{2}+B_{3}-B_{4})\rho_{SA},\] \[\dot{\rho}_{AS} = (A_{1}-A_{2}-A_{3}+A_{4}-B_{1}+B_{2}+B_{3}-B_{4})\rho_{GG}+(-A_{1}+A_{2}+A_{3}-A_{4}-B_{1}+B_{2}+B_{3}-B_{4})\rho_{EE}-2(A_{1}+A_{2}+2iD)\rho_{AS},\] \[\dot{\rho}_{SA} = (A_{1}-A_{2}+A_{3}-A_{4}-B_{1}+B_{2}-B_{3}+B_{4})\rho_{GG}+(-A_{1}+A_{2}-A_{3}+A_{4}-B_{1}+B_{2}-B_{3}+B_{4})\rho_{EE}-2(A_{1}+A_{2}-2iD)\rho_{SA},\] \[\dot{\rho}_{GE} = -2(A_{1}+A_{2})\rho_{GE},\qquad\dot{\rho}_{EG}=-2(A_{1}+A_{2})\rho_{EG}, \tag{19}\] where \(\rho_{IJ}=\langle I|\rho|J\rangle\), \(I,J\in\{G,A,S,E\}\), and \(\dot{\rho}_{IJ}\) denotes the derivative with respect to the atomic proper time \(\tau\). Note that the parameter \(D\) encodes the environment-induced interatomic interaction, so if \(D=0\), Eq. (19) reduces to the scenario without the environment-induced interatomic interaction for two atoms coupled to the scalar field, as shown in Ref. [73]. It is worth mentioning that if the initial density matrix takes the X form, namely with nonzero elements only along the diagonal and antidiagonal in the decoupled basis \(\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}\), then the X structure is maintained during the evolution. To study the entanglement dynamics of the two-atom system, we use the concurrence, introduced by Wootters [74], to characterize quantum entanglement. For X states, the concurrence is analytically given by [75] \[C[\rho(\tau)]=\max\{0,K_{1}(\tau),K_{2}(\tau)\}, \tag{20}\] where \[K_{1}(\tau) = \sqrt{[\rho_{AA}(\tau)-\rho_{SS}(\tau)]^{2}-[\rho_{AS}(\tau)-\rho_{SA}(\tau)]^{2}}-2\sqrt{\rho_{GG}(\tau)\rho_{EE}(\tau)}, \tag{21}\] \[K_{2}(\tau) = -\sqrt{[\rho_{AA}(\tau)+\rho_{SS}(\tau)]^{2}-[\rho_{AS}(\tau)+\rho_{SA}(\tau)]^{2}}+2|\rho_{GE}(\tau)|. \tag{22}\]
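As a cross-check on Eqs. (20)-(22), the concurrence can also be computed directly from Wootters' spin-flip construction [74]; here is a minimal numpy sketch (ours), which for X states agrees with \(\max\{0,K_{1},K_{2}\}\):

```python
import numpy as np

def concurrence(rho):
    """Wootters concurrence of a two-qubit density matrix:
    C = max(0, l1-l2-l3-l4), with l_i the decreasingly ordered square
    roots of the eigenvalues of rho (sy⊗sy) rho* (sy⊗sy)."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    flip = np.kron(sy, sy)
    lam = np.linalg.eigvals(rho @ flip @ rho.conj() @ flip)
    lam = np.sort(np.sqrt(np.abs(lam.real)))[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Example: the symmetric state |S> = (|10> + |01>)/sqrt(2) has C = 1.
psi = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)
print(concurrence(np.outer(psi, psi)))  # -> 1.0 (up to numerical error)
```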
### Scalar field propagator in \(\kappa\)-deformed spacetime and Minkowski spacetime

We are interested in entanglement dynamics in \(\kappa\)-deformed spacetime and Minkowski spacetime. Before studying them, we briefly review the \(\kappa\)-deformed Klein-Gordon theory, especially the field correlation function in \(\kappa\)-deformed spacetime, which plays a very important role in the following calculations. Let us first give the basic ingredients for the field correlation function of the scalar field in \(\kappa\)-deformed spacetime. For more details, one can refer to Refs. [59; 60], where the \(\kappa\)-deformed Klein-Gordon theory has been investigated in the commutative spacetime itself. This treatment allows us to explicitly define the trajectories of the moving atoms in the commutative spacetime. Specifically, in \(\kappa\)-deformed spacetime, the time and space coordinates are not commutative but obey the Lie algebra type commutation relations \[[\hat{x}_{i},\hat{x}_{j}]=0,\ \ \ [\hat{x}_{0},\hat{x}_{i}]=\frac{i}{\kappa}\hat{x}_{i}, \tag{23}\] with \(i,j\in\{1,2,3\}\) and the positive parameter \(\kappa\) representing the deformation of the spacetime. In Refs. [48; 49] the authors indicated that the symmetry of \(\kappa\)-deformed spacetime is well known to be the \(\kappa\)-Poincare algebra; the defining relations of this algebra involve the deformation parameter \(\kappa\), and when \(\kappa\to\infty\) it reduces to the Poincare algebra. In order to construct the \(\kappa\)-Poincare algebra, we can seek realizations of the noncommutative coordinates \(\hat{x}_{\mu}\) in terms of ordinary commutative coordinates \(x_{\mu}\) and the corresponding derivatives \(\partial_{\mu}=\frac{\partial}{\partial x_{\mu}}\). These realizations define a unique mapping from functions on the noncommutative space to functions on the commutative space. In these references, a general ansatz for noncommutative coordinates satisfying the algebra (23) is given by \[\hat{x}_{i}=x_{i}\varphi(A),\ \ \ \hat{x}_{0}=x_{0}\psi(A)+\frac{i}{\kappa}x_{i}\partial_{i}\gamma(A), \tag{24}\] where \(\varphi\), \(\psi\) and \(\gamma\) are functions of \(A=-\frac{i}{\kappa}\partial_{0}\). Inserting this ansatz (24) into (23), one has \[\gamma=1+\frac{\varphi^{\prime}}{\varphi}\psi, \tag{25}\] where \(\varphi^{\prime}\) is the derivative of \(\varphi\) with respect to \(A\). Note that here \(\varphi\), \(\psi\), and \(\gamma\) are positive functions with the boundary conditions \[\varphi(0)=1,\ \ \ \psi(0)=1, \tag{26}\] and \(\gamma(0)=1+\varphi^{\prime}(0)\) has to be finite. It is worth mentioning that, in the above equations, \(\varphi\) characterizes the arbitrary realizations of the noncommutative coordinates in terms of the commutative coordinates and their derivatives. Furthermore, let \(M_{\mu\nu}\) denote the generators obeying the ordinary undeformed \(so(n-1,1)\) algebra: \[[M_{\mu\nu},M_{\lambda\rho}]=\eta_{\nu\lambda}M_{\mu\rho}-\eta_{\mu\lambda}M_{\nu\rho}-\eta_{\nu\rho}M_{\mu\lambda}+\eta_{\mu\rho}M_{\nu\lambda},\] \[M_{\mu\nu}=-M_{\nu\mu},\ \ \ \eta_{\mu\nu}=\text{diag}(-1,1,1,1). \tag{27}\] It is required that the commutator \([M_{\mu\nu},\hat{x}_{\lambda}]\) between the generators \(M_{\mu\nu}\) and the noncommutative coordinates \(\hat{x}_{\lambda}\) be antisymmetric in the indices \(\mu\) and \(\nu\), and be linear in \(\hat{x}_{\lambda}\) and \(M_{\mu\nu}\). Note that as \(\kappa\to\infty\), we have a smooth commutative limit. In this regard, there emerge two classes of possible realizations, given by \(\psi=1\) and \(\psi=1+2A\). We will focus on the \(\psi=1\) case, for which the explicit form of \(M_{\mu\nu}\) is \[M_{i0} = x_{i}\partial_{0}\varphi\frac{e^{2A}-1}{2A}-x_{0}\partial_{i}\frac{1}{\varphi}+\frac{i}{\kappa}x_{i}\Delta\frac{1}{2\varphi}-\frac{i}{\kappa}x_{k}\partial_{k}\partial_{i}\frac{\gamma}{\varphi},\] \[M_{ij} = x_{i}\partial_{j}-x_{j}\partial_{i}, \tag{28}\] where \(\Delta=\partial_{k}\partial_{k}\). In Refs.
[76; 77; 78; 79; 80; 81; 82] the Dirac derivatives \(D_{\mu}\) and the invariant Laplace operator \(\square\) have been introduced to obtain the generalized Klein-Gordon equation invariant under the \(\kappa\)-Poincare algebra, through the following relations \[[M_{\mu\nu},D_{\lambda}]=\eta_{\nu\lambda}D_{\mu}-\eta_{\mu\lambda}D_{\nu},\;\;\;[D_{\mu},D_{\nu}]=0,\] \[[M_{\mu\nu},\square]=0,\;\;\;\;\;[\square,\hat{x}_{\mu}]=2D_{\mu}, \tag{29}\] with \[D_{i}=\partial_{i}\frac{e^{-A}}{\varphi},\;\;\;D_{0}=\partial_{0}\frac{\sinh A}{A}+\frac{i}{\kappa}\Delta(\frac{e^{-A}}{2\varphi^{2}}),\] \[\square=\Delta(\frac{e^{-A}}{\varphi^{2}})+2\partial_{0}^{2}\frac{1-\cosh A}{A^{2}}. \tag{30}\] Note that \(D_{\mu}\) and \(M_{\mu\nu}\) given above generate the \(\kappa\)-Poincare algebra, whose relations are the same as those of the usual Poincare algebra. However, the explicit form of these generators is modified, and the modifications depend on the deformation parameter. With Eq. (30), one finds that the Casimir of this algebra, \(D_{\mu}D_{\mu}\), can be expressed in terms of the \(\square\) operator as \[D_{\mu}D_{\mu}=\square(1-\frac{1}{4\kappa^{2}}\square). \tag{31}\] When \(\kappa\to\infty\), we have \(D_{\mu}D_{\mu}\to\partial_{\mu}\partial_{\mu}\), which reduces to the usual relativistic dispersion relation. Generalizing the notions from commutative space, it is natural to write the generalized Klein-Gordon equation, using the Casimir invariant under the \(\kappa\)-Poincare algebra, as \[\biggl{(}\square\bigl{(}1-\frac{1}{4\kappa^{2}}\square\bigr{)}-m^{2}\biggr{)}\phi(\mathbf{x})=0\;. \tag{32}\] As a result of the realization of the noncommutative coordinates in terms of the commuting ones and the corresponding derivatives, both the generators and the Casimir of the \(\kappa\)-Poincare algebra can be expressed in terms of the commutative coordinates and their derivatives; the scalar field and the operators appearing in the \(\kappa\)-deformed Klein-Gordon equation (32) are then well defined in commutative spacetime. Therefore, we can use the standard tools of field theory defined in commutative spacetime to analyze the \(\kappa\)-deformed Klein-Gordon theory. The deformed dispersion relation rooted in Eq. (32) reads \[4\kappa^{2}\sinh^{2}(\frac{p_{0}}{2\kappa})-p_{i}^{2}\frac{e^{-\frac{p_{0}}{\kappa}}}{\varphi^{2}(\frac{p_{0}}{\kappa})}-\frac{1}{4\kappa^{2}}\bigg{[}4\kappa^{2}\sinh^{2}(\frac{p_{0}}{2\kappa})-p_{i}^{2}\frac{e^{-\frac{p_{0}}{\kappa}}}{\varphi^{2}(\frac{p_{0}}{\kappa})}\bigg{]}^{2}=m^{2}, \tag{33}\] where \(p_{0}=i\partial_{0}\) and \(p_{i}=-i\partial_{i}\). We can see from (33) that the nonlocal and noncausal character of the field becomes apparent. The Hamiltonian for this field is too complicated to be expressed in a compact form. Therefore, in order to obtain the two-point correlation function in \(\kappa\)-deformed spacetime, we choose \(\varphi(\frac{p_{0}}{\kappa})=e^{-\frac{p_{0}}{2\kappa}}\) for simplicity, as done in Refs. [59; 60]. Moreover, from here onwards we keep only terms up to second order in \(1/\kappa\), since \(\kappa\) is expected to be very large in the theory.
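With this choice of \(\varphi\), the factor \(e^{-p_{0}/\kappa}/\varphi^{2}(p_{0}/\kappa)\) equals one, and the truncation of Eq. (33) at second order in \(1/\kappa\) can be verified symbolically. The following sketch is our own (the symbol names and the use of sympy are our choices):

```python
import sympy as sp

p0, p = sp.symbols('p0 p', positive=True)
eps = sp.symbols('eps', positive=True)   # eps = 1/kappa

# Eq. (33) with phi(p0/kappa) = exp(-p0/(2*kappa)), for which
# exp(-p0/kappa)/phi**2 = 1; kappa has been replaced by 1/eps.
X = 4*sp.sinh(p0*eps/2)**2/eps**2 - p**2
lhs = X - eps**2*X**2/4

# Series expansion in eps = 1/kappa, keeping terms up to second order.
print(sp.simplify(sp.series(lhs, eps, 0, 3).removeO()))
# expected (up to reordering):
#   p0**2 - p**2 + eps**2*(p0**4/12 - (p0**2 - p**2)**2/4)
```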
Through a direct calculation, we find that the two-point correlation function reads \[G^{+}(\mathbf{x},\mathbf{x}^{\prime}) = \frac{1}{4\pi^{2}}\frac{1}{(\mathbf{x}-\mathbf{x}^{\prime})^{2}-(t-t^{\prime})^{2}} \tag{34}\] \[-\frac{1}{16\pi^{2}\kappa^{2}}\frac{(\mathbf{x}-\mathbf{x}^{\prime})^{2}+3(t-t^{\prime})^{2}}{[(\mathbf{x}-\mathbf{x}^{\prime})^{2}-(t-t^{\prime})^{2}]^{3}}\] \[-\frac{1}{4\pi^{2}\kappa^{2}}\frac{[(\mathbf{x}-\mathbf{x}^{\prime})^{2}+(t-t^{\prime})^{2}](t-t^{\prime})^{2}}{[(\mathbf{x}-\mathbf{x}^{\prime})^{2}-(t-t^{\prime})^{2}]^{4}}.\] Note that for \(\kappa\rightarrow\infty\), the two-point correlation function in Eq. (34) reduces, as expected, to the Minkowski spacetime case [83], \[G^{+}(\mathbf{x},\mathbf{x}^{\prime})=\frac{1}{4\pi^{2}}\frac{1}{(\mathbf{x}-\mathbf{x}^{\prime})^{2}-(t-t^{\prime})^{2}}. \tag{35}\] In what follows, these two correlation functions will be used to explore the entanglement dynamics of the two-atom system.

## 3 Entanglement dynamics for two atoms without the environment-induced interatomic interaction

Now let us consider the entanglement dynamics of a two-atom system interacting with an external environment. Three different initial states will be considered: 1) the separable state \(|E\rangle\), 2) the symmetric entangled state \(|S\rangle\), 3) the antisymmetric entangled state \(|A\rangle\). Note that none of these initial states gives rise to the environment-induced interatomic interaction. We focus on how the relativistic motion affects the entanglement dynamics. Specifically, we will analyze the entanglement dynamics for static atoms, for inertial atoms moving with a constant velocity, and for circularly accelerated atoms, coupled to a massless scalar field in \(\kappa\)-deformed spacetime and in Minkowski spacetime. In particular, we will compare the phenomena of entanglement generation and degradation in these two universes.

### Entanglement dynamics of two static atoms

We first consider the entanglement dynamics of two static atoms separated by a distance \(L\), with trajectories \[t_{1}(\tau)=\tau, x_{1}(\tau)=0, y_{1}(\tau)=0, z_{1}(\tau)=0,\] \[t_{2}(\tau)=\tau, x_{2}(\tau)=0, y_{2}(\tau)=0, z_{2}(\tau)=L. \tag{10}\] Substituting the above trajectories into the two-point correlation function in \(\kappa\)-deformed spacetime (34), we obtain \[G^{11}(x,x^{\prime}) = G^{22}(x,x^{\prime})=-\frac{1}{4\pi^{2}}\frac{1}{\triangle\tau^{2}}-\frac{1}{16\pi^{2}\kappa^{2}}\frac{1}{\triangle\tau^{4}},\] \[G^{12}(x,x^{\prime}) = G^{21}(x,x^{\prime})=-\frac{1}{4\pi^{2}}\bigg{[}\frac{1}{\triangle\tau^{2}-L^{2}}-\frac{1}{4\kappa^{2}}\frac{3\triangle\tau^{2}+L^{2}}{(\triangle\tau^{2}-L^{2})^{3}}+\frac{1}{\kappa^{2}}\frac{\triangle\tau^{4}+L^{2}\triangle\tau^{2}}{(\triangle\tau^{2}-L^{2})^{4}}\bigg{]}.\] By invoking the residue theorem, the corresponding Fourier transforms of the field correlation functions are found to be \[\mathcal{G}^{11}(\lambda) = \mathcal{G}^{22}(\lambda)=\frac{\lambda}{2\pi}\bigg{[}1-\frac{7\lambda^{2}}{96\kappa^{2}}\bigg{]}\theta(\lambda),\] \[\mathcal{G}^{12}(\lambda) = \mathcal{G}^{21}(\lambda)=\frac{\lambda}{2\pi}\bigg{[}\frac{\sin\lambda L}{\lambda L}-\frac{\lambda^{2}\cos\lambda L}{24\kappa^{2}}\bigg{]}\theta(\lambda), \tag{11}\] where \(\theta(x)\) is the step function. Substituting the Fourier transforms (11) into Eq.
(9), one can find \[A_{1}=A_{2}=B_{1}=B_{2}=\frac{\Gamma_{0}}{4}\bigg{[}1-\frac{\omega^{2}}{24\kappa^{2}}\bigg{]},\] \[A_{3}=A_{4}=B_{3}=B_{4}=\frac{\Gamma_{0}}{4}\bigg{[}\frac{\sin\omega L}{\omega L}-\frac{\omega^{2}\cos\omega L}{24\kappa^{2}}\bigg{]}, \tag{12}\] with \(\Gamma_{0}=\frac{\mu^{2}\omega}{2\pi}\) being the spontaneous emission rate of each individual atom. Notice that for \(\kappa\rightarrow\infty\), the functions (11) in \(\kappa\)-deformed spacetime reduce, as expected, to those of two atoms at rest in Minkowski spacetime. Thus the relevant coefficients of Eq. (18) for this case are \[A_{1}=A_{2}=B_{1}=B_{2}=\frac{\Gamma_{0}}{4},\] \[A_{3}=A_{4}=B_{3}=B_{4}=\frac{\Gamma_{0}}{4}\frac{\sin\omega L}{\omega L}. \tag{19}\] Preparing the initial state, e.g., \(|E\rangle\), \(|S\rangle\) or \(|A\rangle\), and inserting the relevant coefficients above into Eqs. (19), we can solve the master equation correspondingly. The corresponding entanglement in Eq. (20) is then given by \[C[\rho(\tau)]=\max\{0,K_{1}(\tau)\}, \tag{20}\] where \[K_{1}(\tau)=\sqrt{[\rho_{AA}(\tau)-\rho_{SS}(\tau)]^{2}}-2\sqrt{\rho_{GG}(\tau)\rho_{EE}(\tau)}, \tag{21}\] from which we can see that the concurrence is independent of the environment-induced interatomic interaction. The explicit entanglement dynamics for various situations is investigated in the following.

#### 3.1.1 Two static atoms initially prepared in a separable state \(|E\rangle\)

We start with the entanglement dynamics for static atoms initially prepared in a separable state \(|E\rangle\). From Eqs. (20)-(21), we can see that entanglement can be generated only when the factor \(\sqrt{[\rho_{AA}(\tau)-\rho_{SS}(\tau)]^{2}}\) outweighs the threshold factor \(2\sqrt{\rho_{GG}(\tau)\rho_{EE}(\tau)}\). We note that such entanglement generation occurs only after the system, undergoing purely spontaneous-emission evolution, has evolved for a finite time. This phenomenon is called the delayed sudden birth of entanglement [37]. Let us consider the case where the interatomic separation vanishes (\(L\to 0\)). Then \(A_{i}=B_{i}=\frac{\Gamma_{0}}{4}[1-\frac{\omega^{2}}{24\kappa^{2}}]\) in the \(\kappa\)-deformed case and \(A_{i}=B_{i}=\frac{\Gamma_{0}}{4}\) in the Minkowski case, with \(i\in\{1,2,3,4\}\). Therefore \(\rho_{AA}(\tau)\) remains zero during the evolution both in \(\kappa\)-deformed spacetime and in Minkowski spacetime. In such a case the threshold always outweighs the population \(\rho_{SS}(\tau)\), and no quantum entanglement can be generated in either universe. For an interatomic separation comparable with the transition wavelength (\(L\sim\omega^{-1}\)), we solve Eq. (19) numerically. We show the corresponding results in Fig. 1. We find that, unlike in the vanishing-separation case, the delayed sudden birth of entanglement occurs both for the \(\kappa\)-deformed spacetime case and for the Minkowski spacetime case. Specifically, one can note that the time at which the entanglement begins to be generated depends on the interatomic separation and on the spacetime deformation. The larger the distance between the two atoms is, the earlier the entanglement generation occurs. The deformation of spacetime may delay the entanglement creation. Furthermore, the amplitude of the created entanglement is also influenced by the interatomic separation and the spacetime deformation, which we will discuss in detail in the following.
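Since for these initial states the off-diagonal elements \(\rho_{AS}\), \(\rho_{SA}\) vanish and \(A_{i}=B_{i}\), the master equation (19) closes on the four populations. The following minimal numerical sketch is our own (not the authors' code); function names and parameter values are ours, and the Minkowski coefficients for static atoms are used with initial state \(|E\rangle\):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Populations (rho_GG, rho_EE, rho_AA, rho_SS) for two static atoms in
# Minkowski spacetime, initial state |E>.  With A_i = B_i, Eq. (19) reduces to
#   d(rho_GG) = 4(a-b) rho_AA + 4(a+b) rho_SS
#   d(rho_EE) = -8a rho_EE
#   d(rho_AA) = 4(a-b) (rho_EE - rho_AA)
#   d(rho_SS) = 4(a+b) (rho_EE - rho_SS)
Gamma0, wL = 1.0, 1.0
a = Gamma0/4.0                      # A1 = A2 = B1 = B2
b = Gamma0/4.0*np.sinc(wL/np.pi)    # A3 = ... = (Gamma0/4) sin(wL)/(wL)

def rhs(tau, r):
    rGG, rEE, rAA, rSS = r
    return [4*(a - b)*rAA + 4*(a + b)*rSS,
            -8*a*rEE,
            4*(a - b)*(rEE - rAA),
            4*(a + b)*(rEE - rSS)]

sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 1.0, 0.0, 0.0],
                dense_output=True, rtol=1e-8, atol=1e-10)
tau = np.linspace(0.0, 10.0, 400)
rGG, rEE, rAA, rSS = sol.sol(tau)
C = np.maximum(0.0, np.abs(rAA - rSS) - 2.0*np.sqrt(np.abs(rGG*rEE)))
birth = tau[C > 0][0] if np.any(C > 0) else None
print("delayed sudden birth at Gamma0*tau ~", birth)
```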
When the value of the spacetime deformation parameter is large, no matter how large the atomic separation is, the data points essentially coincide with the solid line, i.e., the entanglement generation for two static atoms during the evolution in \(\kappa\)-deformed spacetime is almost identical to the Minkowski spacetime case in this situation.

Figure 1: Time evolution of concurrence for two static atoms initially prepared in \(|E\rangle\). \(M\) denotes the Minkowski spacetime.

In Fig. 2, we study the effects of the atomic separation on the maximum of the entanglement generated during the evolution. It is shown that the maximum of entanglement is a periodic function of the interatomic separation, and the amplitude decays with increasing interatomic separation both in \(\kappa\)-deformed spacetime and in Minkowski spacetime. Remarkably, when the modification of spacetime is relatively strong, e.g., \(\kappa/\omega=1\), we find that the entanglement behaviors of the two atoms as a function of the interatomic distance in \(\kappa\)-deformed spacetime are different from those in Minkowski spacetime [see Fig. 2 (a)]. However, when the modification of spacetime is relatively weak, e.g., \(\kappa/\omega=1000\), we find that the entanglement for the \(\kappa\)-deformed spacetime case and the Minkowski spacetime case behaves almost the same [see Fig. 2 (b)]. In this regard, we infer that for the case of a large deformation parameter (LDP), all the laws of physics in \(\kappa\)-deformed spacetime essentially reduce to those in flat spacetime. Thus in this case it seems difficult to distinguish these two spacetimes.

Figure 2: The maximum of concurrence during evolution for two static atoms via the interatomic separation, initially prepared in \(|E\rangle\).

#### 3.1.2 Two static atoms initially prepared in entangled states \(|A\rangle\) and \(|S\rangle\)

Here we investigate the entanglement degradation for two static atoms initially prepared in the antisymmetric entangled state \(|A\rangle\) and the symmetric entangled state \(|S\rangle\), both of which are maximally entangled. From Fig. 3, we find that the concurrence decreases monotonically with time and goes to zero in the infinite time limit in both universes. Moreover, as the interatomic separation increases, the entanglement magnitude difference between the \(\kappa\)-deformed spacetime case and the Minkowski spacetime case becomes smaller. This means that it becomes more difficult for us to distinguish these two universes through the two-atom entanglement dynamics. We also find that the response of the entanglement magnitude to the interatomic separation behaves quite differently for the different initial entangled states: the entanglement magnitude for the initial antisymmetric (symmetric) entangled state decreases (increases) with increasing interatomic separation. In particular, for a fixed evolution time, the concurrence drops to an asymptotic value as the spacetime deformation parameter \(\kappa\) increases [see Figs. 3 (c) and (d)].

Figure 3: Concurrence as a function of evolution time \(\Gamma_{0}\tau\) (a, b) and deformation parameter \(\kappa/\omega\) (c, d) for static atoms initially prepared in \(|A\rangle\) (left) and \(|S\rangle\) (right).

In other words, when the deformation parameter
\(\kappa\) is large, the entanglement dynamics curves for two atoms in \(\kappa\)-deformed spacetime almost coincide with those for the Minkowski spacetime case [also see Figs. 3 (a) and (b)]. We give a brief summary here. For two static atoms initially prepared in the above three specific states, when the spacetime deformation parameter \(\kappa\) is large, the entanglement dynamics in \(\kappa\)-deformed spacetime is almost indistinguishable from that in Minkowski spacetime, regardless of the choice of interatomic separation. This implies that we cannot distinguish these two spacetimes using the entanglement dynamics of static atoms with these initial states when the deformation parameter is large. However, on theoretical grounds the deformation parameter involved in \(\kappa\)-deformed spacetime is expected to be large [84; 85; 86], so that \(\kappa\)-deformed spacetime generally exhibits almost the same properties as Minkowski spacetime and approximately obeys the Poincare algebra. Hence, an issue arises: can we find some external auxiliary conditions, such as relativistic motion, that allow us to distinguish these two spacetimes by means of entanglement dynamics when the spacetime deformation parameter is large? This is what we will study in the following.

### Entanglement dynamics of two uniformly moving atoms

In this section we investigate the entanglement dynamics of two atoms moving with a constant velocity in \(\kappa\)-deformed spacetime and in Minkowski spacetime. We mainly focus on how the velocity affects the entanglement behaviors in these two different spacetimes. The trajectories of the two inertial atoms, which move with a constant velocity and are separated from each other by a distance \(L\), can be described as \[t_{1}(\tau)=\gamma\tau, x_{1}(\tau)=\upsilon\gamma\tau, y_{1}(\tau)=0, z_{1}(\tau)=0,\] \[t_{2}(\tau)=\gamma\tau, x_{2}(\tau)=\upsilon\gamma\tau, y_{2}(\tau)=0, z_{2}(\tau)=L, \tag{10}\] where \(\upsilon\) denotes the velocity and \(\gamma=1/\sqrt{1-\upsilon^{2}}\) is the usual Lorentz factor. Substituting the trajectories (10) into Eq. (34), the two-point correlation functions in \(\kappa\)-deformed spacetime can be rewritten as \[G^{11}(x,x^{\prime})=G^{22}(x,x^{\prime})=-\frac{1}{4\pi^{2}}\frac{1}{\triangle\tau^{2}}+\frac{1}{16\pi^{2}\kappa^{2}}\frac{\gamma^{2}(3+\upsilon^{2})}{\triangle\tau^{4}}-\frac{1}{4\pi^{2}\kappa^{2}}\frac{\gamma^{4}(\upsilon^{2}+1)}{\triangle\tau^{4}},\] \[G^{12}(x,x^{\prime})=G^{21}(x,x^{\prime})=-\frac{1}{4\pi^{2}}\frac{1}{\triangle\tau^{2}-L^{2}}+\frac{1}{16\pi^{2}\kappa^{2}}\frac{\gamma^{2}(3+\upsilon^{2})\triangle\tau^{2}+L^{2}}{(\triangle\tau^{2}-L^{2})^{3}}\] \[-\frac{1}{4\pi^{2}\kappa^{2}}\frac{\gamma^{4}(\upsilon^{2}+1)\triangle\tau^{4}+L^{2}\gamma^{2}\triangle\tau^{2}}{(\triangle\tau^{2}-L^{2})^{4}}.
\tag{11}\] Subsequently, the Fourier transforms of the above correlation functions can be calculated with the residue theorem, \[\mathcal{G}^{11}(\lambda) =\mathcal{G}^{22}(\lambda)=\frac{\lambda}{2\pi}\bigg{[}1-\frac{\lambda^{2}}{24\kappa^{2}}\frac{1+\upsilon^{2}}{(1-\upsilon^{2})^{2}}-\frac{\lambda^{2}}{96\kappa^{2}}\frac{3+\upsilon^{2}}{1-\upsilon^{2}}\bigg{]}\theta(\lambda),\] \[\mathcal{G}^{12}(\lambda) =\mathcal{G}^{21}(\lambda)=\frac{\lambda}{2\pi}\bigg{[}\frac{\sin\lambda L}{\lambda L}+\frac{f(\lambda,L,\upsilon)}{24\lambda\kappa^{2}L^{3}}\bigg{]}\theta(\lambda), \tag{3.10}\] where \(f(\lambda,L,\upsilon)=\frac{(3\upsilon^{4}\lambda L-3\lambda^{2}L^{3})\cos\lambda L-3\upsilon^{2}(\upsilon^{2}+2\lambda^{2}L^{2})\sin\lambda L}{(1-\upsilon^{2})^{2}}\). Substituting the Fourier transforms (3.10) into Eq. (2.9), we obtain the coefficients in Eq. (2.18) for the \(\kappa\)-deformed spacetime case, \[A_{1}=A_{2}=B_{1}=B_{2}=\frac{\Gamma_{0}}{4}\bigg{[}1-\frac{\omega^{2}}{6\kappa^{2}}\frac{1+\upsilon^{2}}{(1-\upsilon^{2})^{2}}+\frac{\omega^{2}}{24\kappa^{2}}\frac{3+\upsilon^{2}}{1-\upsilon^{2}}\bigg{]},\] \[A_{3}=A_{4}=B_{3}=B_{4}=\frac{\Gamma_{0}}{4}\bigg{[}\frac{\sin\omega L}{\omega L}+\frac{1}{24\omega\kappa^{2}L^{3}}f(\omega,L,\upsilon)\bigg{]}. \tag{3.11}\] Similarly, the coefficients in Eq. (2.18) for two uniformly moving atoms in Minkowski spacetime can be derived as \[A_{1}=A_{2}=B_{1}=B_{2}=\frac{\Gamma_{0}}{4},\] \[A_{3}=A_{4}=B_{3}=B_{4}=\frac{\Gamma_{0}}{4}\frac{\sin\omega L}{\omega L}. \tag{3.12}\] Note that the coefficients in Eq. (3.12) are identical to those of two static atoms in Minkowski spacetime (3.5). This means that, in Minkowski spacetime, the dynamics of two uniformly moving atoms is the same as that of two static atoms. According to the above coefficients, we can infer that the dynamics of two uniformly moving atoms in \(\kappa\)-deformed spacetime depends on the velocity of the atoms, while in Minkowski spacetime it does not. It is therefore worth investigating how the velocity affects the entanglement dynamics, in order to see whether it is possible to distinguish these two spacetimes with the help of relativistic motion.

#### 3.2.1 Two uniformly moving atoms initially prepared in a separable state \(|E\rangle\)

Let us start with the effects of the velocity on the entanglement generation for two uniformly moving atoms initially prepared in a separable state \(|E\rangle\). When the interatomic separation is vanishingly small (\(L\to 0\)), one directly obtains from Eqs. (3.11) and (3.12) that in this limit \(A_{i}=B_{i}\), and thus the population \(\rho_{AA}(\tau)\) remains zero during the evolution. Therefore, the threshold factor \(2\sqrt{\rho_{GG}(\tau)\rho_{EE}(\tau)}\) always outweighs the population \(\rho_{SS}(\tau)\), meaning that for vanishing interatomic distance no entanglement is generated for the two atoms in either universe. For an interatomic separation comparable with the transition wavelength (\(L\sim\omega^{-1}\)), we compare the entanglement dynamics for the \(\kappa\)-deformed spacetime case and the Minkowski spacetime case in Fig. 4. We find that the waiting time to generate entanglement depends on the velocity and on the interatomic separation. The larger the interatomic separation is, the earlier the entanglement is generated.
Besides, with the increase of the velocity of the atoms, the waiting time for generating entanglement becomes longer, i.e., the sudden creation of entanglement occurs later. However, we note that with the increase of velocity, the difference between the entanglement dynamics for the \(\kappa\)-deformed spacetime case and that for the Minkowski spacetime case becomes more pronounced. That is, even when the spacetime deformation parameter \(\kappa\) is large, it is still in principle possible to discriminate these two spacetimes with the help of relativistic motion. This result is quite different from that of the two static atoms case discussed above. We also note that the maximum entanglement amplitude created depends strongly on the interatomic separation, as can also be seen in detail in Fig. 5 (a). In Fig. 5 (b), we show the behavior of the maximum of entanglement generated during evolution with the assistance of the atomic motion. We find that when the velocity of the atoms is large enough, the maximum of concurrence for the \(\kappa\)-deformed spacetime case and that for the Minkowski spacetime case behave differently, even when the spacetime deformation parameter is relatively large. This result is completely different from that shown in Fig. 2 (b). Moreover, in Minkowski spacetime the maximum of concurrence is independent of the velocity and remains constant [see Fig. 5 (b)], consistent with Eq. (3.12). In addition, as the velocity increases, the maximum of concurrence presents an increasing difference between the \(\kappa\)-deformed spacetime case and the Minkowski spacetime case. This tells us that when the velocity is large enough, the entanglement behaviors of the two atoms can in principle be used to clearly discriminate these two universes, even when the spacetime deformation parameter \(\kappa\) is relatively large.

Figure 4: Time evolution of concurrence for two uniformly moving atoms initially prepared in \(|E\rangle\).

#### 3.2.2 Two uniformly moving atoms initially prepared in entangled states \(|A\rangle\) and \(|S\rangle\)

Now we investigate the effects of the velocity on the entanglement dynamics for two uniformly moving atoms initially prepared in the two kinds of maximally entangled states, i.e., \(|A\rangle\) and \(|S\rangle\), as shown in Fig. 6. We find that as the two-atom system evolves, the concurrence decreases monotonically and finally decays to zero in the infinite time limit, both for the \(\kappa\)-deformed spacetime case and for the Minkowski spacetime case. However, we note that although the spacetime deformation parameter \(\kappa\) is relatively large, the atomic concurrence curve in \(\kappa\)-deformed spacetime with a LDP still does not overlap with that for the Minkowski spacetime case, in contrast with the static-atom results shown in Fig. 3. This result originates from the influence of the atomic velocity. Moreover, we also find that as the velocity of the atoms increases, the difference between the entanglement dynamics in \(\kappa\)-deformed spacetime and that in Minkowski spacetime becomes more distinct. For the initial states \(|A\rangle\) and \(|S\rangle\), the dependence of the evolution on the interatomic distance is opposite: in the former case the entanglement decreases with increasing interatomic distance, while in the latter case it increases as the interatomic distance increases.
This suggests that the symmetry of the atomic entangled state may play an important role in distinguishing the two universes through the entanglement dynamics.

Figure 5: The maximum of concurrence during evolution for two uniformly moving atoms initially prepared in \(|E\rangle\).

Figure 6: Time evolution of concurrence for two uniformly moving atoms initially prepared in \(|A\rangle\) (a) and \(|S\rangle\) (b).

### Entanglement dynamics of two circularly accelerated atoms

In the following, we explore the entanglement dynamics of two circularly accelerated atoms in \(\kappa\)-deformed spacetime with a LDP and in Minkowski spacetime. We are interested in whether the uniform circular motion of the atoms makes it easier to tell \(\kappa\)-deformed spacetime apart from Minkowski spacetime. We assume that the two atoms rotate synchronously with a separation \(L\) perpendicular to the rotating plane, with trajectories described as \[t_{1}(\tau) = \gamma\tau,\ \ x_{1}(\tau)=R\cos\frac{\gamma\upsilon\tau}{R},y_{1}(\tau)=R\sin\frac{\gamma\upsilon\tau}{R},\ \ z_{1}(\tau)=0,\] \[t_{2}(\tau) = \gamma\tau,\ \ x_{2}(\tau)=R\cos\frac{\gamma\upsilon\tau}{R},y_{2}(\tau)=R\sin\frac{\gamma\upsilon\tau}{R},\ \ z_{2}(\tau)=L, \tag{3.13}\] where \(R\) denotes the radius of the circular orbit. In the rest frame of the atoms, the centripetal acceleration is \(a=\frac{\gamma^{2}\upsilon^{2}}{R}\). Applying the trajectories (3.13) to the \(\kappa\)-deformed two-point correlation function (2.34), we have \[G^{11}(x,x^{\prime}) = G^{22}(x,x^{\prime})=-\frac{1}{4\pi^{2}}\frac{1}{\triangle\tau^{2}[1+\frac{1}{12}(a^{2}\triangle\tau^{2})]}-\frac{1}{16\pi^{2}\kappa^{2}}\frac{(4\gamma^{2}-1)-\frac{1}{12}a^{2}\triangle\tau^{2}}{\triangle\tau^{4}(1+\frac{1}{12}a^{2}\triangle\tau^{2})^{3}} \tag{3.14}\] \[-\frac{1}{4\pi^{2}\kappa^{2}}\frac{[(2\gamma^{2}-1)-\frac{1}{12}a^{2}\triangle\tau^{2}]\gamma^{2}}{\triangle\tau^{4}(1+\frac{1}{12}a^{2}\triangle\tau^{2})^{4}},\] and \[G^{12}(x,x^{\prime})=G^{21}(x,x^{\prime})=-\frac{1}{4\pi^{2}}\frac{1}{\triangle\tau^{2}[1+\frac{1}{12}(a^{2}\triangle\tau^{2})]-L^{2}}\] \[-\frac{1}{16\pi^{2}\kappa^{2}}\frac{(4\gamma^{2}-1)\triangle\tau^{2}-\frac{1}{12}a^{2}\triangle\tau^{4}+L^{2}}{[\triangle\tau^{2}(1+\frac{1}{12}a^{2}\triangle\tau^{2})-L^{2}]^{3}}\] \[-\frac{1}{4\pi^{2}\kappa^{2}}\frac{[(2\gamma^{2}-1)\triangle\tau^{2}-\frac{1}{12}a^{2}\triangle\tau^{4}+L^{2}]\gamma^{2}\triangle\tau^{2}}{[\triangle\tau^{2}(1+\frac{1}{12}a^{2}\triangle\tau^{2})-L^{2}]^{4}}. \tag{3.15}\] Using the residue theorem, we can directly derive the corresponding expressions of the coefficients \(A_{i}\) and \(B_{i}\) in Eqs. (2.19) for this circular-acceleration case in \(\kappa\)-deformed spacetime. However, the expressions are too complex to exhibit here. Note that when \(\kappa\to\infty\), we recover, as expected, the result obtained in Ref. [87] for a circularly accelerated two-atom system in Minkowski spacetime.
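For reference, Eq. (3.14) is straightforward to evaluate numerically. The following direct transcription is our own (function name and argument conventions are ours; consistent inputs should satisfy \(a=\gamma^{2}\upsilon^{2}/R\)):

```python
import numpy as np

# Same-atom correlation function G^{11}(Delta tau) for circular motion in
# kappa-deformed spacetime, Eq. (3.14); dtau must be nonzero.
def G11_circular(dtau, a, gamma, kappa):
    s = 1.0 + a**2*dtau**2/12.0
    return (-1.0/(4*np.pi**2*dtau**2*s)
            - ((4*gamma**2 - 1) - a**2*dtau**2/12.0)
              /(16*np.pi**2*kappa**2*dtau**4*s**3)
            - ((2*gamma**2 - 1) - a**2*dtau**2/12.0)*gamma**2
              /(4*np.pi**2*kappa**2*dtau**4*s**4))
```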
#### 3.3.1 Two circularly accelerated atoms initially prepared in a separable state \(|E\rangle\)

To study the entanglement generation for two circularly accelerated atoms in \(\kappa\)-deformed spacetime with a LDP, we assume that the two atoms are initially prepared in a separable state \(|E\rangle\). We focus on the case where the interatomic separation is comparable with the transition wavelength of the atoms (\(L\sim\omega^{-1}\)). There exists a delayed feature of entanglement generation for two circularly accelerated atoms in \(\kappa\)-deformed spacetime and in Minkowski spacetime, as depicted in Fig. 7. We note that the waiting time to generate entanglement is related not only to the centripetal acceleration but also to the interatomic separation. Even when the spacetime deformation parameter is relatively large, the maximum generated entanglement and the waiting time are different for the two spacetime cases concerned. In particular, for circularly accelerated atoms with a fixed separation distance in Minkowski spacetime, there exists a critical value of the centripetal acceleration, \(a/\omega\approx 1.35\), beyond which entanglement generation does not happen. However, in \(\kappa\)-deformed spacetime this critical value is modified and increases with decreasing spacetime deformation parameter. This tells us that, in some cases, entanglement can be generated in \(\kappa\)-deformed spacetime while it cannot in Minkowski spacetime, so this presence/absence of entanglement in principle gives us a good criterion to check which universe we are living in (see the detailed discussion of this criterion in the following). Furthermore, we also find that the lifetime of the generated entanglement depends on the centripetal acceleration \(a/\omega\) and on the interatomic distance \(\omega L\). Meanwhile, even though the two-atom system is subject to the same conditions, i.e., the same \(a/\omega\) and \(\omega L\), the lifetime of the generated entanglement is, as a result of the spacetime deformation, quite different in the two universes concerned. This property may also help us to distinguish the \(\kappa\)-deformed spacetime and the Minkowski spacetime in principle.

Figure 7: Time evolution of concurrence for circularly accelerated atoms initially prepared in \(|E\rangle\), varying the centripetal acceleration \(a/\omega\) (a) and the interatomic distance \(\omega L\) (b).

In Fig. 8, we plot how the maximal concurrence during evolution is affected by the interatomic separation and the centripetal acceleration of the atoms. As shown in Figs. 8 (a) and (b), when the two atoms are static (\(a/\omega=0\)), the maximal entanglement for the \(\kappa\)-deformed spacetime case and that for the Minkowski spacetime case cannot be distinguished. However, they become more distinguishable with the increase of the centripetal acceleration. The behavior of the maximal entanglement as a function of the centripetal acceleration depends on the interatomic separation: it first increases to a maximum and then decays to zero with increasing centripetal acceleration when the interatomic separation is relatively small, or it decays monotonically to zero when the interatomic separation is relatively large. Furthermore, Figs. 8 (c) and (d) show how the maximum entanglement generated depends on the interatomic separation at fixed centripetal acceleration. Remarkably, the interatomic distance regime where entanglement can be created is acceleration-dependent. Therefore, even when the spacetime deformation parameter \(\kappa\) is relatively large, with the assistance of the centripetal acceleration one can find a spatial region where entanglement can be created in \(\kappa\)-deformed spacetime while it cannot in Minkowski spacetime [see Fig. 8 (d)].
From the above analysis, we can see that, in the presence of centripetal acceleration, the entanglement generation of the two-atom system behaves quite differently in \(\kappa\)-deformed spacetime and in Minkowski spacetime. Therefore, an interesting issue arises: in which parameter regions can we discriminate these two universes?

Figure 8: Comparison between the maximum of concurrence during evolution for circularly accelerated atoms initially prepared in \(|E\rangle\) via the centripetal acceleration \(a/\omega\) (a, b) and the interatomic distance \(\omega L\) (c, d).

In Fig. 9, we show in detail the parameter regions of the centripetal acceleration and interatomic separation where entanglement can/cannot be generated in \(\kappa\)-deformed spacetime and in Minkowski spacetime. It is found that there exist different regions indicating different properties of entanglement in these two universes. We can see from this diagram that the two atoms can get entangled only in some special regime of the centripetal acceleration and interatomic separation. There exist upper bounds on the centripetal acceleration and on the interatomic separation beyond which entanglement cannot be generated. Another fact shown in Fig. 9 is that the possible region of entanglement generation for two atoms in \(\kappa\)-deformed spacetime with a LDP does not completely overlap with that for two atoms in Minkowski spacetime. Thus, using these differing properties of entanglement generation as a criterion, one can in principle distinguish the two universes.

#### 3.3.2 Two circularly accelerated atoms initially prepared in entangled states \(|A\rangle\) and \(|S\rangle\)

We study the entanglement degradation when the two-atom system is initially prepared in the two kinds of maximally entangled states, \(|A\rangle\) and \(|S\rangle\). In Fig. 10, we show the concurrence as a function of the evolution time at fixed centripetal acceleration and interatomic separation in \(\kappa\)-deformed spacetime and in Minkowski spacetime. For the initial antisymmetric entangled state case, we can see from Figs. 10 (a) and (b) that, with a fixed interatomic separation, the entanglement decays rapidly with the evolution time for the two circularly accelerated atoms in these two spacetimes. The larger the centripetal acceleration is, the faster the entanglement decays, and the same holds for the variation of entanglement with the interatomic separation at fixed centripetal acceleration.

Figure 9: Entanglement profile for the two-atom system initially prepared in \(|E\rangle\). Region A: two atoms in \(\kappa\)-deformed spacetime with a LDP cannot get entangled while two atoms in Minkowski spacetime can. Region B: two atoms in both of these universes can get entangled. Region C: two atoms in \(\kappa\)-deformed spacetime with a LDP can get entangled while two atoms in Minkowski spacetime cannot. Region D: two atoms in both of these two universes cannot get entangled. Here, we fixed \(\kappa/\omega=1000\).

For the initial symmetric entangled state case in Figs. 10 (c) and (d), we can also find that with a fixed interatomic separation the two-atom entanglement decays rapidly with the evolution time in these two spacetimes. The larger the centripetal acceleration is, the faster the entanglement decays. However, the variation of entanglement with the interatomic separation at fixed centripetal acceleration is just the opposite. This is quite different from the initial antisymmetric entangled state case.
Furthermore, for both of the initial entangled state cases, the entanglement for two circularly accelerated atoms in Minkowski spacetime decays more quickly than that in \(\kappa\)-deformed spacetime. In this sense, with the help of the centripetal acceleration, one can also exploit the entanglement behaviors of the two atoms to discriminate the \(\kappa\)-deformed spacetime and the Minkowski spacetime in principle.

## 4 Entanglement dynamics for two atoms with the environment-induced interatomic interaction

In this section we consider the two atoms initially prepared in a separable state \(|10\rangle\) and in the superposition of \(|A\rangle\) and \(|S\rangle\), i.e., \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\) (\(0<p<1,p\neq 1/2\)), to address how the environment-induced interatomic interaction affects the entanglement dynamics. More precisely, we try to find out whether the environment-induced interatomic interaction can help us to distinguish the \(\kappa\)-deformed spacetime with a LDP from the Minkowski spacetime through the atomic entanglement dynamics.

Figure 10: Time evolution of concurrence for two circularly accelerated atoms, varying the centripetal acceleration \(a/\omega\) (left) and the interatomic distance \(\omega L\) (right), initially prepared in \(|A\rangle\) (a, b) and \(|S\rangle\) (c, d).

To address the above issue, we study the evolution of entanglement for two atoms with \(\rho_{AS}(0)=\rho_{SA}(0)\neq 0\). According to Eq. (19), it is easy to see that only the density matrix elements \(\rho_{AS}\) and \(\rho_{SA}\) are affected by the environment-induced interatomic interaction in the coupled basis. According to the aforementioned calculation, the time evolution of the density matrix elements \(\rho_{AS}(\tau)\) and \(\rho_{SA}(\tau)\) can be written as \[\rho_{AS}(\tau)=\rho_{AS}(0)e^{-4(A_{1}+iD)\tau},\ \ \ \ \ \rho_{SA}(\tau)=\rho_{SA}(0)e^{-4(A_{1}-iD)\tau}. \tag{116}\] Inserting Eq. (116) into the definition of concurrence (20), we get \[C[\rho(\tau)]=\max\{0,K_{1}(\tau)\}, \tag{117}\] where \[K_{1}(\tau) = \sqrt{[\rho_{AA}(\tau)-\rho_{SS}(\tau)]^{2}+[\rho_{AS}(0)e^{-4(A_{1}+iD)\tau}-\rho_{SA}(0)e^{-4(A_{1}-iD)\tau}]^{2}} \tag{118}\] \[-2\sqrt{\rho_{GG}(\tau)\rho_{EE}(\tau)}.\] We can see that there exists an extra term \([\rho_{AS}(0)e^{-4(A_{1}+iD)\tau}-\rho_{SA}(0)e^{-4(A_{1}-iD)\tau}]^{2}\) in Eq. (118), due to the environment-induced interatomic interaction. It is worth emphasizing that the presence of this extra term may result in a number of intriguing physical properties, discussed in detail in the following.

### Entanglement dynamics of two static atoms

To obtain the environment-induced coupling between the two static atoms, we plug the corresponding Fourier transforms of the correlation functions into Eq. (12), and the Hilbert transforms of the correlation functions for two static atoms in \(\kappa\)-deformed spacetime read \[\mathcal{K}^{12}(\omega)=\frac{P}{\pi i}\int_{-\infty}^{\infty}d\omega^{\prime}\frac{1}{\omega^{\prime}-\omega}\frac{\omega^{\prime}}{2\pi}\bigg{[}\frac{\sin\omega^{\prime}L}{\omega^{\prime}L}-\frac{\omega^{\prime 2}\cos\omega^{\prime}L}{24\kappa^{2}}\bigg{]},\] \[\mathcal{K}^{12}(-\omega)=\frac{P}{\pi i}\int_{-\infty}^{\infty}d\omega^{\prime}\frac{1}{\omega^{\prime}+\omega}\frac{\omega^{\prime}}{2\pi}\bigg{[}\frac{\sin\omega^{\prime}L}{\omega^{\prime}L}-\frac{\omega^{\prime 2}\cos\omega^{\prime}L}{24\kappa^{2}}\bigg{]}. \tag{119}\] Then, with the help of Eqs.
(15)-(16), we can obtain \[D=\Gamma_{0}\frac{1}{2\pi}P\int_{0}^{\infty}dx\bigg{[}\frac{x}{x-1}+\frac{x}{x+1}\bigg{]}\bigg{[}\frac{\sin x\omega L}{x\omega L}-\frac{x^{2}\cos x\omega L}{24(\frac{\kappa}{\omega})^{2}}\bigg{]}. \tag{120}\] Similarly, for the case of two static atoms in Minkowski spacetime, we have \[D=\Gamma_{0}\frac{1}{2\pi}P\int_{0}^{\infty}dx\bigg{[}\frac{x}{x-1}+\frac{x}{x+1}\bigg{]}\frac{\sin x\omega L}{x\omega L}. \tag{4.6}\] Note that for \(\kappa\rightarrow\infty\), the result given by Eq. (4.5) reduces to that in Eq. (4.6) for the Minkowski spacetime case.

#### 4.1.1 Two static atoms initially prepared in a separable state \(|10\rangle\)

We first study how the environment-induced interatomic interaction affects the entanglement dynamics of two atoms initially prepared in the state \(|10\rangle\). In this case, the extra term in Eq. (4.3) can be written as \[[\rho_{AS}(0)e^{-4(A_{1}+iD)\tau}-\rho_{SA}(0)e^{-4(A_{1}-iD)\tau}]^{2}=\sin^{2}(4D\tau)e^{-8A_{1}\tau}. \tag{4.7}\] In Fig. 11, we plot the evolution of the concurrence for two static atoms initially prepared in the state \(|10\rangle\) in \(\kappa\)-deformed spacetime with a LDP and in Minkowski spacetime, with fixed \(\kappa/\omega=1000\). As seen from Fig. 11 (a), the environment-induced interatomic interaction has a significant impact on the concurrence during the initial period in \(\kappa\)-deformed spacetime, but after a long time the asymptotic concurrence for the \(\kappa\)-deformed case almost coincides with that of the Minkowski case. This interesting behavior is caused by the factor \(\sin^{2}(4D\tau)e^{-8A_{1}\tau}\) appearing in Eq. (4.3), which is dominated by the trigonometric factor \(\sin^{2}(4D\tau)\) during a short initial period and is determined by the exponential factor \(e^{-8A_{1}\tau}\) after a long enough time [see Fig. 11 (b)]. Therefore, as a result of the environment-induced interatomic interaction, in the \(\kappa\)-deformed case the generated entanglement evolves periodically at early times and finally decays to zero asymptotically. This interesting phenomenon is quite different from the Minkowski spacetime case. In this sense, the environment-induced interatomic interaction between two static atoms would help us to distinguish these two universes in principle.

Figure 11: (a) Time evolution of concurrence for two static atoms initially prepared in \(|10\rangle\). (b) The factor \(\sin^{2}(4D\tau)e^{-8A_{1}\tau}\) as a function of time.

#### 4.1.2 Two static atoms initially prepared in the entangled state \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\)

We investigate the entanglement behaviors for two static atoms initially prepared in the entangled states \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\) (\(0<p<1,p\neq 1/2\)) under the effects of the environment-induced interatomic interaction. We note that different values of \(p\) correspond to different weights of the symmetric and antisymmetric entangled states in the initial entangled state. In this case, we can calculate the extra term in Eq. (4.3) as \[[\rho_{AS}(0)e^{-4(A_{1}+iD)\tau}-\rho_{SA}(0)e^{-4(A_{1}-iD)\tau}]^{2}=4p(1-p)\sin^{2}(4D\tau)e^{-8A_{1}\tau}. \tag{4.8}\] For \(p=1/4\) [see Fig.
12 (a)], i.e., when the symmetric entangled state is the dominant contribution to the initial entangled state, we find that the concurrence for the Minkowski spacetime case behaves quite differently compared with the initial entangled states \(|A\rangle\) and \(|S\rangle\) discussed above. Although the entanglement decays with increasing time, it does so non-monotonically as a result of the environment-induced interatomic interaction. Moreover, during the early period the concurrence shows an oscillatory behavior in \(\kappa\)-deformed spacetime with a LDP compared with the Minkowski spacetime case, due to the effects of the environment-induced interatomic interaction. However, in the long-time limit, the entanglement behavior for atoms in \(\kappa\)-deformed spacetime almost coincides with that for the Minkowski spacetime case, which implies that the physics in \(\kappa\)-deformed spacetime with a LDP reduces to that in flat spacetime. For \(p=3/4\) [see Fig. 12 (b)], the antisymmetric entangled state is the dominant contribution to the initial entangled state. We can also find that, with the environment-induced interatomic interaction, the entanglement dynamics for the \(\kappa\)-deformed spacetime with a LDP behaves distinguishably compared with that of the Minkowski spacetime case. We note that these different behaviors are ultimately due to the trigonometric factor \(\sin^{2}(4D\tau)\) during the short initial period, while after a long enough time the entanglement behaviors are dominated by the exponential factor \(e^{-8A_{1}\tau}\) [see Fig. 12 (c)].

### Entanglement dynamics of two uniformly moving atoms

Now we reconsider the above case while assuming that the two atoms move with a uniform speed. When the two-atom system moves uniformly in \(\kappa\)-deformed spacetime, inserting Eqs. (3.8)-(3.10) into Eq. (2.12), we can calculate the corresponding Hilbert transforms of the correlation function as \[\mathcal{K}^{12}(\omega) =\frac{P}{\pi i}\int_{-\infty}^{\infty}d\omega^{\prime}\frac{1}{\omega^{\prime}-\omega}\frac{\omega^{\prime}}{2\pi}\bigg{[}\frac{\sin\omega^{\prime}L}{\omega^{\prime}L}+\frac{f(\omega^{\prime},L,\upsilon)}{24\omega^{\prime}\kappa^{2}L^{3}}\bigg{]},\] \[\mathcal{K}^{12}(-\omega) =\frac{P}{\pi i}\int_{-\infty}^{\infty}d\omega^{\prime}\frac{1}{\omega^{\prime}+\omega}\frac{\omega^{\prime}}{2\pi}\bigg{[}\frac{\sin\omega^{\prime}L}{\omega^{\prime}L}+\frac{f(\omega^{\prime},L,\upsilon)}{24\omega^{\prime}\kappa^{2}L^{3}}\bigg{]}. \tag{4.9}\] Using Eq. (4.9), one has \[D=\Gamma_{0}\frac{1}{2\pi}P\int_{0}^{\infty}dx\bigg{[}\frac{x}{x-1}+\frac{x}{x+1}\bigg{]}\bigg{[}\frac{\sin x\omega L}{x\omega L}+\frac{f(x,\omega L,\upsilon)}{24x(\frac{\kappa}{\omega})^{2}(\omega L)^{3}}\bigg{]}. \tag{4.10}\] Similarly, for two uniformly moving atoms in Minkowski spacetime, we have \[D=\Gamma_{0}\frac{1}{2\pi}P\int_{0}^{\infty}dx\bigg{[}\frac{x}{x-1}+\frac{x}{x+1}\bigg{]}\frac{\sin x\omega L}{x\omega L}, \tag{4.11}\] which is the same as in the case of two static atoms in Minkowski spacetime and is completely unaffected by the velocity.

#### 4.2.1 Two uniformly moving atoms initially prepared in a separable state \(|10\rangle\)

To analyze how the environment-induced interatomic interaction for two uniformly moving atoms affects the entanglement generation in these two different spacetimes, we show the dynamics of the concurrence for different velocities in Fig. 13, for fixed \(\kappa/\omega=1000\) and \(\omega L=1\).
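Principal-value integrals such as Eq. (4.11) can be evaluated numerically. The following rough sketch is our own illustration (not the authors' code); the cutoff \(X\) and the tolerance settings are arbitrary choices, and the slowly decaying oscillatory tail should be checked for convergence by increasing \(X\):

```python
import numpy as np
from scipy.integrate import quad

# D/Gamma0 of Eq. (4.11): the pole at x = 1 is handled with scipy's
# Cauchy principal-value weight; the x/(x+1) piece is regular on (0, X).
wL, X = 1.0, 500.0
g = lambda x: np.sinc(x*wL/np.pi)                   # sin(x wL)/(x wL)

pv, _ = quad(lambda x: x*g(x), 0.0, X,
             weight='cauchy', wvar=1.0, limit=500)  # P int x g(x)/(x-1) dx
reg, _ = quad(lambda x: x*g(x)/(x + 1.0), 0.0, X, limit=500)
print("D/Gamma0 ~", (pv + reg)/(2.0*np.pi))
```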
When the environment-induced interatomic interaction is introduced, the time evolution of the concurrence oscillates during the initial period in \(\kappa\)-deformed spacetime, and the oscillation frequency increases as the velocity of the atoms increases. Besides, the oscillation is damped during the evolution, so the asymptotic concurrence is consistent with the entanglement behavior of the Minkowski spacetime case. We note that this is because the oscillatory behavior is dominated by the trigonometric factor \(\sin^{2}(4D\tau)\) during the initial stage, while it is determined by the exponential factor \(e^{-8A_{1}\tau}\) and decays to the Minkowski spacetime case after a long time. Hence, when the environment-induced interatomic interaction is taken into account, the difference in entanglement dynamics between the \(\kappa\)-deformed spacetime with a LDP and Minkowski spacetime is more pronounced during the initial period. Therefore, the environment-induced interatomic interaction between two uniformly moving atoms can assist us in distinguishing these two universes.

#### 4.2.2 Two uniformly moving atoms initially prepared in the entangled state \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\)

We consider the scenario where the initial state of the two uniformly moving atoms is prepared in the superposition state \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\) (\(0<p<1,p\neq 1/2\)), which may induce the environment-induced interatomic interaction. In Figs. 14 (a) and (c), we take \(p=1/4\) and plot the relevant dynamics of entanglement for different fixed velocities of the atoms. One can see that the concurrence decays with increasing evolution time, and it decays differently in the two spacetimes. Also, the oscillatory decay still dominates the entanglement behavior of the \(\kappa\)-deformed spacetime case at the early stage. Meanwhile, the oscillation frequency is velocity-dependent, increasing with the atomic velocity. At the late stage, the entanglement in the \(\kappa\)-deformed spacetime case shares the same behavior as that in the Minkowski spacetime case.

Figure 13: Time evolution of concurrence initially prepared in \(|10\rangle\) with velocity \(v=0.01\) (a) and \(v=0.5\) (c). The factor \(\sin^{2}(4D\tau)e^{-8A_{1}\tau}\) as a function of time with velocity \(v=0.01\) (b) and \(v=0.5\) (d).

Furthermore, we prepare the \(p=3/4\) initial state and plot its entanglement dynamics in Figs. 14 (b) and (d). It is easy to see that at the early evolution stage the oscillatory behavior of entanglement also depends on the atomic velocity, but differs from the \(p=1/4\) case. Similarly, at the late evolution stage one cannot distinguish these two spacetimes through the atomic entanglement behaviors. Remarkably, as shown in Figs. 14 (e) and (f), the oscillatory behavior of entanglement appears as a consequence of the environment-induced interatomic interaction, which is dominated by the trigonometric factor \(\sin^{2}(4D\tau)\) during the initial stage, while it is determined by the exponential factor \(e^{-8A_{1}\tau}\) at long times. Comparing Fig. 14 with Fig. 12, we can see that under the influence of velocity, the behaviors of entanglement in \(\kappa\)-deformed spacetime are different from those in Minkowski spacetime: the higher the velocity, the greater the difference between the entanglement behaviors in these two universes. This tells us that even when the spacetime deformation parameter \(\kappa\) is relatively large, we can in principle more easily distinguish these two spacetimes with the help of the environment-induced interatomic interaction between two uniformly moving atoms.
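The crossover between the two regimes is easy to see numerically. The following toy illustration is our own, with arbitrary values of \(A_{1}\) and \(D\) (not taken from the paper):

```python
import numpy as np

# The factor sin^2(4 D tau) * exp(-8 A1 tau): oscillations set by D at early
# times, exponential damping set by A1 at late times.
A1, D = 0.25, 1.5                      # arbitrary illustrative values
for tau in np.linspace(0.0, 5.0, 11):
    extra = np.sin(4*D*tau)**2*np.exp(-8*A1*tau)
    print(f"tau = {tau:4.1f}   sin^2(4 D tau) e^(-8 A1 tau) = {extra:.3e}")
```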
### Entanglement dynamics of two circularly accelerated atoms

In the following, we investigate how the entanglement dynamics of two circularly accelerated atoms depends on the environment-induced interatomic interaction in \(\kappa\)-deformed spacetime and in Minkowski spacetime, respectively. With the trajectories of the two circularly accelerated atoms (3.13) and the Fourier transforms of the correlation functions, we can straightforwardly derive an analytic expression for the corresponding Hilbert transforms in Eq. (2.16). We note that the expression is too long to exhibit here. Similarly, when the two circularly accelerated atoms interact with a bath of fluctuating massless scalar field in the Minkowski vacuum, we can also directly calculate the corresponding Hilbert transforms for the uniform circular motion case.

#### 4.3.1 Two circularly accelerated atoms initially prepared in a separable state \(|10\rangle\)

We assume that the two atoms are initially prepared in the separable state \(|10\rangle\). In Fig. 15 we take \(\kappa/\omega=1000\) and \(\omega L=1\) and plot the concurrence dynamics in \(\kappa\)-deformed spacetime and in Minkowski spacetime. This chosen initial state induces the environment-induced interatomic interaction, which is embodied in the extra term \(\sin^{2}(4D\tau)e^{-8A_{1}\tau}\) and contributes significantly to the two-atom entanglement dynamics. We first note that entanglement is generated as a result of the vacuum fluctuations of the quantum field and the motion of the atoms. We can also find that, as above, the entanglement dynamics oscillates during the initial period, which stems from the environment-induced interatomic interaction dominated by the trigonometric factor \(\sin^{2}(4D\tau)\). The oscillation frequency decreases as the centripetal acceleration grows. In addition, this oscillation decays to the Minkowski spacetime case after a long time, as determined by the exponential factor \(e^{-8A_{1}\tau}\). Therefore, the difference in entanglement dynamics between the \(\kappa\)-deformed spacetime case with a LDP and the Minkowski spacetime case is more obvious during the initial period when the environment-induced interatomic interaction is considered. This tells us that the environment-induced interatomic interaction between two circularly accelerated atoms is beneficial for discriminating these two universes through the entanglement generation dynamics.

#### 4.3.2 Two circularly accelerated atoms initially prepared in the entangled state \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\)

Here we study the effects of the environment-induced interatomic interaction on the entanglement dynamics for two circularly accelerated atoms initially prepared in \(|\psi\rangle=\sqrt{p}|A\rangle+\sqrt{1-p}|S\rangle\) (\(0<p<1,p\neq 1/2\)). In Fig. 16, it is shown that the entanglement behaviors depend on the prepared atomic initial state. When \(p=1/4\), that is to say, when the symmetric entangled state is the dominant contribution to the initial state, we find, under the effect of the environment-induced interatomic interaction, from Figs.
16 (a) and (c), that the two-atom entanglement exhibits an intriguing phenomenon: its decay and revival are quite different from the static atoms case in Fig. 12 (a) and the uniformly moving atoms cases in Figs. 14 (a) and (c). In particular, when the centripetal acceleration is large, there may be no entanglement revival, and the entanglement suffers "sudden death". Besides, even at the late evolution stage, unlike the above cases, the entanglement dynamics in \(\kappa\)-deformed spacetime does not coincide with that in Minkowski spacetime. In Figs. 16 (b) and (d), the entanglement dynamics of the \(p=3/4\) initial state case is shown. We can see that the initial entanglement can be enhanced during the initial phase, while it finally decays to zero asymptotically. However, the entanglement dies off in an oscillatory manner in \(\kappa\)-deformed spacetime and never coincides with the Minkowski spacetime case at any time. Remarkably, as the centripetal acceleration increases, the difference between the \(\kappa\)-deformed spacetime and Minkowski spacetime becomes larger. Moreover, the entanglement dynamics in both the \(\kappa\)-deformed spacetime and the Minkowski spacetime can suffer entanglement "sudden death", which means that the entanglement decays to zero in finite time.

Figure 15: Time evolution of concurrence for two circularly accelerated atoms initially prepared in \(|10\rangle\) with centripetal acceleration \(a/\omega=1\) (a) and \(a/\omega=2\) (c). The factor \(\sin^{2}(4D\tau)e^{-8A_{1}\tau}\) as a function of time with centripetal acceleration \(a/\omega=1\) (b) and \(a/\omega=2\) (d).

A quite interesting phenomenon for the entanglement in the large-acceleration situation is that in certain time intervals the entanglement for the \(\kappa\)-deformed spacetime exists, while that for the Minkowski spacetime vanishes. This distinct character of the entanglement dynamics may assist us in distinguishing these two universes. We also note that the environment-induced interatomic interaction is embodied in the extra term \(\frac{3}{4}\sin^{2}(4D\tau)e^{-8A_{1}\tau}\). In Figs. 16 (e) and (f) we show how this term behaves under the effects of the atomic acceleration in \(\kappa\)-deformed spacetime and in Minkowski spacetime. Because of the acceleration, the extra term for the \(\kappa\)-deformed spacetime case appears to be always determined by the factor \(\sin^{2}(4D\tau)\) and behaves differently compared with the Minkowski spacetime case. This character is quite different from the situations of static and uniformly moving atoms discussed above. Besides, when the acceleration increases, the oscillation frequency for the \(\kappa\)-deformed spacetime case decreases, and so does the oscillation amplitude.

Figure 16: Time evolution of concurrence for two circularly accelerated atoms, varying the centripetal acceleration \(a/\omega\) (left) and the interatomic distance \(\omega L\) (right), initially prepared in \(\frac{1}{2}|A\rangle+\frac{\sqrt{3}}{2}|S\rangle\) (a, b) and \(\frac{\sqrt{3}}{2}|A\rangle+\frac{1}{2}|S\rangle\) (c, d).

Therefore, we conclude that even when the spacetime deformation parameter \(\kappa\) is large, one may in principle distinguish these two universes by examining the atomic entanglement dynamics, with the help of the environment-induced interatomic interaction between two circularly accelerated atoms.
## 5 Conclusions

In this paper, we have investigated the dynamical behaviors of entanglement for a pair of static, uniformly moving, and circularly accelerated atoms with different initial states, coupled to a massless scalar field in \(\kappa\)-deformed spacetime and in Minkowski spacetime. Through numerical evaluation, two different scenarios, i.e., with and without the environment-induced interatomic interaction, are considered. We have shown that the relativistic motion as well as the environment-induced interatomic interaction may have a significant effect on the entanglement dynamics of a two-atom system. On the one hand, when the two static atoms are initially prepared in the excited state, the antisymmetric entangled state, and the symmetric entangled state, respectively, it is shown that, without the environment-induced interatomic interaction, the differences in entanglement dynamics between the \(\kappa\)-deformed spacetime and the Minkowski spacetime are not obvious when the deformation parameter is large. However, when the atoms move inertially with a constant velocity, the entanglement dynamics of the two universes differ when the velocity is large. Furthermore, for two circularly accelerated atoms with a nonvanishing separation, the entanglement evolves quite differently with respect to the acceleration, the interatomic distance, and other parameters. More importantly, under certain conditions, the circularly accelerated atoms initially prepared in the excited state in \(\kappa\)-deformed spacetime (in Minkowski spacetime) can get entangled, while they would not become entangled in the corresponding Minkowski case (in the corresponding \(\kappa\)-deformed case). Thus, the relativistic motion of the atoms significantly influences the difference in entanglement dynamics between these two universes. On the other hand, when the atoms are initially prepared in a separable state \(|10\rangle\) and in superposition entangled states, we have demonstrated how the environment-induced interatomic interaction affects the entanglement dynamics of atoms coupled to the massless scalar field. The numerical results tell us that when the environment-induced interatomic interaction is considered, it becomes easier to distinguish the \(\kappa\)-deformed spacetime from the Minkowski spacetime.

## Acknowledgments

Xiaobao Liu thanks Shifeng Huang and Jiaozhen She for advice and discussions. This work was supported by the National Natural Science Foundation of China under Grant Nos. 12065016 and 11905218. X. Liu acknowledges the talent recruitment program of Liupanshui Normal University of China under Grant No. LPSSYKYJJ201906 and the Discipline-Team of Liupanshui Normal University of China under Grant No. LPSSY2023KKTD11.
2307.16894
A reduced order model for geometrically parameterized two-scale simulations of elasto-plastic microstructures under large deformations
In recent years, there has been a growing interest in understanding complex microstructures and their effect on macroscopic properties. In general, it is difficult to derive an effective constitutive law for such microstructures with reasonable accuracy and meaningful parameters. One numerical approach to bridge the scales is computational homogenization, in which a microscopic problem is solved at every macroscopic point, essentially replacing the effective constitutive model. Such approaches are, however, computationally expensive and typically infeasible in multi-query contexts such as optimization and material design. To render these analyses tractable, surrogate models that can accurately approximate and accelerate the microscopic problem over a large design space of shapes, material and loading parameters are required. In this work, we develop a reduced order model based on Proper Orthogonal Decomposition (POD), Empirical Cubature Method (ECM) and a geometrical transformation method with the following key features: (i) large shape variations of the microstructure are captured, (ii) only relatively small amounts of training data are necessary, and (iii) highly non-linear history-dependent behaviors are treated. The proposed framework is tested and examined in two numerical examples, involving two scales and large geometrical variations. In both cases, high speed-ups and accuracies are achieved while observing good extrapolation behavior.
Theron Guo, Ondřej Rokoš, Karen Veroy
2023-07-31T17:59:14Z
http://arxiv.org/abs/2307.16894v2
A reduced order model for geometrically parameterized two-scale simulations of elasto-plastic microstructures under large deformations ###### Abstract In recent years, there has been a growing interest in understanding complex microstructures and their effect on macroscopic properties. In general, it is difficult to derive an effective constitutive law for such microstructures with reasonable accuracy and meaningful parameters. One numerical approach to bridge the scales is computational homogenization, in which a microscopic problem is solved at every macroscopic point, essentially replacing the effective constitutive model. Such approaches are, however, computationally expensive and typically infeasible in multi-query contexts such as optimization and material design. To render these analyses tractable, surrogate models that can accurately approximate and accelerate the microscopic problem over a large design space of shapes, material and loading parameters are required. In previous works, such models were constructed in a data-driven manner using methods such as Neural Networks (NN) or Gaussian Process Regression (GPR). However, these approaches currently suffer from issues such as the need for large amounts of training data, the lack of physics, and considerable extrapolation errors. In this work, we develop a reduced order model based on Proper Orthogonal Decomposition (POD), Empirical Cubature Method (ECM) and a geometrical transformation method with the following key features: (i) large shape variations of the microstructure are captured, (ii) only relatively small amounts of training data are necessary, and (iii) highly non-linear history-dependent behaviors are treated. The proposed framework is tested and examined in two numerical examples, involving two scales and large geometrical variations. In both cases, high speed-ups and accuracies are achieved while observing good extrapolation behavior. keywords: Reduced order modelling, proper orthogonal decomposition, computational homogenization, hyperreduction, empirical cubature method, geometrical transformation ## 1 Introduction Driven by advances in additive manufacturing and tailorable effective properties of metamaterials, there has been a growing interest in understanding structure-property relationships of complex microstructures. These microstructures can typically be described by a few shape parameters, leading to distinct types of effective behavior. To investigate such structure-property relations and to find the optimal shape for a given application, simulations are often considered. These simulations are in general computationally expensive or even intractable for direct numerical simulation, especially for large engineering applications, since considerably fine meshes are required to capture the complex microstructural geometry. By employing multi-scale methods [1, 2] or domain decomposition methods [3, 4], such large-scale problems can be separated into many smaller subproblems, thus rendering them amenable for efficient numerical simulation. If scale separation is assumed, i.e., when the length scale of the typical microstructural features is much smaller than that of the macrostructure, first-order computational homogenization can be employed. Here, the behavior of the microstructure dictates the (average) constitutive behavior of an effective macrostructural continuum model.
By defining a Representative Volume Element (RVE) which models the fine-scale geometry of the microstructure in full detail, a coarse-grained representation of the macrostructure with a much coarser discretization can be assumed at the macroscale. At every macroscopic integration point, the macroscopic strain is used to specify a microscopic boundary value problem which, after solution, returns the effective stress and stiffness. Since a partial differential equation (PDE) needs to be solved at every macroscopic Gauss integration point, this methodology is still computationally expensive, and efficient ways for its solution are needed. Several approaches to tackle this problem have been reported in the literature. For instance, the Fast Fourier Transform (FFT) [5, 6] allows one to directly use pixelized images of real microstructures, thus avoiding costly meshing of complex microstructures. The solution can further be accelerated by the (nonuniform) Transformation Field Analysis (see, e.g., [7, 8]), or Self-consistent Clustering Analysis [9, 10]. One disadvantage of FFT is that geometrical parameterizations of the RVE cannot be directly treated and, hence, sensitivities for material optimization cannot be directly computed. Another class of methods aims at solving the microscopic problem via the Finite Element (FE) method, resulting in a multi-scale formulation referred to as FE\({}^{2}\) [11, 12, 13]. By directly solving the microscopic PDE, material or shape parameterizations can be considered in a straightforward manner, making the approach more suitable for inverse problems and optimization. To speed up the microstructural simulation, proper orthogonal decomposition (POD) [14, 15] can be utilized to find a reduced set of basis functions; the method then computes the Galerkin projection of the solution onto the space spanned by the snapshots. Although POD generally requires many full-order solves for training, it typically works well for all input parameters. In the context of first-order homogenization, POD was first applied in Yvonnet et al. [16] for a hyper-elastic RVE, and later explored in Radermacher et al. [17] for an elasto-plastic RVE under small strains. However, due to the non-linearities of the microscopic problem, the speed-ups were limited, since the global force vector and stiffness matrix must be assembled by full integration in every microscopic Newton iteration. To address this issue, a further reduction called hyperreduction is required, which aims at finding an efficient way of assembling microstructural force and stiffness quantities. Notable hyperreduction methods are the empirical interpolation method (EIM) (see, e.g., [18]), its variant the discrete empirical interpolation method (DEIM) (see, e.g., [19]), energy-based mesh sampling and weighting [20], the reduced integration domain [21], the empirical quadrature procedure [22], and the empirical cubature method (ECM) [23]. EIM and DEIM interpolate the non-linear integrand of the global force vector such that the integrals can be pre-computed. In [24, 25], DEIM was used successfully to accelerate the solution of the microscopic PDE. However, these works only discussed the solution of the microscopic PDE and did not derive the effective stress and stiffness quantities required for the macroscopic problem. A possible disadvantage of EIM and DEIM is that they lead to non-symmetric tangent matrices, which might result in convergence issues, as observed in, for instance, [25, 26].
The remaining hyperreduction methods mentioned above aim at approximating the integrals by finding a subset of integration points with corresponding weights among the set of all integration points used in the formulation of the microstructural PDE. This has the advantage that the stiffness matrix is always symmetric, ensuring a good convergence of the microscopic problem. Hyperreduction methods have been successfully employed in two-scale simulations in [27], where an elasto-plastic composite RVE under large deformations was considered. In [28], a damage model for a composite RVE under small deformations was considered. While both works obtained accurate results and successfully accelerated the forward simulations of a two-scale problem, such formulations were limited to fixed microstructures only, i.e., did not account for possible parameterizations. In order to allow for optimization of microstructures, the surrogate model needs to be extended to a wide range of different design parameters (including geometrical as well as material). This work aims to address this gap by developing a hyper-reduced surrogate model for geometrically parameterized microstructures, to enable (shape) sensitivity analysis and optimization of materials. Furthermore, we intend to provide a detailed analysis of the reduced RVE problem for arbitrary loading paths and geometries and elucidate possible issues due to reduction. Our main contributions are: 1. development of a hyper-reduced POD model for a family of geometrically parameterized microstructures, by employing a geometrical transformation method [29] and by extending the ECM algorithm to geometrical parameters, 2. consistent derivation of the effective stress and stiffness of the hyper-reduced model, 3. an empirical analysis of the accuracy of the surrogate model for elasto-plastic RVEs under large deformations for different geometries and loading conditions, 4. a quantitative comparison of a two-scale example with continuous change in microstructural heterogeneities. The remainder of this paper is organized as follows. In Section 2, the microscopic problem arising in first-order computational homogenization is briefly summarized, together with the computation of the effective quantities. Section 3 covers the in-depth development of the reduced order model with particular focus on the empirical cubature method for geometrically parameterized microstructures, and includes a detailed derivation of the effective stress and stiffness. In Section 4, the proposed method is examined and tested in detail, first for a single RVE and then also for a full two-scale problem. Finally, a summary of the findings and concluding remarks are given in Section 5. In this work, the following notational convention is adopted. Italic bold symbols are used for coordinates \(\mathbf{X}\) and vectorial or tensorial fields, such as the displacement field \(\mathbf{u}\) or stress field \(\mathbf{P}\). Upright bold symbols are used for algebraic vectors and matrices, such as the global stiffness matrix \(\mathbf{K}\) or the coefficients of the discretized displacement field \(\mathbf{u}\). A field quantity \(\mathbf{u}\) for given parameters \(\mathbf{\mu}\) is denoted as \(\mathbf{u}(\mathbf{X};\mathbf{\mu})\).
Given second-order tensors \(\mathbf{A}\) and \(\mathbf{B}\), fourth-order tensor \(\mathbf{C}\), and vector \(\mathbf{v}\), the following operations are used: \(\mathbf{AB}=A_{ij}B_{jk}\), \(\mathbf{A}:\mathbf{B}=A_{ij}B_{ij}\), \(\mathbf{A}:\mathbf{C}:\mathbf{B}=A_{ij}C_{ijkl}B_{kl}\) and \(\mathbf{A}\mathbf{v}=A_{ij}v_{j}\), where the Einstein summation convention is implied. ## 2 Formulation Of The Microscopic Problem In multiscale schemes based on first-order homogenization, the macroscopic constitutive model is replaced by a microscopic PDE which is defined on an RVE. By prescribing the macroscopic deformation gradient on the microscale, the PDE can be solved and an effective stress and stiffness are returned to the macroscopic solver, see Fig. 1. For applications such as microstructure optimization, it is reasonable to additionally introduce a parameterization of the RVE in order to compute the sensitivities with respect to design variables. The microscopic boundary value problem is formulated below on a parameterized domain, as is usually the case in shape optimization. For brevity, the dependence on the macroscopic coordinates is omitted and a fixed macroscopic material point is assumed unless otherwise specified. ### Boundary Value Problem Consider a family of domains \(\Omega^{\mathbf{\mu}}\subset\mathbb{R}^{d}\) with space dimension \(d=2,3\), parameterized by geometrical parameters \(\mathbf{\mu}\), and spanned by a position vector \(\mathbf{X^{\mu}}\in\Omega^{\mathbf{\mu}}\). In Fig. 1, an example parent domain with a circular inclusion \(\Omega^{\mathrm{p}}\) is geometrically parameterized and mapped to two distinct parameterized domains with elliptical inclusions, \(\Omega^{\mathbf{\mu}_{1}}\) and \(\Omega^{\mathbf{\mu}_{2}}\). The volume \(|\Omega^{\mathbf{\mu}}|\) and the topology of the domain are assumed to remain fixed for all parameters (the outer boundaries of the RVE domain are fixed while the shape of the interior geometry can change). With the assumption of scale separation between macro- and microscale, the microscopic displacement field on the parameterized domain \(\mathbf{u}(\mathbf{X^{\mu}})\) can be written as the summation of a mean field \(\bar{\mathbf{u}}(\mathbf{X^{\mu}})\) and a fluctuation field \(\mathbf{w}(\mathbf{X^{\mu}})\), i.e., \(\mathbf{u}(\mathbf{X^{\mu}})=\bar{\mathbf{u}}(\mathbf{X^{\mu}})+\mathbf{w}(\mathbf{X^{\mu}})\). The mean field is fully specified through \(\bar{\mathbf{u}}(\mathbf{X^{\mu}})\coloneqq(\bar{\mathbf{F}}-\mathbf{I})\mathbf{X^{\mu}}\), where \(\bar{\mathbf{F}}\) is the macroscopic deformation gradient tensor and \(\mathbf{I}\) is the identity tensor. The total deformation gradient tensor \(\mathbf{F}\) is defined as \[\mathbf{F}(\mathbf{w}(\mathbf{X^{\mu}}))\coloneqq\frac{\partial\mathbf{u}}{\partial\mathbf{X^{\mu}}}=\bar{\mathbf{F}}+\frac{\partial\mathbf{w}}{\partial\mathbf{X^{\mu}}}. \tag{1}\] The governing microscopic PDE is given as \[\mathrm{Div}\,\mathbf{P}^{T}(\mathbf{F}(\mathbf{w}(\mathbf{X^{\mu}})))=\mathbf{0}\quad\text{on }\Omega^{\mathbf{\mu}}, \tag{2}\] \[\mathbf{w}\text{ periodic}\quad\text{on }\partial\Omega^{\mathbf{\mu}},\] where \(\mathrm{Div}(\bullet)\) is the divergence operator with respect to \(\mathbf{X^{\mu}}\) and \(\mathbf{P}\) denotes the second-order first Piola-Kirchhoff (1PK) stress tensor. No constitutive model is specified at this point, although we assume that the stress \(\mathbf{P}\) is a non-linear function of the deformation gradient \(\mathbf{F}\) (or its history).
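As a side note, the tensor contraction conventions introduced above map directly onto `numpy.einsum`. The following minimal sketch reproduces the four operations; array shapes and variable names are chosen purely for illustration and are not part of the paper.

```python
import numpy as np

d = 3  # space dimension
A = np.random.rand(d, d)          # second-order tensor
B = np.random.rand(d, d)          # second-order tensor
C = np.random.rand(d, d, d, d)    # fourth-order tensor
v = np.random.rand(d)             # vector

AB = np.einsum('ij,jk->ik', A, B)           # (AB)_ik   = A_ij B_jk
A_ddot_B = np.einsum('ij,ij->', A, B)       # A : B     = A_ij B_ij
A_C_B = np.einsum('ij,ijkl,kl->', A, C, B)  # A : C : B = A_ij C_ijkl B_kl
Av = np.einsum('ij,j->i', A, v)             # (Av)_i    = A_ij v_j
```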
The weak form of the problem is then: given the macroscopic deformation gradient \(\bar{\mathbf{F}}\), find the fluctuation field \(\mathbf{w}^{*}\in\mathcal{V}\coloneqq\{\mathbf{v}\in(H^{1}(\Omega^{\mathbf{\mu}}))^{d}\mid \mathbf{v}\text{ periodic on }\partial\Omega^{\mathbf{\mu}}\}\) that fulfills \[G(\mathbf{w})\coloneqq\int_{\Omega^{\mathbf{\mu}}}\frac{\partial\delta \mathbf{w}}{\partial\mathbf{X^{\mu}}}:\mathbf{P}\left(\bar{\mathbf{F}}+\frac{\partial\mathbf{w}}{ \partial\mathbf{X^{\mu}}}\right)d\mathbf{X^{\mu}}\overset{!}{=}0,\qquad\forall\delta \mathbf{w}\in\mathcal{V}, \tag{3}\] where the integral bounds depend on the parameters \(\mathbf{\mu}\), \(\delta\mathbf{w}\) denotes a test function, and \(H^{1}(\Omega^{\mathbf{\mu}})\) is a Hilbert space with square integrable functions and square integrable derivatives. The inner product in \(\mathcal{V}\) is defined as \[(\mathbf{u},\mathbf{v})_{\mathcal{V}}\coloneqq\int_{\Omega^{\mathbf{\mu}}} \left(\mathbf{u}\cdot\mathbf{v}+\frac{\partial\mathbf{u}}{\partial\mathbf{X^{\mu}}}:\frac{ \partial\mathbf{v}}{\partial\mathbf{X^{\mu}}}\right)d\mathbf{X^{\mu}}. \tag{4}\] From Eq. (3), it is apparent that the macroscopic deformation gradient \(\bar{\mathbf{F}}\) represents the external loading, while the fluctuation displacement field \(\mathbf{w}\) balances the system. To simplify the problem in Eq. (3) and remove the parameter dependence of the integral bounds, a parent domain \(\Omega^{\mathrm{p}}\) is defined. To this end, we assume that there exists a parameter-dependent diffeomorphism \(\mathbf{\Phi}_{\mathbf{\mu}}:\Omega^{\mathrm{p}}\to\Omega^{\mathbf{\mu}},\mathbf{X}^{\mathrm{ p}}\mapsto\mathbf{X^{\mu}}\), see Fig. 1. Using integration by substitution, the problem of Eq. (3) can be restated as follows: given the macroscopic deformation gradient \(\bar{\mathbf{F}}\), find \(\mathbf{w}^{*\mathrm{p}}\in\mathcal{V}^{\mathrm{p}}\coloneqq\{\mathbf{v}\in(H^{1}( \Omega^{\mathrm{p}}))^{d}\mid\mathbf{v}\text{ periodic on }\partial\Omega^{\mathrm{p}}\}\) that fulfills \[G^{\mathrm{p}}(\mathbf{w}^{\mathrm{p}})\coloneqq\int_{\Omega^{\mathrm{p}}}\left( \frac{\partial\delta\mathbf{w}^{\mathrm{p}}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{ \mathbf{\mu}}^{-1}\right):\mathbf{P}\left(\bar{\mathbf{F}}+\frac{\partial\mathbf{w}^{\mathrm{ p}}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right)\left|\det\mathbf{F}_{\mathbf{\mu}} \right|d\mathbf{X}^{\mathrm{p}}=0,\qquad\forall\delta\mathbf{w}^{\mathrm{p}}\in \mathcal{V}^{\mathrm{p}}, \tag{5}\] with the transformation gradient \(\mathbf{F}_{\mathbf{\mu}}\coloneqq\frac{\partial\mathbf{\Phi}_{\mathbf{\mu}}}{\partial\mathbf{X}^{ \mathrm{p}}}\) and \(d\mathbf{X^{\mu}}=\left|\det\mathbf{F}_{\mathbf{\mu}}\right|d\mathbf{X}^{\mathrm{p}}\). The superscript \(\mathrm{p}\) is used to denote quantities pertinent to the parent domain, e.g., \(\mathbf{w}(\mathbf{X^{\mu}})=(\mathbf{w}\circ\mathbf{\Phi}_{\mathbf{\mu}})(\mathbf{X}^{\mathrm{p}})= \mathbf{w}^{\mathrm{p}}(\mathbf{X}^{\mathrm{p}})\).
Figure 1: Two-scale problem based on first-order homogenization. At every macroscopic point, a microscopic simulation is defined through deformation gradient \(\bar{\mathbf{F}}\) and shape parameters \(\mathbf{\mu}\), and solved to obtain an effective stress \(\bar{\mathbf{P}}\) and stiffness \(\bar{\mathbf{A}}\). For different macroscopic points, different parameterized microstructures can be considered through \(\mathbf{\mu}\). As an example of a family of geometrically parameterized microstructures, a parent domain with a circular inclusion \(\Omega^{\mathrm{p}}\) (center) can be mapped onto parameterized domains \(\Omega^{\mathbf{\mu}_{1}}\) (left) and \(\Omega^{\mathbf{\mu}_{2}}\) (right) with mappings \(\mathbf{\Phi}_{\mathbf{\mu}_{1}}\) and \(\mathbf{\Phi}_{\mathbf{\mu}_{2}}\).

To iteratively solve the non-linear problem in Eq. (5), a linearization using the Gateaux derivative around the current state \(\mathbf{w}^{\mathrm{p}}\) in direction \(\Delta\mathbf{w}^{\mathrm{p}}\in\mathcal{V}^{\mathrm{p}}\) is required and can be written as \[\left.\frac{\partial G^{\mathrm{p}}(\mathbf{w}^{\mathrm{p}}+\tau\Delta\mathbf{w}^{ \mathrm{p}})}{\partial\tau}\right|_{\tau=0}=\int_{\Omega^{\mathrm{p}}}\left( \frac{\partial\delta\mathbf{w}^{\mathrm{p}}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{ \mathbf{\mu}}^{-1}\right):\mathbf{A}\left(\bar{\mathbf{F}}+\frac{\partial\mathbf{w}^{\mathrm{ p}}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\left(\frac{ \partial\Delta\mathbf{w}^{\mathrm{p}}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu }}^{-1}\right)\left|\det\mathbf{F}_{\mathbf{\mu}}\right|d\mathbf{X}^{\mathrm{p}}, \tag{6}\] where \(\mathbf{A}\coloneqq\frac{\partial\mathbf{P}}{\partial\mathbf{F}}\) is the fourth-order stiffness tensor. Once the transformation map \(\mathbf{\Phi}_{\mathbf{\mu}}\) is known, Eq. (3) can be solved on the parent domain using Eqs. (5) and (6). Further details on how to find these transformations for a range of geometrical parameters are provided in Section 3.4. By employing a finite element discretization for \(\mathbf{w}^{\mathrm{p}}\approx\mathbf{w}^{\mathrm{p}}_{h}\in\mathcal{V}^{\mathrm{p}}_{ h}\subset\mathcal{V}^{\mathrm{p}}\), with \(\dim\mathcal{V}^{\mathrm{p}}_{h}=\mathcal{N}\), the number of degrees of freedom of the discretization, the internal force vector \(\mathbf{\mathrm{f}}\in\mathbb{R}^{\mathcal{N}}\) and global stiffness matrix \(\mathbf{\mathrm{K}}\in\mathbb{R}^{\mathcal{N}\times\mathcal{N}}\) can be derived from Eqs. (5) and (6), resulting in the following non-linear system of equations \[\mathbf{\mathrm{f}}(\mathbf{\mathrm{w}})=\mathbf{0}, \tag{7}\] where \(\mathbf{\mathrm{w}}\in\mathbb{R}^{\mathcal{N}}\) is the column vector of unknown coefficients of the discretized fluctuation field. This problem can be solved with the Newton method, i.e., \[\mathbf{\mathrm{K}}(\mathbf{\mathrm{w}}^{m})\Delta\mathbf{\mathrm{w}} =-\mathbf{\mathrm{f}}(\mathbf{\mathrm{w}}^{m}), \tag{8}\] \[\mathbf{\mathrm{w}}^{m+1} =\mathbf{\mathrm{w}}^{m}+\Delta\mathbf{\mathrm{w}},\] where \(m\) is the Newton iteration counter and Eq. (8) is repeated until \(||\mathbf{\mathrm{f}}(\mathbf{\mathrm{w}}^{m})||_{2}\leq\varepsilon_{\mathrm{newton}}\) with \(\varepsilon_{\mathrm{newton}}\) a user-defined tolerance. #### 2.1.1 Effective Quantities For conciseness of notation, the following abbreviations are introduced to denote quantities after the solution \(\mathbf{w}^{*\mathrm{p}}\) has been obtained: \[\mathbf{P}^{*\mathrm{p}} \coloneqq\mathbf{P}\left(\bar{\mathbf{F}}+\frac{\partial\mathbf{w}^{*\mathrm{p }}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right), \tag{9}\] \[\mathbf{A}^{*\mathrm{p}} \coloneqq\mathbf{A}\left(\bar{\mathbf{F}}+\frac{\partial\mathbf{w}^{*\mathrm{ p}}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right). \tag{10}\] Upon obtaining solution \(\mathbf{w}^{*\mathrm{p}}\) from Eq.
(5), the effective stress is computed as \[\bar{\mathbf{P}}\coloneqq|\Omega^{\mathrm{p}}|^{-1}\int_{\Omega^{\mathrm{p}}}\mathbf{ P}^{*\mathrm{p}}|\det\mathbf{F}_{\mathbf{\mu}}|d\mathbf{X}^{\mathrm{p}}, \tag{11}\] and the effective stiffness (in index notation) as \[\bar{A}_{ijkl}\coloneqq \frac{\partial\bar{P}_{ij}}{\partial\bar{F}_{kl}}\] \[= |\Omega^{\mathrm{p}}|^{-1}\frac{\partial}{\partial\bar{F}_{kl}} \int_{\Omega^{\mathrm{p}}}P_{ij}^{*\mathrm{p}}|\det\mathbf{F}_{\mathbf{\mu}}|d\mathbf{X}^{ \mathrm{p}} \tag{12}\] \[= |\Omega^{\mathrm{p}}|^{-1}\int_{\Omega^{\mathrm{p}}}A_{ijmn}^{* \mathrm{p}}\left(\mathbb{I}_{mnkl}+\frac{\partial}{\partial\bar{F}_{kl}} \left(\frac{\partial w_{m}^{*}}{\partial X_{r}}\right)\left(F_{\mathbf{\mu}}^{-1} \right)_{rn}\right)|\det\mathbf{F}_{\mathbf{\mu}}|d\mathbf{X}^{\mathrm{p}},\] where \(\mathbb{I}_{mnkl}\coloneqq\delta_{mk}\delta_{nl}\) is the fourth-order identity tensor. To determine \(\frac{\partial}{\partial\bar{F}_{kl}}\left(\frac{\partial w_{m}^{*}}{\partial X _{r}}\right)\), Eq. (5) is differentiated with respect to \(\bar{\mathbf{F}}\). For one particular component \(\bar{F}_{kl}\) (where the indices \(k\) and \(l\) are assumed to be temporarily fixed), the differentiation yields \[\int_{\Omega^{\mathrm{p}}}\left(\frac{\partial\delta\mathbf{w}^{\mathrm{p}}}{\partial \mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\mathbf{A}^{*\mathrm{p}}:\left( \frac{\partial\mathbf{q}_{kl}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1} \right)|\det\mathbf{F}_{\mathbf{\mu}}|\,d\mathbf{X}^{\mathrm{p}}=-\left(\int_{\Omega^{ \mathrm{p}}}\left(\frac{\partial\delta\mathbf{w}^{\mathrm{p}}}{\partial\mathbf{X}^{ \mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\mathbf{A}^{*\mathrm{p}}\,|\det\mathbf{F}_ {\mathbf{\mu}}|\,d\mathbf{X}^{\mathrm{p}}\right):\mathbb{E}_{kl}, \tag{13}\] where a new auxiliary vector field \(\mathbf{q}_{kl}\coloneqq\dfrac{\partial\mathbf{w}^{*\mathrm{p}}}{\partial\bar{F}_{kl}}\in \mathcal{V}^{\text{p}}\) has been defined (reflecting the sensitivity of the microfluctuation field with respect to the change of the applied macroscopic loading), and \(\mathbb{E}_{kl}\in\mathbb{R}^{d\times d}\) is a second-order tensor with all entries zero, except for the \(kl\)-th entry which is \(1\). The linear tangent problem of Eq. (13) is then solved for all combinations \(k,l=1,...,d\) to obtain \(\mathbf{q}_{kl}\) for each component of \(\bar{\mathbf{F}}\). For shape optimization, the sensitivities of the effective stress \(\bar{\mathbf{P}}\) with respect to the geometrical parameters \(\mathbf{\mu}\) are required and are computed as follows (in index notation) \[\dfrac{\partial\bar{P}_{ij}}{\partial\mu_{k}}=|\Omega^{\text{p}} |^{-1}\int_{\Omega^{\text{p}}}\left(A^{*\mathrm{p}}_{ijmn}\left(\dfrac{\partial }{\partial\mu_{k}}\left(\dfrac{\partial w^{*}_{m}}{\partial X_{r}}\right) \left(F^{-1}_{\mathbf{\mu}}\right)_{rn}+\dfrac{\partial w^{*}_{m}}{\partial X_{r}} \dfrac{\partial\left(F^{-1}_{\mathbf{\mu}}\right)_{rn}}{\partial\mu_{k}}\right)| \det\mathbf{F}_{\mathbf{\mu}}|+P^{*\mathrm{p}}_{ij}\dfrac{\partial|\det\mathbf{F}_{\mathbf{\mu}}| }{\partial\mu_{k}}\right)d\mathbf{X}^{\text{p}}. \tag{14}\] The right-hand side of Eq. (14) is complicated due to the derivatives of \(\mathbf{F}^{-1}_{\mathbf{\mu}}\) and \(|\det\mathbf{F}_{\mathbf{\mu}}|\).
If the effective stress \(\bar{\mathbf{P}}\) can be assumed to vary smoothly with the parameters \(\mathbf{\mu}\), which may be a reasonable assumption for smoothly varying shapes (described using, for instance, splines), finite differences can be used to approximate these sensitivities. ## 3 Surrogate Modelling Since the microscopic problem has to be solved at every macroscopic quadrature point, the solution of the microscopic PDE must be efficient. Solving it directly using FE is in general too computationally expensive, and, hence, the microscopic solver must be accelerated. In this section, a surrogate model for the geometrically parameterized microscopic PDE is developed by employing the reduced basis method (RBM) [14] to reduce the number of degrees of freedom and the empirical cubature method (ECM) [23] to reduce the number of quadrature points. The key idea is to construct the surrogate model on the parent domain \(\Omega^{\text{p}}\), adapt it to each geometry \(\Omega^{\mathbf{\mu}}\), and then solve the reduced problem. ### Reduced Basis Method For complex problems and geometries, typically a fine mesh is required for FE, leading to a high-dimensional solution space \(\mathcal{V}^{\text{p}}_{h}\) for the fluctuation displacement field \(\mathbf{w}^{\text{p}}\) with \(\dim\mathcal{V}^{\text{p}}_{h}=\mathcal{N}\). The idea of the RBM is to approximate the field with global parameter-independent basis functions and parameter-dependent coefficients, i.e., \[\mathbf{w}^{\text{p}}(\mathbf{X}^{\text{p}};\bar{\mathbf{F}},\mathbf{\mu})\approx\sum_{n=1}^{ N}a_{n}(\bar{\mathbf{F}},\mathbf{\mu})\mathbf{\phi}_{n}(\mathbf{X}^{\text{p}}), \tag{15}\] where \(N\) is the number of basis functions, ideally much smaller than the dimension of the FE space, i.e., \(N\ll\mathcal{N}\). The basis functions, \(\{\mathbf{\phi}_{n}\}_{n=1}^{N}\), span a subspace of \(\mathcal{V}^{\text{p}}_{h}\) and can be obtained by applying proper orthogonal decomposition (POD) on a set of pre-computed full solutions for different parameter values. Additionally, they are orthonormal with respect to \(\mathcal{V}^{\text{p}}\), i.e., \[(\mathbf{\phi}_{m},\mathbf{\phi}_{n})_{\mathcal{V}^{\text{p}}}=\delta_{mn}, \tag{16}\] where \(\delta_{mn}\) denotes the Kronecker delta. By utilizing the POD space for both the trial and test space and inserting \(\mathbf{w}^{\text{p}}\) from Eq. (15) into Eqs.
(5) and (6), the components for the reduced internal force vector \(\mathbf{f}^{\text{POD}}\in\mathbb{R}^{N}\) and reduced global stiffness matrix \(\mathbf{K}^{\text{POD}}\in\mathbb{R}^{N\times N}\) can be derived as \[f^{\text{POD}}_{i}(\mathbf{a}) \coloneqq\int_{\Omega^{\text{p}}}\left(\dfrac{\partial\mathbf{\phi}_ {i}}{\partial\mathbf{X}^{\text{p}}}\mathbf{F}^{-1}_{\mathbf{\mu}}\right):\mathbf{P}\left(\bar{ \mathbf{F}}+\left(\sum_{n=1}^{N}a_{n}\dfrac{\partial\mathbf{\phi}_{n}}{\partial\mathbf{X}^ {\text{p}}}\right)\mathbf{F}^{-1}_{\mathbf{\mu}}\right)|\det\mathbf{F}_{\mathbf{\mu}}|\,d\mathbf{X} ^{\text{p}}, \tag{17}\] \[K^{\text{POD}}_{ij}(\mathbf{a}) \coloneqq\int_{\Omega^{\text{p}}}\left(\dfrac{\partial\mathbf{\phi}_ {i}}{\partial\mathbf{X}^{\text{p}}}\mathbf{F}^{-1}_{\mathbf{\mu}}\right):\mathbf{A}\left(\bar{ \mathbf{F}}+\left(\sum_{n=1}^{N}a_{n}\dfrac{\partial\mathbf{\phi}_{n}}{\partial\mathbf{X}^ {\text{p}}}\right)\mathbf{F}^{-1}_{\mathbf{\mu}}\right):\left(\dfrac{\partial\mathbf{\phi}_ {j}}{\partial\mathbf{X}^{\text{p}}}\mathbf{F}^{-1}_{\mathbf{\mu}}\right)|\det\mathbf{F}_{\mathbf{\mu}}| \,d\mathbf{X}^{\text{p}}, \tag{18}\] where \(\mathbf{a}=[a_{1},\ldots,a_{N}]^{T}\) is the column vector of unknown coefficients to be solved for, and \(i,j=1,\ldots,N\) range over all basis functions. Analogously to Eqs. (7) and (8), the resulting non-linear system of equations \[\mathbf{f}^{\mathrm{POD}}(\mathbf{a})=\mathbf{0} \tag{19}\] can be solved using the Newton method: \[\begin{split}\mathbf{K}^{\mathrm{POD}}(\mathbf{a}^{m})\Delta \mathbf{a}&=-\mathbf{f}^{\mathrm{POD}}(\mathbf{a}^{m}),\\ \mathbf{a}^{m+1}&=\mathbf{a}^{m}+\Delta\mathbf{a}. \end{split} \tag{20}\] ### Empirical Cubature Method Even though the solution field and linear system of equations have been reduced to dimension \(N\ll\mathcal{N}\), computing the components of the force vector in Eq. (17) and global stiffness matrix in Eq. (18) still requires integrating over the RVE. For the full integration, a numerical quadrature rule (usually based on Gauss quadrature) with integration points and corresponding weights \(\{(\hat{\mathbf{X}}_{q},\hat{w}_{q})\}_{q=1}^{\hat{Q}}\), where \(\hat{Q}\) is the total number of integration points, is employed, i.e., \[f_{i}^{\mathrm{POD}}(\mathbf{a})\approx\sum_{q=1}^{\hat{Q}}\hat{w}_{q}\left. \left[\left(\frac{\partial\mathbf{\phi}_{i}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{ \mathbf{\mu}}^{-1}\right):\mathbf{P}\left(\bar{\mathbf{F}}+\left(\sum_{n=1}^{N}a_{n}\frac {\partial\mathbf{\phi}_{n}}{\partial\mathbf{X}^{\mathrm{p}}}\right)\mathbf{F}_{\mathbf{\mu}}^{ -1}\right)|\det\mathbf{F}_{\mathbf{\mu}}|\right]\right|_{\hat{\mathbf{X}}_{q}}, \tag{21}\] for \(i=1,\ldots,N\). For a fine mesh, \(\hat{Q}\) is very large and thus evaluating Eq. (21) leads to high computational costs. To address this issue, we employ the Empirical Cubature Method (ECM), which was proposed in Hernandez et al. [23] for a fixed geometry, and extend it to parameterized geometries. The idea of ECM is to find a subset of points \(\{\mathbf{X}_{q}\}_{q=1}^{Q}\subset\{\hat{\mathbf{X}}_{q}\}_{q=1}^{\hat{Q}}\) with \(Q\ll\hat{Q}\) among the set of all integration points with corresponding weights \(\{w_{q}\}_{q=1}^{Q}\) that approximates Eq. (21) up to a user-defined error \(\varepsilon\). To find such a subset that approximates Eq. (21) well for all admissible geometrical parameters \(\mathbf{\mu}\), Eq.
(21) is first rewritten as \[\begin{split} f_{i}^{\mathrm{POD}}(\mathbf{a})&=\sum _{q=1}^{\hat{Q}}\hat{w}_{q}\left.\left[\frac{\partial\mathbf{\phi}_{i}}{\partial \mathbf{X}^{\mathrm{p}}}:\underbrace{\left(\mathbf{P}\left(\bar{\mathbf{F}}+\left(\sum_{ n=1}^{N}a_{n}\frac{\partial\mathbf{\phi}_{n}}{\partial\mathbf{X}^{\mathrm{p}}}\right)\mathbf{F}_{ \mathbf{\mu}}^{-1}\right)\mathbf{F}_{\mathbf{\mu}}^{-T}\left|\det\mathbf{F}_{\mathbf{\mu}}\right| \right)}_{\eqqcolon\,\mathbf{W}}\right]\right|_{\hat{\mathbf{X}}_{q}},\\ &=\sum_{q=1}^{\hat{Q}}\hat{w}_{q}\left.\left[\frac{\partial\mathbf{ \phi}_{i}}{\partial\mathbf{X}^{\mathrm{p}}}:\mathbf{W}(\mathbf{X}^{\mathrm{p}};\bar{\mathbf{F} },\mathbf{\mu})\right]\right|_{\hat{\mathbf{X}}_{q}},\end{split} \tag{22}\] where the weighted stress \(\mathbf{W}\) is defined. To remove the parameter dependence of the integrand in Eq. (22), the weighted stress is approximated by another reduced basis, i.e., \[\mathbf{W}(\mathbf{X}^{\mathrm{p}};\bar{\mathbf{F}},\mathbf{\mu})\approx\sum_{l=1}^{L}\alpha_{ l}(\bar{\mathbf{F}},\mathbf{\mu})\mathbf{B}_{l}(\mathbf{X}^{\mathrm{p}}), \tag{23}\] where \(\{\mathbf{B}_{l}\}_{l=1}^{L}\) is a set of \(L\) basis functions obtained using POD, which are orthonormal with respect to \(L^{2}(\Omega^{\mathrm{p}})\), i.e., \[\int_{\Omega^{\mathrm{p}}}\mathbf{B}_{m}:\mathbf{B}_{n}d\mathbf{X}^{\mathrm{p}}=\delta_{mn}. \tag{24}\] Inserting Eq. (23) into Eq. (22) and rearranging yields \[f_{i}^{\mathrm{POD}}(\mathbf{a})\approx\sum_{l=1}^{L}\alpha_{l}(\bar{\mathbf{F}}, \mathbf{\mu})\sum_{q=1}^{\hat{Q}}\hat{w}_{q}\left.\left[\frac{\partial\mathbf{\phi}_{i }}{\partial\mathbf{X}^{\mathrm{p}}}:\mathbf{B}_{l}\right]\right|_{\hat{\mathbf{X}}_{q}},\qquad i=1, \ldots,N. \tag{25}\] Since Eq. (25) should be accurate for any choice of coefficients \(\alpha_{l}(\bar{\mathbf{F}},\mathbf{\mu})\), all the \(N\cdot L\) terms in Eq. (25) that approximate the integral have to be approximated as accurately as possible. Hence, the goal becomes to find a subset of \(Q\) \((\ll\hat{Q})\) integration points with corresponding weights \(\{(\mathbf{X}_{q},w_{q})\}_{q=1}^{Q}\) that approximates Eq. (25) well, i.e., \[\sum_{q=1}^{\hat{Q}}\hat{w}_{q}\left.\left[\frac{\partial\mathbf{\phi}_{i}}{ \partial\mathbf{X}^{\mathrm{p}}}:\mathbf{B}_{l}\right]\right|_{\hat{\mathbf{X}}_{q}}\approx\sum_{q=1}^{Q}w_{ q}\left.\left[\frac{\partial\mathbf{\phi}_{i}}{\partial\mathbf{X}^{\mathrm{p}}}:\mathbf{B}_{l}\right] \right|_{\mathbf{X}_{q}},\qquad i=1,\ldots,N,\ l=1,\ldots,L. \tag{26}\] These \(Q\) points and corresponding weights are found using a greedy algorithm, the details of which can be found in [23] and are omitted here. The algorithm is terminated when the mean squared error of all \(N\cdot L\) terms is less than a user-defined tolerance \(\varepsilon\). Compared to the original algorithm for a fixed geometry, as proposed in [23], the only differences are that the weighted stress \(\mathbf{W}\) is employed instead of the stress \(\mathbf{P}\) and that the parent domain \(\Omega^{\mathrm{p}}\) is considered instead of a fixed domain \(\Omega\).
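Although the point-selection details are deferred to [23], the following schematic sketch conveys the flavor of such a greedy cubature algorithm. It is an illustrative simplification (greedy matching pursuit with a nonnegative least-squares weight re-fit), not the exact procedure of [23]. The matrix `G` stacks the \(N\cdot L\) integrand samples \(\left[\partial\mathbf{\phi}_{i}/\partial\mathbf{X}^{\mathrm{p}}:\mathbf{B}_{l}\right]\) row-wise, with one column per candidate point, and `w_full` holds the full quadrature weights.

```python
import numpy as np
from scipy.optimize import nnls

def ecm_greedy(G, w_full, tol=1e-2, max_points=500):
    """Schematic empirical-cubature point selection (not the algorithm of [23]).

    G      : (N*L, Q_hat) integrand samples at all candidate points
    w_full : (Q_hat,) full quadrature weights
    Returns indices of selected points and their nonnegative weights.
    """
    b = G @ w_full                        # exact integrals of all N*L terms
    selected = []
    weights = np.zeros(0)
    residual = b.copy()
    while (np.linalg.norm(residual) > tol * np.linalg.norm(b)
           and len(selected) < max_points):
        # greedily pick the candidate column most aligned with the residual
        scores = G.T @ residual
        scores[selected] = -np.inf        # never pick a point twice
        selected.append(int(np.argmax(scores)))
        # re-fit nonnegative weights on the selected columns
        weights, _ = nnls(G[:, selected], b)
        residual = b - G[:, selected] @ weights
    return np.array(selected), weights
```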
With the ECM integration rule, the hyper-reduced force vector and global stiffness matrix are computed as \[f_{i}^{\mathrm{PODECM}}(\mathbf{a}) \coloneqq\sum_{q=1}^{Q}w_{q}\left.\left[\left(\frac{\partial\mathbf{ \phi}_{i}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\mathbf{P} \left(\bar{\mathbf{F}}+\left(\sum_{n=1}^{N}a_{n}\frac{\partial\mathbf{\phi}_{n}}{ \partial\mathbf{X}^{\mathrm{p}}}\right)\mathbf{F}_{\mathbf{\mu}}^{-1}\right)\left|\det\mathbf{ F}_{\mathbf{\mu}}\right|\right]\right|_{\mathbf{X}_{q}}, \tag{27}\] \[K_{ij}^{\mathrm{PODECM}}(\mathbf{a}) \coloneqq\sum_{q=1}^{Q}w_{q}\left.\left[\left(\frac{\partial\mathbf{ \phi}_{i}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\mathbf{A} \left(\bar{\mathbf{F}}+\left(\sum_{n=1}^{N}a_{n}\frac{\partial\mathbf{\phi}_{n}}{ \partial\mathbf{X}^{\mathrm{p}}}\right)\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\left(\frac{ \partial\mathbf{\phi}_{j}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1} \right)\left|\det\mathbf{F}_{\mathbf{\mu}}\right|\right]\right|_{\mathbf{X}_{q}}. \tag{28}\] ### Effective Quantities Once the new set of integration points and weights is found, the integrands of Eqs. (27) and (28) only need to be evaluated at the points \(\{\mathbf{X}_{q}\}_{q=1}^{Q}\) during the solution of the reduced problem. This also means that the stress and stiffness fields are available at these points only. To compute the effective quantities, the most straightforward method is to use the integration rule obtained by ECM, i.e., \[\bar{\mathbf{P}} =|\Omega^{\mathrm{p}}|^{-1}\int_{\Omega^{\mathrm{p}}}\mathbf{P}^{ \mathrm{*p}}|\det\mathbf{F}_{\mathbf{\mu}}|d\mathbf{X}^{\mathrm{p}} \tag{29}\] \[\approx|\Omega^{\mathrm{p}}|^{-1}\sum_{q=1}^{Q}w_{q}\left.\left( \mathbf{P}^{\mathrm{*p}}|\det\mathbf{F}_{\mathbf{\mu}}|\right)\right|_{\mathbf{X}_{q}}.\] Since the stress field \(\mathbf{P}^{\mathrm{*p}}\) is known at all integration points \(\{\mathbf{X}_{q}\}_{q=1}^{Q}\), the effective stress can be directly evaluated. The method yields very accurate results in the examples considered below in Section 4. However, it should be noted that there is currently no guarantee that the integration rule found by ECM will generally be accurate for the computation of the effective stress. As discussed in Section 2.1.1, derivatives \(\frac{\partial\mathbf{w}^{*}}{\partial\bar{\mathbf{F}}}\) are needed to find the effective stiffness \(\bar{\mathbf{A}}\), see Eq. (12). For each component of \(\bar{\mathbf{F}}\), the linear tangent problem of Eq. (13) needs to be solved.
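Before assembling these tangent systems, note that the effective stress of Eq. (29) reduces to a plain weighted sum over the \(Q\) selected points. A minimal sketch (array names and shapes are illustrative assumptions, not prescribed by the paper):

```python
import numpy as np

def effective_stress(P_star, detF_mu, w_ecm, vol_parent):
    """Effective 1PK stress, Eq. (29): weighted average over the ECM points.

    P_star     : (Q, d, d) stress P*p evaluated at the Q selected points
    detF_mu    : (Q,) det(F_mu) at the selected points
    w_ecm      : (Q,) ECM weights
    vol_parent : |Omega^p|, volume of the parent domain
    """
    return np.einsum('q,q,qij->ij', w_ecm, np.abs(detF_mu), P_star) / vol_parent
```

The tangent systems below reuse exactly these points and weights.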
By employing the trial space of the fluctuation field for the auxiliary function \(\mathbf{q}_{kl}\), i.e., \[\mathbf{q}_{kl}=\sum_{n=1}^{N}q_{n}\mathbf{\phi}_{n}(\mathbf{X}^{\mathrm{p}}), \tag{30}\] and the integration rule found by ECM, the following linear system results: \[\mathbf{K}^{\mathrm{*p}}\mathbf{q}=\mathbf{b}, \tag{31}\] where \(\mathbf{q}=[q_{1},\ldots,q_{N}]^{T}\) is the column vector of unknowns to be solved for and \[K^{*\mathrm{p}}_{ij} =\sum_{q=1}^{Q}w_{q}\left.\left[\left(\frac{\partial\mathbf{\phi}_{i}}{ \partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\mathbf{A}^{*\mathrm{p}} :\left(\frac{\partial\mathbf{\phi}_{j}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{ \mu}}^{-1}\right)\left|\det\mathbf{F}_{\mathbf{\mu}}\right|\right]\right|_{\mathbf{X}_{q}}, \tag{32}\] \[b_{i} =-\left(\sum_{q=1}^{Q}w_{q}\left.\left[\left(\frac{\partial\mathbf{ \phi}_{i}}{\partial\mathbf{X}^{\mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right):\mathbf{A}^{* \mathrm{p}}\left|\det\mathbf{F}_{\mathbf{\mu}}\right|\right]\right|_{\mathbf{X}_{q}}\right): \mathbb{E}_{kl}. \tag{33}\] Note that the matrix \(\mathbf{K}^{*\mathrm{p}}\in\mathbb{R}^{N\times N}\) is exactly the hyper-reduced stiffness matrix \(\mathbf{K}^{\mathrm{PODECM}}\) of Eq. (28) evaluated at the solution \(\mathbf{w}^{*\mathrm{p}}\). After solving the tangent problems, the effective stiffness \(\bar{\mathbf{A}}\) can be computed (in index notation) as \[\bar{A}_{ijkl}=|\Omega^{\mathrm{p}}|^{-1}\sum_{q=1}^{Q}w_{q}\left.\left(\frac{ \partial P^{*\mathrm{p}}_{ij}}{\partial\bar{F}_{kl}}|\det\mathbf{F}_{\mathbf{\mu}}| \right)\right|_{\mathbf{X}_{q}}, \tag{34}\] where \[\frac{\partial\mathbf{P}^{*\mathrm{p}}}{\partial\bar{F}_{kl}}=\mathbf{A}^{*\mathrm{p} }:\left(\mathbb{E}_{kl}+\left(\frac{\partial\mathbf{q}_{kl}}{\partial\mathbf{X}^{ \mathrm{p}}}\mathbf{F}_{\mathbf{\mu}}^{-1}\right)\right). \tag{35}\] The derivation of the effective sensitivities (recall Eq. (14)) can be performed analogously to the effective stiffness. However, due to the derivatives of the inverse and determinant of the transformation gradient \(\mathbf{F}_{\mathbf{\mu}}\), such derivations become rather technical and hence are omitted. ### Auxiliary Problem For Geometrical Transformation Thus far, the geometrical transformation \(\mathbf{\Phi}_{\mathbf{\mu}}:\Omega^{\mathrm{p}}\to\Omega^{\mathbf{\mu}}\) has been assumed to be known and has not been discussed in more detail. However, such transformations are in general not known analytically and have to be found numerically by using, for example, radial basis functions, see, e.g., [30, 31], or mesh-based methods, see, e.g., [29, 32]. For each of those methods, an auxiliary problem arises which needs to be solved. In order to rapidly solve the surrogate model for a wide range of different geometries, it must therefore also be ensured that the auxiliary problem can be solved rapidly. In this work, the method in [29] is employed, in which the auxiliary problem is formulated as a linear elasticity problem by defining \(\mathbf{\Phi}_{\mathbf{\mu}}(\mathbf{X}^{\mathrm{p}})=\mathbf{X}^{\mathrm{p}}+\mathbf{d}(\mathbf{X}^{ \mathrm{p}})\), with \(\mathbf{d}\) the transformation displacement obtained from \[\mathrm{Div}\left(\mathbb{C}^{\mathrm{aux}}:\frac{1}{2}\left(\frac{\partial \mathbf{d}}{\partial\mathbf{X}^{\mathrm{p}}}+\left(\frac{\partial\mathbf{d}}{\partial\mathbf{X} ^{\mathrm{p}}}\right)^{T}\right)\right)=\mathbf{0}\text{ in }\Omega^{\mathrm{p}}.
\tag{36}\] In the above equation, \(\mathbb{C}^{\mathrm{aux}}\) is the fourth-order elasticity tensor, fully specified by the Young's modulus \(E^{\mathrm{aux}}\) and Poisson's ratio \(\nu^{\mathrm{aux}}\). The boundary conditions for this PDE are problem-dependent and are specified by the geometrical parameters \(\mathbf{\mu}\). For the RVE problem, the outer boundaries are fixed (\(\mathbf{d}=\mathbf{0}\)), while \(\mathbf{d}\) is prescribed on parts of the interior that are parameterized by \(\mathbf{\mu}\). In [29] the effect of the choice of \(E^{\mathrm{aux}}\) and \(\nu^{\mathrm{aux}}\) was studied and it was demonstrated empirically that the choice only has a minor effect on the final approximation quality. Hence, in all numerical examples considered in this work, a Young's modulus of \(E^{\mathrm{aux}}=1\) and a Poisson's ratio of \(\nu^{\mathrm{aux}}=0.25\) are assumed. The auxiliary problem can then be significantly accelerated with the RBM in combination with a DEIM (e.g., [29, 31]), resulting in \[\hat{\mathbf{A}}\hat{\mathbf{d}}=\hat{\mathbf{b}}(\mathbf{\mu}), \tag{37}\] where \(\hat{\mathbf{A}}\in\mathbb{R}^{N_{p}\times N_{p}}\) is the reduced system matrix, \(\hat{\mathbf{d}}\in\mathbb{R}^{N_{p}}\) is the reduced transformation displacement, \(\hat{\mathbf{b}}(\mathbf{\mu})\in\mathbb{R}^{N_{p}}\) is the reduced forcing vector and \(N_{p}\) is the number of geometrical parameters. Since \(N_{p}\) is usually small, Eq. (37) can be rapidly solved. From \(\hat{\mathbf{d}}\), the transformation gradient \(\mathbf{F}_{\mathbf{\mu}}\), its inverse \(\mathbf{F}_{\mathbf{\mu}}^{-1}\), and its determinant \(\det\mathbf{F}_{\mathbf{\mu}}\) can be computed. Moreover, expressions for the derivative of the inverse and determinant of the transformation gradient \(\mathbf{F}_{\mathbf{\mu}}\) can be derived, which are needed for computing the sensitivities with respect to \(\mathbf{\mu}\), see Eq. (14). For more information on the auxiliary problem and its reduction, the reader is referred to [29]. ### Summary For convenience, the offline-online decomposition for constructing and solving the surrogate model is summarized in Algorithm 1.
```
Offline Stage:
1: Define a parent domain \(\Omega^{\mathrm{p}}\) and its finite element discretization.
2: Generate parameter samples \(\{\bar{\mathbf{F}}^{i},\mathbf{\mu}^{i}\}_{i=1}^{N_{s}}\) from a random distribution.
3: For each different set of geometrical parameters \(\mathbf{\mu}^{i}\), solve the auxiliary problem in Eq. (36) to obtain the transformation map \(\mathbf{\Phi}_{\mathbf{\mu}^{i}}\).
4: Compute \(\mathbf{F}_{\mathbf{\mu}^{i}}^{-1}\) and \(\det\mathbf{F}_{\mathbf{\mu}^{i}}\) for each parameter sample \(\mathbf{\mu}^{i}\), then run full simulations (Eqs. (5) and (6)) for \(\bar{\mathbf{F}}^{i}\) and collect fluctuation displacement and weighted stress snapshots.
5: Compute POD for the fluctuation displacement and weighted stress, cf. Eqs. (15) and (23).
6: Run the ECM algorithm and find integration points and weights, cf. Eq. (26).
7: Assemble the reduced system matrix and forcing vector for the auxiliary problem in Eq. (37) by applying POD and DEIM. Details are provided in [29].
Online Stage:
1: Given a new parameter set \((\bar{\mathbf{F}}^{*},\mathbf{\mu}^{*})\), solve the reduced auxiliary problem Eq. (37) and compute \(\mathbf{F}_{\mathbf{\mu}^{*}}^{-1}\) and \(\det\mathbf{F}_{\mathbf{\mu}^{*}}\).
2: Solve the reduced problem for \(\bar{\mathbf{F}}^{*}\) with Eqs. (27) and (28).
3: Compute the effective stress using Eq. (29).
4: Solve the linear problem in Eq. (31) for each component of \(\bar{\mathbf{F}}^{*}\).
5: Compute the components of the effective stiffness with Eq. (34).
```
**Algorithm 1** Offline-online decomposition of the proposed PODECM framework with microstructures parameterized with external loading \(\bar{\mathbf{F}}\) and geometrical features \(\mathbf{\mu}\). ## 4 Example Problems The proposed framework, referred to as PODECM, is first tested on a non-linear composite microstructure under various loading conditions and analyzed in depth regarding its capabilities and accuracy. The RVE consists of an elasto-plastic matrix with stiff inclusions of variable size and is considered under non-monotonic loading. The surrogate model is analyzed in terms of the number of basis functions of the fluctuation displacement field \(N\), the number of basis functions of the weighted stress \(L\) and the ECM integration error tolerance \(\varepsilon\). Subsequently, a two-scale problem involving a porous microstructure under non-monotonic loading conditions and varying porosities is studied to illustrate the accuracy and speed-up of PODECM in a two-scale setting. All experiments are defined in two dimensions under plane strain conditions. The RVEs are assumed to be of size \([0,1]^{2}\) and all quantities are assumed to be normalized and hence dimensionless. Since the macroscopic deformation gradient \(\bar{\mathbf{F}}\) can always be decomposed into a rotation \(\bar{\mathbf{R}}\) and a symmetric stretch tensor \(\bar{\mathbf{U}}\) with a polar decomposition, i.e., \(\bar{\mathbf{F}}=\bar{\mathbf{R}}\bar{\mathbf{U}}\), it is sufficient to generate training data for the stretch tensor \(\bar{\mathbf{U}}\), which has only 3 independent components (6 in 3D). To measure the quality of the approximation, the following error measures, comparing the full FE simulations against PODECM solutions, are defined: 1. Error of effective stress \[\epsilon_{\bar{\mathbf{P}}}=\frac{\sum_{k=1}^{K}||\bar{\mathbf{P}}^{\text{ PODECM}}(\bar{\mathbf{U}}^{k})-\bar{\mathbf{P}}^{\text{FE}}(\bar{\mathbf{U}}^{k})||_{F}}{ \sum_{k=1}^{K}||\bar{\mathbf{P}}^{\text{FE}}(\bar{\mathbf{U}}^{k})||_{F}}, \tag{38}\] where \(\bar{\mathbf{P}}^{\text{PODECM}}(\bar{\mathbf{U}}^{k})\) and \(\bar{\mathbf{P}}^{\text{FE}}(\bar{\mathbf{U}}^{k})\) denote the effective stress obtained with PODECM and FE for \(\bar{\mathbf{U}}^{k}\), \(||\bullet||_{F}\) denotes the Frobenius norm, \(K\) is the total number of loading steps and \(\bar{\mathbf{U}}^{k}\) is the applied external load at load step \(k\). 2. Error of fluctuation field
For the geometrical parameterization, one geometrical parameter \(\mathbf{\mu}=\{\zeta\}\) that scales the size of the inclusions uniformly (and is proportional to the volume fraction of the inclusions) is introduced, see Figs. 1(b) and 1(c) showing two example domains for distinct values of \(\zeta\). The simulation mesh is depicted in Fig. 1(d), where six-noded quadratic triangular elements are used in conjunction with three quadrature points per element. In total, the mesh has \(62194\) degrees of freedom, \(15450\) triangular elements and \(46350\) quadrature points. For the constitutive model of both matrix and inclusion the small-strain \(J_{2}\)-plasticity model with linear isotropic hardening is chosen and extended to large strains with the method presented in Cuitino and Ortiz Cuitino and Ortiz (2013). The small-strain model obeys: \[\mathbf{\sigma} =\mathbb{D}:(\mathbf{\epsilon}-\mathbf{\epsilon}_{\mathrm{pl}}), \tag{40}\] \[f^{\mathrm{yield}} =||\mathbf{\sigma}||_{\mathrm{miss}}-(\sigma_{y0}+H\xi),\] (41) \[\mathbf{r} =\frac{\partial f}{\partial\mathbf{\sigma}}=\sqrt{\frac{3}{2}}\frac{ \mathrm{Dev}(\mathbf{\sigma})}{\sqrt{\mathrm{Dev}(\mathbf{\sigma}):\mathrm{Dev}(\mathbf{ \sigma})}},\] (42) \[\dot{\xi} =\gamma,\] (43) \[\dot{\mathbf{\epsilon}}_{\mathrm{pl}} =\gamma\mathbf{r},\] (44) \[\gamma \geq 0,\ f^{\mathrm{yield}}\geq 0,\ \gamma f^{\mathrm{yield}}=0, \tag{45}\] where \(\mathbf{\epsilon}\) is the small-strain tensor, \(\mathbf{\epsilon}_{\mathrm{pl}}\) the plastic strain tensor, \(\mathbb{D}\) is the fourth-order elasticity tensor that can be fully specified by Young's modulus \(E\) and Poisson's ratio \(\nu\), and \(\mathbf{\sigma}\) is the corresponding stress tensor; \(f^{\mathrm{yield}}\) Figure 2: Parent with two parameterized domains and simulation mesh. (a) The parent domain consists of a matrix material (blue) with \(23\) random elliptical inclusions (orange). The problem has one geometrical parameter \(\zeta\) that scales all ellipses uniformly (\(\zeta=1\) for parent domain). (b) A parameterized domain for \(\zeta=1.2\) and (c) for \(\zeta=0.5\). (d) The considered mesh consists of six-noded triangular elements and contains in total \(62194\) degrees of freedom, \(15450\) triangular elements and \(46350\) quadrature points. defines the yield surface, \(\text{Dev}(\bullet)\) takes the deviatoric part of a tensor (\(\bullet\)), \(||\bullet||_{\text{miss}}=\sqrt{\frac{3}{2}}\,\text{Dev}(\bullet):\text{Dev}(\bullet)\) computes the von Mises stress, \(H\) is the hardening constant, \(\sigma_{y0}\) yield stress, \(\mathbf{r}\) the plastic flow direction, \(\xi\) the equivalent plastic strain that defines the isotropic hardening of the yield surface, and \(\gamma\) is the consistency parameter. For more information on the material model, see Simo and Hughes [34]. Following the procedure of [33], by employing a multiplicative split of the deformation gradient \(\mathbf{F}=\mathbf{F}^{\text{el}}\mathbf{F}^{\text{pl}}\), the elastic logarithmic strain can be defined as \[\mathbf{C}^{\text{el}}_{\text{log}}\coloneqq\ln\mathbf{C}^{\text{el}}=\ln((\mathbf{F}^{ \text{el}})^{T}\mathbf{F}^{\text{el}}). \tag{46}\] By interpreting the elastic logarithmic strain as the small-strain tensor, the small-strain constitutive model defined in Eqs. 
(40)-(45) is used to compute the stress tensor \(\hat{\mathbf{S}}\) on the intermediate configuration, \[\hat{\mathbf{S}}\coloneqq 2\mathbf{\sigma}(\mathbf{C}^{\text{el}}_{\text{log}}):\frac{ \partial\mathbf{C}^{\text{el}}_{\text{log}}}{\partial\mathbf{C}^{\text{el}}}, \tag{47}\] while the 1PK stress is recovered from \(\hat{\mathbf{S}}\) as \[\mathbf{P}=(\mathbf{F}^{\text{el}})^{-1}\hat{\mathbf{S}}\ (\mathbf{F}^{\text{pl}})^{-T}. \tag{48}\] Instead of evolving the plastic strain with Eq. (44), the plastic deformation gradient \(\mathbf{F}^{\text{pl}}\) is evolved according to \[\dot{\mathbf{F}}^{\text{pl}}=\exp(\gamma\mathbf{r})\mathbf{F}^{\text{pl}}. \tag{49}\] For the matrix, the following material parameters are selected: \(E=10\), \(\nu=0.3\), \(\sigma_{y0}=0.2\) and \(H=5\). For the inclusions, \(E=100\) and \(\nu=0.3\) are selected, corresponding to a stiffness contrast ratio of 10. Since no plastic deformation is assumed for the inclusions, their yield stress is set to a value large enough that yielding never occurs. Three loading parameters \(\bar{U}_{xx}\), \(\bar{U}_{xy}\), \(\bar{U}_{yy}\) and one geometrical parameter \(\zeta\) are considered with bounds \(\zeta\in[0.5,1.2]\), \(\bar{U}_{xx}\in[0.9,1.1]\), \(\bar{U}_{yy}\in[0.9,1.1]\) and \(\bar{U}_{xy}\in[-0.1,0.1]\). Through \(\zeta\), the volume fraction of the inclusions is varied from \(5.85\%\) to \(33.7\%\). The macroscopic stretch tensor is applied in \(K=40\) load steps. For the first 10 load steps the applied stretch increases linearly from \(\mathbf{0}\) to \(\bar{\mathbf{U}}\). In the next 20 steps, the applied loading decreases linearly from \(\bar{\mathbf{U}}\) to \(-\bar{\mathbf{U}}\), such that at \(k=20\) the applied stretch is \(\mathbf{0}\). Finally, in the last 10 steps, the stretch increases linearly from \(-\bar{\mathbf{U}}\) to \(\mathbf{0}\). Even though only macroscopic strains of up to 10% are applied, local strains reach values of up to 83%. The evolution of the effective von Mises stress for an example with \(\zeta=1.010\), \(\bar{U}_{xx}=1.1\), \(\bar{U}_{yy}=1.0\), \(\bar{U}_{xy}=0.0\) is shown in Fig. 3, including the local von Mises stress fields at steps \(k=\{10,20,30,40\}\). #### 4.1.2 Results In total, 20 samples are generated from a Sobol sequence to train PODECM, whereas 100 testing samples are generated from a uniform distribution to test it. Each sample consists of 40 snapshots, one per load step. The accuracy and speed-up of PODECM depend on the number of basis functions used for the fluctuation displacement field \(N\) and the number of quadrature points \(Q\). While \(N\) is typically chosen directly, \(Q\) depends on the choice of the number of basis functions used for the weighted stress \(L\) and the ECM integration error \(\varepsilon\). To study the influence of \(L\) on the resulting number of quadrature points \(Q\) and the mean errors in effective stress and fluctuation field on the testing dataset, several combinations of \(N\) and \(L\) for a fixed \(\varepsilon=10^{-2}\) are tested, with the resulting errors shown in the top row of Fig. 4. The projection error (for \(N\) basis functions and using full integration) is shown as well. It can be clearly seen that the number of quadrature points \(Q\) increases drastically with increasing \(N\) and \(L\), as more information needs to be integrated accurately.
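As an aside on the training-data generation just described, quasi-random Sobol samples over the four parameters can be drawn, for instance, with SciPy's quasi-Monte Carlo module. The bounds below follow Section 4.1.1; the scrambling and seeds are illustrative choices not stated in the text.

```python
import numpy as np
from scipy.stats import qmc

# Bounds for (U_xx, U_yy, U_xy, zeta), cf. Section 4.1.1
l_bounds = [0.9, 0.9, -0.1, 0.5]
u_bounds = [1.1, 1.1, 0.1, 1.2]

# 20 training samples from a (scrambled) Sobol sequence, scaled to the bounds
sampler = qmc.Sobol(d=4, scramble=True, seed=0)
train = qmc.scale(sampler.random(n=20), l_bounds, u_bounds)

# 100 testing samples from a uniform distribution
rng = np.random.default_rng(1)
test = rng.uniform(l_bounds, u_bounds, size=(100, 4))
```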
For the mean errors, a higher \(L\) leads to better results on average, although we observe that errors fluctuate significantly, and for some values of \(N\) a worse approximation is obtained with a higher \(L\). This occurs since the ECM algorithm is a greedy algorithm, meaning that it does not necessarily find an optimal set of integration points. When more basis functions are included in the algorithm, a completely different set of points may be found that finally leads to a worse approximation. It can furthermore be observed that the gap between the projection error and the PODECM solution grows larger for increasing \(N\). This is because the basis functions typically become more oscillatory and difficult to approximate with higher \(N\), see, e.g., [35, 36], and thus require significantly more quadrature points for a good approximation. It is interesting that the gaps for the errors in the fluctuation field are smaller than those in the effective stress. This happens because the ECM integration rule used to compute the effective stress introduces an additional approximation and hence also a source of error. Several combinations of \(N\) and \(\varepsilon\) for a fixed \(L=15\) are next tested to study the influence of \(\varepsilon\) on the number of quadrature points and approximation errors. The obtained results are shown in the bottom row of Fig. 4. Similarly to the previous analysis, a lower \(\varepsilon\) leads to more quadrature points \(Q\) and a lower mean error in the effective stress and fluctuation field on average, as the integrals are approximated more accurately. Interestingly, lowering the tolerance from \(0.01\) to \(0.001\) does not significantly improve the approximation quality, even though substantially more quadrature points are included, meaning that the errors can be attributed to the higher modes of the weighted stress (the additional quadrature points barely contain any information). Therefore, choosing a tolerance smaller than \(\varepsilon=0.01\) leads to no improvement. From Fig. 4 we further observe that the errors of the fluctuation field are considerably higher (by an order of magnitude) than the errors of the effective stresses. This results from the fact that the POD basis functions aim to minimize the \(H^{1}(\Omega^{\rm p})\) error, and thus approximate the field accurately on average rather than locally, suggesting a favorable approximation for averaged quantities such as the effective stress. In [37], the authors showed a similar result for FFT: the effective stresses converge with an order of \(h\), while the local fields converge with \(h^{1/2}\), where \(h\) is the voxel edge length used. To conclude, the more basis functions \(N\) and \(L\) are used and the lower the integration error \(\varepsilon\) is chosen, the more accurate the final result is. However, at the same time the surrogate model grows in size and the speed-up decreases. A user must thus make a compromise between accuracy and cost. In our experience, the speed-up correlates nearly linearly with \(Q\), i.e., if the number of quadrature points is reduced by a factor of \(100\), this results in a speed-up of roughly \(100\). In contrast, the number of basis functions \(N\) only plays a minor role for the speed-up. For this example, the use of \(N=20\), \(L=20\) and \(\varepsilon=0.01\) leads to a reduction in the number of degrees of freedom from \(62194\) to \(20\) and in the number of quadrature points from \(46350\) to \(212\), suggesting a speed-up on the order of roughly 200.
### Two-Scale Compression With Porous Microstructure

#### 4.2.1 Problem Description

In the second example, the macroscopic structure, depicted in Fig. 5 together with the employed simulation mesh, is compressed under an external loading \(T(x)\). Here, we assume \(H=1\), \(W=2\), \(T(x)=\bar{T}\left(1-\left(\frac{2x}{W}-1\right)^{2}\right)\), \(x\in[0,W]\), with \(\bar{T}\) the magnitude of the applied load. The simulation mesh has 1322 degrees of freedom, 200 8-noded quadrilateral elements, and in total 800 quadrature points. The structure is assumed to have a porous microstructure, modelled by the parameterized RVE shown in Fig. 6, and the same material model as the matrix material in the previous example, Eqs. (40)-(49). Such microstructures (with circular holes, i.e., \(a=b\)) have been considered in several works, see, e.g., [38; 39], due to their auxetic behavior under compression, i.e., negative Poisson's ratio. During compression, the center part of the material starts to rotate, thus pulling the material from the sides inwards. In our work, we define two independent parameters, namely the volume fraction of the voids, \(v_{\rm void}\coloneqq 4\pi ab\), and the ratio of the semi-major axis \(b\) to semi-minor axis \(a\) of each hole, \(\kappa\coloneqq b/a\). The semi-minor axis \(a\) and semi-major axis \(b\) depend on \(v_{\rm void}\) and \(\kappa\) as \[a(v_{\rm void},\kappa) =\sqrt{v_{\rm void}/(4\pi\kappa)}, \tag{50}\] \[b(v_{\rm void},\kappa) =\kappa\cdot a(v_{\rm void},\kappa). \tag{51}\]

Figure 4: The left column shows the number of quadrature points \(Q\) obtained from ECM for different choices of number of basis functions of fluctuation field \(N\), number of basis functions of weighted stress \(L\), and ECM integration error \(\varepsilon\). The middle and right columns show the average errors of the effective stress and the fluctuation field when tested on the testing data for different choices of \(N\), \(L\) and \(\varepsilon\). The top row assumes a fixed \(\varepsilon=0.01\), while the bottom row assumes a fixed \(L=15\).

Figure 5: Geometry (a) and mesh (b) of the considered macroscopic structure. The body is fixed on the bottom and an external compression force \(T\) is applied on the top. The mesh consists of 1322 degrees of freedom, 200 8-noded quadrilateral elements and 4 quadrature points per element.

Figure 6: Geometry (a) and mesh (b) of the porous RVE. The elliptical holes have semi-minor axis \(a\) and semi-major axis \(b\), and are parameterized by the volume fraction of the pores \(v_{\text{void}}\) and the ratio \(\kappa=b/a\). The employed simulation mesh has 21042 degrees of freedom, 4964 6-noded triangular elements and in total 14892 quadrature points. The parent domain corresponds to \(\kappa=1.25\) and \(v_{\text{void}}=0.45\).

Depending on the values of the parameters, the resulting effective properties change significantly. To illustrate this, linear analyses of this RVE for different parameters have been carried out, similarly to [40], where a small compression in the \(y\)-direction with \(\Delta u_{y}=0.001\) has been applied, while allowing the RVE to contract freely in the \(x\)-direction. With the resulting displacement in the \(x\)-direction, \(\Delta u_{x}\), the Poisson's ratio in the initial state can be estimated as \[\nu^{\text{eff}}=-\frac{\Delta u_{x}}{\Delta u_{y}}. \tag{52}\] Similarly, the initial Young's modulus is estimated as \[E^{\text{eff}}=\frac{\bar{P}_{yy}}{\Delta u_{y}}, \tag{53}\] where \(\bar{P}_{yy}\) is the \(yy\)-component of the effective stress. For parameter ranges \(v_{\text{void}}\in[0.4,0.5]\) and \(\kappa\in[1.01,1.5]\), the estimated Poisson's ratio and Young's modulus are plotted in Fig. 7. It can be observed that removing material (by increasing \(v_{\text{void}}\) while keeping \(\kappa\) fixed) or increasing \(\kappa\) while keeping \(v_{\text{void}}\) fixed both lead to a softer response with lower Young's modulus. While the Poisson's ratio is barely affected by \(v_{\rm void}\) for values of \(\kappa\) close to 1, the effect becomes apparent for larger values of \(\kappa\). In particular, for \(\kappa\geq 1.4\), the Poisson's ratio changes from a positive value to a negative one. Therefore, by tuning \(v_{\rm void}\) and \(\kappa\), the RVE behavior can be significantly modified.

Figure 7: Initial effective Poisson’s ratio (a) and Young’s modulus (b) of the RVE for different values of \(v_{\rm void}\) and \(\kappa\).

The parameters \(v_{\rm void}\) and \(\kappa\) are chosen to vary smoothly through the macrostructural domain as \[\kappa(x,y) =1.5-(1.5-1.01)y, \tag{54}\] \[v_{\rm void}(x,y) =0.4+(0.5-0.4)(1-x)^{2}. \tag{55}\] The parent mesh is selected with \(\kappa=1.25\) and \(v_{\rm void}=0.45\). The external loading is applied in \(K=50\) load steps. In the first 25 load steps, the applied load \(\bar{T}\) increases linearly from 0 to 0.2. In the next 25 steps \(\bar{T}\) is decreased linearly from 0.2 to 0.

#### 4.2.2 Results

Several two-scale simulations with different PODECM surrogate models are run and compared to the full reference FE\({}^{2}\) solution. To compare the accuracy of the surrogate solutions, the compliance \(C\coloneqq\int_{\Gamma}T(x)u_{y}(x)dx\), where \(\Gamma\) denotes the top horizontal edge of the macrostructure and \(u_{y}\) its vertical displacement, is computed at every load step. The compliance is an important quantity, often employed in optimization problems. Subsequently, the relative error in compliance \(\epsilon_{C}\) and the relative error averaged over all load steps \(\bar{\epsilon}_{C}\) are defined as \[\epsilon_{C,k}\coloneqq\frac{|C_{k}-C_{k}^{\rm FE^{2}}|}{|C_{k}^{\rm FE^{2}}|},\qquad\bar{\epsilon}_{C}\coloneqq\frac{1}{K}\sum_{k=1}^{K}\epsilon_{C,k}, \tag{56}\] where the subscript \(k\) denotes the \(k\)-th load step and \(C^{\rm FE^{2}}\) is the compliance computed with the full solution.

The tested PODECM RVE models are generated for different numbers of training samples \(N_{\rm train}\) and basis functions \(N\). The training data is sampled from \(\bar{U}_{xx}\in[0.85,1]\), \(\bar{U}_{yy}\in[0.85,1]\), \(\bar{U}_{xy}\in[-0.15,0.15]\), \(v_{\rm void}\in[0.4,0.5]\) and \(\kappa\in[1.01,1.5]\) with a Sobol sequence. Each sample consists of 50 load steps, where the macroscopic stretch tensor is linearly increased for 25 load steps and then linearly decreased to 0 in 25 steps. For the ECM algorithm, the number of basis functions for the weighted stress \(L\) and the integration error \(\varepsilon\) are fixed at \(L=20\) and \(\varepsilon=0.01\) for all models. The exact settings for each surrogate model are summarized in Table 1, alongside the averaged relative error in compliance \(\bar{\epsilon}_{C}\), and the run time and speed-up in comparison to the full FE\({}^{2}\) solution. For comparison, the total number of degrees of freedom and quadrature points of the full FE model of the RVE are also provided. 
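The geometric parameterization of Eqs. (50)-(51) and the smoothly varying fields of Eqs. (54)-(55) are simple enough to state directly in code; the following NumPy sketch (function names are ours) mirrors those formulas.

```python
import numpy as np

def ellipse_axes(v_void, kappa):
    """Semi-minor and semi-major axes from Eqs. (50)-(51):
    a = sqrt(v_void / (4*pi*kappa)), b = kappa * a."""
    a = np.sqrt(v_void / (4.0 * np.pi * kappa))
    return a, kappa * a

def rve_parameters(x, y):
    """Smooth macroscopic parameter fields, Eqs. (54)-(55)."""
    kappa = 1.5 - (1.5 - 1.01) * y
    v_void = 0.4 + (0.5 - 0.4) * (1.0 - x) ** 2
    return v_void, kappa

# Effective Poisson's ratio from the linear compression test, Eq. (52):
# nu_eff = -du_x / du_y  (du_x, du_y would come from the RVE solve).
```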
As can be seen, for all surrogate models the number of quadrature points is reduced by a factor of up to 100, which results in speed-ups of up to 100 times, while errors are below 5% for all models. By increasing \(N\) from 10 to 50 for rom_1 to rom_5, the error decreases from 4.74% to 1.54%, whereas the speed-up reduces from 92 to 26. Including more training samples with the same number of basis functions \(N=50\) (from \(N_{\rm train}=20\) for rom_5 to \(N_{\rm train}=100\) for rom_6) improves the error from 1.54% to 0.65% while the speed-up remains roughly the same. This means that by increasing the sample size, the first 50 basis functions contain more general information that results in a better approximation.

Footnote a: All computations are executed using 20 cores of an Intel Platinum 8260.

\begin{table} \begin{tabular}{c|c c c|c c c} & \(N_{\rm train}\) & \(N\) & \(Q\) & \(\bar{\epsilon}_{C}\) & run time\({}^{a}\) & speed-up \\ \hline full & - & 21042 & 14892 & - & 12573s & - \\ rom\_1 & 20 & 10 & 132 & 4.74\% & 136s & 92.45 \\ rom\_2 & 20 & 20 & 259 & 4.66\% & 205s & 61.33 \\ rom\_3 & 20 & 30 & 372 & 2.28\% & 284s & 44.27 \\ rom\_4 & 20 & 40 & 490 & 2.19\% & 385s & 32.66 \\ rom\_5 & 20 & 50 & 595 & 1.54\% & 488s & 25.76 \\ rom\_6 & 100 & 50 & 577 & 0.65\% & 496s & 25.34 \\ \end{tabular} \end{table} Table 1: Summary of results for full FE\({}^{2}\) and different PODECM surrogate models. Reduced order models are generated for different numbers of training samples \(N_{\rm train}\) and basis functions of the displacement \(N\). The number of quadrature points \(Q\) follows from the ECM algorithm with a fixed number of basis functions of the weighted stress field \(L=20\) and an integration error of \(\varepsilon=0.01\). All reduced order models achieve errors of less than 5% with speed-ups of up to 100 times. By generating more training data and maintaining the same \(N\) (rom_5 and rom_6), better results are achieved.

In Fig. 8 the force-displacement curve is shown for the FE\({}^{2}\) and a few selected surrogate solutions. The displacement is defined as the vertical displacement at the midpoint of the top edge (which is also the maximal displacement). It can be observed that all surrogate models underpredict the displacement, indicating that the surrogate models overpredict the stiffness of the macrostructure.

Figure 8: Force-displacement curve of two-scale simulations. The displacement \(\tilde{u}\) is defined as the vertical displacement of the mid point on the top edge, cf. Fig. 5a. The structure starts deforming plastically for \(\bar{T}>0.07\) and a residual displacement of roughly \(\tilde{u}=0.04\) remains after unloading. All surrogate models achieve accurate results, although they generally predict a slightly stiffer response than the full FE\({}^{2}\) solution.
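As a short illustration of the error metrics, the compliance and the relative errors of Eq. (56) can be evaluated as follows; the trapezoidal quadrature along \(\Gamma\) and all names are our assumptions.

```python
import numpy as np

def compliance(x, T, u_y):
    """C = int_Gamma T(x) u_y(x) dx along the top edge,
    approximated here with the trapezoidal rule."""
    return np.trapz(T * u_y, x)

def compliance_errors(C_rom, C_ref):
    """Per-step relative error and its average over K load steps, Eq. (56).
    `C_rom`, `C_ref`: compliance histories of the surrogate and the
    full FE^2 solution."""
    eps = np.abs(C_rom - C_ref) / np.abs(C_ref)
    return eps, eps.mean()
```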
To tackle this problem, random loading paths during training could be used, as performed in, e.g., [41; 42], to generate a more general surrogate model. Solely increasing the number of samples from 20 to 100 (rom_5 to rom_6) also decreases the observed errors to less than 1% for all load steps. This shows that PODECM generalizes well to loading paths that are not part of the training data. ## 5 Conclusions In this work, we developed a reduced order model, termed PODECM, by combining the proper orthogonal decomposition (POD), the empirical cubature method (ECM) and a geometrical transformation method. This method is designed to accelerate microscopic computations within two-scale simulations, parameterized by material and geometrical parameters, hence enabling PODECM to be used for two-scale optimization problems. We showed how to compute the effective stress and the corresponding consistent effective stiffness for this model, as required by the macroscopic solver, and how to obtain the effective sensitivities with respect to geometrical parameters. The framework was first tested on a single-scale problem involving an RVE of a composite microstructure that consisted of a soft elasto-plastic matrix with stiff inclusions of variable size, controlled by a single geometrical parameter. With PODECM, the number of degrees of freedom and integration points was reduced to a fraction of the full FE model while maintaining a high accuracy in effective stress. The performance of PODECM was further evaluated for a two-scale simulation, in which a porous microstructure, characterized by two geometrical parameters, was considered. Both geometrical parameters were varied throughout the macrostructure, and depending on their values, the effective Poisson's ratio changed from positive to negative. For this example, different PODECM models were constructed with good accuracies (of errors less than 1%), while achieving speed-ups up to 100. Even though highly accurate and fast solutions were obtained with the proposed method, several open questions prevail. For instance, optimality and accuracy of the ECM integration rule cannot be ensured and thus needs to be further analyzed and understood. An attractive feature of the proposed framework is its versatility and generality. Moreover, since the underlying microscopic PDE still needs to be solved, only very little training data is required to construct Figure 9: Relative error \(\epsilon_{C}\) in compliance over load step \(k\). All surrogate models (specified in Table 1) begin with a lower error which slowly grows with \(k\). By increasing the sample size of the training data (comparing rom_5 and rom_6), the prediction becomes more accurate for the same number of basis functions; the errors for all load steps for rom_6 are below 1%. a good approximation of the full model with no special treatment of history-dependent material behavior, which makes this framework viable for two-scale shape optimization problems. ## Data availability The data that support the findings of this study are available from the corresponding author upon request. ## Acknowledgements This result is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement No. 818473). The authors would in addition like to thank Martin Horak from Czech Technical University in Prague for his help with the implementation of the large-strain \(J_{2}\)-plasticity model.
2309.13298
A Rogers--Brascamp--Lieb--Luttinger inequality in the space of matrices
We consider convex bodies in $M_{n,m}(\mathbb R)$, the space of matrices of $n$-rows and $m$-columns. A special case of fiber-symmetrization in $M_{n,m}(\mathbb R)$ was recently introduced in [5,6]. We prove a Rogers--Brascamp--Lieb--Luttinger type inequality with respect to this symmetrization, for quasi-concave functions and provide some applications.
Julián Haddad
2023-09-23T08:00:24Z
http://arxiv.org/abs/2309.13298v3
# A Rogers-Brascamp-Lieb-Luttinger Inequality in the Space of Matrices ###### Abstract. We consider convex bodies in \(\mathrm{M}_{n,m}(\mathbb{R})\), the space of matrices of \(n\)-rows and \(m\)-columns. A special case of fiber-symmetrization in \(\mathrm{M}_{n,m}(\mathbb{R})\) was recently introduced in [5, 6]. We prove a Rogers-Brascamp-Lieb-Luttinger type inequality with respect to this symmetrization, for quasi-concave functions and provide some applications.

## 1. Introduction

Let \(K\subseteq\mathbb{R}^{n}\) be a Lebesgue-measurable set, and \(v\) be a unit vector in \(\mathbb{R}^{n}\). The Steiner symmetrical of \(K\) with respect to \(v\) is \[S_{v}K=\{x+tv\in\mathbb{R}^{n}:t\in\mathbb{R},x\perp v,|t|\leq\ell_{v}(x)/2\}\] where \(\ell_{v}(x)\) is the one-dimensional measure of the set obtained by intersecting \(K\) with the line parallel to \(v\) passing through \(x\). Steiner symmetrization has a huge number of applications in geometry, ranging from isoperimetric inequalities to functional analysis, and it is at the core of the solution of many problems in Convex Geometry. For an overview of the many applications we refer to the books [1, 11] and the references therein. The operator \(S_{v}\) enjoys two important properties. First, it preserves the Lebesgue measure, and second, any convex body (compact, convex with non-empty interior) can be transformed by an infinite sequence of successive symmetrizations, converging to a Euclidean ball.

Let \(f:\mathbb{R}^{n}\to\mathbb{R}\) be a non-negative measurable function and \(t\geq 0\). Denote \(\{f\geq t\}:=\{x\in\mathbb{R}^{n}:f(x)\geq t\}\), and similarly with \(\leq\). We say that \(f\) is quasi-concave if its super-level sets \(\{f\geq t\}\) are convex for every \(t\in\mathbb{R}\). A function \(f\) is quasi-convex if its sub-level sets \(\{f\leq t\}\) are convex for every \(t\in\mathbb{R}\). The Steiner symmetrization of a quasi-concave function \(f\) with respect to \(v\) is defined by \[f^{(v)}(x)=\int_{0}^{\infty}1_{S_{v}\{f\geq t\}}(x)dt\] where \(1_{K}\) is the characteristic function of the set \(K\). The function \(f^{(v)}\) can be characterized as the unique function whose super-level sets are the Steiner symmetricals of the super-level sets of \(f\), in the direction \(v\).

Rogers [9] and Brascamp, Lieb and Luttinger [3] independently proved a general integral inequality, which generalizes the celebrated Riesz convolution inequality:

**Theorem 1.1**.: _Let \(f_{i}:\mathbb{R}^{n}\to\mathbb{R},i=1,\ldots,k\) be non-negative and measurable functions, and let \(a_{j}^{(i)}\) be real numbers, then_ \[\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}\prod_{i=1}^{k}f_{i }\left(\sum_{j=1}^{d}a_{j}^{(i)}x_{j}\right)dx_{1}\ldots dx_{d}\\ \leq\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}\prod_{i=1}^{k }f_{i}^{(v)}\left(\sum_{j=1}^{d}a_{j}^{(i)}x_{j}\right)dx_{1}\ldots dx_{d}. \tag{1}\]

Notice that if the matrix with entries \(a_{j}^{(i)}\) has rank less than \(d\), the integrals on both sides are either \(0\) (if all \(f_{i}\) are zero a.e.) or infinite. If \(k=d\) and the matrix is invertible, there is equality by a change of variables and the fact that symmetrization preserves the integral. This inequality can be applied iteratively for different directions \(v\in S^{n-1}\), and by taking limits, we may replace \(f^{(v)}\) in the right-hand side of (1) by the symmetric decreasing rearrangement \(f^{*}\) (see [3]). 
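To make the definition of \(S_{v}K\) concrete, here is a small illustrative sketch (ours, not from the paper) computing a discretized Steiner symmetral of a planar set represented on a boolean grid, for the direction \(v=e_{2}\); each grid column plays the role of a line parallel to \(v\), and the chord measure \(\ell_{v}(x)\) is approximated by a cell count.

```python
import numpy as np

def steiner_symmetrize_grid(mask):
    """Grid analogue of S_v K for v = e_2: in each column, replace the
    filled cells by a centered run of the same count (chord length)."""
    n_rows, n_cols = mask.shape
    out = np.zeros_like(mask)
    for j in range(n_cols):
        ell = int(mask[:, j].sum())   # discretized chord measure l_v(x)
        lo = (n_rows - ell) // 2      # center the chord on the axis
        out[lo:lo + ell, j] = True
    return out

# Example: symmetrize a (non-centered) triangle rasterized on a grid.
yy, xx = np.mgrid[0:64, 0:64]
K = (yy > xx // 2) & (yy < 50) & (xx < 60)
SK = steiner_symmetrize_grid(K)
assert K.sum() == SK.sum()  # the (discrete) Lebesgue measure is preserved
```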
An interesting generalization of Theorem 1.1 was used to study randomized isoperimetric inequalities by Paouris and Pivovarov [8] (Theorem 1.3 below). This inequality is attributed to Christ [4]. The following definition is given in [8].

**Definition 1.2**.: _A function \(f:(\mathbb{R}^{n})^{d}\to\mathbb{R}\) is called Steiner concave (resp. Steiner convex) if for every \(v\in S^{n-1}\) and \(y_{1},\ldots,y_{d}\in v^{\perp}\), the function \(G:\mathbb{R}^{d}\to\mathbb{R}\) defined by_ \[G(t_{1},\ldots,t_{d})=f(y_{1}+t_{1}v,\ldots,y_{d}+t_{d}v)\] _is even and quasi-concave._

The evenness in this definition is of central importance, as we shall see in the course of the paper.

**Theorem 1.3** ([8, Theorem 3.8]).: _Let \(f_{1},\ldots,f_{k_{1}}\) be non-negative integrable functions on \(\mathbb{R}^{n}\), \(a_{j}^{(i)}\) real numbers with \(j=1,\ldots,d,i=1,\ldots,k_{1}\). Let \(F^{(i)}:(\mathbb{R}^{n})^{d}\to\mathbb{R}\) be non-negative Steiner concave functions, \(1\leq i\leq k_{2}\), and let \(\mu\) be a non-negative measure with a rotationally invariant quasi-concave density in \(\mathbb{R}^{n}\). Then,_ \[\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}\prod_{i=1}^{k_{ 2}}F^{(i)}(x_{1},\ldots,x_{d})\prod_{i=1}^{k_{1}}f_{i}\left(\sum_{j=1}^{d}a_{ j}^{(i)}x_{j}\right)d\mu(x_{1})\ldots d\mu(x_{d})\\ \leq\int_{\mathbb{R}^{n}}\cdots\int_{\mathbb{R}^{n}}\prod_{i=1}^{ k_{2}}F^{(i)}(x_{1},\ldots,x_{d})\prod_{i=1}^{k_{1}}f_{i}^{(v)}\left(\sum_{j=1}^{d}a_ {j}^{(i)}x_{j}\right)d\mu(x_{1})\ldots d\mu(x_{d}).\]

We shall see that this theorem can be generalized in an elegant way if one considers a symmetrization procedure adapted to the product structure of \((\mathbb{R}^{n})^{d}\). Both sets of functions \(F^{(i)},f_{i}\), together with the measure \(\mu\), will be seen as objects of the same nature under this perspective. Here each \(F^{(i)}\) is a function that is already symmetric in some sense.

Let \(m\geq 1\). It is natural to identify \((\mathbb{R}^{n})^{m}\) with the set of matrices of \(n\) rows and \(m\) columns, \(\operatorname{M}_{n,m}(\mathbb{R})\). For \((x_{1},\ldots,x_{m})\in(\mathbb{R}^{n})^{m}\) we write \(x\in\operatorname{M}_{n,m}(\mathbb{R})\) for the matrix whose columns are the vectors \(x_{i}\). The expression \(\sum a_{j}^{(i)}x_{j}\) in inequality (1) can be written as \(xa^{(i)}\), where \(a^{(i)}\) is the column vector whose entries are \(a_{j}^{(i)}\). 
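The identification \((\mathbb{R}^{n})^{d}\cong\operatorname{M}_{n,d}(\mathbb{R})\) and the rewriting \(\sum_{j}a_{j}^{(i)}x_{j}=xa^{(i)}\) can be checked numerically in a couple of lines (an illustrative snippet, not from the paper):

```python
import numpy as np

n, d = 4, 3
xs = [np.random.randn(n) for _ in range(d)]  # x_1, ..., x_d in R^n
x = np.stack(xs, axis=1)                     # n x d matrix with columns x_j
a = np.random.randn(d)                       # column vector with entries a_j

# The linear combination of columns equals the matrix-vector product x a.
assert np.allclose(sum(a[j] * xs[j] for j in range(d)), x @ a)
```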
For any set \(K\subseteq\mathbb{R}^{m}\), its difference body is the set \(DK=\{x-y\in\mathbb{R}^{m}:x,y\in K\}\).

**Definition 1.5**.: _Let \(K\subseteq\mathrm{M}_{n,m}(\mathbb{R})\) be a measurable set. We define the \(m\)-th higher-order Steiner symmetrical of \(K\) with respect to \(v\) by_ \[\bar{S}_{v}K=\bigcup_{y\in v^{\perp m}}\left(y+\frac{1}{2}D(K\cap(y+v^{m})) \right).\] _Furthermore, we have_ \[\bar{S}_{v}K=\left\{y+v\frac{t-s}{2}\in\mathrm{M}_{n,m}(\mathbb{R})\colon y\in v ^{\perp m},\text{ and }y+vt,y+vs\in K\right\}. \tag{2}\]

Notice that the notion of Steiner-concavity of a quasi-concave function corresponds to the fact that its super-level sets are invariant under \(\bar{S}_{v}\) for every \(v\in S^{n-1}\). In particular, the characteristic function \(1_{K}\) of a convex set \(K\subseteq\mathrm{M}_{n,m}(\mathbb{R})\) is Steiner concave if and only if \(\bar{S}_{v}K=K\) for every \(v\in S^{n-1}\).

**Remark 1.6**.: _Notice that for \(m=1\), \(v^{m}\) is just a line and, if \(K\) is convex, the intersection \(K\cap(y+v^{m})\) is an interval, while \(y+\frac{1}{2}D(K\cap(y+v^{m}))\) is the translated interval centered at \(y\). That is, for \(m=1\) and convex \(K\), the operator \(\bar{S}_{v}\) is the usual Steiner symmetrization operator \(S_{v}\). However, if \(K\) is not convex this is no longer true, as the difference body of an arbitrary set might not be an interval._

**Definition 1.7**.: _Let \(f:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R}\) be non-negative and quasi-concave. Define the (higher-order) Steiner symmetrical of \(f\) as_ \[f^{(v)}(x)=\int_{0}^{\infty}1_{\bar{S}_{v}\{f\geq t\}}(x)dt.\]

**Definition 1.8**.: _If \(f:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R}\) is a non-negative quasi-convex function we define_ \[f_{(v)}(x)=\int_{0}^{\infty}(1-1_{\bar{S}_{v}\{f<t\}}(x))dt.\]

The operator \(\bar{S}_{v}\) was recently used in [5, 6] to prove the higher-order Petty projection inequality. In this note we study the operator \(\bar{S}_{v}\) acting on sets and functions, and prove a Rogers-Brascamp-Lieb-Luttinger type inequality. Some applications will be given. Our main theorem reads as follows:

**Theorem 1.9**.: _Let \(f_{i}:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R},i=1,\ldots,k\) be non-negative and quasi-concave, and let \(L_{i}\in\mathrm{M}_{d,m}(\mathbb{R}),i=1,\ldots,k\) be real matrices of rank \(m\), then for \(v\in S^{n-1}\),_ \[\int_{\mathrm{M}_{n,d}(\mathbb{R})}\prod_{i=1}^{k}f_{i}(xL_{i})dx\leq\int_{ \mathrm{M}_{n,d}(\mathbb{R})}\prod_{i=1}^{k}f_{i}^{(v)}(xL_{i})dx.\]

Inequality (1) in the quasi-concave case corresponds to the case where \(m=1\), so that all matrices \(L_{i}\) are non-zero column vectors. Theorem 1.3 is a particular case of Theorem 1.9 where \(m=d\), \(L_{i}=\mathrm{Id}_{d},i=1,\ldots,k_{1}\), and the first \(k_{1}\) functions satisfy \(f_{i}^{(v)}=f_{i},\forall v\in S^{n-1}\), while the rest of the functions (\(f_{i}\) and \(\mu(x_{i})\)) depend on only one variable. We stress here that Theorem 1.9 extends the aforementioned results only for quasi-concave functions. If the functions \(f_{i}\) are general measurable functions, there is the possibility that \(f_{i}^{(v)}\) is no longer measurable. This is due to the fact that the difference body of a Lebesgue measurable set may be non-measurable. This problem can be overcome by using a measure-theoretic definition of \(D\) and \(\bar{S}_{v}\), but we shall not pursue this line of research here. 
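Since \(\frac{1}{2}D\) applied to the fibers is the building block of Definition 1.5, the following toy sketch (ours) computes \(\frac{1}{2}D\) of a finite point set; for the vertex set of a polytope \(K\), the convex hull of the output is \(\frac{1}{2}DK\).

```python
import numpy as np

def half_difference_body(pts):
    """All points (p - q)/2 for p, q rows of `pts`; used fiberwise
    in Definition 1.5. The result is origin-symmetric by construction."""
    diff = (pts[:, None, :] - pts[None, :, :]) / 2.0
    return diff.reshape(-1, pts.shape[1])

# Example: (1/2)D of a triangle spans a centrally symmetric hexagon.
tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
pts = half_difference_body(tri)
# Check origin-symmetry: for each p in the set, -p is also in the set.
assert all(np.isclose(pts, -p).all(axis=1).any() for p in pts)
```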
For general measurable non-negative functions the inequality still holds, since Theorem 1.9 is proved using only the Brunn-Minkowski inequality, which is valid also in this context. But we obtain a weaker result, due to the observation in Remark 1.6. For this reason, and for a clearer exposition, we deal here only with convex sets and quasi-concave or quasi-convex functions.

The rest of the paper is organized as follows: In Section 2 we recall basic facts about convexity. In Section 3 we prove basic properties satisfied by the operator \(\bar{S}_{v}\) and its analogue on functions. In Section 4 we establish symmetrization inequalities and prove Theorem 1.9. In Section 5 we give some applications, and finally in Section 6 we study the sets that remain invariant under \(\bar{S}_{v}\).

## 2. Notation and Preliminaries

We consider the vector space \(\mathrm{M}_{n,m}(\mathbb{R})\) of real matrices of \(n\) rows and \(m\) columns, with the usual Euclidean structure given by \[\langle A,B\rangle=\sum_{i=1}^{n}\sum_{j=1}^{m}A_{i,j}B_{i,j},\quad\|A\|_{2}= \sqrt{\sum_{i=1}^{n}\sum_{j=1}^{m}A_{i,j}^{2}} \tag{3}\] for \(A,B\in\mathrm{M}_{n,m}(\mathbb{R})\). Here \(A_{i,j}\) denotes the \((i,j)\)-th entry of \(A\in\mathrm{M}_{n,m}(\mathbb{R})\). The Lebesgue measure in \(\mathrm{M}_{n,m}(\mathbb{R})\) is inherited from the natural identification between \(\mathrm{M}_{n,m}(\mathbb{R})\) and \(\mathbb{R}^{nm}\). We write \(\left|\cdot\right|_{n}\) and \(\left|\cdot\right|_{nm}\) for the volume in \(\mathbb{R}^{n}\) and \(\mathrm{M}_{n,m}(\mathbb{R})\) respectively. For notational convenience we identify \(\mathbb{R}^{n}\) with \(\mathrm{M}_{n,1}(\mathbb{R})\), and \(\mathbb{R}^{m}\) with \(\mathrm{M}_{1,m}(\mathbb{R})\), unless stated otherwise. We denote the unit Euclidean spheres by \(S^{n-1}\subseteq\mathbb{R}^{n}=\mathrm{M}_{n,1}(\mathbb{R})\) and \(\mathbb{S}^{nm-1}\subseteq\mathrm{M}_{n,m}(\mathbb{R})\). Also, we identify \(\mathrm{M}_{n,a+b}(\mathbb{R})=\mathrm{M}_{n,a}(\mathbb{R})\times\mathrm{M}_{ n,b}(\mathbb{R})\) in the natural way. For a set \(U\subseteq\mathrm{M}_{n,m}(\mathbb{R})\), \(A\in\mathrm{M}_{k,n}(\mathbb{R})\), and \(B\in\mathrm{M}_{m,l}(\mathbb{R})\), we write \[AU=\{Ax\in\mathrm{M}_{k,m}(\mathbb{R}):x\in U\}\ \ \text{and}\ \ \ UB=\{xB\in\mathrm{M}_{n,l}(\mathbb{R}):x\in U\}.\] We denote the set of convex bodies in \(\mathrm{M}_{n,m}(\mathbb{R})\) by \(\mathcal{K}^{n,m}\). The support function of \(K\in\mathcal{K}^{n,m}\) is given by \[h_{K}(x)=\sup\{\langle x,y\rangle:y\in K\}\] for \(x\in\mathrm{M}_{n,m}(\mathbb{R})\), where the inner product is the one given in (3).

**Definition 2.1**.: _Let \(A\in\mathrm{M}_{d,m}(\mathbb{R})\), then \(A\) induces a linear map_ \[\bar{A}:\mathrm{M}_{n,d}(\mathbb{R})\to\mathrm{M}_{n,m}(\mathbb{R})\] _by right-multiplication, \(\bar{A}(x)=xA\)._

Observe that \[\overline{AB}=\bar{B}\circ\bar{A},\ \overline{I_{m}}=I_{nm}. \tag{4}\] The general Brunn-Minkowski inequality states that if \(K,L\subseteq\mathbb{R}^{n}\) are convex bodies, then \[|K+L|_{n}^{1/n}\geq|K|_{n}^{1/n}+|L|_{n}^{1/n} \tag{5}\] where \(K+L=\{x+y:x\in K,y\in L\}\) is the Minkowski sum of sets. Equality holds in (5) if and only if \(K\) and \(L\) are homothetic. A convex body \(K\in\mathcal{K}^{n,m}\) is origin-symmetric if \(K=-K\), while it is symmetric with respect to a point \(c\in\mathrm{M}_{n,m}(\mathbb{R})\) if \(K-c\) is origin-symmetric. The difference body of \(K\) is defined by \(DK=K+(-K)\). 
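For concreteness, the inner product of Eq. (3) and the support function of a polytope (where the supremum of a linear functional is attained at a vertex) can be sketched as follows (illustrative code, names ours):

```python
import numpy as np

def frobenius_inner(A, B):
    """<A, B> = sum_ij A_ij B_ij, the inner product of Eq. (3)."""
    return float(np.sum(A * B))

def support_function(vertices, x):
    """h_K(x) = sup_{y in K} <x, y> for K = conv(vertices); the
    supremum of a linear functional over a polytope is attained
    at one of its vertices."""
    return max(frobenius_inner(x, v) for v in vertices)
```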
One can see, by analyzing the support functions of \(K\) and \(\frac{1}{2}DK\), that \(K=\frac{1}{2}DK\) if and only if \(K\) is origin-symmetric. If \(K\) is symmetric with respect to a point, then \(\frac{1}{2}DK\) is just a translate of \(K\). By the Brunn-Minkowski inequality, \(\left|\frac{1}{2}DK\right|_{n}\geq|K|_{n}\) with equality if and only if \(K\) is symmetric with respect to some point. The mean width of \(K\) is \(w(K)=2\int_{S^{nm-1}}h_{K}(x)d\sigma(x)\) where \(\sigma\) is the rotation-invariant probability measure on the sphere. The Hausdorff distance between two convex bodies \(K,L\) is defined by \(\|h_{K}-h_{L}\|_{\infty}\), where \(\|\cdot\|_{\infty}\) denotes the supremum norm. For \(v\in S^{n-1}\) we denote by \(R_{v}\in\mathrm{M}_{n,n}(\mathbb{R})\) the matrix of the reflection in \(\mathbb{R}^{n}\) with respect to \(v^{\perp}\). That is, \(R_{v}(w+\lambda v)=w-\lambda v\) for every \(w\in v^{\perp}\) and \(\lambda\in\mathbb{R}\). Notice that also \(R_{v}(x+vt)=x-vt\) for every \(x\in v^{\perp m}\) and \(t\in\mathrm{M}_{1,m}(\mathbb{R})\).

## 3. Symmetrization

In this section we establish basic properties of the fiber symmetrization of convex sets and quasi-concave functions. Some of these results appear already in [5, 6].

**Proposition 3.1**.: _The operator \(\bar{S}_{v}\) satisfies the following properties:_

3.1-1. \(\left|\bar{S}_{v}K\right|_{nm}\geq\left|K\right|_{nm}\) _with equality if and only if_ \(K\cap(y+v^{m})\) _is symmetric (with respect to a point possibly depending on_ \(y\)_) for almost every_ \(y\in v^{\perp m}\)_._

3.1-2. _If_ \(K\subseteq\mathrm{M}_{n,1}(\mathbb{R})\) _is convex then_ \(\bar{S}_{v}K=S_{v}K\) _is the usual Steiner symmetrization. In particular it preserves_ \(n\)_-dimensional volume._

3.1-3. _If_ \(K\subseteq\mathrm{M}_{1,m}(\mathbb{R})\) _and_ \(v\in S^{0}=\{-1,+1\}\)_, then_ \(\bar{S}_{\pm 1}K=\frac{1}{2}DK\)_._

3.1-4. _If_ \(K\subseteq L\) _then_ \[\bar{S}_{v}K\subseteq\bar{S}_{v}L.\]

3.1-5. _If_ \(K\) _is a convex body, then so is_ \(\bar{S}_{v}K\)_._

3.1-6. _Let_ \(A\in\mathrm{M}_{n,n}(\mathbb{R})\) _be an orthogonal matrix and_ \(K\in\mathcal{K}^{n,m},v\in S^{n-1}\)_, then_ \[\bar{S}_{Av}(AK)=A\,\bar{S}_{v}K.\]

3.1-7. _If_ \(K\in\mathcal{K}^{n,d}\) _and_ \(A\in\mathrm{M}_{d,m}(\mathbb{R})\) _is any matrix, we have_ \[(\bar{S}_{v}K)A\subseteq\bar{S}_{v}(KA).\] _Moreover, if the rank of_ \(A\) _is_ \(d\) _(we implicitly assume_ \(m\geq d\)_) then there is equality._

3.1-8. _If_ \(K_{i}\in\mathcal{K}^{n,m_{i}}\) _with_ \(m_{i}\in\mathbb{N}\)_,_ \(m_{1}+\cdots+m_{r}=m\)_, and_ \(K_{1}\times\cdots\times K_{r}\subseteq\mathrm{M}_{n,m}(\mathbb{R})\)_,_ \[\bar{S}_{v}(K_{1}\times\cdots\times K_{r})=\bar{S}_{v}K_{1}\times\cdots\times\bar{S}_{v} K_{r}.\]

3.1-9. _If_ \(K\in\mathcal{K}^{n,m}\) _and_ \(A\in\mathrm{M}_{d,m}(\mathbb{R})\) _is a rank_ \(m\) _matrix (we implicitly assume_ \(d\geq m\)_), then_ \[\bar{A}^{-1}(\bar{S}_{v}K)=\bar{S}_{v}\bar{A}^{-1}(K).\]

_Proof._ 3.1-1 For any measurable \(L\) with \(\left|L\right|_{m}<\infty\), by Brunn-Minkowski, \(\left|\frac{1}{2}DL\right|_{m}\geq\left|L\right|_{m}\) with equality if and only if \(L\) is symmetric with respect to some point. Applying this to each fiber \(L=K\cap(y+v^{m})\) and using Fubini, \(\left|\bar{S}_{v}K\right|_{nm}\geq\left|K\right|_{nm}\) with equality if and only if \(K\cap(y+v^{m})\) is symmetric (with respect to a point possibly depending on \(y\)) for almost every \(y\in v^{\perp m}\).

3.1-2 If \(m=1\) and \(K\subseteq\mathbb{R}^{n}\) is convex, then \(K\cap(x+v^{m})\) is a one-dimensional interval, and \(\frac{1}{2}D\left(K\cap(x+v^{m})\right)\) is the same interval centered at the origin. 
3.1-3 This is clear from the definition.

3.1-4 This follows from the fact that \(D\) is a monotone operator.

3.1-5 This fact was proven in [7, Theorem 2.3].

3.1-6 This is clear from the definition.

3.1-7 Let \(z\in(\bar{S}_{v}K)A\). By formula (2), \[z=(x+v\frac{t-s}{2})A,\text{ with }x+vt,x+vs\in K,\text{ and }v^{t}x=0.\] Clearly \(xA+vtA,xA+vsA\in KA\) and \(v^{t}xA=0\), which implies that \[z=xA+v\frac{tA-sA}{2}\in\bar{S}_{v}(KA).\] Conversely, let \(z\in\bar{S}_{v}(KA)\), then \[z=x+v\frac{t-s}{2}\] with \(t,s\in\mathrm{M}_{1,m}(\mathbb{R})\), \(x\in\mathrm{M}_{n,m}(\mathbb{R})\), \(x+vt,x+vs\in KA\) and \(v^{t}x=0\). We can write \[x+vt=\tilde{x}_{1}A+v\tilde{t}A\] \[x+vs=\tilde{x}_{2}A+v\tilde{s}A\] with \(\tilde{t},\tilde{s}\in\mathrm{M}_{1,d}(\mathbb{R})\), \(\tilde{x}_{1},\tilde{x}_{2}\in\mathrm{M}_{n,d}(\mathbb{R})\), \(\tilde{x}_{1}+v\tilde{t},\tilde{x}_{2}+v\tilde{s}\in K\) and \(v^{t}\tilde{x}_{1}=v^{t}\tilde{x}_{2}=0\). But since \(v^{t}\tilde{x}_{1}A=v^{t}\tilde{x}_{2}A=0\), by uniqueness of the decomposition \(\mathrm{M}_{n,m}(\mathbb{R})=v^{m}\oplus v^{\perp m}\), we must have \(t=\tilde{t}A,s=\tilde{s}A\) and \(\tilde{x}_{1}A=x=\tilde{x}_{2}A\). Now using that \(A\) has rank \(d\), from \(\tilde{x}_{1}A=\tilde{x}_{2}A\) we get \(\tilde{x}_{1}=\tilde{x}_{2}\). Finally, we obtain \[z=(\tilde{x}_{1}+v\frac{\tilde{t}-\tilde{s}}{2})A\] with \(\tilde{x}_{1}+v\tilde{t},\tilde{x}_{1}+v\tilde{s}\in K,v^{t}\tilde{x}_{1}=0\), and we conclude that \(z\in(\bar{S}_{v}K)A\).

3.1-8 By formula (2), \[\bar{S}_{v}(K_{1}\times\cdots\times K_{r}) =\left\{(x_{1},\ldots,x_{r})+v\frac{(t_{1},\ldots,t_{r})-(s_{1}, \ldots,s_{r})}{2}:x_{i}+vt_{i},\,x_{i}+vs_{i}\in K_{i}\right\}\] \[=\left\{(x_{i}+v\frac{t_{i}-s_{i}}{2})_{i}:x_{i}+vt_{i},\,x_{i}+vs_{i}\in K_{i}\right\}\] \[=\bar{S}_{v}K_{1}\times\cdots\times\bar{S}_{v}K_{r}.\]

3.1-9 Consider the \(d\times m\) matrix \[T_{d,m}=\left(\begin{array}{c}\operatorname{Id}_{m}\\ 0_{d-m,m}\end{array}\right)\] so that \(\overline{T_{d,m}}(x_{1},\ldots,x_{d})=(x_{1},\ldots,x_{m})\) for \(x_{i}\in\mathbb{R}^{n}\). The proposition is true with \(A\) replaced by \(T_{d,m}\), by property 3.1-8 with \(r=2,m_{1}=m,m_{2}=d-m,K_{1}=K,K_{2}=\mathrm{M}_{n,d-m}(\mathbb{R})\). If \(d=m\), the matrix \(A\) must be invertible and the proposition is also true by property 3.1-7. For general \(A\), we use that \(A\) can be decomposed as \(A=\varphi T_{d,m}\psi\) where \(\varphi\in GL_{d},\psi\in GL_{m}\). We conclude by equation (4) and properties 3.1-8 and 3.1-7 again.

The following properties of the symmetrization on functions are obtained immediately.

**Proposition 3.2**.:

3.2-1. _A quasi-concave function_ \(f:\operatorname{M}_{n,m}(\mathbb{R})\to\mathbb{R}\) _is Steiner concave if, and only if,_ \(f^{(v)}=f\) _for every_ \(v\in S^{n-1}\)_._

3.2-2. _If_ \(f(x_{1},\cdots,x_{m})=g(x_{i})\) _then_ \[f^{(v)}(x_{1},\cdots,x_{m})=g^{(v)}(x_{i}).\]

3.2-3. _For_ \(n=1\)_, i.e., for functions on_ \(\mathbb{R}^{m}=\operatorname{M}_{1,m}(\mathbb{R})\)_, we have_ \(S^{0}=\{-1,+1\}\) _and_ \[f^{(\pm 1)}(x)=\int_{0}^{\infty}1_{\frac{1}{2}D\{f\geq t\}}(x)dt.\]

3.2-4. _If_ \(A\in\operatorname{M}_{d,m}(\mathbb{R})\) _is a rank_ \(m\) _matrix, then_ \[(f\circ\bar{A})^{(v)}=f^{(v)}\circ\bar{A}.\]

The measure-preserving property of the usual Steiner symmetrization is a useful feature that is lost for the operator \(\bar{S}_{v}\). Nevertheless, \(\bar{S}_{v}\) still preserves some weaker measures on convex bodies, like the mean width and the volumes of some projections.

**Proposition 3.3**.: _Let \(K\subseteq\operatorname{M}_{n,m}(\mathbb{R})\) be a convex body. 
For any \(A\in GL_{m}\), \(v\in S^{n-1}\) and \(p\geq 1\),_ \[\int_{S^{nm-1}}h_{(\bar{S}_{v}K)A}(u)^{p}du\leq\int_{S^{nm-1}}h_{KA}(u)^{p}du.\] _If \(p>1\) there is equality if and only if \(R_{v}K=K\)._

Proof.: By property 3.1-7 it suffices to consider the case \(A=I_{m}\). Recall that \(R_{v}\in\mathrm{M}_{n,n}(\mathbb{R})\) is the matrix of the reflection in \(\mathbb{R}^{n}\) with respect to \(v^{\perp}\). Let \(x\in v^{\perp m}\), \(x+vt,x+vs\in K\) and \(z\in\mathrm{M}_{n,m}(\mathbb{R})\). Then \[\langle x+v\frac{t-s}{2},z\rangle=\frac{1}{2}(\langle x+vt,z\rangle+\langle x- vs,z\rangle)\leq\frac{1}{2}(h_{K}(z)+h_{K}(R_{v}z))\] Taking the supremum over all \(x+vt,x+vs\in K\) and using (2), we get \(h_{\bar{S}_{v}K}(z)\leq\frac{1}{2}(h_{K}(z)+h_{R_{v}K}(z))\). Since \(R_{v}S^{nm-1}=S^{nm-1}\), we obtain \[\|h_{\bar{S}_{v}K}\|_{p}\leq\frac{1}{2}(\|h_{K}\|_{p}+\|h_{R_{v}K}\|_{p})=\|h_ {K}\|_{p}.\] For the equality case, notice that if \(\|h_{\bar{S}_{v}K}\|_{p}=\|h_{K}\|_{p}\), then \[\frac{1}{2}\|h_{K}+h_{R_{v}K}\|_{p}\geq\|h_{\bar{S}_{v}K}\|_{p}=\|h_{K}\|_{p}= \frac{1}{2}(\|h_{K}\|_{p}+\|h_{R_{v}K}\|_{p}),\] which for \(p>1\) implies that there exists \(\lambda\in\mathbb{R}\) with \(h_{K}(w)=\lambda h_{R_{v}K}(w)\) for almost every \(w\). Integrating over \(w\in S^{nm-1}\) on both sides of this equality we get \(w(K)=\lambda w(R_{v}K)\), which implies that \(\lambda=1\). Thus we obtain \(h_{K}(w)=h_{R_{v}K}(w)\) for almost every \(w\in S^{nm-1}\). By continuity, this occurs for every \(w\in S^{nm-1}\) and we get \(K=R_{v}K\).

For \(w\in\mathrm{M}_{m,1}(\mathbb{R})\setminus\{0\}\) the function \(\bar{w}:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R}^{n}\) is a linear projection onto an \(n\)-dimensional space. In particular, \(\overline{e_{i}}=\pi_{i}:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R}^{n}\) is the projection onto the \(i\)-th column; that is, \(\pi_{i}(x)\) is the \(i\)-th column of \(x\).

**Proposition 3.4**.: _For \(K\subseteq\mathrm{M}_{n,m}(\mathbb{R})\) and \(w\in\mathrm{M}_{m,1}(\mathbb{R})\),_ \[\left|\bar{w}(\bar{S}_{v}K)\right|_{n}\leq\left|\bar{w}(K)\right|_{n}.\] _In particular,_ \[\left|\pi_{i}(\bar{S}_{v}K)\right|_{n}\leq\left|\pi_{i}(K)\right|_{n}.\]

Proof.: By the properties 3.1-7, 3.1-2, we have \[\left|\bar{w}(\bar{S}_{v}K)\right|_{n}=\left|(\bar{S}_{v}K)w\right|_{n}\leq \left|S_{v}(Kw)\right|_{n}=\left|Kw\right|_{n}=\left|\bar{w}(K)\right|_{n}.\]

## 4. Rogers-Brascamp-Lieb-Luttinger inequalities

We start with an inequality for convex sets, which is the geometric core of Theorem 1.9.

**Theorem 4.1**.: _Let \(K_{i}\subseteq\mathrm{M}_{1,m}(\mathbb{R})=\mathbb{R}^{m},i=1,\ldots,k\) be convex sets (not necessarily convex bodies, possibly with infinite volume). Then_ \[\left|\bigcap_{i=1}^{k}K_{i}\right|_{m}\leq\left|\bigcap_{i=1}^{k}\frac{1}{2} DK_{i}\right|_{m}.\] _If the left-hand side has infinite volume, then so has the right-hand side._

_Moreover, if \(K\) is symmetric and has finite volume,_ \[\left|K\setminus L\right|_{m}\geq\left|K\setminus\frac{1}{2}DL\right|_{m}.\]

Proof.: Let \(x\in\frac{1}{2}D(\bigcap_{i=1}^{k}K_{i})\), then \(x=\frac{a-b}{2}\) with \(a,b\in\bigcap_{i=1}^{k}K_{i}\). In particular, \(x\in\frac{1}{2}DK_{i}\) for every \(i\). This proves that \[\frac{1}{2}D\left(\bigcap_{i=1}^{k}K_{i}\right)\subseteq\bigcap_{i=1}^{k} \frac{1}{2}DK_{i}.\] By the Brunn-Minkowski inequality, \[\left|\bigcap_{i=1}^{k}K_{i}\right|_{m}\leq\left|\frac{1}{2}D\bigcap_{i=1}^{k} K_{i}\right|_{m}\leq\left|\bigcap_{i=1}^{k}\frac{1}{2}DK_{i}\right|_{m}\] and the first part of the theorem follows. 
For the second part, \[\left|K\setminus L\right|_{m} =\left|K\right|_{m}-\left|K\cap L\right|_{m}\] \[\geq\left|K\right|_{m}-\left|\frac{1}{2}DK\cap\frac{1}{2}DL\right| _{m}\] \[\geq\left|K\right|_{m}-\left|K\cap\frac{1}{2}DL\right|_{m}\] \[=\left|K\setminus\frac{1}{2}DL\right|_{m}.\]

We remark here that Theorem 4.1 is neither stronger nor weaker than the inequality \(\left|\bigcap_{i=1}^{k}K_{i}\right|_{m}\leq\max_{i}\{\left|K_{i}\right|_{m}\}\), which is obtained by applying inequality (1) with the symmetric decreasing rearrangement. For example, if \(K_{1},K_{2}\) are two symmetric convex bodies of volume \(1\) with very small intersection, then Theorem 4.1 gives an equality, while the inequality \(\left|K_{1}\cap K_{2}\right|_{m}<1\) is strict.

**Theorem 4.2**.: _Let \(K_{i}\subseteq\mathrm{M}_{n,m}(\mathbb{R}),i=1,\ldots,k\) be convex sets (not necessarily convex bodies), and \(v\in S^{n-1}\), then_ \[\left|\bigcap_{i=1}^{k}K_{i}\right|_{nm}\leq\left|\bigcap_{i=1}^{k}\bar{S}_{v }K_{i}\right|_{nm}.\] _If the left-hand side has infinite volume, then so has the right-hand side. Moreover, if \(K\) satisfies \(\bar{S}_{v}K=K\) and has finite volume,_ \[\left|K\setminus L\right|_{nm}\geq\left|K\setminus\bar{S}_{v}L\right|_{nm}.\]

Proof.: By Fubini and Theorem 4.1, \[\left|\bigcap_{i=1}^{k}K_{i}\right|_{nm} =\int_{v^{\perp m}}\left|(x+v^{m})\cap\bigcap_{i=1}^{k}K_{i}\right| _{m}dx\] \[=\int_{v^{\perp m}}\left|\bigcap_{i=1}^{k}\left((x+v^{m})\cap K_{i }\right)\right|_{m}dx\] \[\leq\int_{v^{\perp m}}\left|\bigcap_{i=1}^{k}\frac{1}{2}D\left((x +v^{m})\cap K_{i}\right)\right|_{m}dx\] \[=\int_{v^{\perp m}}\left|\bigcap_{i=1}^{k}(x+v^{m})\cap\bar{S}_{v }K_{i}\right|_{m}dx\] \[=\int_{v^{\perp m}}\left|(x+v^{m})\cap\bigcap_{i=1}^{k}\bar{S}_{v }K_{i}\right|_{m}dx=\left|\bigcap_{i=1}^{k}\bar{S}_{v}K_{i}\right|_{nm}.\] The second statement follows as in the second part of the proof of Theorem 4.1.

**Theorem 4.3**.: _Let \(f_{i}:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R},i=1,\ldots,k\) be non-negative and quasi-concave. Then_ \[\int\prod_{i=1}^{k}f_{i}(x)dx\leq\int\prod_{i=1}^{k}f_{i}^{(v)}(x)dx\]

Proof.: By the layer-cake formula, \[\int\prod_{i=1}^{k}f_{i}(x)dx =\int_{0}^{\infty}\cdots\int_{0}^{\infty}\left|\bigcap_{i=1}^{k} \{f_{i}\geq t_{i}\}\right|_{nm}dt_{1}\cdots dt_{k}\] \[\leq\int_{0}^{\infty}\cdots\int_{0}^{\infty}\left|\bigcap_{i=1}^ {k}\bar{S}_{v}\{f_{i}\geq t_{i}\}\right|_{nm}dt_{1}\cdots dt_{k}\] \[=\int_{0}^{\infty}\cdots\int_{0}^{\infty}\left|\bigcap_{i=1}^{k} \{f_{i}^{(v)}\geq t_{i}\}\right|_{nm}dt_{1}\cdots dt_{k}\] \[=\int\prod_{i=1}^{k}f_{i}^{(v)}(x)dx.\]

Finally, we can deduce the Rogers-Brascamp-Lieb-Luttinger inequality in its full generality.

Proof of Theorem 1.9.: Apply Theorem 4.3 and use property 3.2-4. 
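Theorem 4.1 can also be illustrated numerically. The sketch below (ours; the two test bodies are arbitrary choices) estimates both sides by Monte Carlo for planar convex bodies, building each \(\frac{1}{2}DK_{i}\) as the convex hull of the pairwise vertex differences.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)

def half_D(pts):
    """Vertices of (1/2)D(conv(pts)): convex hull of all (p - q)/2."""
    d = (pts[:, None, :] - pts[None, :, :]).reshape(-1, 2) / 2.0
    return d[ConvexHull(d).vertices]

def inside(verts, x):
    """Boolean mask of which sample points lie in conv(verts)."""
    return Delaunay(verts).find_simplex(x) >= 0

# Two (non-symmetric) convex bodies in R^2, given by their vertices.
K1 = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 1.0]])
K2 = np.array([[0.5, -0.5], [1.5, 1.5], [-0.5, 1.0]])
D1, D2 = half_D(K1), half_D(K2)

# Monte Carlo volume estimates over a common bounding box.
samples = rng.uniform(-3.0, 3.0, size=(200_000, 2))
area_box = 6.0 * 6.0
vol_KK = (inside(K1, samples) & inside(K2, samples)).mean() * area_box
vol_DD = (inside(D1, samples) & inside(D2, samples)).mean() * area_box

print(vol_KK, vol_DD)           # Theorem 4.1 predicts vol_KK <= vol_DD
assert vol_KK <= vol_DD + 1e-2  # small slack for Monte Carlo noise
```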
For quasi-convex functions we obtain the following:

**Theorem 4.4**.: _Let \(f:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R}\) be a non-negative, quasi-concave and Steiner-concave function, and let \(g:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R}\) be non-negative and quasi-convex. Then_ \[\int f(x)g(x)dx\geq\int f(x)g_{(v)}(x)dx\]

Proof.: By the layer-cake formula, Definition 1.8, and the second part of Theorem 4.2, \[\int f(x)g(x)dx =\int_{0}^{\infty}\int_{0}^{\infty}\left|\{f\geq t_{1}\}\cap\{g\geq t_{ 2}\}\right|_{nm}dt_{1}dt_{2}\] \[=\int_{0}^{\infty}\int_{0}^{\infty}\left|\{f\geq t_{1}\}\setminus\{g<t _{2}\}\right|_{nm}dt_{1}dt_{2}\] \[\geq\int_{0}^{\infty}\int_{0}^{\infty}\left|\bar{S}_{v}\{f\geq t_{1} \}\setminus\bar{S}_{v}\{g<t_{2}\}\right|_{nm}dt_{1}dt_{2}\] \[=\int_{0}^{\infty}\int_{0}^{\infty}\left|\{f^{(v)}\geq t_{1}\} \setminus\{g_{(v)}<t_{2}\}\right|_{nm}dt_{1}dt_{2}\] \[=\int_{0}^{\infty}\int_{0}^{\infty}\left|\{f^{(v)}\geq t_{1}\}\cap\{g_ {(v)}\geq t_{2}\}\right|_{nm}dt_{1}dt_{2}=\int f(x)g_{(v)}(x)dx,\] where in the last step we used that \(f^{(v)}=f\), since \(f\) is Steiner concave.

## 5. Some Consequences

### Particular cases of Theorem 1.9

As mentioned in the introduction, we see that Theorem 1.3 is a particular case of Theorem 1.9. A particular case of Theorem 4.3 (which is Theorem 1.9 with \(m=d\) and \(L_{i}=\mathrm{Id}\)) is the following:

**Corollary 5.1**.: _Let \(f:\mathrm{M}_{n,m}(\mathbb{R})\to\mathbb{R}\) be Steiner-concave, and \(K\in\mathcal{K}^{n,m}\), then_ \[\int_{K}f(x)dx\leq\int_{\bar{S}_{v}K}f(x)dx.\]

For example we have:

**Corollary 5.2**.: _Let \(K\subseteq\mathrm{M}_{n,m}(\mathbb{R})\) be a measurable set and \(\gamma:(0,\infty)\to(0,\infty)\) a decreasing function. Then_ \[\int_{K}\gamma(\left|\mathrm{conv}(\{x_{1},\ldots,x_{m}\})\right|_{n})dx\leq \int_{\bar{S}_{v}K}\gamma(\left|\mathrm{conv}(\{x_{1},\ldots,x_{m}\})\right|_{ n})dx\]

For \(n=1\) we obtain:

**Corollary 5.3**.: _Let \(f_{i}:\mathbb{R}^{m}\to\mathbb{R},i=1,\ldots,k\) be non-negative and quasi-concave. Then_ \[\int_{\mathbb{R}^{m}}\prod_{i=1}^{k}f_{i}(x)dx\leq\int_{\mathbb{R}^{m}}\prod_ {i=1}^{k}f_{i}^{(1)}(x)dx,\] _where \(f_{i}^{(1)}\) is defined as in property (3.2-3)._

Notice that, as in the remark after Theorem 4.1, this inequality cannot be proven directly with inequality (1).

### Operator norms

For any convex bodies \(K\subseteq\mathbb{R}^{m},L\subseteq\mathbb{R}^{n}\) consider the set \[B_{K,L}=\{x\in\mathrm{M}_{n,m}(\mathbb{R}):xw\in L,\ \ \forall w\in K\},\] where we identify vectors in \(\mathbb{R}^{m}\) and \(\mathbb{R}^{n}\) with column vectors in \(\mathrm{M}_{m,1}(\mathbb{R})\) and \(\mathrm{M}_{n,1}(\mathbb{R})\) respectively. If \(K,L\) are symmetric, the set \(B_{K,L}\subseteq\operatorname{M}_{n,m}(\mathbb{R})\) is the unit ball, in the operator norm, of the set of linear maps between the Banach spaces \((\mathbb{R}^{m},K)\to(\mathbb{R}^{n},L)\). Each one of these maps induces a linear map of the dual spaces \[(\mathbb{R}^{n},L)^{*}\to(\mathbb{R}^{m},K)^{*}\] by transposition. In general (for non-symmetric convex bodies \(K,L\)) this reads as \[B_{K,L}^{t}=B_{L^{\circ},K^{\circ}}. \tag{6}\]

In [6] the following result is obtained as a limit case of the higher-order \(L^{p}\) Petty-projection inequality. We give here a shorter and clearer proof.

**Theorem 5.4**.: _For any convex bodies \(K\subseteq\mathbb{R}^{m},L\subseteq\mathbb{R}^{n}\),_ \[\bar{S}_{v}B_{K,L}\subseteq B_{K,S_{v}L}\]

Proof.: Take any \(w\in K\subseteq\operatorname{M}_{m,1}(\mathbb{R})\), so that \(B_{K,L}w\subseteq L\). 
By (3.1-7) and (3.1-4), \[(\bar{S}_{v}B_{K,L})w\subseteq\bar{S}_{v}(B_{K,L}w)\subseteq\bar{S}_{v}L.\] This implies that \(\bar{S}_{v}B_{K,L}\subseteq B_{K,S_{v}L}\).

The theorem obviously implies that \(\left|B_{K,L}\right|_{nm}\leq\left|B_{K,B_{L}}\right|_{nm}\), where \(B_{L}\) is the centered Euclidean ball of the same volume as \(L\). Some more can be said:

**Theorem 5.5**.: _Let \(F:\operatorname{M}_{n,m}(\mathbb{R})\to\mathbb{R}\) be non-negative and Steiner concave, then_ \[\int_{B_{K,L}}F(x)dx\leq\int_{B_{K,B_{L}}}F(x)dx.\] _Let \(F:\operatorname{M}_{m,n}(\mathbb{R})\to\mathbb{R}\) be non-negative and Steiner concave, then_ \[\int_{B_{K,L}}F(x^{t})dx\leq\int_{B_{B_{K^{\circ}},L}}F(x^{t})dx.\] _For example, for \(n=m\) and \(\beta\in(-1,0)\),_ \[\int_{B_{K,L}}|\det(x)|^{\beta}dx \leq\left(\frac{\left|L\right|_{n}\left|K^{\circ}\right|_{n}}{ \left|B_{2}^{n}\right|_{n}^{2}}\right)^{n+\beta}\int_{B_{B,B}}|\det(x)|^{\beta }dx\] \[\leq\left(\frac{\left|L\right|_{n}}{\left|K\right|_{n}}\right)^{n+ \beta}\int_{B_{B,B}}|\det(x)|^{\beta}dx=\int_{B_{B_{K},B_{L}}}|\det(x)|^{\beta }dx\]

Proof.: The first inequality is a direct consequence of Theorem 4.3. The second one comes from the first inequality, formula (6) and the change of variables \(x\mapsto x^{t}\). For the third inequality apply both inequalities and the fact that \(x\mapsto|\det(x)|^{\beta}\) is Steiner concave and invariant under transposition, to obtain \[\int_{B_{K,L}}|\det(x)|^{\beta}dx \leq\int_{B_{K,B_{L}}}|\det(x)|^{\beta}dx\] \[\leq\int_{B_{B_{L}^{\circ},K^{\circ}}}|\det(y)|^{\beta}dy\] \[\leq\int_{B_{B_{L}^{\circ},B_{K^{\circ}}}}|\det(y)|^{\beta}dy\] \[\leq\int_{B_{(B_{K^{\circ}})^{\circ},B_{L}}}|\det(x)|^{\beta}dx\] \[\leq\left(\frac{\left|L\right|_{n}\left|K^{\circ}\right|_{n}}{\left|B_ {2}^{n}\right|_{n}^{2}}\right)^{n+\beta}\int_{B_{B,B}}|\det(x)|^{\beta}dx.\] The last inequality follows from the Blaschke-Santaló inequality.

### Schneider's Difference body

For \(K\subseteq\mathbb{R}^{n}\) and \(m\geq 1\), Schneider [10] defined the higher-order difference body as \[D^{m}K=\left\{(x_{1},\ldots,x_{m}):K\cap(K+x_{1})\cap\cdots\cap(K+x_{m})\neq \emptyset\right\}.\] Here we prove that Schneider's operator intertwines with the higher-order symmetrization.

**Theorem 5.6**.: _If \(K\subseteq\mathbb{R}^{n}\) is a convex body, \(\bar{S}_{v}(D^{m}K)\supseteq D^{m}(S_{v}K)\)._

Proof.: Define the matrix \(P_{m}\in\mathrm{M}_{m+1,m}(\mathbb{R})\) by \[P_{m}=\left(\begin{array}{cccc}1&1&\cdots&1\\ -1&0&\cdots&0\\ 0&-1&\cdots&0\\ \cdots&\cdots&\cdots&\cdots\\ 0&0&\cdots&-1\end{array}\right).\] It is easy to see that \(D^{m}K=K^{m+1}P_{m}\). By properties (3.1-7) and (3.1-8), \[\bar{S}_{v}(D^{m}K)\supseteq\bar{S}_{v}(K^{m+1})P_{m}=(S_{v}K)^{m+1}P_{m}=D^{m }(S_{v}K).\]

In [10] Schneider conjectured that the volume of \(D^{m}K\) is minimized among convex sets if \(K\) is an ellipsoid. Unfortunately Theorem 5.6 is not appropriate to study this conjecture, since the operator \(\bar{S}_{v}\) is volume-increasing. However, combining Theorem 5.6 and Proposition 3.3 we get a (much weaker) result in this direction.

**Theorem 5.7**.: _Among all convex bodies \(K\subseteq\mathbb{R}^{n}\) of a fixed volume, the Euclidean balls minimize the mean width of \(D^{m}K\)._

Some more can be said:

**Theorem 5.8**.: _Let \(L\in\mathcal{K}^{n,m}\) satisfy \(R_{v}L=L\) for every \(v\in S^{n-1}\) (see Section 6 for examples of invariant sets). 
Among all convex bodies \(K\subseteq\mathbb{R}^{n}\) of a fixed volume, the Euclidean balls minimize the integral_ \[\int_{L}h_{D^{m}K}(x)dx.\]

Proof.: As in the proof of Proposition 3.3, for every \(E\in\mathcal{K}^{n,m}\), \[\int_{L}h_{\bar{S}_{v}E}(x)dx\leq\int_{L}h_{E}(x)dx.\] For \(E=D^{m}K\), by Theorem 5.6, \[\int_{L}h_{D^{m}S_{v}K}\leq\int_{L}h_{\bar{S}_{v}D^{m}K}\leq\int_{L}h_{D^{m}K}.\]

## 6. Invariant sets

In this section we study the sets \(K\) that remain invariant under \(\bar{S}_{v}\).

**Definition 6.1**.: _A convex body \(K\in\mathcal{K}^{n,m}\) is said to be \(O_{n}\) invariant if \(RK=K\) for every \(n\)-dimensional rotation \(R\in O_{n}\). Notice that this does not necessarily imply that \(K\) is a ball, since \(R\) is an \(n\times n\) matrix, not an \(nm\times nm\) one._

Contrary to the classical case \(m=1\), where we only have the centered Euclidean balls of different radii, many convex bodies in \(\mathrm{M}_{n,m}(\mathbb{R})\) are \(O_{n}\) invariant. As examples we have the unit ball \(B_{2}^{nm}\), the product of balls \((B_{2}^{n})^{m}\), the unit ball of any operator norm of the type \(B_{K,B_{2}^{n}}\) and the ball of any unitarily invariant norm (in particular, any Schatten class). Here we show that \(O_{n}\) invariant convex bodies characterize the sets invariant under the operator \(\bar{S}_{v}\).

**Proposition 6.2**.: _Let \(K\in\mathcal{K}^{n,m}\). The following statements are equivalent:_

1. _For every_ \(v\in S^{n-1}\)_,_ \(\bar{S}_{v}K=K\)_._
2. _For every_ \(v\in S^{n-1}\)_,_ \(R_{v}K=K\)_._
3. \(K\) _is_ \(O_{n}\) _invariant._

If \(K\) is also symmetric, it is the unit ball of a matrix norm \(\|\cdot\|_{K}\). Then \(O_{n}\) invariance just means left-orthogonal invariance of the norm.

Proof.: Since the reflections \(R_{v},v\in S^{n-1}\) generate \(O_{n}\), the equivalence \((2)\Leftrightarrow(3)\) is clear. To see \((1)\Leftrightarrow(2)\) take \(v\in S^{n-1}\). The property \(\bar{S}_{v}K=K\) holds if and only if for any \(y\in v^{\perp m}\), the set \(K_{y}=\{t\in\mathrm{M}_{1,m}(\mathbb{R}):y+vt\in K\}\) is equal to half its difference body, and this is equivalent to \(K_{y}=-K_{y}\). This happens when, for any such \(y\), \(x=y+vt\) belongs to \(K\) if and only if \(R_{v}x=y-vt\) does.

For functions we obtain the following characterization:

**Proposition 6.3**.: _A quasi-concave function \(f\) is Steiner-concave if and only if \(f(Rx)=f(x)\) for every \(R\in O_{n}\)._

Finally, as in the classical case, every convex body can be transformed into an extremal set.

**Theorem 6.4**.: _Let \(K\subseteq\mathrm{M}_{n,m}(\mathbb{R})\) be a convex body, then there exists an \(O_{n}\) invariant convex body \(L\subseteq\mathrm{M}_{n,m}(\mathbb{R})\) such that for every \(\varepsilon>0\) there are vectors \(v_{1},\ldots,v_{k}\) with_ \[d_{H}(\bar{S}_{v_{k}}\circ\cdots\circ\bar{S}_{v_{1}}K,L)<\varepsilon,\] _where \(d_{H}\) is the Hausdorff distance._

Proof.: For a \(k\)-tuple of vectors \(\bar{v}=(v_{1},\ldots,v_{k})\) we denote by \(\bar{S}_{\bar{v}}=\bar{S}_{v_{k}}\circ\cdots\circ\bar{S}_{v_{1}}\) the composition of the symmetrizations \(\bar{S}_{v_{i}}\). Let \(R>r>0\) be such that \(B(0,R)\supseteq K\supseteq B(0,r)\). Recall that by property (3.1-4) we always have \(B(0,R)\supseteq\bar{S}_{\bar{v}}K\supseteq B(0,r)\). Fix any \(p>1\) and consider the infimum \[w_{p,\min}=\inf\{\|h_{\bar{S}_{\bar{v}}K}\|_{p}:k\geq 1,\bar{v}=(v_{1},\ldots,v_{k}),v _{i}\in S^{n-1}\}\] which, in view of \(\bar{S}_{\bar{v}}K\supseteq B(0,r)\), has to be positive. 
Take a sequence of tuples of vectors \(\bar{v}^{(i)}\) such that \(\|h_{\bar{S}_{\bar{v}^{(i)}}K}\|_{p}\to w_{p,\min}\). Since \(B(0,R)\supseteq\bar{S}_{\bar{v}^{(i)}}K\) we may apply Blaschke's selection theorem to obtain a subsequence (again denoted by \(\bar{v}^{(i)}\)) such that \(\bar{S}_{\bar{v}^{(i)}}K\) converges in the Hausdorff distance to a convex body \(L\). By continuity, \(\|h_{L}\|_{p}=w_{p,\min}\). Take any \(v\in S^{n-1}\). By continuity and Proposition 3.3, \[w_{p,\min} =\|h_{L}\|_{p}\] \[\geq\|h_{\bar{S}_{v}L}\|_{p}\] \[=\lim_{i\to\infty}\|h_{\bar{S}_{v}\bar{S}_{\bar{v}^{(i)}}K}\|_{p}\] \[\geq w_{p,\min}\] and we get \(\|h_{L}\|_{p}=\|h_{\bar{S}_{v}L}\|_{p}\). The equality case of Proposition 3.3 (notice that we chose \(p>1\)) implies \(L=R_{v}L\). Since \(v\in S^{n-1}\) was arbitrary, \(L\) must be \(O_{n}\) invariant by Proposition 6.2.

The results in this last section point to the fact that \(\bar{S}_{v}\) makes a matrix norm \(\|\cdot\|_{K}\) more and more left-invariant. Of course, one can also consider the transformation \(K\mapsto(\bar{S}_{v}K^{t})^{t}\), which makes the norm more right-invariant. If the norm is transformed by a sequence of left and right symmetrizations into a limiting norm \(\|\cdot\|_{L}\), then it will be unitarily invariant and, by the singular value decomposition, \(L\) will be uniquely determined by the set of singular values of the matrices inside \(L\). Unfortunately, it appears that the set of singular values may vary from \(K\) to \(\bar{S}_{v}K\) for arbitrary \(K\), so it is not clear how (if possible) to determine \(L\) from \(K\).
2309.04082
Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning
Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space. While there exist graph neural networks that leverage hyperbolic or spherical spaces to learn representations that embed such structures more accurately, these methods are confined under the message-passing paradigm, making the models vulnerable against side-effects such as oversmoothing and oversquashing. More recent work have proposed global attention-based graph Transformers that can easily model long-range interactions, but their extensions towards non-Euclidean geometry are yet unexplored. To bridge this gap, we propose Fully Product-Stereographic Transformer, a generalization of Transformers towards operating entirely on the product of constant curvature spaces. When combined with tokenized graph Transformers, our model can learn the curvature appropriate for the input graph in an end-to-end fashion, without the need of additional tuning on different curvature initializations. We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear to the number of nodes and edges while respecting the underlying geometry. Experiments on graph reconstruction and node classification demonstrate the benefits of generalizing Transformers to the non-Euclidean domain.
Sungjun Cho, Seunghyuk Cho, Sungwoo Park, Hankook Lee, Honglak Lee, Moontae Lee
2023-09-08T02:44:37Z
http://arxiv.org/abs/2309.04082v1
# Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning ###### Abstract Real-world graphs naturally exhibit hierarchical or cyclical structures that are unfit for the typical Euclidean space. While there exist graph neural networks that leverage hyperbolic or spherical spaces to learn representations that embed such structures more accurately, these methods are confined under the message-passing paradigm, making the models vulnerable against side-effects such as oversmoothing and oversquashing. More recent work have proposed global attention-based graph Transformers that can easily model long-range interactions, but their extensions towards non-Euclidean geometry are yet unexplored. To bridge this gap, we propose Fully Product-Stereographic Transformer, a generalization of Transformers towards operating entirely on the product of constant curvature spaces. When combined with tokenized graph Transformers, our model can learn the curvature appropriate for the input graph in an end-to-end fashion, without the need of additional tuning on different curvature initializations. We also provide a kernelized approach to non-Euclidean attention, which enables our model to run in time and memory cost linear to the number of nodes and edges while respecting the underlying geometry. Experiments on graph reconstruction and node classification demonstrate the benefits of generalizing Transformers to the non-Euclidean domain.

## 1 Introduction

Learning from graph-structured data is a challenging task in machine learning, with various downstream applications that involve modeling individual entities and relational interactions among them [45; 52; 22]. A dominant line of work consists of graph convolutional networks (GCNs) that aggregate features across graph neighbors through _message-passing_ [20; 29; 50; 54; 26]. While most GCNs learn features that lie on the typical Euclidean space with zero curvature, real-world graphs often comprise complex structures such as hierarchical trees and cycles that Euclidean space requires excessive dimensions to embed accurately [44]. In response, the graph learning community has developed generalizations of GCNs to spaces with non-zero curvature such as hyperbolic, spherical, or mixed-curvature spaces with both negative and positive curvatures [5; 37; 61; 2; 56].

Unfortunately, non-Euclidean GCNs are not immune to harmful side-effects of message-passing such as oversmoothing [41; 4; 58] and oversquashing [48; 1]. These drawbacks make it difficult to stack GCN layers towards large depths, limiting their expressive power [17; 38] as well as their predictive performance on tasks that require long-range interactions to solve [16; 36]. To cope with such limitations, recent works have instead proposed Transformer-based graph encoders that can easily exchange information across long-range distances through global self-attention [28; 59; 15; 32]. However, existing graph Transformers are still confined within the Euclidean regime, and their extensions towards non-Euclidean geometry have not yet been studied.

In this paper, we bridge this gap by generalizing the Transformer architecture [49] towards non-Euclidean spaces with learnable curvatures. Specifically, we endow each attention head with a stereographic model [2] that can universally represent Euclidean, hyperbolic, and spherical spaces (Figure 1). 
We generalize each operation of the Transformer architecture to inputs on the product-stereographic model, all of which are end-to-end differentiable with respect to the sectional curvatures, thereby allowing the model to jointly train embeddings as well as the underlying curvature. The resulting model, which we name **Fully Product-Stereographic Transformer (FPS-T)**, takes advantage of both non-Euclidean geometry and long-range interactions. We empirically show that the learnable sectional curvature of FPS-T successfully converges to the geometry of the input graph, leading to better predictive performance and parameter efficiency in graph reconstruction and node classification compared to its Euclidean counterpart. To the best of our knowledge, our work is the first to propose a natural generalization of Transformers to non-Euclidean spaces. We summarize our core contributions as follows:

* We propose FPS-T, a generalization of Transformer towards operating entirely on the product-stereographic model with curvatures that are learnable in an end-to-end fashion.
* For graph representation learning, we integrate FPS-T with Tokenized Graph Transformer [28], and develop a kernelized approximation of non-Euclidean attention to reduce the computational cost to linear in the number of nodes and edges.
* Experiments on graph reconstruction and node classification with real-world graphs demonstrate the benefits of FPS-T such as better parameter efficiency and downstream performance.

## 2 Related Work

**Non-Euclidean graph representations.** Non-Euclidean spaces are known to well-preserve specific types of graph structure where Euclidean space fails. Especially, non-Euclidean spaces with constant sectional curvature, _e.g._, hyperbolic and spherical spaces, are widely used in graph representation learning due to their tractable operations. Hyperbolic spaces are capable of efficiently embedding complex hierarchical structures in graphs [40; 39; 19; 33; 44], while graphs with cyclic structures are well-suited for spherical spaces [53; 23]. Riemannian manifolds with varying curvature and constant sign have also been proposed for graph encoding [10]. However, Riemannian manifolds where the sign of the curvature is fixed are not a good choice for more complex graphs that exhibit both hierarchies and cycles. Instead, the product of constant-curvature spaces [24], heterogeneous manifolds [21], and pseudo-Riemannian manifolds [34] are found to be well-suited for learning representations of such complex graphs.

Figure 1: Illustration of our proposed FPS-T architecture. Well-known constant curvature spaces can be projected to the stereographic model, with a common chart map isomorphic to the \(d\)-dimensional Euclidean space. Each space can efficiently embed different types of graphs (_e.g._, trees in hyperbolic space, lines in Euclidean space, and cycles in spherical space). In FPS-T, each layer chooses a set of curvatures that fits the input graph by changing the sign of the curvature \(\kappa\) in a differentiable manner.

Message-passing GCNs also benefit from considering a non-Euclidean representation space. Hyperbolic GCNs are known to outperform their Euclidean counterparts in various tasks on hierarchical graphs such as citation networks [5; 61; 43] and molecules [5; 37]. DeepSphere [11] also adopted the spherical space for GCNs, with applications such as 3D object and earth climate modeling.
To take advantage of multiple spaces, [63] proposed a hybrid architecture that fuses Euclidean and hyperbolic graph representations together. [12] similarly proposed modeling interactions between three constant-curvature spaces (_i.e._, Euclidean, hyperbolic, and spherical). To allow smooth connections between the three constant-curvature spaces, [2] proposed a model of constant-curvature space called the stereographic model, on which geometric operations such as distances and inner products are differentiable at all curvature values including zero. Incorporating pseudo-Riemannian manifolds into the GCN architecture also showed promising results [56], but its performance is sensitive to the time dimension of the manifold, which requires extensive hyperparameter tuning. Overall, GCNs achieve great predictive performance on homophilic graphs where connected nodes share the same features, but they tend to fail on heterophilic graphs, as stacking up GCN layers to capture message passing between distant nodes induces oversmoothing [41; 4] and oversquashing [48]. To relieve this architectural limitation while utilizing non-Euclidean geometric priors, we instead develop a Transformer-based graph encoder that operates on the stereographic model to learn graph representations.

**Graph Transformers.** Inspired by the huge success of Transformers in NLP and CV [13; 3; 14], various works have extended Transformers to encode graphs with edge connectivities that are neither sequential nor grid-like. Graph Transformer [15] and Spectral Attention Network [32] were the first pioneers to explore this direction by replacing the sinusoidal positional encodings widely used in NLP with Laplacian eigenvectors of the input graph. Graphormer [59] then proposed utilizing edge connectivities by using shortest-path distances as an attention bias, showing state-of-the-art performance on molecular property prediction. TokenGT [28] proposed a tokenization technique that views each graph as a sequence of nodes and edges. Unlike other methods, TokenGT allows straightforward integration of engineering techniques for pure Transformers such as linearized attention [27], while enjoying theoretical expressivity that surpasses that of message-passing GCNs. Nonetheless, existing Transformer architectures for graphs are still confined within the Euclidean domain, making them unable to precisely embed graphs onto the feature space in the way geometric GCNs can. While Hyperbolic Attention Network [25] proposed an attention mechanism that operates on hyperbolic space, its distance-based attention imposes a computational cost quadratic in the graph size, and the geometry is limited to hyperbolic space. Instead, we generalize the representation space of the Transformer to the stereographic model and integrate it with TokenGT, which covers a wider variety of graphs. We also linearize the attention mechanism on the stereographic model similarly to [27], which allows our final model to run in cost linear to the number of nodes and edges.

## 3 Preliminaries

In this section, we first explain the concepts related to our main geometrical tool, the product-stereographic model [2]. We then briefly discuss multi-head attention, the main driving force of the Transformer [49] model.

### Product-Stereographic Model

**Riemannian manifolds.** A Riemannian manifold consists of a smooth manifold \(\mathcal{M}\) and a metric tensor \(g\).
Each point \(\mathbf{x}\) on the manifold \(\mathcal{M}\) defines a tangent space \(\mathcal{T}_{\mathbf{x}}\mathcal{M}\), the collection of all vectors tangent to \(\mathcal{M}\) at \(\mathbf{x}\). The metric tensor \(g:\mathcal{M}\rightarrow\mathbb{R}^{n\times n}\) assigns a positive-definite matrix to each point \(\mathbf{x}\), which defines the inner product \(\langle\cdot,\cdot\rangle_{\mathbf{x}}:\mathcal{T}_{\mathbf{x}}\mathcal{M}\times\mathcal{T}_{\mathbf{x}}\mathcal{M}\rightarrow\mathbb{R}\) as \(\langle\mathbf{v}_{1},\mathbf{v}_{2}\rangle_{\mathbf{x}}=\mathbf{v}_{1}^{T}g(\mathbf{x})\mathbf{v}_{2}\), where \(\mathbf{v}_{1},\mathbf{v}_{2}\in\mathcal{T}_{\mathbf{x}}\mathcal{M}\) are tangent vectors at \(\mathbf{x}\). The metric tensor is used to define the geometric properties and operations of the Riemannian manifold. A geodesic \(\gamma\) is the shortest curve between two points \(\mathbf{x},\mathbf{y}\in\mathcal{M}\), and the corresponding distance can be computed as \(d_{\mathcal{M}}(\mathbf{x},\mathbf{y})=\int_{0}^{1}\sqrt{\langle\dot{\gamma}(t),\dot{\gamma}(t)\rangle_{\gamma(t)}}\,dt\), where \(\gamma:[0,1]\rightarrow\mathcal{M}\) is a constant-speed curve satisfying \(\gamma(0)=\mathbf{x}\) and \(\gamma(1)=\mathbf{y}\). We can move the point \(\mathbf{x}\in\mathcal{M}\) along a tangent vector \(\mathbf{v}\in\mathcal{T}_{\mathbf{x}}\mathcal{M}\) using the exponential map \(\exp_{\mathbf{x}}:\mathcal{T}_{\mathbf{x}}\mathcal{M}\rightarrow\mathcal{M}\), defined as \(\exp_{\mathbf{x}}(\mathbf{v})=\gamma(1)\) where \(\gamma\) is the geodesic with \(\gamma(0)=\mathbf{x}\) and \(\dot{\gamma}(0)=\mathbf{v}\). The logarithmic map \(\log_{\mathbf{x}}:\mathcal{M}\rightarrow\mathcal{T}_{\mathbf{x}}\mathcal{M}\) is the inverse of \(\exp_{\mathbf{x}}\). A tangent vector \(\mathbf{v}\in\mathcal{T}_{\mathbf{x}}\mathcal{M}\) can be transferred along a geodesic from \(\mathbf{x}\) to \(\mathbf{y}\) using parallel transport \(\mathrm{PT}_{\mathbf{x}\rightarrow\mathbf{y}}:\mathcal{T}_{\mathbf{x}}\mathcal{M}\rightarrow\mathcal{T}_{\mathbf{y}}\mathcal{M}\). Note that the product of Riemannian manifolds is also a Riemannian manifold. A point on the product manifold \(\mathbf{x}\in\otimes_{i=1}^{n}\mathcal{M}_{i}\) consists of parts from each Riemannian manifold \(\mathcal{M}_{i}\), written as \(\mathbf{x}=\|_{i=1}^{n}\mathbf{x}_{i}\), where \(\mathbf{x}_{i}\in\mathcal{M}_{i}\) and \(\|\) is the concatenation operation. The distance between \(\mathbf{x},\mathbf{y}\in\otimes_{i=1}^{n}\mathcal{M}_{i}\) is calculated as \(\sqrt{\sum_{i=1}^{n}d_{\mathcal{M}_{i}}^{2}(\mathbf{x}_{i},\mathbf{y}_{i})}\). Other operations such as the exponential map, logarithmic map, and parallel transport are applied manifold-wise; for example, \(\exp_{\mathbf{x}}(\mathbf{v})=\|_{i=1}^{n}\exp_{\mathbf{x}_{i}}(\mathbf{v}_{i})\), where \(\mathbf{v}=\|_{i=1}^{n}\mathbf{v}_{i}\) and \(\mathbf{v}_{i}\in\mathcal{T}_{\mathbf{x}_{i}}\mathcal{M}_{i}\).

**Constant-curvature spaces.** Curvature is an important geometrical property used to characterize Riemannian manifolds.
One of the most widely-used curvature notions is the sectional curvature: given two linearly independent tangent vector fields \(U,V\in\mathfrak{X}(\mathcal{M})\), the sectional curvature \(K(U,V)\) is computed as \(K(U,V)=\frac{\langle R(U,V)V,U\rangle}{\langle U,U\rangle\langle V,V\rangle-\langle U,V\rangle^{2}}\), where \(R(\cdot,\cdot):\mathfrak{X}(\mathcal{M})\times\mathfrak{X}(\mathcal{M})\times\mathfrak{X}(\mathcal{M})\rightarrow\mathfrak{X}(\mathcal{M})\) is the Riemannian curvature tensor. The sectional curvature measures how geodesics starting with the tangent vector fields \(U,V\) diverge at each point of the manifold: for positive or negative sectional curvature, geodesics become closer or farther apart than in the zero-curvature case, respectively. Throughout this paper, we refer to a space of constant sectional curvature as a constant-curvature space. For example, Euclidean space is the special case of a constant-curvature space with zero curvature; for the positive and negative cases, we call the spaces spherical and hyperbolic, respectively.

**Stereographic models.** A \(d\)-dimensional stereographic model \(\mathfrak{st}_{\kappa}^{d}\) is a constant-curvature space with curvature value \(\kappa\). One attractive property of the stereographic model is that operations such as the distance, exponential map, logarithmic map, and parallel transport are differentiable at any curvature value \(\kappa\), including \(\kappa=0\). This enables the stereographic model to learn the curvature value \(\kappa\) without any constraint. The manifold of the stereographic model \(\mathfrak{st}_{\kappa}^{d}\) is \(\{\mathbf{x}\in\mathbb{R}^{d}\mid-\kappa\|\mathbf{x}\|^{2}<1\}\). The metric tensor is defined as \(g^{\kappa}(\mathbf{x})=\frac{4}{(1+\kappa\|\mathbf{x}\|^{2})^{2}}\mathbf{I}=:(\lambda_{\mathbf{x}}^{\kappa})^{2}\mathbf{I}\), where \(\lambda_{\mathbf{x}}^{\kappa}=\frac{2}{1+\kappa\|\mathbf{x}\|^{2}}\) is known as the conformal factor. The Möbius addition between two points \(\mathbf{x},\mathbf{y}\in\mathfrak{st}_{\kappa}^{d}\) is computed as \(\mathbf{x}\oplus_{\kappa}\mathbf{y}=\frac{(1-2\kappa\mathbf{x}^{T}\mathbf{y}-\kappa\|\mathbf{y}\|^{2})\mathbf{x}+(1+\kappa\|\mathbf{x}\|^{2})\mathbf{y}}{1-2\kappa\mathbf{x}^{T}\mathbf{y}+\kappa^{2}\|\mathbf{x}\|^{2}\|\mathbf{y}\|^{2}}\). Based on Möbius addition, we can derive the other geometric operations, as shown in Table 2 in Appendix A. The table also shows that as \(\kappa\) converges to zero, the operations become equivalent to their Euclidean counterparts, so the stereographic model essentially recovers Euclidean geometry.
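For concreteness, below is a minimal PyTorch sketch of these \(\kappa\)-stereographic primitives. It branches on the sign of a fixed float \(\kappa\) for readability; learning \(\kappa\) end-to-end as FPS-T does requires forms that are smooth in \(\kappa\) (e.g., Taylor expansions around \(\kappa=0\)), as in Table 2 and in libraries such as Geoopt. The function names are ours, not from the paper's code.

```python
import torch

def tan_k(u: torch.Tensor, k: float) -> torch.Tensor:
    # Curvature-dependent tangent: tan for k > 0, identity for k = 0, tanh for k < 0.
    if k > 0:
        return torch.tan(k ** 0.5 * u) / k ** 0.5
    if k < 0:
        return torch.tanh((-k) ** 0.5 * u) / (-k) ** 0.5
    return u

def artan_k(u: torch.Tensor, k: float) -> torch.Tensor:
    # Inverse of tan_k.
    if k > 0:
        return torch.atan(k ** 0.5 * u) / k ** 0.5
    if k < 0:
        return torch.atanh((-k) ** 0.5 * u) / (-k) ** 0.5
    return u

def expmap0(v: torch.Tensor, k: float, eps: float = 1e-7) -> torch.Tensor:
    # Exponential map at the origin: exp_0^k(v) = tan_k(||v||) v / ||v||.
    n = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return tan_k(n, k) * v / n

def logmap0(x: torch.Tensor, k: float, eps: float = 1e-7) -> torch.Tensor:
    # Logarithmic map at the origin (inverse of expmap0).
    n = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    return artan_k(n, k) * x / n

def mobius_add(x: torch.Tensor, y: torch.Tensor, k: float) -> torch.Tensor:
    # Möbius addition x ⊕_k y as defined above.
    xy = (x * y).sum(-1, keepdim=True)
    x2 = (x * x).sum(-1, keepdim=True)
    y2 = (y * y).sum(-1, keepdim=True)
    num = (1 - 2 * k * xy - k * y2) * x + (1 + k * x2) * y
    return num / (1 - 2 * k * xy + k ** 2 * x2 * y2)

def dist(x: torch.Tensor, y: torch.Tensor, k: float) -> torch.Tensor:
    # Geodesic distance d_k(x, y) = 2 artan_k(||(-x) ⊕_k y||).
    return 2 * artan_k(mobius_add(-x, y, k).norm(dim=-1), k)
```

Setting \(k=0\) reduces `mobius_add` to ordinary vector addition and `dist` to twice the Euclidean half-distance, i.e., \(\|\mathbf{y}-\mathbf{x}\|\), consistent with the Euclidean limit above.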
### Multi-Head Attention

In the vanilla Transformer [49], each attention block contains multiple attention heads, each taking a sequence of token embeddings as input \(\mathbf{X}\in\mathbb{R}^{n\times d}\) with sequence length \(n\) and feature dimension \(d\). Three trainable linear weights \(\mathbf{W}^{Q},\mathbf{W}^{K},\mathbf{W}^{V}\in\mathbb{R}^{d\times d^{\prime}}\) first map each token embedding into queries \(\mathbf{Q}\), keys \(\mathbf{K}\), and values \(\mathbf{V}\) with head dimension \(d^{\prime}\), respectively. Then, the attention score matrix is computed by the scaled Euclidean dot-product between \(\mathbf{Q}\) and \(\mathbf{K}\), followed by a row-wise softmax activation \(\sigma(\cdot)\). The attention score matrix is then multiplied with the value matrix \(\mathbf{V}\), returning contextualized token embeddings. The overall procedure can be written as

\[\mathbf{Q}=\mathbf{X}\mathbf{W}^{Q},\ \ \mathbf{K}=\mathbf{X}\mathbf{W}^{K},\ \ \mathbf{V}=\mathbf{X}\mathbf{W}^{V}, \tag{1}\]
\[\text{Attn}(\mathbf{X})=\sigma\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d^{\prime}}}\right)\mathbf{V}. \tag{2}\]

The outputs from the multiple attention heads are concatenated together, then processed through a feed-forward layer before proceeding to the next Transformer block.

## 4 Fully Product-Stereographic Transformer

Here, we describe the inner workings of our proposed method. We generalize each operation in the Transformer to the product-stereographic model, together forming a geometric Transformer architecture that operates entirely within the stereographic model.

### Stereographic Neural Networks

We first introduce the stereographic analogues of Euclidean neural network components such as the linear layer, activation, layer normalization, and logit functions. We denote the product-stereographic model \(\otimes_{i=1}^{H}\mathfrak{st}_{\kappa_{i}}^{d}\) as \(\mathfrak{st}_{\otimes\boldsymbol{\kappa}}^{d}\), where \(\boldsymbol{\kappa}=(\kappa_{1},\ldots,\kappa_{H})\) is the ordered set of curvatures of the \(d\)-dimensional component spaces within a Transformer block with \(H\) attention heads. We also use the superscript \(\otimes\boldsymbol{\kappa}\) to denote Riemannian operations on the product-stereographic model that decompose representations into equal parts, apply the operation, then concatenate back to the product space (_e.g._, if \(\mathbf{v}=[v_{1},\ldots,v_{H}]\), then \(\exp_{\mathbf{0}}^{\otimes\boldsymbol{\kappa}}(\mathbf{v})\coloneqq\|_{i=1}^{H}\exp_{\mathbf{0}}^{\kappa_{i}}(v_{i})\)).

**Stereographic linear layer, activation, and layer normalization.** Given a Euclidean neural network \(f\), we can define its stereographic counterpart as \(\exp_{\mathbf{0}}^{\otimes\boldsymbol{\kappa}}\left(f\left(\log_{\mathbf{0}}^{\otimes\boldsymbol{\kappa}}(\mathbf{X})\right)\right)\). The stereographic linear layer \(\operatorname{Linear}_{\otimes\boldsymbol{\kappa}}(\mathbf{X};\mathbf{W})\) is thus defined by setting \(f\) to the Euclidean linear layer \(f(\mathbf{X};\mathbf{W})=\mathbf{X}\mathbf{W}\). The same approach can be used for any Euclidean activation function \(f_{\text{act}}\) (_e.g._, ReLU, Tanh, ELU, and Sigmoid), from which we obtain stereographic activation functions. Stereographic layer normalization \(\text{LN}_{\otimes\boldsymbol{\kappa}}\) is defined in the same manner.
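A minimal sketch of this exp-f-log construction, reusing `expmap0`/`logmap0` from the earlier snippet (the curvature is a fixed float here for clarity, whereas FPS-T trains it end-to-end; the class name is ours):

```python
import torch
import torch.nn as nn

class StereographicLinear(nn.Module):
    # exp_0 ∘ f ∘ log_0: lift a Euclidean linear layer onto the k-stereographic model.
    def __init__(self, d_in: int, d_out: int, k: float = 0.0):
        super().__init__()
        self.f = nn.Linear(d_in, d_out)
        self.k = k  # fixed here; a learnable curvature needs k-smooth operations
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return expmap0(self.f(logmap0(x, self.k)), self.k)

# The same wrapper turns any Euclidean activation or LayerNorm into its
# stereographic counterpart, e.g. exp_0(relu(log_0(x))).
```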
**Stereographic logits.** Suppose that \(\mathbf{x}\in\mathfrak{st}_{\boldsymbol{\kappa}}^{d}\) is a stereographic embedding retrieved from the last Transformer layer. For prediction tasks such as node classification, we need to compute the probability that the node with embedding \(\mathbf{x}\) belongs to class \(c\). Inspired by logistic regression in Euclidean space, [2] proposes its stereographic variant as:

\[p(y=c\mid\mathbf{x})\propto\exp\left(\text{sign}(\langle-\mathbf{p}_{c}\oplus_{\boldsymbol{\kappa}}\mathbf{x},\mathbf{a}_{c}\rangle)\|\mathbf{a}_{c}\|_{\mathbf{p}_{c}}d_{\boldsymbol{\kappa}}(\mathbf{x},H_{\mathbf{a}_{c},\mathbf{p}_{c}})\right), \tag{3}\]

where \(H_{\mathbf{a}_{c},\mathbf{p}_{c}}=\{\mathbf{x}\in\mathfrak{st}_{\boldsymbol{\kappa}}^{d}\mid\langle-\mathbf{p}_{c}\oplus_{\boldsymbol{\kappa}}\mathbf{x},\mathbf{a}_{c}\rangle=0\}\) is a hyperplane formed by \(\mathbf{a}_{c}\in\mathcal{T}_{\mathbf{p}_{c}}\mathfrak{st}_{\boldsymbol{\kappa}}^{d}\) and \(\mathbf{p}_{c}\in\mathfrak{st}_{\boldsymbol{\kappa}}^{d}\). For a stereographic model \(\mathfrak{st}_{\kappa}^{d}\), the distance between \(\mathbf{x}\in\mathfrak{st}_{\kappa}^{d}\) and the hyperplane \(H_{\mathbf{a},\mathbf{p}}\) is derived as:

\[d_{\kappa}(\mathbf{x},H_{\mathbf{a},\mathbf{p}})=\sin_{\kappa}^{-1}\left(\frac{2|\langle-\mathbf{p}\oplus_{\kappa}\mathbf{x},\mathbf{a}\rangle|}{(1+\kappa\|-\mathbf{p}\oplus_{\kappa}\mathbf{x}\|^{2})\|\mathbf{a}\|}\right). \tag{4}\]

This distance function can be easily extended to the product-stereographic model as mentioned in Section 3.1. The parameters \(\mathbf{a},\mathbf{p}\) that define the hyperplane are learned together with the model parameters during the training phase.

### Stereographic Multi-Head Attention

Using the stereographic operations and neural networks above, we propose a multi-head attention mechanism on product-stereographic models. The key intuition is that the \(h\)-th attention head operates on the \(\kappa_{h}\)-stereographic space. Given a sequence of \(n\) product-stereographic embeddings \(\mathbf{X}\in\mathfrak{st}_{\kappa}^{n\times d}\), the attention head with curvature \(\kappa\) first obtains values using the stereographic linear layer. For queries and keys, it maps each stereographic embedding to the tangent spaces of the values as:

\[\mathbf{Q}=\mathbf{X}\mathbf{W}^{Q}\in\mathcal{T}_{\mathbf{V}}\mathfrak{st}_{\kappa}^{n\times d^{\prime}},\ \ \mathbf{K}=\mathbf{X}\mathbf{W}^{K}\in\mathcal{T}_{\mathbf{V}}\mathfrak{st}_{\kappa}^{n\times d^{\prime}},\ \ \mathbf{V}=\operatorname{Linear}_{\kappa}(\mathbf{X};\mathbf{W}^{V})\in\mathfrak{st}_{\kappa}^{n\times d^{\prime}}, \tag{5}\]

where \(\mathbf{W}^{Q},\mathbf{W}^{K}\in\mathbb{R}^{d\times d^{\prime}}\) are the query/key weight matrices, and \(\mathbf{W}^{V}\in\mathbb{R}^{d\times d^{\prime}}\) is the weight matrix for values. Note that the tangent space of the stereographic model \(\mathfrak{st}_{\kappa}^{d}\) is the same \(\mathbb{R}^{d}\) at all points. Then, the attention score between the \(i\)-th query \(\mathbf{Q}_{i}\) and the \(j\)-th key \(\mathbf{K}_{j}\) is computed by parallel-transporting the vectors to the origin and taking the Riemannian inner product at the origin as

\[\alpha_{ij}=\langle\mathrm{PT}_{\mathbf{V}_{i}\to\mathbf{0}}(\mathbf{Q}_{i}),\mathrm{PT}_{\mathbf{V}_{j}\to\mathbf{0}}(\mathbf{K}_{j})\rangle_{\mathbf{0}}. \tag{6}\]

Figure 2 illustrates the geometric attention mechanism. Because the metric tensor at the origin of the stereographic model is simply \(4\mathbf{I}\) with identity matrix \(\mathbf{I}\), the Riemannian inner product becomes equivalent to the Euclidean inner product at the origin (up to the constant factor 4).

Figure 2: Illustration of our attention mechanism on the non-Euclidean space. FPS-T considers each value-vector as a point that resides on the stereographic model, and query/key-vectors as tangent vectors on the corresponding tangent spaces. All query/key-vectors are parallel-transported to the origin prior to dot-product attention, thereby taking the given geometry into account.
Finally, we aggregate the values based on the attention scores using the Einstein midpoint [2] as

\[\text{Aggregate}_{\kappa}\left(\mathbf{V},\mathbf{\alpha}\right)_{i}\coloneqq\frac{1}{2}\otimes_{\kappa}\left(\sum_{j=1}^{n}\frac{\alpha_{ij}\lambda_{\mathbf{V}_{j}}^{\kappa}}{\sum_{k=1}^{n}\alpha_{ik}(\lambda_{\mathbf{V}_{k}}^{\kappa}-1)}\mathbf{V}_{j}\right), \tag{7}\]

with conformal factors \(\lambda_{\mathbf{V}_{i}}^{\kappa}\) at points \(\mathbf{V}_{i}\in\mathfrak{st}_{\kappa}^{d^{\prime}}\). By concatenating the aggregated results from each attention head, the final outcome of product-stereographic multi-head attention is

\[\text{MHA}_{\otimes\boldsymbol{\kappa}}(\mathbf{X})=\|_{h=1}^{H}\text{Aggregate}_{\kappa_{h}}(\mathbf{V}^{h},\mathbf{\alpha}^{h})\in\otimes_{h=1}^{H}\mathfrak{st}_{\kappa_{h}}^{n\times d^{\prime}}, \tag{8}\]

where \(\kappa_{h}\) denotes the curvature of the \(h\)-th attention head.

### Wrap-up

For completeness, we fill in the gap of how intermediate steps such as skip connections are generalized towards non-zero curvatures, and how representations are processed between Transformer layers with distinct curvatures. First, recall that the vanilla Transformer utilizes residual connections and layer normalization to mitigate vanishing gradients and induce better convergence [49]. To apply these operations to representations in the product-stereographic space, we use

\[\mathbf{X}_{l}=\text{MHA}_{\otimes\boldsymbol{\kappa}}(\text{LN}_{\otimes\boldsymbol{\kappa}}(\mathbf{X}_{l}^{\text{in}}))\oplus_{\otimes\boldsymbol{\kappa}}\mathbf{X}_{l}^{\text{in}} \tag{9}\]
\[\mathbf{X}_{l}^{\text{out}}=\text{FFN}_{\otimes\boldsymbol{\kappa}}(\text{LN}_{\otimes\boldsymbol{\kappa}}(\mathbf{X}_{l}))\oplus_{\otimes\boldsymbol{\kappa}}\mathbf{X}_{l}. \tag{10}\]

Note that while each attention head in stereographic multi-head attention operates on its stereographic model independently, the product-stereographic feed-forward network \(\text{FFN}_{\otimes\boldsymbol{\kappa}}\), for which we use two stereographic linear layers with an activation in between, fuses representations from distinct geometries and performs interactions between different stereographic models similarly to previous work [63; 12]. Furthermore, note that each \(l\)-th Transformer layer operates on a distinct product-stereographic space \(\mathfrak{st}_{\otimes\boldsymbol{\kappa}^{l}}^{d}\), where \(\boldsymbol{\kappa}^{l}=(\kappa_{1}^{l},\ldots,\kappa_{H}^{l})\) together forms the geometric signature of the layer. For consistency, we assume that the input embeddings are on the product-stereographic model of the first layer (_i.e._, \(\mathfrak{st}_{\otimes\boldsymbol{\kappa}^{1}}^{d}\)). In the case of classification tasks where logits are computed, the product-stereographic logit layer operates on the last set of curvatures (_i.e._, \(\mathfrak{st}_{\otimes\boldsymbol{\kappa}^{L}}^{d}\), where \(L\) denotes the number of Transformer layers). In between layers, representations are translated from \(\mathfrak{st}_{\otimes\boldsymbol{\kappa}^{l}}^{d}\) to \(\mathfrak{st}_{\otimes\boldsymbol{\kappa}^{l+1}}^{d}\) by assuming a shared tangent space at the origin (_i.e._, \(\mathbf{X}_{l+1}^{\text{in}}=(\exp_{\mathbf{0}}^{\otimes\boldsymbol{\kappa}^{l+1}}\circ\log_{\mathbf{0}}^{\otimes\boldsymbol{\kappa}^{l}})(\mathbf{X}_{l}^{\text{out}})\)). Altogether, it is straightforward to see that **FPS-T becomes equivalent to the original Transformer as all \(\boldsymbol{\kappa}\) approach 0**, yet it possesses the capability to deviate from Euclidean geometry whenever that leads to better optimization.
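Assembling the pieces of the last two subsections, the following sketch implements one stereographic attention head under simplifying assumptions: queries and keys are built from log-mapped inputs, scores are kept positive with \(\phi(\cdot)=\text{ELU}(\cdot)+1\) as in the kernelized variant introduced later, and \(\kappa\) is a fixed float. Helpers come from the earlier snippets; this is our reading of Eqs. (5)-(7), not the authors' code.

```python
import torch
import torch.nn.functional as F

def mobius_scalar_mul(r: float, x: torch.Tensor, k: float, eps: float = 1e-7) -> torch.Tensor:
    # r ⊗_k x = tan_k(r · artan_k(||x||)) x / ||x||.
    n = x.norm(dim=-1, keepdim=True).clamp_min(eps)
    return tan_k(r * artan_k(n, k), k) * x / n

def stereo_attention_head(X, W_q, W_k, W_v, k: float):
    # X: (n, d) points on st_k; W_*: (d, d') Euclidean weight matrices.
    Xt = logmap0(X, k)                                     # tangent vectors at the origin
    V = expmap0(Xt @ W_v, k)                               # values live on the manifold (Eq. 5)
    lam = 2.0 / (1.0 + k * (V * V).sum(-1, keepdim=True))  # conformal factors λ^k_V, (n, 1)
    # PT_{V->0}(v) scales a tangent vector by λ^k_V / λ^k_0 with λ^k_0 = 2:
    Q = (Xt @ W_q) * (lam / 2.0)
    K = (Xt @ W_k) * (lam / 2.0)
    phi = lambda t: F.elu(t) + 1                           # keep attention scores positive
    alpha = phi(Q) @ phi(K).T                              # attention scores (Eq. 6), (n, n)
    # Einstein-midpoint aggregation (Eq. 7); a guard is needed if some λ -> 1:
    w = (alpha * lam.T) / (alpha @ (lam - 1.0))
    return mobius_scalar_mul(0.5, w @ V, k)
```

At \(\kappa=0\) every conformal factor equals 2, and the computation reduces to ordinary normalized attention followed by a plain weighted mean of the values.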
For all experiments, we initialize all curvatures at zero to demonstrate the practicality of our method, which does not require extensive hyperparameter tuning over different combinations of curvature initializations.

### Extension to Graph Transformer

In order to learn graph-structured data with FPS-T, we borrow the tokenization technique proposed by TokenGT [28]. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) be an input graph with \(N\) nodes in the node set \(\mathcal{V}\), \(M\) edges in the edge set \(\mathcal{E}\), and respective features \(\mathbf{X}^{\mathcal{V}}\in\mathbb{R}^{N\times d}\), \(\mathbf{X}^{\mathcal{E}}\in\mathbb{R}^{M\times d}\). Then, we tokenize the graph into a sequence \(\mathbf{X}=[\mathbf{X}^{\mathcal{V}},\mathbf{X}^{\mathcal{E}}]\in\mathbb{R}^{(N+M)\times d}\) by treating each node and edge as an independent token, and augment the tokens with 1) node identifiers that serve as positional encodings and 2) type identifiers that allow the model to distinguish between node- and edge-tokens. TokenGT feeds this sequence into a pure Euclidean Transformer, an approach proven to pass the 2-dimensional Weisfeiler-Lehman (2-WL) graph isomorphism test and surpass the theoretical expressivity of message-passing GCNs [28; 38]. More details on the tokenization procedure can be found in Appendix B. In our work, we instead encode the input sequence through FPS-T, such that nodes and edges exchange information globally on the product-stereographic space. As the augmented tokens \(\mathbf{X}\) are Euclidean vectors, we assume each token lies within the tangent space at the origin of the product-stereographic model of the first layer, \(\mathcal{T}_{\mathbf{0}}\mathfrak{st}_{\otimes\boldsymbol{\kappa}^{1}}^{d^{\prime}}\cong\mathbb{R}^{H\times d^{\prime}}\), where \(|\boldsymbol{\kappa}^{1}|=H\) and \(Hd^{\prime}=d\). Therefore, we apply the exponential map to the tokens to place them on the product-stereographic model via \(\exp_{\mathbf{0}}^{\otimes\boldsymbol{\kappa}^{1}}(\mathbf{X})\), the output of which is forwarded through FPS-T.

### Cost Linearization of Stereographic Attention

One drawback of the graph tokenization method above is its computational cost, which becomes intractable when encoding large graphs. As computing the attention score matrix takes time and memory quadratic in the sequence length, a graph with \(N\) nodes and \(M\) edges incurs an asymptotic cost of \(\mathcal{O}((N+M)^{2})\), which can be \(\mathcal{O}(N^{4})\) for dense graphs. Fortunately, there exist various advancements for making Transformers more efficient [47; 30; 8; 51; 57; 7]. Previous work [27] showed that the Euclidean attention score \(\langle\mathbf{Q}_{i},\mathbf{K}_{j}\rangle\) can be approximated with the kernel product \(\phi(\mathbf{Q}_{i})^{T}\phi(\mathbf{K}_{j})\), where \(\phi(\mathbf{X})=\text{ELU}(\mathbf{X})+1\). For stereographic attention (Equation 6), computing dot-products on the tangent space of the origin allows us to extend this kernelization to FPS-T. Let \(\tilde{\mathbf{Q}}_{i}=\text{PT}_{\mathbf{V}_{i}\rightarrow\mathbf{0}}(\mathbf{Q}_{i})\) and \(\tilde{\mathbf{K}}_{j}=\text{PT}_{\mathbf{V}_{j}\rightarrow\mathbf{0}}(\mathbf{K}_{j})\) be the tangent vectors at the origin prior to taking the dot-product.
By applying the kernelization to stereographic attention, we can rewrite the stereographic aggregation (Equation 7) as:

\[\frac{1}{2}\otimes_{\kappa}\left(\sum_{j=1}^{n}\frac{\langle\tilde{\mathbf{Q}}_{i},\tilde{\mathbf{K}}_{j}\rangle_{\mathbf{0}}\,\lambda_{\mathbf{V}_{j}}^{\kappa}}{\sum_{k=1}^{n}\langle\tilde{\mathbf{Q}}_{i},\tilde{\mathbf{K}}_{k}\rangle_{\mathbf{0}}(\lambda_{\mathbf{V}_{k}}^{\kappa}-1)}\mathbf{V}_{j}\right)\approx\frac{1}{2}\otimes_{\kappa}\left[\phi(\tilde{\mathbf{Q}})\left(\phi^{\prime}(\tilde{\mathbf{K}})^{T}\tilde{\mathbf{V}}\right)\right]_{i}, \tag{11}\]

where \(\phi^{\prime}(\tilde{\mathbf{K}})_{i}=\phi(\tilde{\mathbf{K}})_{i}(\lambda_{\mathbf{V}_{i}}^{\kappa}-1)\) and \(\tilde{\mathbf{V}}_{i}=\frac{\lambda_{\mathbf{V}_{i}}^{\kappa}}{\lambda_{\mathbf{V}_{i}}^{\kappa}-1}\mathbf{V}_{i}\). This approximation enables FPS-T to encode graphs with \(\mathcal{O}(N+M)\) cost, which matches the complexity of message-passing GCNs [55], while taking the non-Euclidean geometry into account. In our experiments, we use the linearized FPS-T and find that this approach performs well in practice.
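A sketch of this linearized aggregation follows (names are ours; `Q_t` and `K_t` are assumed already parallel-transported to the origin, `mobius_scalar_mul` comes from the earlier snippet, and the scalar normalizer, implicit in Equation 11, is computed explicitly as in standard linear attention [27]):

```python
import torch
import torch.nn.functional as F

def linear_stereo_aggregate(Q_t, K_t, V, k: float):
    # Q_t, K_t: (n, d') tangent vectors at the origin; V: (n, d') points on st_k.
    lam = 2.0 / (1.0 + k * (V * V).sum(-1, keepdim=True))  # conformal factors, (n, 1)
    phi = lambda t: F.elu(t) + 1
    Kp = phi(K_t) * (lam - 1.0)          # φ'(K̃)_i = φ(K̃)_i (λ_i - 1)
    Vt = V * lam / (lam - 1.0)           # Ṽ_i = λ_i / (λ_i - 1) V_i  (guard λ -> 1 in practice)
    num = phi(Q_t) @ (Kp.T @ Vt)         # (n, d') -- never materializes the n×n score matrix
    den = phi(Q_t) @ Kp.sum(0, keepdim=True).T   # (n, 1) normalizer
    return mobius_scalar_mul(0.5, num / den, k)
```

Computing `Kp.T @ Vt` first is what turns the quadratic cost into one linear in the sequence length, matching the \(\mathcal{O}(N+M)\) claim above.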
## 5 Experiments

We empirically test the performance of FPS-T on graph reconstruction and node classification tasks. We compare against existing baselines such as message passing-based Euclidean (GCN [29], GAT [50], SAGE [26], SGC [54]), hyperbolic (HGCN [5], HGNN [37], HAT [61]), and mixed-curvature (\(\kappa\)-GCN [2], \(\mathcal{Q}\)-GCN [56]) GCNs. We also add TokenGT as a baseline, which is equivalent to FPS-T with fixed zero curvatures. Our model is implemented using PyTorch [42], PyTorch Geometric [18], and Geoopt [31]. All experiments are run on NVIDIA A100 GPUs.

### Graph Reconstruction

**Datasets.** We experiment with graph reconstruction on four different real-world networks. Web-Edu [22] is a web-page network under the _.edu_ domain connected with hyperlinks. Power [52] is a network that models the electrical power grid in the western US. Bio-Worm [6] is a genetics network of the _C. elegans_ worm. Facebook [35] is a social network. The detailed statistics of the datasets can be found in Appendix D.

**Training.** The goal of graph reconstruction is to learn continuous node representations of the given graph that preserve the edge connectivity structure through distances among the learned representations. Let \(\mathbf{h}_{u}\) denote the encoded representation of node \(u\in\mathcal{V}\) given a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\). Given the continuous representations \(\mathbf{h}\), we minimize a loss function that aims to preserve the local connections [56]:

\[\mathcal{L}_{GR}(\mathbf{h},\mathcal{G})=-\sum_{(u,v)\in\mathcal{E}}\log\frac{e^{-d(h_{u},h_{v})}}{\sum_{v^{\prime}\in\bar{\mathcal{E}}(u)}e^{-d(h_{u},h_{v^{\prime}})}},\]

where \(\bar{\mathcal{E}}(u)\) is the set of non-neighbors of node \(u\) and \(d(\cdot,\cdot)\) is the distance function on the representation space, the geometry of which depends on the model. For instance, GCN and HGCN use Euclidean and hyperbolic space, respectively, while FPS-T uses the product-stereographic model with curvatures from the last layer. For a fair comparison, we set the number of layers to one and the latent dimension to 16 for all models. For \(\kappa\)-GCN, we use the product of two stereographic models, both of which have their curvature initialized at zero. For \(\mathcal{Q}\)-GCN, we test different time dimensions in \(\{1,8,16\}\) and report the best performance among the models. For FPS-T, we use two attention heads with all curvatures initialized at zero. We train all models for 10k epochs using an Adam optimizer with learning rate \(1\mathrm{e}{-2}\). The node features are given as one-hot encodings with additional random noise, following [56].

**Results.** The table in Figure 3 shows the average sectional curvature of each network and the results in mean average precision (mAP), which measures the average ratio of nearest points that are actual neighbors of each node. We find that FPS-T outperforms the baselines on all datasets. More importantly, FPS-T shows significant performance gains compared to Euclidean TokenGT on the three networks that are largely hyperbolic. On Web-Edu, with an average sectional curvature of -0.63, FPS-T shows a 10.5% gain in mAP over TokenGT, indicating that executing attention on the product-stereographic space is especially effective when encoding graphs containing many non-zero sectional curvatures.

**Analysis.** For further comparison, we train a single-head FPS-T and TokenGT on Web-Edu. The upper right plot of Figure 3 shows the curvature and mAP scores during training. We find that the curvature is adjusted towards the hyperbolic domain, which matches the sign of the overall sectional curvature of the Web-Edu network. The mAP score also converges to a larger value as the absolute curvature deviates further from zero, indicating that the non-Euclidean regime can contain better local optima for graph reconstruction. Note that non-Euclidean spaces are known to well-embed complex structures in low dimensions, while Euclidean spaces require a large number of dimensions to attain reasonable precision [44]. Based on this observation, we test whether FPS-T enjoys better parameter efficiency than TokenGT by training the two models with varying feature dimensions in \(\{2,4,8,16\}\). In the lower right plot of Figure 3, we report the performance of TokenGT and FPS-T post-training. We observe that FPS-T preserves the reconstruction performance better as we decrease the dimension from 16: FPS-T using only 4 dimensions (92.00 mAP with 12.7k parameters) outperforms TokenGT with \(d=16\) (89.13 mAP with 53.6k parameters).

Figure 3: **Left:** Graph reconstruction results. We run each method on 5 different random initializations and report the average mAP score alongside 95% confidence intervals. **Upper right:** mAP (solid lines) and curvature (dashed line) of FPS-T vs. TokenGT during training on Web-Edu. **Lower right:** Test mAP scores using smaller feature dimensions. Using non-Euclidean geometry leads to better parameter efficiency.

### Node Classification

**Datasets.** For node classification we experiment on eight different networks: three WebKB networks (Texas, Cornell, Wisconsin) that connect web-pages via hyperlinks [9], a co-occurrence network from Wikipedia pages related to English films (Actor) [46], three citation networks (Citeseer, Pubmed, Cora) [45], and an airline network (Airport) [5]. These networks are chosen to test our approach under a wide spectrum of graph homophily \(\mathcal{H}(G)\), which measures the ratio of edges that connect nodes sharing the same label [62]. In other words, a heterophilic graph with small graph homophily requires capturing long-range interactions for proper labeling, which is naturally difficult for message passing-based approaches with small receptive fields. More detailed statistics on the networks can be found in Appendix D.
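The graph homophily measure used here is simple to compute; a minimal sketch (assuming a PyTorch-Geometric-style `edge_index`; the function name is ours):

```python
import torch

def edge_homophily(edge_index: torch.Tensor, y: torch.Tensor) -> float:
    # H(G): fraction of edges whose two endpoints share the same label [62].
    src, dst = edge_index  # edge_index: (2, E) tensor of node indices
    return (y[src] == y[dst]).float().mean().item()
```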
**Training.** For all methods, we fix the embedding dimension to 16 and train each model to minimize the cross-entropy loss using an Adam optimizer with a learning rate of \(1\mathrm{e}{-2}\). For models that use learnable curvatures (_i.e._, HGCN, \(\kappa\)-GCN and FPS-T), we use a learning rate of \(1\mathrm{e}{-4}\) for the curvatures. The optimal number of layers, activation function, dropout rate, and weight decay of each method are chosen via grid search on each dataset. Details on the hyperparameter search space and dataset splits can be found in Appendix E.2.

**Results.** Table 1 shows the results of node classification. Overall, our method shows the best accuracy on 6 out of 8 datasets, demonstrating that FPS-T is effective across networks with various levels of graph homophily. In the case of heterophilic networks, we find that the small receptive fields of message-passing GCNs are extremely inadequate, often being outperformed by MLPs that completely ignore the graph connectivity. On the other hand, FPS-T consistently outperforms MLP as well as the GCNs, as it can exchange information over long distances via global attention. It also significantly outperforms TokenGT, by 8.3% on Actor, showing that adjusting the geometry towards the non-Euclidean can further enhance predictive performance. In homophilic networks where message-passing is better suited, FPS-T shows competitive performance against the GCN baselines. This is expected, as FPS-T enjoys the same capacity as TokenGT to mimic any order-2 equivariant bases [28], which include local message-passing, through attention score computation.

## 6 Conclusion

We propose FPS-T, a natural generalization of the Transformer architecture towards the non-Euclidean domain. When combined with the graph tokenization technique of TokenGT [28], our model can embed graphs with less distortion and higher parameter efficiency than its Euclidean counterpart by operating on the product-stereographic model with learnable curvatures. We also show that our model outperforms existing hyperbolic and mixed-curvature message-passing GCN baselines on node classification via global attention that can capture long-range interactions. By linearizing the cost of self-attention through a kernelized approximation, FPS-T runs in cost linear to the number of nodes and edges, allowing practical use on large-scale networks. For future work, we plan to extend towards heterogeneous manifolds [21] with input-dependent sectional curvatures, as well as to optimize stereographic operations towards better stability and efficiency under machine precision. As we propose a foundational generalization of the Transformer framework, we do not expect any immediate negative societal impact from this work.
\begin{table} \begin{tabular}{l|c c c c|c c c c} \hline \hline Dataset & Texas & Cornell & Wisconsin & Actor & Airport & Citeseer & Pubmed & Cora \\ \(\mathcal{H}(G)\) & 0.11 & 0.13 & 0.20 & 0.22 & 0.72 & 0.74 & 0.80 & 0.81 \\ \hline MLP & 70.54\(\pm\)3.00 & 58.38\(\pm\)4.04 & 81.20\(\pm\)1.87 & 33.62\(\pm\)0.55 & 54.05\(\pm\)1.78 & 52.58\(\pm\)1.97 & 67.17\(\pm\)0.91 & 52.44\(\pm\)1.08 \\ GCN & 57.84\(\pm\)1.62 & 47.84\(\pm\)1.77 & 45.40\(\pm\)1.62 & 27.09\(\pm\)0.36 & 92.00\(\pm\)0.63 & 71.38\(\pm\)0.43 & 78.37\(\pm\)0.26 & 80.40\(\pm\)0.53 \\ GAT & 59.46\(\pm\)1.12 & 55.14\(\pm\)1.20 & 46.20\(\pm\)1.29 & 27.43\(\pm\)0.23 & 92.35\(\pm\)0.36 & 71.70\(\pm\)0.28 & 78.14\(\pm\)0.31 & 82.29\(\pm\)0.46 \\ SAGE & 68.38\(\pm\)3.54 & 70.54\(\pm\)2.01 & 78.40\(\pm\)0.52 & 36.87\(\pm\)0.50 & 93.21\(\pm\)0.57 & 70.58\(\pm\)0.42 & 77.31\(\pm\)0.59 & 78.88\(\pm\)0.87 \\ SGC & 57.57\(\pm\)2.96 & 52.97\(\pm\)2.87 & 46.40\(\pm\)2.01 & 27.14\(\pm\)0.46 & 90.48\(\pm\)1.01 & **72.11\(\pm\)0.38** & 75.11\(\pm\)1.27 & 79.68\(\pm\)0.65 \\ TokenGT & 88.65\(\pm\)2.06 & 71.62\(\pm\)2.13 & 83.00\(\pm\)0.65 & 36.59\(\pm\)0.39 & 95.09\(\pm\)0.59 & 71.23\(\pm\)0.51 & **78.93\(\pm\)0.27** & 81.42\(\pm\)0.79 \\ \hline HGCN & 54.59\(\pm\)3.93 & 55.68\(\pm\)1.80 & 55.60\(\pm\)2.53 & 28.89\(\pm\)0.16 & 92.47\(\pm\)0.63 & 69.92\(\pm\)0.61 & 75.67\(\pm\)0.99 & 80.00\(\pm\)0.85 \\ HGNN & 50.81\(\pm\)3.60 & 52.70\(\pm\)1.42 & 54.60\(\pm\)2.68 & 28.90\(\pm\)0.19 & 90.55\(\pm\)0.71 & 69.82\(\pm\)0.63 & 76.72\(\pm\)0.86 & 79.30\(\pm\)0.51 \\ HAT & 82.16\(\pm\)3.25 & 70.54\(\pm\)1.67 & 81.80\(\pm\)1.36 & 38.34\(\pm\)0.26 & 92.88\(\pm\)0.57 & 68.14\(\pm\)0.53 & 77.50\(\pm\)0.42 & 79.81\(\pm\)0.58 \\ \hline \(\kappa\)-GCN & 56.22\(\pm\)3.38 & 55.68\(\pm\)5.99 & 46.60\(\pm\)2.41 & 26.39\(\pm\)0.60 & 52.58\(\pm\)3.70 & 54.06\(\pm\)4.45 & 68.61\(\pm\)3.05 & 73.70\(\pm\)0.69 \\ \(\mathcal{Q}\)-GCN & 51.35\(\pm\)3.44 & 55.95\(\pm\)2.85 & 52.80\(\pm\)2.20 & 28.18\(\pm\)0.55 & 91.39\(\pm\)0.15 & 66.15\(\pm\)0.45 & 77.13\(\pm\)0.59 & 79.63\(\pm\)0.57 \\ \hline FPS-T & **89.19\(\pm\)2.37** & **72.16\(\pm\)2.96** & **83.60\(\pm\)1.14** & **39.61\(\pm\)0.54** & **96.01\(\pm\)0.85** & 70.03\(\pm\)0.71 & 78.52\(\pm\)0.58 & **82.32\(\pm\)0.70** \\ \hline \hline \end{tabular} \end{table} Table 1: Node classification results. We run each method under 10 different random initializations and report the average F1 scores alongside 95% confidence intervals.
2310.00387
Privacy-Preserving Distributed Market Mechanism for Active Distribution Networks
Amidst the worldwide efforts to decarbonize power networks, Local Electricity Markets (LEMs) in distribution networks are gaining importance due to the increased adoption of renewable energy sources and prosumers. Considering that LEMs involve data exchange among independent entities, privacy and cybersecurity are some of the main practical challenges in LEM design. This paper proposes a secure market protocol using innovations from distributed optimization and Secure MultiParty Computation (SMPC). The considered LEM is formulated as an uncertainty-aware joint market for energy and reserves with affine balancing policies. To achieve scalability and enable the use of SMPC, market clearing is solved using the Consensus ADMM algorithm. Subsequently, the data exchange among participants via ADMM iterations is protected using the Shamir secret-sharing scheme to ensure privacy. The market protocol is further reinforced by a secure and verifiable settlement process that uses SMPC and ElGamal commitments to verify market quantities and by a secure recovery scheme for missing network measurements. Finally, the feasibility and performance of the proposed LEM are evaluated on a 15-bus test network.
Matthias Franke, Ognjen Stanojev, Lesia Mitridati, Gabriela Hug
2023-09-30T14:11:18Z
http://arxiv.org/abs/2310.00387v1
# Privacy-Preserving Distributed Market Mechanism for Active Distribution Networks

###### Abstract

Amidst the worldwide efforts to decarbonize power networks, Local Electricity Markets (LEMs) in distribution networks are gaining importance due to the increased adoption of renewable energy sources and prosumers. Considering that LEMs involve data exchange among independent entities, privacy and cybersecurity are some of the main practical challenges in LEM design. This paper proposes a secure market protocol using innovations from distributed optimization and Secure Multiparty Computation (SMPC). The considered LEM is formulated as an uncertainty-aware joint market for energy and reserves with affine balancing policies. To achieve scalability and enable the use of SMPC, market clearing is solved using the Consensus ADMM algorithm. Subsequently, the data exchange among participants via ADMM iterations is protected using the Shamir secret-sharing scheme to ensure privacy. The market protocol is further reinforced by a secure and verifiable settlement process that uses SMPC and ElGamal commitments to verify market quantities and by a secure recovery scheme for missing network measurements. Finally, the feasibility and performance of the proposed LEM are evaluated on a 15-bus test network.

_Keywords:_ Consensus ADMM, Local Electricity Markets, Cyber Security, Secure Multiparty Computation

## I Introduction

With the goal to mitigate climate change, an increasing effort is made to decarbonize power networks. This effort primarily involves the increased adoption of Distributed Energy Resources (DERs) in distribution grids and empowering households to become _prosumers_ that can contribute to decarbonization [1]. Despite these advancements, current electricity markets still operate in a hierarchical, top-down manner, without adapting their market structures to the emergence of DERs [2]. Furthermore, technical challenges arise in the operation of distribution networks due to high bi-directional power flows and voltage fluctuations [3, 4]. These challenges suggest the development of Local Electricity Markets (LEMs) in which small-scale producers and consumers can transact electricity in a decentralized fashion while respecting the physical limits of the distribution network. Nevertheless, such a collaborative principle involves data exchange among independent entities, thereby raising privacy and cybersecurity concerns. We propose an uncertainty-aware LEM that is operated in a secure and distributed manner using innovations from optimization and theoretical cryptography.

There has been a wide range of research into innovating LEMs that promise to safely and efficiently operate future networks and markets with DERs [1]. The three main approaches for such LEMs are [2]: (i) pure Peer-to-Peer markets, whose fully decentralized approach offers customers high flexibility at the cost of lacking convergence and safety guarantees, (ii) community markets with an operator efficiently managing trading both locally and with the upper-level grid, which, however, suffer from computational scaling issues, and (iii) hybrid methods that use a bilevel approach to leverage the advantages of the former two. Of key interest for this paper are the community markets, which have been proven to successfully operate LEMs with DERs in the existing literature [5, 6].
However, these markets have encountered challenges due to the use of oversimplified network models [5], the absence of DER uncertainty management [2], and the lack of effective coordination with the upper-level grid [6]. Additionally, most works in this field focus on theoretical explorations of such markets, and there is thus a lack of research into how to provide cyber security and resilience to measurement failure for the actual implementation of LEMs. Efforts to overcome these gaps have been made, although they have not been consolidated yet. For instance, improved models of distribution networks together with techniques to capture the stochastic nature of DERs are available [7], such as the LinDistFlow formulation [8, 9] that achieves high accuracy through the utilization of linear and convex constraints. This formulation overcomes shortcomings of the DC power flow formulations for distribution networks [8] and is moreover well suited for adding chance constraints to capture the uncertainty of DERs within a market model [6]. Of particular interest to this paper is the usage of chance constraints to concurrently optimize day-ahead energy and reserve needs [5]. To overcome computational complexity and scalability issues, distributed optimization is a suitable approach for solving LEM formulations and further allows integrating the relevant cryptographic techniques. The work in [10] decomposes the collaborative optimization problem using Lagrangian relaxation and solves the master problem via Secure Multi-Party Computation (SMPC) with secret sharing protocols. However, it mainly serves as a proof of concept, with intentionally simple network and market models, and the authors themselves highlight the Lagrangian relaxation's inability to provide convergence guarantees. Such guarantees can be provided using the Consensus version of the Alternating Direction Method of Multipliers (ADMM) [11], which has already been used with the LinDistFlow grid model [8, 12]. The main cryptographic technique employed within this study is SMPC, which belongs to a family of cryptographic protocols that can execute arbitrary multiparty computations with information-theoretic security in a threshold model [13]. This characteristic renders SMPC well-suited for use in LEM design. Notably, it sidesteps the challenges that previous privacy-preserving optimization methods encountered, such as poor computational performance, degraded convergence guarantees, and weaker security notions like differential privacy [14, 15]. Furthermore, the applicability of SMPC extends to reinforcing the security of LEMs by protecting against network sensor failures due to potential interference of adversaries [16]. This paper proposes a fully privacy-preserving LEM framework based on distributed optimization and SMPC with secret sharing protocols. Building upon prior research [6], we develop an uncertainty-aware joint market for energy and reserves using the chance-constrained LinDistFlow formulation. The model is further extended by introducing batteries as local energy storage [17] and by capturing the tariff-switching behavior of the substation [18]. To overcome the computational limitations encountered in [10], we propose a distributed and privacy-preserving market clearing mechanism using an SMPC-secured Consensus ADMM algorithm, as described in Sec. II-D and Sec. III-D.
The secure version of the LEM requires mechanisms to preserve the security of the system between SMPC sessions, for which we use standard techniques called commitment schemes and Zero-Knowledge Proofs that can be applied without impinging on the overall privacy preservation [19]. In particular, we leverage SMPC and ElGamal commitments [20] to build a double verification scheme and a measurement recovery scheme (Sec. III-E), yielding a secure and verifiable settlement process (Sec. III-F). This setup allows the proposed market mechanism to achieve solutions similar to those of a central insecure solver, but with added privacy preservation, as shown by the results in Sec. IV.

## II Local Electricity Market Design

### _Preliminaries_

This paper studies a LEM in an active distribution grid. The distribution network is represented as an undirected and connected tree graph \(\mathcal{G}(\mathcal{N},\mathcal{E})\), where \(\mathcal{N}=\{0,1,\ldots,N\}\) is the set of nodes in the graph and \(\mathcal{E}\subset\mathcal{N}\times\mathcal{N}\) is the set of \(N\) edges. The substation node is indexed by 0 and represents the interface between the distribution network and the upper-level grid. Furthermore, we use \(\mathcal{N}^{+}=\mathcal{N}\setminus\{0\}\) to denote the set of non-substation nodes. For a node \(n\in\mathcal{N}\), its ancestor node is denoted by \(A_{n}\), while the set of its children nodes is denoted by \(\mathcal{C}_{n}\). The notation related to the physical quantities of the LinDistFlow network model is introduced in the following. For each node \(n\in\mathcal{N}^{+}\), let \(u_{n}\) denote the squared voltage magnitude at the node, and \(u_{n}^{\max}\) and \(u_{n}^{\min}\) the corresponding maximum and minimum voltage limits. The substation node is assumed to have a fixed predefined voltage magnitude \(u_{0}\). The active and reactive line flows to a node \(n\in\mathcal{N}^{+}\) from its ancestor node \(A_{n}\) are denoted by \(f_{n}^{P}\) and \(f_{n}^{Q}\), respectively. Furthermore, each such line is characterized by a resistance \(r_{n}\), a reactance \(x_{n}\), and a maximum apparent power limit \(S_{n}\). The participants in the considered LEM include: (i) consumers at all non-substation nodes \(i\in\mathcal{N}^{+}\), who each have known and inflexible active \(d_{i}^{P}\) and reactive \(d_{i}^{Q}\) demand profiles, (ii) prosumers \(r\in\mathcal{R}\subseteq\mathcal{N}^{+}\) who own DERs with a forecasted active power production \(h_{r}^{f}\), (iii) batteries \(m\in\mathcal{M}\subseteq\mathcal{N}^{+}\), characterized by their State of Charge (SoC) \(B_{m}\), and the corresponding maximum \(B_{m}^{\max}\) and minimum \(B_{m}^{\min}\) SoC limits, and (iv) the substation node, which is centrally operated to trade energy and procure reserves with the wholesale markets. Batteries and prosumers constitute flexible generators \(v\in\mathcal{V}=\mathcal{R}\cup\mathcal{M}\) with adjustable active power generation \(g_{v}^{P}\) between maximum \(P_{v}^{\max}\) and minimum \(P_{v}^{\min}\) limits. The cost of active power adjustment is characterized by a quadratic \(c_{v}^{q}\) and a linear \(c_{v}^{l}\) cost coefficient.
The substation node is the "infinite bus", with its generation modeled as the difference between the active \(l^{P}\) and reactive \(l^{Q}\) inflow and the active \(s^{P}\) and reactive \(s^{Q}\) outflow, to capture a tariff-switching scheme without the need for binary variables [18]: the system operator charges inflow to the LEM at the forecasted wholesale price plus a flat usage tariff, \(\Phi^{+}\), and pays for outflow at the wholesale price minus the tariff, \(\Phi^{-}\).

### _Uncertainty Modeling_

The DER generation of a prosumer \(r\in\mathcal{R}\) introduces uncertainty in the LEM and is thus modeled by

\[h_{r}=h_{r}^{f}+\omega_{r}, \tag{1}\]

where \(h_{r}\) is the stochastic DER generation and \(\omega_{r}\) is the random forecast error. The forecast error is modeled as an independent Gaussian random variable with zero mean \(\mathbb{E}[\omega_{r}]=0\) and variance \(\mathrm{Var}[\omega_{r}]=\sigma_{r}\), known only to the prosumer \(r\in\mathcal{R}\) itself. The total DER forecast error in the system can thus be calculated as \(\Delta=\sum_{r\in\mathcal{R}}\omega_{r}\), a zero-mean Gaussian random variable, while the vector of individual errors has the diagonal covariance \(\Sigma=\mathrm{diag}(\{\sigma_{r}\}_{r\in\mathcal{R}})\). The realized total forecast error creates a power imbalance in the system that requires a global response from the flexible assets. To this end, we equip each flexible generator \(v\in\mathcal{V}\) with a linear reserve policy [5], characterized by a participation factor \(\alpha_{v}\), such that its total generation is given by

\[\hat{g}_{v}^{P}=g_{v}^{P}-\alpha_{v}\Delta. \tag{2}\]

Given that \(\sum_{v\in\mathcal{V}}\alpha_{v}=1\), the flexible generators completely balance the observed overall active power deviation.
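For illustration, a small NumPy sketch of Eqs. (1)-(2): sampling the forecast errors and applying the affine balancing policies (function and variable names are ours; \(\sigma_{r}\) is treated as a variance, as in the text):

```python
import numpy as np

def realized_generation(g, alpha, sigma, rng):
    # g: scheduled setpoints g_v^P of the flexible generators;
    # alpha: participation factors with sum(alpha) == 1;
    # sigma: per-prosumer forecast-error variances Var[omega_r].
    omega = rng.normal(0.0, np.sqrt(sigma))   # omega_r ~ N(0, sigma_r), independent (Eq. 1)
    delta = omega.sum()                       # total system imbalance Delta
    g_hat = g - alpha * delta                 # affine balancing policy (Eq. 2)
    return g_hat, delta

# Since sum(alpha) == 1, sum(g_hat) == sum(g) - delta: the realized deviation
# is fully absorbed by the flexible generators.
```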
### _Market Clearing Formulation_

In this section, we present the chance-constrained market formulation. The variable set includes the previously defined network and adjustable generation quantities and is defined by

\[\Xi^{\mathrm{p}}=\{u_{i,t},f_{i,t}^{P},f_{i,t}^{Q}\}_{i\in\mathcal{N}^{+},t\in\mathcal{T}}\cup\{g_{v,t}^{P},\alpha_{v,t}\}_{v\in\mathcal{V},t\in\mathcal{T}}\cup\{l_{t}^{P},s_{t}^{P},l_{t}^{Q},s_{t}^{Q}\}_{t\in\mathcal{T}}\cup\{B_{m,t}\}_{m\in\mathcal{M},t\in\mathcal{T}}, \tag{3}\]

where \(\mathcal{T}\) is the set of considered time steps. The aim of the proposed local market is to minimize the expected cost of generation and energy procurement from the upper-level grid while respecting the network and generation constraints for all time steps \(t\in\mathcal{T}\):

\[\min_{\Xi^{\mathrm{p}}}\;\mathbb{E}\Big[\sum_{t\in\mathcal{T}}\sum_{v\in\mathcal{V}}\Big(c_{v}^{q}(\tilde{g}_{v,t}^{P})^{2}+c_{v}^{l}\tilde{g}_{v,t}^{P}\Big)+l_{t}^{P}\Phi_{t}^{+}-s_{t}^{P}\Phi_{t}^{-}\Big] \tag{4a}\]

s.t.

\[l_{t}^{P}-s_{t}^{P}=\sum_{j\in\mathcal{C}_{0}}f_{j,t}^{P} \tag{4b}\]
\[l_{t}^{Q}-s_{t}^{Q}=\sum_{j\in\mathcal{C}_{0}}f_{j,t}^{Q} \tag{4c}\]
\[l_{t}^{P}\geq 0,\;s_{t}^{P}\geq 0,\;l_{t}^{Q}\geq 0,\;s_{t}^{Q}\geq 0 \tag{4d}\]
\[f_{n,t}^{P}+\tilde{g}_{n,t}^{P}-d_{n,t}^{P}+h_{n,t}=\sum_{j\in\mathcal{C}_{n}}f_{j,t}^{P},\qquad\forall n\in\mathcal{N}^{+} \tag{4e}\]
\[f_{n,t}^{Q}-d_{n,t}^{Q}=\sum_{j\in\mathcal{C}_{n}}f_{j,t}^{Q},\qquad\forall n\in\mathcal{N}^{+} \tag{4f}\]
\[u_{n,t}=u_{A_{n},t}-2(r_{n}f_{n,t}^{P}+x_{n}f_{n,t}^{Q}),\qquad\forall n\in\mathcal{N}^{+} \tag{4g}\]
\[B_{m,t}=B_{m,t-1}-\tilde{g}_{m,t}^{P},\qquad\forall m\in\mathcal{M} \tag{4h}\]
\[\mathbb{P}[u_{n,t}\leq u_{n}^{\max}]\geq 1-\epsilon_{u},\qquad\forall n\in\mathcal{N}^{+} \tag{4i}\]
\[\mathbb{P}[u_{n}^{\min}\leq u_{n,t}]\geq 1-\epsilon_{u},\qquad\forall n\in\mathcal{N}^{+} \tag{4j}\]
\[\mathbb{P}[a_{n}^{1}f_{n,t}^{P}+a_{n}^{2}f_{n,t}^{Q}+a_{n}^{3}S_{n}\leq 0]\geq 1-\epsilon_{f},\qquad\forall n\in\mathcal{N}^{+} \tag{4k}\]
\[\mathbb{P}[\tilde{g}_{v,t}^{P}\leq P_{v}^{\max}]\geq 1-\epsilon_{g},\qquad\forall v\in\mathcal{V} \tag{4l}\]
\[\mathbb{P}[P_{v}^{\min}\leq\tilde{g}_{v,t}^{P}]\geq 1-\epsilon_{g},\qquad\forall v\in\mathcal{V} \tag{4m}\]
\[\mathbb{P}[B_{m,t}\leq B_{m}^{\max}]\geq 1-\epsilon_{b},\qquad\forall m\in\mathcal{M} \tag{4n}\]
\[\mathbb{P}[B_{m}^{\min}\leq B_{m,t}]\geq 1-\epsilon_{b},\qquad\forall m\in\mathcal{M} \tag{4o}\]
\[\sum_{v\in\mathcal{V}}\alpha_{v,t}=1,\quad 0\leq\alpha_{v,t}\leq 1,\qquad\forall v\in\mathcal{V}. \tag{4p}\]

Constraints related to the inflow/outflow at the substation node are given in (4b)-(4d). The LinDistFlow network model is established in (4e)-(4g), and the battery SoC model\({}^{1}\) in (4h). The chance constraints on bus voltages are enforced in (4i)-(4j), power generation limits in (4l)-(4m), battery SoC in (4n)-(4o), and the dodecagon linear approximations (defined by coefficients \(a_{n}^{1},a_{n}^{2},a_{n}^{3},\forall n\in\mathcal{N}^{+}\)) of the line flow constraints in (4k). The LEM operator specifies an error term for each type of chance constraint, representing the maximum acceptable probability of a constraint violation, namely, \(\epsilon_{g}\) for generation limits, \(\epsilon_{b}\) for battery limits, \(\epsilon_{u}\) for voltage limits, and \(\epsilon_{f}\) for flow limits. Finally, the bounds on the reserve participation factors are given in (4p).

Footnote 1: For the sake of simplicity, but without loss of generality, we assume perfect charging and discharging efficiencies.

### _Distributed Solution Method_

To obtain a tractable form of the optimization problem in (4), each individual linear chance constraint can be reformulated as a second-order cone constraint. This process is omitted for brevity, and we refer the reader to [6] for more details. Instead, we here focus on decomposing the centralized optimization problem into a distributed form using the scaled formulation of Consensus ADMM [11], as proposed in [12]. Considering that the objective function in (4a) is separable, let us introduce a cost function \(f_{n}(X_{n})\) related to each node \(n\in\mathcal{N}\), where \(X_{n}\) denotes the vector of all variables related to node \(n\). A subset of these variables, collected in \(X_{\mathbb{C}_{n}}=(u_{n},f_{n}^{P},f_{n}^{Q},\alpha_{n})\) and called the coupling variables, appears in the constraints of other nodes. The coupling variables have a global copy \(Z_{\mathbb{C}_{n}}\) in the Consensus ADMM algorithm that ensures their system-wide convergence.
The full algorithm at iteration \(k\) is given by

\[X_{\mathbb{C}_{n}}^{k+1}=\arg\min_{X_{n}}\;f_{n}(X_{n})+\frac{\rho_{0}}{2}\|X_{\mathbb{C}_{n}}-Z_{\mathbb{C}_{n}}^{k}+U_{\mathbb{C}_{n}}^{k}\|_{2}^{2}, \tag{5a}\]
\[Z_{\mathbb{C}_{n}}^{k+1}=\mathrm{AVG}(X^{k+1}), \tag{5b}\]
\[U_{\mathbb{C}_{n}}^{k+1}=U_{\mathbb{C}_{n}}^{k}+X_{\mathbb{C}_{n}}^{k+1}-Z_{\mathbb{C}_{n}}^{k+1}, \tag{5c}\]

where \(\rho_{0}\) is a positive scaling factor and \(U_{\mathbb{C}_{n}}\) are the scaled Lagrange multipliers. In (5b), an arithmetic mean of all relevant local values is computed to find the global values of the coupled variables. The employed convergence metrics are the \(\ell_{2}\)-norms of the primal and dual residuals of the local problems and the total power surplus in the system relative to the total demand.
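For concreteness, a minimal NumPy sketch of (5a)-(5c) in global-consensus form. In the market above, the average in (5b) runs only over the copies of each coupling variable held by neighboring nodes, whereas this sketch averages over all parties for brevity; `local_solve` stands in for each participant's private solver of (5a), and all names are ours.

```python
import numpy as np

def consensus_admm(local_solve, n_parties, dim, rho=1.0, iters=200, tol=1e-6):
    # local_solve(i, z, u) must return argmin_x f_i(x) + (rho/2)*||x - z + u||^2.
    X = np.zeros((n_parties, dim))
    Z = np.zeros(dim)
    U = np.zeros((n_parties, dim))
    for _ in range(iters):
        for i in range(n_parties):              # (5a): local updates, done privately
            X[i] = local_solve(i, Z, U[i])
        Z_old, Z = Z, X.mean(axis=0)            # (5b): the step secured via SMPC below
        U += X - Z                              # (5c): scaled dual updates
        r = np.linalg.norm(X - Z)                            # primal residual
        s = rho * np.sqrt(n_parties) * np.linalg.norm(Z - Z_old)  # dual residual
        if r < tol and s < tol:
            break
    return Z
```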
In particular, an adversary could simply return random values at each iteration and thereby prevent the ADMM from converging without violating any of the SMPC guarantees. Overcoming this issue would necessitate delving into verifiable computing aspects, which are out of the scope of this paper.

### _Shamir Secret-sharing Scheme_

The SMPC protocol used in this paper is based on the Shamir secret-sharing scheme (SSS) [10]. It provides threshold security based on the number of participants \(N\) and a defined threshold \(\Theta<N\), where a single, central adversary can corrupt up to \(\Theta\) participants without compromising the protocol's security. From a high-level perspective, the scheme involves transforming a secret \(s\) into a secure _shared_ domain, yielding \([s]\), then performing secure calculations on the shared values, and finally, recovering relevant results \(r\) by a reconstruction procedure \([r]\mapsto r\). It is worth noting that, due to the inner workings of Shamir secret sharing [10], calculations in an SMPC scheme based on it can only use addition and multiplication operations. As such, the secure market protocol that follows uses a variety of transformations and simplifications to reduce computational complexity as much as possible.

### _Secure Market Protocol Overview_

The overview of the proposed market protocol is presented in Fig. 1, with the secure blocks given in green and a dashed line separating the time of market clearing from the time of operation. Each day, market participants initialize the secure market clearing protocol by providing their desired internal parameters, such as battery limits, cost coefficients, generation limits, etc. The secure market clearing is then performed, as will be explained in Sec. III-D, which involves the distributed market formulation from Sec. II-D with the updating of global variables and the testing for convergence, both implemented securely with SMPC. Upon completion, the parties store the outcomes of the market for later usage. After the simulated or real market operation, the parties execute a secure measurement recovery routine (Sec. III-E), which ensures the protocol has all the relevant information it needs to proceed. Then, the secure settlement process (Sec. III-F) uses the stored market outcomes and recovered measurements to securely compute the financial balances of all parties. The protocol then ends for the day by having the parties update their internal parameters based on the financial and operational outcomes.

### _Secure Market Clearing_

The main usage of SMPC is to secure the distributed optimization process by replacing the central node that performs the coordination calculations (5b). Figure 2 shows how the ADMM update loop from (5) can be secured via SMPC, where white boxes are local operations and boxes shaded green are SMPC operations. Parties hereby securely share the new values of their local variables \(X_{\mathbb{C}_{n}}^{k+1}\), then calculate the shared value of the new global variables \([Z]_{\mathbb{C}}^{k+1}\), and finally, output their true values to the relevant parties for use in the local optimization \(Z_{\mathbb{C}_{n}}^{k+1}\). To improve performance, the division operation in the arithmetic mean calculation is done locally, meaning the SMPC instance only involves the addition of scaled variables.
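To make this secure-addition step concrete, the following is a minimal, self-contained sketch of Shamir secret sharing and share-wise addition over a toy prime field. It is illustrative only: the field size, the helper names, and the plain polynomial evaluation are assumptions for this example and do not reflect the actual MPyC-based implementation.

```python
# Toy sketch of Shamir secret sharing over a small prime field.
import random

P = 2_147_483_647  # a Mersenne prime serving as the field modulus (assumed)

def share(secret: int, n: int, theta: int) -> list[tuple[int, int]]:
    """Split `secret` into n shares; any theta+1 shares reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(theta)]
    # The share for party i is the random polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        secret = (secret + y_i * num * pow(den, -1, P)) % P
    return secret

# Secure addition is share-wise: each party adds its local shares, and only
# the sum is ever reconstructed -- the individual inputs stay hidden.
a_shares = share(42, n=5, theta=2)
b_shares = share(58, n=5, theta=2)
sum_shares = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a_shares, b_shares)]
assert reconstruct(sum_shares[:3]) == 100  # any theta+1 = 3 shares suffice
```

Because shares of a sum equal the sum of shares, the global averaging in (5b) reduces to one secure addition per coupled variable, followed by the local division mentioned above.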
The evaluation of convergence criteria is also done within the SMPC instance, which can become computationally expensive due to the non-linear complexity of secure inequality evaluations and Euclidean norm calculations. To address this issue, a stricter version of the residual criteria is used, namely the infinity norm instead of the commonly employed \(\ell_{2}\)-norm.

### _Measurement Recovery_

A major concern in digital markets, including the considered LEM, is the "Oracle Problem", where a cyber attack on the underlying measurement systems causes the ground truth and the market's understanding to diverge, causing strain on consumers and network operators [22]. This problem is exacerbated in distribution grids, where ensuring complete network observability would require the expensive deployment of PMUs or RTUs [23, 24]. The market protocol is thus backstopped with a deterministic and secure measurement recovery procedure to ensure that the financial settlement process can be executed even if a subset of nodes fails to report any measurements. The two main failure modes are nodes not reporting measurements and nodes reporting false measurements [16], with the latter being difficult to counteract if done at scale. This paper focuses on the former, with the use case being participants purposefully disconnecting their deployed sensors during operation. We assume the use of modified smart meters that can measure the active and reactive nodal injections and the active and reactive incoming line flow at each node. The measurements along the lines are not required since LinDistFlow assumes lossless lines. Under the assumption of altered smart meters and an always-reporting substation node, the following recovery procedure generates feasible values for all outstanding net injection measurements, even if these are not guaranteed to match the withheld actual values.

Fig. 1: Overview of the complete (secure) market protocol.

Fig. 2: Illustration of an ADMM iteration with SMPC.

Non-reporting nodes in the network are grouped into disjoint segments, called _islands_, via the UnionFind algorithm [25]. For each such island \(\mathcal{Y}\), SMPC is then used to calculate its total active power inflow \(\mathrm{P}_{\mathcal{Y}}^{\mathrm{in}}\) and outflow \(\mathrm{P}_{\mathcal{Y}}^{\mathrm{out}}\) using the measurements of honest ancestors and descendants. This determines the overall active power net injection of the island, which is then split equally among the members of a given island to ensure burden sharing, as follows

\[\mathrm{net}_{y}^{P}=\frac{\mathrm{P}_{\mathcal{Y}}^{\mathrm{out}}-\mathrm{P}_{\mathcal{Y}}^{\mathrm{in}}}{|\mathcal{Y}|},\qquad\forall y\in\mathcal{Y}, \tag{8}\]

where \(|\mathcal{Y}|\) denotes the cardinality of the set \(\mathcal{Y}\). Note that the resulting value may not correspond to the withheld value for each \(y\in\mathcal{Y}\). Any such deviations are balanced via the balancing mechanism at the time of operation and factor into the settlement process.
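As a concrete illustration of the island-based recovery in (8), the minimal sketch below groups non-reporting nodes into islands with a union-find structure and splits each island's measured net injection equally among its members. The node identifiers, edge list, and boundary-flow dictionaries are hypothetical, and the sums performed in the clear here would run inside the SMPC instance in the actual protocol.

```python
def find(parent, x):
    """Union-find lookup with path halving."""
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def recover_net_injections(non_reporting, edges, p_in, p_out):
    """Group adjacent non-reporting nodes into islands and split each
    island's measured net injection equally among its members, as in (8)."""
    parent = {n: n for n in non_reporting}
    for a, b in edges:                      # merge neighbouring silent nodes
        if a in parent and b in parent:
            parent[find(parent, a)] = find(parent, b)
    islands = {}
    for n in non_reporting:
        islands.setdefault(find(parent, n), []).append(n)
    net = {}
    for members in islands.values():
        # Boundary in/outflows come from honest ancestors and descendants.
        p_island_in = sum(p_in[m] for m in members)
        p_island_out = sum(p_out[m] for m in members)
        for m in members:                   # equal burden sharing
            net[m] = (p_island_out - p_island_in) / len(members)
    return net

# Hypothetical example: nodes 4 and 5 stop reporting and form one island.
silent = [4, 5]
lines = [(3, 4), (4, 5), (5, 6)]
inflow = {4: 1.2, 5: 0.0}    # kW entering the island via node 4's boundary
outflow = {4: 0.0, 5: 0.7}   # kW leaving the island via node 5's boundary
print(recover_net_injections(silent, lines, inflow, outflow))
# -> {4: -0.25, 5: -0.25}
```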
### _Secure Settlement Process_

The calculation of the financial payoff \(\mathcal{B}_{n}\) for a party \(n\) after the market clearing is performed employing the same SMPC protocol used for ADMM coordination. Since the prices used for calculating \(\mathcal{B}_{n}\) are dual variables of constraints, they can also be calculated globally as Lagrange multipliers via SMPC, as explained in [26]. For the final financial balances, the settlement process then factors in the results of the DSO imbalance market (see Section II-E) and the measurement recovery process. The secure market protocol introduces a break in time between market clearing and final settlement, requiring each node to re-input its own balance \(\mathcal{B}_{n}\) into the SMPC instance computing the final settlement. A Double Verification Scheme (DVS) is therefore proposed to allow secure verification of these input balances by an honest majority of parties, including actors not participating in the protocol, such as the market operator. The DVS combines storing and securely comparing secret-shared balances with the use of cryptographic commitments to verify the self-claimed balances, and it is inspired by publicly verifiable secret-sharing schemes [27]. Specifically, parties re-share their post-market-clearing financial balances, which are securely compared using SMPC to the shared values the other parties stored after market clearing. In the second phase, parties use the perfectly binding ElGamal commitment scheme to "commit" to their balances after market clearing; if needed, they can then "open" the commitments to prove that they used the correct value without it actually being revealed [20]. Thus, DVS Phase 1 provides threshold security via SMPC, and DVS Phase 2 provides information-theoretic security via ElGamal commitments. The DVS thus protects against manipulations by corrupted parties while also providing cryptographic proof of the calculated final financial balances. The secure settlement can then use SMPC to calculate the imbalances between the scheduled and actual network values, both input securely, to find \(\mathcal{I}_{n}\) for each party, and then output the final financial balances of all parties.

## IV Results

### _Case Study Setup_

The proposed secure market clearing protocol is evaluated on a 15-bus test network depicted in Fig. 3. This synthetic case study is constructed using simulated data from various real-world datasets, as described in the following. The network model incorporates nodal demand and solar power generation profiles from an English dataset [28], wind power generation profiles from an Australian dataset [29], and battery sizing based on the Tesla Powerwall [30]. Note that in this specific case study, all prosumers have both a DER and a battery, with their flexible generation stemming entirely from the battery. The power prices are obtained from various sources, including the suggested substation tariff [2], wholesale day-ahead prices [31], and regulation market prices [32] from the Danish TSO. Finally, the cost parameters and standard deviations for the prosumers' generation are based on the work in [6], and the battery parameters are derived from [33]. The Python package MPyC [34] was utilized for implementing the SMPC protocol and additionally provided the elliptic curve cryptography necessary for the ElGamal commitment scheme. For the local optimization, the CVXPY optimization library was used, with the ECOS solver for second-order cone programs and the OSQP solver for quadratic programs. CVXPY also natively supports ADMM. The implementation is done in Python, and the code was executed on a laptop with a 6-core processor and 16 GB of RAM.
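For the local optimization stage just mentioned, the x-update (5a) can be written in a few lines of CVXPY. The sketch below uses a placeholder linear cost and a trivial feasible set standing in for node n's actual constraints from (4); the coefficient values and the four-variable coupling block are assumptions for illustration.

```python
# Minimal sketch of the local x-update (5a) in CVXPY. The cost, variable
# dimension, and constraint set are placeholders for node n's actual model.
import cvxpy as cp
import numpy as np

dim = 4                       # (u_n, f_n^P, f_n^Q, alpha_n) coupling block
rho = 1.0                     # ADMM penalty rho_0 (assumed value)
c_n = np.array([0.1, 2.0, 0.0, 0.5])   # illustrative linear cost coefficients

x = cp.Variable(dim)
z = cp.Parameter(dim)         # global copy Z_{C_n}^k received each iteration
u = cp.Parameter(dim)         # scaled dual U_{C_n}^k

objective = cp.Minimize(c_n @ x + (rho / 2) * cp.sum_squares(x - z + u))
problem = cp.Problem(objective, [x >= 0])   # stand-in for the local constraints

z.value = np.zeros(dim)
u.value = np.zeros(dim)
problem.solve(solver=cp.ECOS)                # SOC-capable solver, as in the paper
x_next = x.value                             # shared (securely) for the Z-update
```

Using `Parameter` objects for \(Z\) and \(U\) lets the same compiled problem be re-solved at every ADMM iteration with updated values, which keeps the per-iteration cost of the local stage low.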
The results section centers on a comparative analysis of the performance of the following four local market solvers:

* **C-1**: An insecure central solver solving the market problem in (4), with all chance constraints reformulated as second-order cone constraints [6].
* **N-3**: An insecure distributed solver solving the reformulated chance-constrained market problem (4).
* **S-2**: A secure distributed solver solving the reformulated chance-constrained market problem (4) with deterministic voltage magnitude and line flow constraints. Security is imposed using the protocols described in Sec. III.
* **S-3**: A secure distributed solver solving the complete reformulated chance-constrained market problem (4), making it the secure version of N-3.

Fig. 3: A synthetic 15-bus test system, with PV generation (in yellow), batteries (in blue), and wind generation (in white).

### _Solver Comparison_

The level of security offered by these solvers depends on the extent to which they disclose the private data of the parties during operation. The central solver (C-1), for instance, discloses all parties' information solely to a central computation node. If this central point is compromised, it could potentially threaten the input privacy and the correctness of the entire market protocol. On the other hand, in the distributed solvers utilizing ADMM, nodes share certain data only with their immediate neighbors (N-3). Furthermore, when secured with SMPC, widespread data leakage can only occur if a majority of parties are compromised (S-2 and S-3). Table I compares the four solvers in terms of the time and global iterations required for the algorithm to converge, as well as the resulting relative accuracy. The latter refers to the total active power residual in the LEM relative to the fixed total demand and gauges the feasibility of the solution. All considered solvers converge with similar levels of relative accuracy, but the use of both the decomposition and SMPC extends the time required for convergence. Notably, N-3 and S-3 require a similar number of ADMM iterations, indicating that the calculations in SMPC work adequately. However, comparing S-2 and S-3 shows that full chance constraining doubled the ADMM iterations and tripled the time to convergence. This points to a trade-off between model complexity and computational performance. Overall, the results indicate that the secure market protocol via solvers S-2 and S-3 is capable of clearing a complex market with results that are slightly worse than, but comparable to, those of the centralized solver.

### _Convergence of Secure Market Protocols_

As a result of the use of ADMM, all distributed solvers in the paper are guaranteed to eventually converge [26]. However, as the convergence of the voltage magnitude residuals for the S-2 solver in Fig. 4 highlights, there are some notable dynamics at play. The convergence of ADMM typically exhibits a damped oscillatory behavior [11], as is also evident in the given figure, persisting until approximately iteration 250. Around this iteration, the residuals start to briefly increase again due to the inter-dependency of voltage magnitudes and power flows in the LinDistFlow model. Their constant trade-off (as well as the ripple effect across the nodes) implies that the solver can locally diverge from the expected trends but will eventually recover as the ADMM iterations progress. Interestingly, the residual oscillations are of higher magnitudes at nodes closer to the substation.
The observed convergence pattern highlights the possibility for improvement, e.g., through the implementation of adaptive penalty terms or over-relaxation techniques [26], which are, however, beyond the scope of this paper.

Fig. 4: Voltage magnitude residuals across all nodes when using the S-2 solver.

### _Market Clearing Outcomes_

Figure 5 showcases the different energy balance components over the entire network. It can be observed that the day-ahead energy dispatch of the network is driven by prosumers who use batteries to shift the DER production into the high-demand morning and evening hours. Similar behavior has been seen in previous research [35]. Despite the slight changes in formulations and solution methods, all solvers had very similar energy schedules and prices to those of the presented S-2 solver, indicating that the secure market protocol is able to clear the market successfully. The allocation of the flexibility participation factors described in (2) by the different solvers is observed to fall into one of the following two arrangements, which is then maintained for the entire day. Specifically, all fully chance-constrained solvers assigned the substation the entire flexibility provision, while the S-2 solver, which does not enforce chance constraints on voltages and line flows, splits the flexibility provision equally amongst all nodes with flexible generation. This resulted in the flexibility price being constant throughout the day.

Fig. 5: Global energy balance over a day for the S-2 solver.

### _Financial Settlement Results_

The above-discussed scheduling outcomes are reflected in the post-market-clearing financial balances, with Table II comparing the balances of four different LEM participants between solvers N-3, S-2, and S-3. Due to the large variability found in flexibility prices across solvers, the table focuses on the day-ahead energy results. As explained in Sec. III-F, in a distributed setting, the market prices can either originate from the dual variables in the local optimization problems that the parties then share with the LEM (Duals) or from global Lagrange multipliers calculated in SMPC (Sec-Bal). To evaluate the impact of using SMPC on these critical calculations, both secure solvers are considered with both price origins and compared to the insecure, distributed solver N-3. The nodes selected for evaluation are the substation node 0, the PV-prosumer node 3, the wind-prosumer node 7, and the pure load at node 15. Across solvers and price origins, there are only minor differences arising from the usage of SMPC, highlighting both SMPC's viability in the LEM and the overall scheme's success at secure market operation.

## V Conclusion

In this paper, we presented a Local Electricity Market framework in a distribution network with uncertain distributed energy resources. To preserve the input privacy of the market participants, we solved the market problem using an ADMM-based distributed optimization with data exchanges protected by leveraging secure multi-party computation protocols. Amongst the considered secure solvers, S-2 stands out as the recommended choice. It avoids the single point of failure of C-1, has security guarantees via SMPC that N-3 lacks, and achieves comparable results in fewer iterations and less time than S-3. Note that since the secure market protocol is highly customizable, e.g., in its convergence thresholds, chance constraint bounds, etc., the choice of the preferred solver may vary.
Additionally, the protocol may benefit from increased parallelization or the use of better hardware, which could make S-3 more suitable.

## VI Acknowledgement

This research was supported by NCCR Automation, a National Centre of Competence in Research, funded by the Swiss National Science Foundation (grant number 51NF40_180545).
2309.08860
DenseTact-Mini: An Optical Tactile Sensor for Grasping Multi-Scale Objects From Flat Surfaces
Dexterous manipulation, especially of small daily objects, continues to pose complex challenges in robotics. This paper introduces the DenseTact-Mini, an optical tactile sensor with a soft, rounded, smooth gel surface and compact design equipped with a synthetic fingernail. We propose three distinct grasping strategies: tap grasping using adhesion forces such as electrostatic and van der Waals, fingernail grasping leveraging rolling/sliding contact between the object and fingernail, and fingertip grasping with two soft fingertips. Through comprehensive evaluations, the DenseTact-Mini demonstrates a lifting success rate exceeding 90.2% when grasping various objects, spanning items from 1mm basil seeds and small paperclips to items nearly 15mm. This work demonstrates the potential of soft optical tactile sensors for dexterous manipulation and grasping.
Won Kyung Do, Ankush Kundan Dhawan, Mathilda Kitzmann, Monroe Kennedy III
2023-09-16T03:43:10Z
http://arxiv.org/abs/2309.08860v1
# DenseTact-Mini: An Optical Tactile Sensor for Grasping Multi-Scale Objects From Flat Surfaces

###### Abstract

Dexterous manipulation, especially of small daily objects, continues to pose complex challenges in robotics. This paper introduces the DenseTact-Mini, an optical tactile sensor with a soft, rounded, smooth gel surface and compact design equipped with a synthetic fingernail. We propose three distinct grasping strategies: tap grasping using adhesion forces such as electrostatic and van der Waals, fingernail grasping leveraging rolling/sliding contact between the object and fingernail, and fingertip grasping with two soft fingertips. Through comprehensive evaluations, the DenseTact-Mini demonstrates a lifting success rate exceeding 90.2% when grasping various objects, spanning items from 1 mm basil seeds and small paperclips to items nearly 15 mm. This work demonstrates the potential of soft optical tactile sensors for dexterous manipulation and grasping.

## I Introduction

To enable seamless robot-human collaborations within shared environments, dexterous manipulation is critical. While humans effortlessly grasp and manipulate objects of various shapes and sizes, robots struggle with these tasks, especially when it comes to smaller items. Research has sought to address these challenges, presenting solutions ranging from innovative manipulation strategies and harnessing tactile feedback to pioneering new gripper designs specifically for handling small objects. Tactile sensing through vision-based approaches is a promising avenue, offering rich contact information and the potential for enhanced dexterous manipulation. Yet, certain nuances, like the hardness of the contact surface and the synergy between the sensor and the gripper's shape, have been somewhat understudied. Most research has largely emphasized how tactile sensors 'detect' objects, particularly small ones. However, how these sensors effectively 'grasp' everyday minuscule items has not been thoroughly explored. Conventional optical tactile sensors have rigid gel builds, limiting sensing capabilities and blocking diverse manipulation strategies that are possible with softer materials. Inspired by the sensory fingertip and the rigid nails of human fingers, we propose a new optical tactile sensor with a soft, rounded gel surface and a fingernail design to facilitate a broader range of grasping strategies. Our paper's contributions include: 1) a novel, compact tactile sensor that exhibits an ultra-soft, rounded gel surface, making it adept at grasping especially small objects, 2) the strategic integration of a fingernail design on the DenseTact-Mini, enhancing its capability to handle thin, small objects, and 3) an exhaustive exploration and evaluation of various grasping strategies for picking up objects of different dimensions from a flat surface. The remainder of this paper is organized as follows: Section II reviews related works, Section III describes the fabrication and design process of the DenseTact-Mini, Section IV proposes the grasping methodologies for various object sizes, Section V evaluates each grasping strategy, and Section VI discusses the conclusions and future work.

## II Related Works

Since dexterity similar to that of human fingers is often required for precise manipulation tasks, robotic grippers of various operating principles have been explored. Soft-robotic hand-like grippers have been proposed to pick up objects of various sizes, largely depending on the size and dimension of the gripper design [1, 2].
Some gripper designs lean on van der Waals forces for specifically picking up micro-scale objects [3], or maximize them in a gecko-inspired fashion [4]. Such grippers are capable of gripping either tennis-ball-sized or micro-sized objects, but fail to grasp smaller and flatter everyday objects.

Fig. 1: **DenseTact-Mini.** Grasping various objects with DenseTact-Mini using tap, fingernail, and fingertip strategies.

Manipulation of smaller objects has been extensively explored from multiple perspectives. Specialized grippers that precisely handle small, flat, and thin objects include grippers with diverse grasping modes [5], digging grippers suited for cluttered environments [6], and grippers with retractable fingernails [7]. Concurrently, numerous strategies have been introduced for grasping objects on flat surfaces with two fingers. Many of these strategies are tailored to specialized grippers, which can limit the generalizability of the grasping task [8, 9]. Furthermore, research focusing on general grippers has often concentrated on grasping larger, flat objects [10, 11, 12]. Tactile sensors, notably vision-based ones, have been studied for enhancing dexterous manipulation for their ability to provide rich contact information [13, 14, 15, 16, 17], useful for in-hand manipulation, classification, or tactile exploration [18, 19, 20, 21, 22]. However, no existing gripper has the combined ability to grasp and sense small and flat daily objects in a generalized fashion. This presents the need for exploring various grasping strategies for everyday objects using a high-resolution optical tactile sensor adapted for enhanced generalized manipulation via an attachable fingernail.

## III DenseTact-Mini

Tactile sensors, especially optical variants, are useful for robot-environment interactions during tasks such as object manipulation. However, grasping small objects remains difficult due to integration challenges and the curse of dimensionality of the contact situation. To tackle these challenges, we present a tactile sensor with three specialized grasping strategies. Our sensor has a compact 24mm size with a 3D contoured surface, fitting various high-degrees-of-freedom (DOF) grippers. It is outfitted with a fingernail component for grasping small, flat objects, paired with a soft gel segment for everyday items. Additionally, it features a 60 Hz camera module and a fisheye lens to address dynamic contact and grasping situations.

### _Clear gel with camera lens for tactile sensing_

To proficiently grasp objects, especially those in the range of 1-2mm, it is imperative that the camera monitors gel deformations upon contact. This means that an object about 1-2mm in size can be observed in around 50 pixels in the camera frame. Furthermore, the gel should be highly curved to enable various grasping strategies from a single sensor. Figure 2 presents the exploded view of the DenseTact-Mini. The design is similar to those presented in [13, 14]. The gel's high curvature is generated from a 30mm diameter spherical mold. A fisheye lens with a 222\({}^{\circ}\) field of view sits 7.6mm below the gel's top surface. The gel hardness is 16 Shore A, and the P-565 silicone base and activator are combined at a \(10:1\) ratio. Even though the gel is harder than those of previous DenseTact sensors, it is still softer than that of other optical tactile sensors such as Gelsight or Digit [15, 23], enabling more effective grasping by maximizing the contact surface between the object and the sensor.
STL files detailing both the mold for the gel and its design template are accessible via the project website. To efficiently grasp small objects and shield the gel from external light, a reflective layer was added after the gel cured. Figure 3 showcases the various gel types. The reflective coating was created by blending metallic ink with Psycho Paint™, thinned using NOVOCS™ solvent. The final weight ratio for the coating is Metallic ink : Base A : Base B : NOVOCS solvent \(=19:50:50:100\). After air-spraying the compounded silicone onto the gel, it was cured for 24 hours. Our chosen gel, depicted in Figure 3 (a), maximizes van der Waals forces between the gel and target objects, explained further in Section IV-A.

Fig. 2: **Exploded view of DenseTact-Mini with fingernail.**

Fig. 3: **Gel for increasing van der Waals force.** (a) shows the three different types of gel, and (b)-(d) show the gel's deformation while tap grasping an M2 nut.

### _Attachable fingernail for enhanced contact manipulation_

Most tactile sensors, when used as the fingertip of a gripper or as the gripper itself (provided they lack a sharp edge), display limited manipulation capabilities due to their sensor design. For instance, a flat sensor struggles to effectively grasp thin, small objects [15]. In contrast, round-shaped tactile sensors often require specialized or custom grippers for manipulation [19, 24, 25]. However, even these grippers find it challenging to grasp small, flat objects
When the board is powered with 5V, 79 mA are drawn by the circuit, giving a luminous intensity of 200 mcd for each of the LEDs using the calculated series resistances. A capacitor was added between power and ground to mitigate voltage-step transients. The schematic and pcb files are available on the project website. To optimize LED light dispersion on the gel's reflective surface, we added a 1mm-thick translucent board above the PCB (see Fig. 2). This board, covered with white masking tape, has clear sections 50\({}^{\circ}\) from each LED's position. With LEDs at 0\({}^{\circ}\), 120\({}^{\circ}\), and 240\({}^{\circ}\), clearings were at 170\({}^{\circ}\)-190\({}^{\circ}\), 290\({}^{\circ}\)-310\({}^{\circ}\), and 50\({}^{\circ}\)-70\({}^{\circ}\). The final LED setup is in Fig. 4(b). ### _Sensor assembly_ After fabricating each component, the sensor was assembled by sequentially stacking each module into the sensor's case. Given the modular design of each component, parts can be effortlessly replaced. An IMX219 Camera module's input is processed using an NVIDIA Jetson Orin. This camera works at speeds up to 60Hz, ample for executing contact manipulation tasks in real-time. Moreover, the Jetson module facilitates standalone processing of the sensor's camera feed via a ROS system. The dimensions of the DenseTact-Mini are \(24mm\times 26mm\times 24mm\), excluding the fingernail, where the 26mm width corresponds to the direction from which the camera cable extends. With the fingernail attached, the dimensions expand to \(25.6mm\times 32.6mm\times 29.3mm\). The sensor's weight, including the fingernail, is 15.35g. The DenseTact-Mini is priced under $30, with the majority of the cost stemming from the camera ($25.99), relative to the Gel component ($0.231), 3D printed parts ($2), and PCBA ($1.13). ## IV Grasping strategy of various sizes of an object using DenseTact-Mini Three different strategies for grasping small objects are proposed using the DenseTact-Mini. The modelled view of each strategy is depicted in Fig. 1: tap grasping, fingernail grasping, and fingertip grasping. The tap grasping strategy grasps small, lightweight objects (1mm - 3mm) by tapping the object with the DenseTact-Mini. Fingernail grasping refers to grasping small, flat objects from flat surfaces by sliding the object between two DenseTact-Minis, one with and one without a fingernail. Any flat object with a thin profile can be grasped using this strategy. Finally, the fingertip grasping strategy grasps larger objects with two DenseTact-Minis both without fingernails. \begin{table} \begin{tabular}{|c|c|c|c|} \cline{2-4} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**Gel and Lift Types**} \\ \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{**Lift / Short lift / No lift (Success Rate(\%))**} \\ \hline **Object** & **Gloss Gel** & **Mate/Gloss Gel** & **Mate Gel** \\ \hline Basil Seed & 41/9/0 (100) & 44/6/0 (100) & 48/1/1 (98) \\ \hline M1.6 Nut & 46/40 (100) & 18/29/3 (94) & 1/7/42 (16) \\ \hline M2 Nut & 46/4/0 (100) & 3/37/10 (80) & 0/1/49 (2) \\ \hline \end{tabular} \end{table} TABLE I: **Reflective surface evaluation for adhesion force.** The glossy gel showed the best sustained lift (\(>3s\)) with minimal short lift (\(\leq 3s\)) and no lift instances. Fig. 4: **DenseTact-Mini Design**: (a) shows the DenseTact-Mini with PLA fingernail, (b) represents the inner view of the sensor, (c) the fingernail is made of TPU, and (d) shows an image taken from DenseTact-Mini with fingernail. 
### _Sensor assembly_

After fabricating each component, the sensor was assembled by sequentially stacking each module into the sensor's case. Given the modular design of each component, parts can be effortlessly replaced. An IMX219 camera module's input is processed using an NVIDIA Jetson Orin. This camera works at speeds up to 60Hz, ample for executing contact manipulation tasks in real time. Moreover, the Jetson module facilitates standalone processing of the sensor's camera feed via a ROS system. The dimensions of the DenseTact-Mini are \(24mm\times 26mm\times 24mm\), excluding the fingernail, where the 26mm width corresponds to the direction from which the camera cable extends. With the fingernail attached, the dimensions expand to \(25.6mm\times 32.6mm\times 29.3mm\). The sensor's weight, including the fingernail, is 15.35g. The DenseTact-Mini is priced under $30, with the majority of the cost stemming from the camera ($25.99), compared to the gel component ($0.231), 3D-printed parts ($2), and the PCBA ($1.13).

## IV Grasping strategy of various sizes of an object using DenseTact-Mini

Three different strategies for grasping small objects are proposed using the DenseTact-Mini. A modeled view of each strategy is depicted in Fig. 1: tap grasping, fingernail grasping, and fingertip grasping. The tap grasping strategy grasps small, lightweight objects (1mm-3mm) by tapping the object with the DenseTact-Mini. Fingernail grasping refers to grasping small, flat objects from flat surfaces by sliding the object between two DenseTact-Minis, one with and one without a fingernail. Any flat object with a thin profile can be grasped using this strategy. Finally, the fingertip grasping strategy grasps larger objects with two DenseTact-Minis, both without fingernails.

### _Grasping millimeter-scale objects via tap grasping_

Grasping objects of 1-3mm in size using a traditional gripper is challenging due to the gripper's size and the non-negligible adhesion force between the gripper and the object. With the DenseTact-Mini's clear gel surface, we can harness the surface interaction forces for a tap grasping strategy. As discussed in [26, 27, 28], the adhesion force \(F_{adh}\) prevalent at small object scales can be represented as:

\[F_{adh}=F_{e}+F_{vdW}+F_{st}\simeq\frac{\pi\epsilon_{0}R_{o}\sigma^{2}}{d}+\frac{AR_{o}R_{dt}}{6(R_{o}+R_{dt})d^{2}} \tag{1}\]

\[\Sigma F=F_{adh}-mg \tag{2}\]

where \(F_{e}\), \(F_{vdW}\), and \(F_{st}\) refer to the electrostatic force, van der Waals force, and surface tension force, respectively. In our dry experiments, \(F_{st}\simeq 0\). The variables \(d,R_{o},R_{dt},\sigma,\epsilon_{0}\), and \(A\) represent the distance between the object and the sensor, the radii of the object and sensor, the charge density, the electric constant, and the contact area. For flat objects, a larger \(R_{o}\) amplifies \(F_{vdW}\). The adhesion force allows for grasping when \(\Sigma F\) is positive. To enhance the adhesion force, we pressed the object against the gel to increase \(A\) and minimize \(d\) upon contact. The electrostatic aspect can be modified by altering the material of the planar surface, as \(\sigma\) depends on the object and ground material properties. To support our assertions, three gel types were created, as shown in Fig. 3. The first gel, shown in Fig. 3 (a), is made of NOVOCS Gloss and has a transparent surface. To make the second gel, a silicone base mixed with NOVOCS Gloss was initially applied, followed by a silicone base with NOVOCS Matte. The applied period ratio between these two materials is \(\text{Gloss}:\text{Matte}=6:4\). The third gel is made of NOVOCS Matte. The first gel provides the clearest surface, maximizing the contact area during tapping, followed by the second and third gels. We tested adhesion using the tap grasping method on basil seeds, M1.6 nuts, and M2 nuts. The results were grouped into three categories: no lift, short lift (under 3 seconds), and lift (over 3 seconds), in more than 50 trials each. The outcomes are reported in Table I. The clear gel had the highest lifting success rate due to its optimal contact area. While the matte/gloss and matte sensors had notable success with the basil seed, their performance dropped with the nuts. This reduction is attributed to decreased van der Waals forces, since the nuts have a smaller contact-area-to-weight ratio. The basil seed, a non-metallic object, showed dominant electrostatic forces, making lifting easier even without direct contact. This electrostatic effect was amplified when contact was made on surfaces like acrylic, as compared to wood and paper. The metallic nuts displayed smaller electrostatic forces because they are highly conductive. Debris can disturb adhesion; therefore, periodic cleaning was necessary. These findings aided the decision to use the clear gel in the final sensor design.

| Object | Gloss Gel | Matte/Gloss Gel | Matte Gel |
| --- | --- | --- | --- |
| Basil Seed | 41/9/0 (100) | 44/6/0 (100) | 48/1/1 (98) |
| M1.6 Nut | 46/4/0 (100) | 18/29/3 (94) | 1/7/42 (16) |
| M2 Nut | 46/4/0 (100) | 3/37/10 (80) | 0/1/49 (2) |

TABLE I: **Reflective surface evaluation for adhesion force.** Entries report lift / short lift / no lift counts, with the success rate (%) in parentheses. The glossy gel showed the best sustained lift (\(>3s\)) with minimal short lift (\(\leq 3s\)) and no lift instances.

### _Grasping thin, small objects via fingernail grasping_

The fingernail design of the DenseTact-Mini facilitates grasping thin, small objects. Two strategies are proposed for grasping these objects: 1) a rolling contact grasping strategy for objects with sharp edges, and 2) a sliding grasping strategy for objects with rounded edges. Both strategies employ two DenseTact-Mini units in a two-finger grasping configuration.
The first finger uses the DenseTact-Mini with the attached fingernail at its fingertip and dynamically moves when grasping. The second finger utilizes the DenseTact-Mini without the fingernail, exposing just the bare gel, and remains stationary throughout the grasping motion.

Fig. 5: **Rolling contact grasping strategy.** The fingernail makes rolling contact with the object via a scooping motion.

Fig. 6: **Sliding contact strategy.** When the fingernail's radius is smaller than the object's, sufficient torque is produced for grasping.

#### IV-B1 Rolling contact grasping strategy

Grasping objects with sharp edges using a fingernail results in a different motion compared to grasping objects with rounded edges. The fingernail cannot slide beneath the bottom edge of the object, leading to a rolling contact. Fig. 5 illustrates the schematic of grasping objects with sharp corners or edges. The bottom-left image of Fig. 5 presents the free body diagram of the object with a sharp edge. The gel part of the DenseTact-Mini contacts the object at point \(A\). The object contacts the ground at point \(B\), and the fingernail contacts the object at point \(C\). Point \(C\) denotes the edge that first makes contact between the fingernail and the object, allowing the free-body diagram to be represented in a 2D plane. Although the sensor remains stationary during contact, point \(A\) shifts as it makes additional contact due to the gel's hyperelastic properties. Moreover, point \(A\) has a higher friction coefficient, \(\mu_{g,o}\), since the object is in contact with the silicone, compared to the friction coefficient between the floor surface and the object, \(\mu_{s,o}\). Assuming the floor surface is an acrylic board, \(\mu_{s,o}\) is similar to the friction coefficient between the fingernail and the object, \(\mu_{f,o}\). Given that \(\mu_{g,o}\gg\mu_{s,o},\mu_{f,o}\), the object rotates using point \(A\) as its pivot. From the free-body diagram, the equations of translational motion, rotational motion, and the force exerted on point \(A\) are

\[F_{s,y}=F_{s,x}\tan(\theta),\quad\theta=f(p_{obj},p_{gel},C_{fem}) \tag{3}\]

\[\Sigma F_{x}=F_{s,x}+R_{x}-F_{f,x}=0 \tag{4}\]

\[\Sigma\tau=-\frac{l}{2}mg+hR_{x}-(h-d)F_{f,x}+lF_{f,y} \tag{5}\]

where \(l,\,h,\,d\) denote the length and height of the object, and the height of the contact between the object and the fingernail tip, respectively. The angle \(\theta\), analytically defined in Equation 3, is a nonlinear function influenced by \(p_{obj},p_{gel},C_{fem}\), where \(p_{obj},p_{gel}\) indicate the pose and geometrical shape of the object and gel, and \(C_{fem}\) references the material property coefficients in hyperelastic material models such as the Yeoh model [29]. By substituting Equations 3 and 4 into Equation 5, the torque can be expressed in terms of \(F_{f,x}\) and \(mg\):

\[\Sigma\tau\simeq lF_{f,y}+dF_{f,x}-hF_{s,x}-\frac{l}{2}mg \tag{6}\]

The torque becomes positive if the first two terms of Equation 6 become larger than the last two terms. Therefore, the object can rotate when influenced by a certain amount of y-directional force while maintaining rolling contact. While not done in this paper, \(F_{f,x},F_{s,x}\), and \(F_{s,y}\) can be measured by estimating the force from both DenseTact-Mini sensors through the model proposed in [14]. After the object rotates with point \(A\) as the pivot, it slides, causing contact point \(B\) to move in the \(+x\) direction, as depicted in the right images of Fig. 5.
Once the fingernail successfully positions itself beneath the object's right corner, it establishes sliding contact with the object, resulting in a motion analogous to that described in Section IV-B2, facilitating the grasp of the object. Throughout this transition, the force \(F_{f,x}\) often induces a sudden shift of the object towards the grasp.

#### IV-B2 Sliding grasping strategy

When grasping flat, thin objects with rounded edges, or objects with a small gap between their surface and the floor, the fingernail of the DenseTact-Mini aids the grasp by creating a minor friction point contact between the object and the fingernail. Fig. 6 depicts the overall strategy for grasping rounded-edge objects. The bottom-left image displays the free body diagram of the object at the moment of grasping. If the radius of the lower right edge of the object, \(R\), exceeds the radius of the fingernail edge, \(r\), then \(F_{f,y}\) generates a positive force, leading to a beneficial torque for grasping. The design file defines \(r\) as 0.3mm, with potential deviations due to the 3D printing process. The equation of rotational motion with the dominant terms becomes

\[\Sigma\tau\simeq lF_{f,y}-\frac{l}{2}mg>0 \tag{7}\]

Since the fingernail is positioned below the object, \(F_{f,y}\) is positive and exceeds the gravitational force. Additionally, such objects typically have a lightweight composition, simplifying the grasp of small, thin items.

### _Grasping larger objects via fingertip grasping_

The DenseTact-Mini, even without the fingernail, can grasp larger objects. With two DenseTact-Minis without fingernails as the fingertips, it is possible to grasp objects of various sizes. Analogous to a two-jaw gripper, the gel portion of the DenseTact-Mini ensures compliant grasping through an expansive area of contact with the object. Following this approach, a fingertip grasp can be characterized as any grasp between two DenseTact-Minis devoid of fingernails. The range of graspable objects, particularly those over 10mm, is influenced by the gripper's specifications, including its DOF and load capacity. The grasping motion and strategy are illustrated in the last row of Fig. 1. Nonetheless, given the pronounced curvature of the gel, a gripper with a higher DOF or extended link lengths can execute a scooping motion using both fingertips to grasp objects around 10mm, contingent on the object's geometric shape.

Fig. 7: **Grasping Evaluation Pipeline.** Orange, green, and blue boxes represent the tap, fingernail, and fingertip strategies, respectively. The right-side images depict sensor views with and without objects, and the last column shows image differences for grasp detection.

## V Evaluation

### _Evaluation Pipeline_

In the evaluation pipeline presented in Fig. 7, the different color diagrams (orange, green, blue) represent the tap, fingernail, and fingertip grasping strategies. Our experimental setup, detailed in Fig. 8, involves the Allegro™ hand, positioned on the Franka™ robot arm, with DenseTact-Mini sensors on the fingertips. The second and fourth fingers have attached fingernails, while the thumb and third finger, used for the tap grasping and fingertip strategies, do not. Although the Jetson Orin can support dual camera sensors, we focused on the DenseTact-Mini thumb sensor for grasp detection. ROS facilitates communication between the components.
A state machine oversees the statuses and transitions among the 'grasp', 'move', 'ungrasp', and 'detect' states, based on the grasping strategy and feedback from the DenseTact-Mini. In tap grasping, the process starts with the 'grasp' state using a tapping motion. The gripper moves to a set location before transitioning to 'detect'. The DenseTact-Mini sensor determines object presence by comparing two baseline images, one with the sensor pressed against an empty surface and another against an object, using pixel intensity variations. Successful grasps advance to the next state, 'move', while unsuccessful attempts reset to 'grasp'. Once in 'ungrasp' after 'move', the gripper relocates and tries to release the object by scraping the DenseTact-Mini's gel surface with a fingernail. The fingernail strategy uses a scooping motion, while the fingertip strategy involves the thumb and third finger closing. Both follow the 'grasp', 'move', and 'ungrasp' states. The images on the right side of Fig. 7 depict successful vs. unsuccessful grasp detection. The source code used for this evaluation study is accessible on the project website.
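A minimal sketch of the pixel-intensity grasp check described above might look as follows. The image file names, the intensity threshold, and the changed-pixel fraction are assumptions for illustration; the released evaluation code may differ.

```python
# Image-difference grasp detection: compare the empty-surface baseline
# against the current tactile image and report a grasp when enough pixels
# changed intensity. Paths and thresholds are placeholders.
import cv2
import numpy as np

def grasp_detected(baseline_path: str, current_path: str,
                   pixel_delta: int = 25, min_changed_frac: float = 0.01) -> bool:
    baseline = cv2.imread(baseline_path, cv2.IMREAD_GRAYSCALE)
    current = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(baseline, current)            # per-pixel intensity change
    changed = np.count_nonzero(diff > pixel_delta)   # strongly changed pixels
    return changed / diff.size > min_changed_frac

# Example state transition: advance to 'move' only on a successful grasp.
next_state = "move" if grasp_detected("empty.png", "tap.png") else "grasp"
```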
### _Evaluation result_

Table II presents the grasping results for each strategy. Objects are randomly placed within specified regions of interest (ROIs): \(10mm\times 10mm\) for tap grasping, \(45mm\times 15mm\) for fingernail grasping, and \(30mm\times 20mm\) for fingertip grasping. The object's center of mass lies inside the ROI, even if its size exceeds it. Most grasps boast over a 90% success rate. The M2 nut's heavier weight results in a 90.2% success rate in tap grasping, while for fingernail grasping, the paperclip, dome, and battery performed best. Failures typically arise when object centers are near the ROI edge or due to unintended rotations. All object images and sizes are available on the project website. Given the large ROIs relative to the object sizes, the DenseTact-Mini is suitable for generalized grasping. Allegro™ hand limitations also contribute to grasp failures, suggesting that applying more force could improve grasp outcomes beyond the strategies proposed.

| Strategy | Object | H×L×W (mm) | Weight (g) | Success (Rate %) |
| --- | --- | --- | --- | --- |
| Tap grasping | Basil Seed | 1×1.2×2 | 0.0015 | 52/52 (100) |
| Tap grasping | M1.6 Nut | 1.1×3.2×3.2 | 0.051 | 48/51 (94.11) |
| Tap grasping | M2 Nut | 1.6×3.9×3.9 | 0.10 | 46/51 (90.2) |
| Fingernail grasping | Paperclip | 0.8×6.9×26.5 | 0.31 | 48/51 (94.12) |
| Fingernail grasping | Small Wrench | 0.9×7×45 | 1.95 | 46/51 (90.2) |
| Fingernail grasping | Dome | 1.3×17.9×17.9 | 2.24 | 54/57 (94.74) |
| Fingernail grasping | CR2032 Battery | 3.2×20×20 | 3.00 | 50/53 (94.34) |
| Fingertip grasping | Bearing | 16×5×16 | 4.57 | 50/53 (94.34) |

TABLE II: **Grasping success rate for each strategy.** The success rate for all daily objects of various sizes and weights is greater than 90%.

## VI Conclusions

This paper presents three distinct grasping strategies for handling various small objects from flat surfaces using the miniaturized vision-based tactile sensor, the DenseTact-Mini. The DenseTact-Mini comprises a high-resolution soft gel component with a modular stacked design, a smooth gel surface, and a detachable fingernail design. These attributes facilitate three varied grasping techniques: 1) adhesive-force-based tapping for objects sized between 1mm and 3mm, 2) a scooping motion with the fingernail for grasping thin and small objects, and 3) a conventional two-fingertip grasp for objects larger than 10mm. Through evaluating everyday small objects, including tiny nuts and slender items, we have demonstrated that our sensor, coupled with the fingernail and an appropriate multi-DOF gripper, can effectively grasp a variety of multi-sized objects by applying different grasping strategies. The success rate exceeded 90% for all evaluated objects. In future work, we aim to extend the length of each finger's linkage on the gripper, assess dexterous manipulation capabilities using the DenseTact-Mini sensor, and calibrate the sensor for precise grasp manipulation. We believe the DenseTact-Mini is pivotal for planning multi-object grasps and intricate in-hand manipulation with tactile sensing image input.

Fig. 8: **Experimental setup**. The Allegro™ hand with DenseTact-Mini sensors, attached to the Franka™ arm, manages object grasping, movement, and detachment.
2309.16486
HTC-DC Net: Monocular Height Estimation from Single Remote Sensing Images
3D geo-information is of great significance for understanding the living environment; however, 3D perception from remote sensing data, especially on a large scale, is restricted. To tackle this problem, we propose a method for monocular height estimation from optical imagery, which is currently one of the richest sources of remote sensing data. As an ill-posed problem, monocular height estimation requires well-designed networks for enhanced representations to improve performance. Moreover, the distribution of height values is long-tailed with the low-height pixels, e.g., the background, as the head, and thus trained networks are usually biased and tend to underestimate building heights. To solve the problems, instead of formalizing the problem as a regression task, we propose HTC-DC Net following the classification-regression paradigm, with the head-tail cut (HTC) and the distribution-based constraints (DCs) as the main contributions. HTC-DC Net is composed of the backbone network as the feature extractor, the HTC-AdaBins module, and the hybrid regression process. The HTC-AdaBins module serves as the classification phase to determine bins adaptive to each input image. It is equipped with a vision transformer encoder to incorporate local context with holistic information and involves an HTC to address the long-tailed problem in monocular height estimation for balancing the performances of foreground and background pixels. The hybrid regression process does the regression via the smoothing of bins from the classification phase, which is trained via DCs. The proposed network is tested on three datasets of different resolutions, namely ISPRS Vaihingen (0.09 m), DFC19 (1.3 m) and GBH (3 m). Experimental results show the superiority of the proposed network over existing methods by large margins. Extensive ablation studies demonstrate the effectiveness of each design component.
Sining Chen, Yilei Shi, Zhitong Xiong, Xiao Xiang Zhu
2023-09-28T14:50:32Z
http://arxiv.org/abs/2309.16486v1
# HTC-DC Net: Monocular Height Estimation from Single Remote Sensing Images

###### Abstract

3D geo-information is of great significance for understanding the living environment; however, 3D perception from remote sensing data, especially on a large scale, is restricted, mainly due to the high costs of 3D sensors such as LiDAR. To tackle this problem, we propose a method for monocular height estimation from optical imagery, which is currently one of the richest sources of remote sensing data. As an ill-posed problem, monocular height estimation requires well-designed networks for enhanced representations to improve performance. Moreover, the distribution of height values is long-tailed, with the low-height pixels, e.g., the background, as the head, and thus trained networks are usually biased and tend to underestimate building heights. To solve these problems, instead of formalizing the problem as a regression task, we propose HTC-DC Net, following the classification-regression paradigm, with the head-tail cut (HTC) and the distribution-based constraints (DCs) as the main contributions. HTC-DC Net is composed of the backbone network as the feature extractor, the HTC-AdaBins module, and the hybrid regression process. The HTC-AdaBins module serves as the classification phase to determine bins adaptive to each input image. It is equipped with a vision transformer encoder to incorporate local context with holistic information and involves an HTC to address the long-tailed problem in monocular height estimation by balancing the performances of foreground and background pixels. The hybrid regression process does the regression via the smoothing of bins from the classification phase, which is trained via DCs. The proposed network is tested on three datasets of different resolutions, namely, ISPRS Vaihingen (0.09 m), DFC19 (1.3 m), and GBH (3 m). Experimental results show the superiority of the proposed network over existing methods by large margins. Extensive ablation studies demonstrate the effectiveness of each design component. Codes and trained models are published at [https://github.com/zhu-xlab/HTC-DC-Net](https://github.com/zhu-xlab/HTC-DC-Net).

monocular height estimation, vision transformer, adaptive bins, hybrid regression.

## I Introduction

Monocular height estimation is the process of deriving height information from single remote sensing images. The generated height maps, usually delivered in the form of digital surface models (DSMs) or normalized digital surface models (nDSMs), are of great importance for many downstream applications. For example, estimating building heights is essential for 3D building models [1, 2, 3], which serve as a crucial information basis for urban planning and disaster management. Likewise, modeling vegetation heights, represented as canopy height models [4, 5, 6], can improve the understanding of biomass and, thus, support carbon cycle studies on a large scale.
In dense urban areas, the need for a stack of SAR images restricts its applicability for 3D reconstruction in practice [14]. While stereo images are easier to obtain, the compromise between acquisition quality and quantity poses great difficulties [15, 16]. Large-scale applications demand high-quality and comprehensive data, which is not adequately met by either costly aerial imaging acquisitions or low-budget satellite stereo pairs. While the former provides high-quality data, it comes at a significant expense, whereas the latter is typically affected by cloud contamination and long baselines, limiting its usefulness for large-scale applications [17]. In contrast, monocular images, especially those from satellites, are rich in quantity [18], which addresses the deficiencies of the aforementioned techniques and, thus, can support large-scale applications as well as the corresponding updates. The only problem is how to mine the concealed height information from them. Early works on monocular height estimation focus on the level of instances [19, 20]. Following the physical model of shadow casting, shadow lengths are exploited as the cue for inferring heights. Together with solar parameters, the heights of ground objects can be computed mathematically. However, such methods suffer from overlaps between shadows and objects, especially in dense urban areas or dense forests, as well as the availability of exact solar parameters. Fortunately, a large amount of data and the recent emergence of deep learning methods make it possible to tackle the problem in a data-driven manner. Given that sufficient data could be used for training, models of high performance could be expected. Monocular height estimation could be inspired by advances in monocular depth estimation [21], which is faced with exactly the same problem as monocular height estimation, that being the ill-posed nature. Namely, multiple height maps with similar height structures could look very similar in the domain of optical images; thus, one specific optical image can correspond to multiple height map predictions that are hard to disentangle. Inspired by the use of vision transformers (ViTs) [22] for enforcement of global consistency in monocular depth estimation, we propose involving a ViT for modeling long-range attention to combat the ill-posed problem. Besides, changes in solution paradigms have occurred in monocular depth estimation. Instead of solving the problem as a regression task, the state-of-the-art solution is to convert the problem into a classification-regression problem. For example, the hybrid regression process [23] is proposed to facilitate the solution for monocular depth estimation. In this paper, we demonstrate the feasibility and superiority of applying the classification-regression paradigm for monocular height estimation from remote sensing images. Different from monocular depth estimation, monocular height estimation also suffers from the long-tailed distribution problem [24]. Specifically, in the physical world, most of the ground objects are of lower height, such as low buildings and vegetation, while high objects are rare, e.g., skyscrapers. When a network is trained with such data from nature, it will be largely biased. Considering that the long-tailed distribution of height values is even more skewed than the worst cases in long-tailed classification, the predictions can include many fatal cases for higher objects, with incredibly large errors. 
Different from a simple regression process, the hybrid regression process incorporates a distribution-based approach, which yields a distribution specified by the bin centers from the classification phase and the bin probabilities from the regression phase. Theoretically, the final prediction lies within the bins with the highest probabilities. In practice, to avoid discrete predicted values, the final prediction is computed as the weighted average of the bin centers according to the bin probabilities, equivalent to the expectation value of a distribution. This is based on the assumption that the expectation value of the distribution is close to the value where the probability is the highest. However, this assumption cannot be guaranteed without any constraints set on the distribution. To cope with the aforementioned problems, we propose HTC-DC Net, which is equipped with a head-tail cut (HTC) and distribution-based constraints (DCs). In summary, our contributions are as follows:

* We propose a novel architecture for monocular height estimation. We utilize a classification-regression paradigm for HTC-DC Net, which employs the ViT for enhanced representation learning.
* We propose an HTC to address the extremely long-tailed nature of height values, i.e., to mitigate the adverse impact of the background pixels as the majority.
* We propose using DCs to regularize the bin probabilities used during the regression phase, which are mathematically neat and lead to remarkable improvements.
* We conduct extensive experiments to showcase the efficacy of the proposed network and comprehensive ablation studies to demonstrate the necessity of each designed component. The proposed network outperforms the existing methods by a large margin.

The remainder of the article is organized as follows. Section II gives an overview of the related works. The proposed method is described in detail in Section III, followed by Section IV describing the experiments and Section V showing the experimental results. Discussions and ablation studies are presented in Section VI. Conclusions are drawn and further research directions are described in Section VII.

## II Related Works

### _Monocular Height Estimation_

Deep-learning-based monocular height estimation networks can be categorized into pixel-wise and instance-wise methods, based on their distinct objectives. As the major focus of this paper, pixel-wise height estimation can be formalized as a deep dense regression task. To tackle the task, encoder-decoder fully convolutional networks (FCNs) are mostly utilized [25]. FCNs for semantic segmentation can be adopted by removing the final classification layer (Softmax or Sigmoid activation), e.g., SegNet [26], U-Net [27], and Eff-UNet [28]. Besides, many networks have been proposed specifically for monocular height estimation. These networks can be categorized into two types: single-task learning networks and multi-task learning networks.

#### II-A1 Single-task Learning Networks

Single-task learning networks map the input images to the output height maps. In such networks, feature fusion is usually applied to boost performance. For instance, Mou _et al._[29] proposed one of the first deep-learning-based methods to estimate height from a single optical image, an encoder-decoder neural network named IM2HEIGHT. Compared to a plain FCN, IM2HEIGHT has a skip connection, accounting for low-level feature fusion. This leads to sharper object edges and more preserved details. Amirkolaee _et al._[30] adopted advanced techniques from the computer vision community.
They used the up-sampling block to lighten the computation burden and utilized multi-level feature fusion to combat the blurring effect during inference [31]. Besides, they also proposed a post-processing scheme to enforce continuity around the patch edges. Xing _et al._[32] proposed PLNet with a feature fusion module--the gated feature aggregation module (GFAM)--and a refining module--the progressive refine module (PRM). #### Ii-A2 Multi-task Learning Networks Multi-task learning networks introduce auxiliary tasks in addition to height predictions, with the expectation that both tasks support each other during training. Usually, based on the assumption that heights and semantics are highly correlated [33], semantic segmentation can be regarded as an auxiliary task for height estimation. For example, Srivastava _et al._[34] were the first to showcase the gains the auxiliary semantic segmentation head brings. Carvalho _et al._[35] explored earlier separation between heights and semantics, and compared different multi-task learning strategies. Elhousni _et al._[36] used further auxiliary geometric information, the normal vectors, in a two-stage network, where the first stage results are fed into the second stage de-noising autoencoder for refinement. As an alternative, monocular height estimation can also be regarded as an image translation task, assuming that the images and heights are backed by the same underlying semantics. In this context, generative adversarial networks (GANs) are used. Ghamisi and Yokoya [37] used a GAN-based network consisting of a generator and a discriminator. The generator performs style transfer, i.e., takes the input image and transfers it to the output height map. The discriminator is used to help train the generator to generate realistic height maps. The network is trained with image-height map pairs. Later, Paoletti _et al._[38] addressed the limitations of this approach by introducing shared latent features, which makes the network generalize better and learn more generic style information. Improvements in performance are demonstrated by experiments. Besides, methods for instance-wise monocular height estimation, though not the focus of the paper, are relevant when 3D perception is needed only for specific ground objects, e.g., buildings. Under such circumstances, instance segmentation-based networks can be used, where heights are predicted conditioned on the instances as the prior. By this means, the output maps are usually sparse maps with only object pixels filled with object-wise single height values, which, in the context of 3D building reconstruction, are exactly the LoD-1 (Level of Detail 1) building models. Such methods are usually based on two-stage instance segmentation networks, e.g., Mask R-CNN [39]. Mahmud _et al._[40] modified Mask R-CNN into a multi-task network, with a joint prediction of heights, signed distance function, and semantics aggregated into the final output. Chen _et al._[41] proposed a network named Mask-Height R-CNN, which is adapted from Mask R-CNN for monocular height estimation by adding a height regression head to the region proposal network (RPN) [42]. Recently, Li _et al._[43] proposed a novel type of representation for building instances in 3D space: the 3D centripetal shift representation. Their proposed network, termed 3DCentripetalNet, learns the 3D centripetal shift representation and building corners, which are further utilized to retrieve building heights.
### _Monocular Depth Estimation_ As a highly related task to monocular height estimation, monocular depth estimation has been a long-standing task in the computer vision community [21]. The advances in monocular depth estimation can thus inspire better solutions for monocular height estimation. It has been witnessed that the paradigm of doing monocular depth estimation changes from regression to classification, then to classification-regression--the state-of-the-art solution. Treated as a regression problem, the depths are predicted directly from the images, supervised by the ground truth depth values. There have been many works in this direction, and they are all intuitive; however, their performances are limited [44, 45, 31, 46, 47]. Fu _et al._[48] proposed using ordinal regression, which converts the regression problem into a classification problem; this inspired the application of DORN to monocular height estimation [49]. Besides, Sun _et al._[42] designed a classification network based on the ordinal regression network, but with an adaptive bin design and a set prediction framework for bin prediction. The results of these classification methods are discrete with artifacts; however, the overall performances are better than the regression networks. Recently, the classification-regression scheme has emerged with state-of-the-art performances [23, 50, 51]. They propose to use adaptive bins learned from the image to reflect the real distribution of ground truth values of each image and then predict the depth values by a weighted average of the learned bins. Compared to ordinal regression networks, such as DORN [49], classification-regression networks can adapt to different input images and output continuous depth maps.

Fig. 1: Network Architecture of HTC-DC Net. Following the classification-regression paradigm, the HTC-DC Net is formalized into three parts, the backbone network, the HTC-AdaBins module as the classification phase, and the hybrid regression process as the regression phase. First, a backbone network is used to extract features from images. Based on the features, the HTC-AdaBins module derives the bin edges, which serve as the discretization of the height value range into adaptive bins as classes, and the bin probabilities, regarded as the class probabilities. Finally, the hybrid regression process converts the discretized output space back to a continuous output space by a weighted average of the bin centers according to the bin probabilities. As the contributions of the paper, the head-tail cut (HTC) in the HTC-AdaBins module is used to treat foreground and background pixels separately to account for the long-tail effect in monocular height estimation, and distribution-based constraints (DC) are applied to the predicted bin probabilities for regularization. The foreground refers to pixels higher than 1 m. The red numbers in parentheses refer to the corresponding equations.

## III Methodology As shown in Fig. 1, the proposed HTC-DC Net consists of three parts, the backbone network to extract features from input images, the HTC-AdaBins module to conduct the HTC as well as incorporate local and holistic information, and the hybrid regression module to get the final height predictions. The proposed network follows the classification-regression paradigm: Based on the extracted features, the HTC-AdaBins module conducts the classification of pixels into bins that are adaptive to each input image, and the hybrid regression module smooths the discrete bins into the continuous output space.
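To fix ideas before the detailed descriptions below, here is a minimal, illustrative sketch of the three-stage pipeline in PyTorch. It is our own simplification, not the released implementation: the stand-in layers (a single convolution in place of the U-Net/EfficientNet backbone, a pooled linear head in place of the ViT bin-width predictor) and all names are hypothetical placeholders.

```python
import torch
import torch.nn as nn

# Minimal sketch of the three-stage HTC-DC pipeline (our own simplification):
# backbone features -> adaptive bins + per-pixel bin probabilities -> weighted
# average over bin centers for a continuous height map.
class HTCDCSketch(nn.Module):
    def __init__(self, n_bins=256, h_min=0.0, h_max=200.0, feat_ch=64):
        super().__init__()
        self.n_bins, self.h_min, self.h_max = n_bins, h_min, h_max
        self.backbone = nn.Sequential(              # stand-in for the real backbone
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU())
        self.width_head = nn.Linear(feat_ch, n_bins)    # stand-in for ViT bin-width head
        self.prob_head = nn.Conv2d(feat_ch, n_bins, 1)  # stand-in for RAM + 1x1 conv

    def forward(self, img):
        feat = self.backbone(img)                               # (B, C, H, W)
        pooled = feat.mean(dim=(2, 3))                          # global summary
        widths = torch.softmax(self.width_head(pooled), dim=1)  # relative widths, Eq. (3)
        edges = self.h_min + (self.h_max - self.h_min) * torch.cumsum(widths, dim=1)
        edges = torch.cat([torch.full_like(edges[:, :1], self.h_min), edges], dim=1)
        centers = 0.5 * (edges[:, :-1] + edges[:, 1:])          # Eq. (10)
        probs = torch.softmax(self.prob_head(feat), dim=1)      # Eq. (6)
        # Eq. (11): expectation over bin centers gives a continuous height map.
        return torch.einsum("bnhw,bn->bhw", probs, centers).unsqueeze(1)

pred = HTCDCSketch()(torch.rand(2, 3, 64, 64))  # -> shape (2, 1, 64, 64)
```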
The three components are described in detail in this section. ### _Backbone Network_ Instead of directly using an encoder-decoder structure for height prediction, the backbone network is used for the generation of feature maps \(\{\mathbf{F}_{1},\mathbf{F}_{2},\mathbf{F}_{3},\mathbf{F}_{4},\mathbf{F}_{5}\}\) from input images \(\mathbf{I}\in\mathbb{R}^{3\times H_{0}\times W_{0}}\), which contain rich spatial and spectral information. Inspired by networks, e.g., U-Net [27], where the intermediate features are aggregated in the later stages of the networks, and following the advanced design in hybrid regression for monocular depth estimation [50, 51], the early injection is done by applying the HTC-AdaBins module and the following hybrid regression process to features of multiple stages in the decoder network, resulting in predictions at different scales. The results of intermediate levels are not taken as the final output; they are, however, used for the computation of loss functions during training. ### _HTC-AdaBins_ The HTC-AdaBins module (see Fig. 2) is a variant of the AdaBins module [23] with modifications to address the long-tailed distribution problem in monocular height estimation from remote sensing images. It is used to obtain bin edges \(\mathbf{b}\in\mathbb{R}^{N+1}\) and bin probabilities \(\mathbf{P}\in\mathbb{R}^{N\times H\times W}\) from the feature maps generated by the backbone network \(\mathbf{F}\in\mathbb{R}^{C\times H\times W}\), where \(N\) is the number of bins as a hyperparameter, the same for all input images. Intuitively, the bins discretize the continuous height into classes, which are adaptive to each input image by reflecting the height value distribution of each image, and the bin probabilities serve as the class probabilities. That is, the HTC-AdaBins module converts the regression problem into a classification problem. Besides, the HTC-AdaBins module enables the interaction between local textures learned by the local branch and the global context learned by the global branch. In addition, the HTC enables different treatment of the foreground and the background pixels, such that the performances for foreground and background pixels are balanced. #### Iii-B1 Local and Global Branch The local branch with one convolutional layer exploits the local feature pattern \(\mathbf{L}\) as \[\mathbf{L}=\text{conv}_{3\times 3}(\mathbf{F}). \tag{1}\] The global branch, with a ViT encoder [22], models the global context. To be fed into the global branch, the feature maps are divided into patches, among which the relations are modeled to refine the embeddings. The process is denoted as \[\mathbf{E}=\{\mathbf{e}_{1},\mathbf{e}_{2},\cdots,\mathbf{e}_{\frac{HW}{p^{2}}}\}=\text{ViT}(\text{conv}_{p\times p}(\mathbf{F})), \tag{2}\] where \(p\) denotes the patch size. The resulting embeddings \(\mathbf{E}\) are put to different uses. To obtain the bins, the first embedding \(\mathbf{e}_{b}:=\mathbf{e}_{1}\) is regarded as the bin width embedding. It is fed into a linear layer fc, then normalized by a softmax function to get the relative bin widths \(\mathbf{w}_{b}\in\mathbb{R}^{N}\), defined as \[\mathbf{w}_{b}=\text{softmax}(\text{fc}(\mathbf{e}_{b})).
\tag{3}\] Finally, given the minimal and the maximal possible values of heights, \(h_{\text{min}}\) and \(h_{\text{max}}\), the bin edges \(\mathbf{b}\) can be obtained by \[\begin{split}\mathbf{b}_{0}&=h_{\text{min}},\\ \mathbf{b}_{i}&=\mathbf{b}_{i-1}+(h_{\text{max}}-h_{\text{min}})\,\mathbf{w}_{b,i},\forall i=1,2,\cdots,N.\end{split} \tag{4}\] A fixed number \(m\) of the embeddings following the first one are concatenated and taken as the global feature \(\mathbf{G}:=\text{concatenate}\left(\mathbf{e}_{2},\mathbf{e}_{3},\cdots,\mathbf{e}_{m+1}\right)\). The global feature \(\mathbf{G}\) from the global branch is incorporated with the output from the local branch by a cross-product as follows, \[\mathbf{R}=\mathbf{L}\times\mathbf{G}, \tag{5}\] to compute the range attention maps (RAMs) \(\mathbf{R}\), which represent how the height value distribution of a local area compares to the global distribution. The RAMs \(\mathbf{R}\) are then convolved and normalized to get the bin probability maps \(\mathbf{P}\) as \[\mathbf{P}=\text{softmax}(\text{conv}_{1\times 1}(\mathbf{R})). \tag{6}\] #### Iii-B2 Head-Tail Cut to Combat the Long-Tailed Effect As mentioned above, in remote sensing, the height values are usually extremely long-tailed distributed (see Fig. 3), so the majority background pixels may disturb the computation of RAMs. To mitigate this effect, we propose using an HTC to separate the computation of foreground and background pixels, where foreground pixels are defined as pixels with height values greater than 1 m. The definition of the threshold is proved to be reasonable through experiments. The separation takes effect within the ViT encoder from the global branch. Instead of computing a unique RAM, two different sets of tokens of the same number \(m\), \(\mathbf{G}_{fg}=\text{concatenate}\left(\mathbf{e}_{2},\mathbf{e}_{3},\cdots,\mathbf{e}_{m+1}\right)\), and \(\mathbf{G}_{bg}=\text{concatenate}\left(\mathbf{e}_{m+2},\mathbf{e}_{m+3},\cdots,\mathbf{e}_{2m+1}\right)\), are selected to compute the foreground and background RAMs, \(\mathbf{R}_{fg}\in\mathbb{R}^{N\times H\times W}\) and \(\mathbf{R}_{bg}\in\mathbb{R}^{N\times H\times W}\), and then the bin probabilities \(\mathbf{P}_{fg}\) and \(\mathbf{P}_{bg}\), respectively. Eqns. 5 and 6 are then rewritten as \[\begin{split}\mathbf{R}_{fg}&=\mathbf{L}\times\mathbf{G}_{fg},\\ \mathbf{R}_{bg}&=\mathbf{L}\times\mathbf{G}_{bg},\\ \mathbf{P}_{fg}&=\text{softmax}(\text{conv}_{1\times 1}(\mathbf{R}_{fg})),\\ \mathbf{P}_{bg}&=\text{softmax}(\text{conv}_{1\times 1}(\mathbf{R}_{bg})).\end{split} \tag{7}\] In this way, on the one hand, the embeddings from the ViT are utilized more efficiently, taking full advantage of the holistic information acquired at great computational cost; on the other hand, foreground and background are distinguished earlier, in the global attention phase, allowing more differentiated treatment of the two. With the 1 m threshold, foreground and background pixels each make up about half of the whole dataset, so the HTC amounts to an almost balanced binary classification problem, which is handled by simply adding a binary classification head on the foreground RAMs. The probability that pixels belong to the foreground is computed as \[p_{fg}=\text{sigmoid}(\mathbf{R}_{fg}).
\tag{8}\] The probability map \(p_{fg}\) serves as a mask to combine the bin probabilities computed for foreground and background pixels, written as \[\mathbf{P}=(p_{fg}>0.5)\cdot\mathbf{P}_{fg}+(p_{fg}\leq 0.5)\cdot\mathbf{P}_{bg}. \tag{9}\] ### _Hybrid Regression Process_ The hybrid regression process is designed to combine the learned information for each bin by smoothing the discrete output space derived from the HTC-AdaBins module into a continuous output space. First, a representative value from each bin, i.e., the bin center \(\mathbf{c}\), is calculated as the midpoint between two adjacent bin edges with \[\mathbf{c}_{i}=\frac{\mathbf{b}_{i-1}+\mathbf{b}_{i}}{2},\forall i=1,2,\cdots,N. \tag{10}\] The final predicted height map \(\mathbf{H}\in\mathbb{R}^{1\times H\times W}\) is formalized as a weighted average of the \(N\) bin centers \(\mathbf{c}\) according to the bin probabilities \(\mathbf{P}\), i.e., \[\mathbf{H}=\sum_{i}^{N}\mathbf{P}_{i}\mathbf{c}_{i}. \tag{11}\]

Fig. 3: Height value distribution of GBH training and validation set. The background with height values smaller than 1 m consists of around 3e8 pixels, which accounts for 57% of the total pixels, while the pixels with very large height values only amount to approximately 10 per 1 m bin. The long-tailed distribution also exists in building height values.

Fig. 2: The HTC-AdaBins module contains two branches, namely the local branch and the global branch. The local branch is responsible for computing local features with a convolutional layer, while the global branch employs a vision transformer encoder to capture the global context. The embeddings from the ViT encoder are utilized for computing the bin edges, the foreground bin probabilities, and the background bin probabilities, respectively. During the computation of bin probabilities, a cross-product of the local features and the embeddings from the ViT encoder is conducted to incorporate features of different scopes. The head-tail cut is derived from the foreground range attention map and used to combine the bin probability maps for foreground and background pixels. The outputs of the HTC-AdaBins module are supervised by a bin edge loss, a head-tail cut loss, and a distribution-based constraint. The red numbers in parentheses refer to the corresponding equations.

### _Loss Functions_ The loss function is composed of four parts. #### Iii-D1 Pixel-wise Height Loss The pixel-wise height loss is defined as the L1 loss, written as \[\mathcal{L}_{h}=\frac{1}{|\mathbf{H}|}\sum L_{1}(\mathbf{H},\tilde{\mathbf{H}}), \tag{12}\] where \(|\mathbf{H}|\) denotes the total number of pixels, and \(\tilde{\mathbf{H}}\) denotes the ground truth height map. #### Iii-D2 Bin Edge Loss To make sure the bin edges comply with the distribution of ground truth values, the Chamfer loss [52], which computes the bi-directional distances between two point sets, is utilized to supervise the bin edge predictor, i.e., \[\mathcal{L}_{b}=\text{chamfer}(\mathbf{b},\text{flatten}(\tilde{\mathbf{H}})). \tag{13}\] Intuitively, the bin edges and the flattened ground truth height maps are seen as two 1D point sets with height values as the coordinates. For each point in one set, the closest point in the other set is searched, and the distance between the two points is computed and added to the final loss.
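As a concrete reading of Eqn. 13, the following is a naive sketch of the 1D Chamfer loss between bin edges and flattened ground-truth heights. It is our own simplification: it materializes all pairwise distances, so in practice the ground-truth heights would need to be subsampled for large images.

```python
import torch

# Naive O(N*M) 1D Chamfer loss between bin edges and GT heights (Eq. 13);
# an illustrative sketch, not the paper's implementation.
def chamfer_1d(bin_edges, gt_heights):
    # bin_edges: (N+1,); gt_heights: (M,) flattened ground-truth height map
    d = (bin_edges[:, None] - gt_heights[None, :]).abs()   # pairwise distances
    # nearest GT height for each edge, plus nearest edge for each GT height
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

edges = torch.linspace(0.0, 100.0, 257)   # 256 bins
gt = torch.rand(1000) * 60.0              # toy flattened height map
loss = chamfer_1d(edges, gt)
```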
On the one hand, the distances to the bin edges force the bin edges to stay close to height values that actually occur in the input images. On the other hand, the distances to the height values of pixels encourage the bin edges to spread according to the distribution of pixel height values. When the Chamfer loss is small, the distances between the two point sets are small, i.e., the locations of the bin edges comply with the distribution of height values. Ideally, the bin edges lie at the quantiles of the ground truth height values. #### Iii-D3 Head-Tail Cut Loss As mentioned in Section III-B, an HTC is conducted in the AdaBins module by a binary classification head. The HTC is supervised by a cross-entropy loss, denoted by \[\mathcal{L}_{htc}=\text{cross-entropy}(p_{fg},\ \tilde{\mathbf{H}}>1). \tag{14}\] #### Iii-D4 Distribution-based Constraint Conventionally, a single regression process yields a single point estimation as the height prediction. In contrast, the hybrid regression process incorporates a distribution-based approach. In the classification phase, a distribution is specified by the bin centers and the bin probabilities. Trivially, one could take the most probable value drawn from the distribution as the final height prediction. However, it would lead to discrete output maps solely with values from the bin centers. To overcome this limitation, in the regression phase, a weighted average (Eqn. 11) of bin centers according to the bin probabilities serves as the smoothing of the bins and, thus, enables a continuous output space while approximating the mode of the distribution. From a probabilistic perspective, the weighted average is equivalent to computing the expectation value of the underlying height value distribution, which could be far from the distribution mode without any constraints on the distribution. One special case when the expectation and the mode of a distribution are close to each other is when the distribution is symmetric and unimodal, such as a Gaussian distribution. In this case, the hybrid regression process yields the mode value, approximated by the expectation value of the underlying distribution. Therefore, to enforce that the resulting expectation value of the distribution approaches the distribution mode, a DC is imposed on the bin probabilities. As illustrated in Fig. 4, if the predicted height values obey certain distributions, then the bins are intervals within the defined domain of the distribution, and the bin probabilities are the integrals within the bin intervals. Assuming a known distribution, such as a Gaussian distribution with the ground truth value as the mode, the whole distribution can then be computed and serve as a constraint. Mathematically, consider that the predicted height \(h\) for a pixel is subject to a Gaussian distribution, centered at the ground truth height value \(\tilde{h}\), i.e., \[h\sim\mathcal{N}(h|\tilde{h},\sigma^{2})=\frac{1}{\sigma\sqrt{2\pi}}\exp(-\frac{(h-\tilde{h})^{2}}{2\sigma^{2}}), \tag{15}\] where \(\sigma\), the standard deviation, specifies the scale of the distribution, which is unknown and must be solved for. The corresponding cumulative distribution function \(F\) is \[F(h)=\frac{1}{2}(1+\text{erf}(\frac{h-\tilde{h}}{\sigma\sqrt{2}})), \tag{16}\]

Fig. 4: Distribution-based constraint. The constraint is derived and applied in three steps. First, the assumed distribution is solved from the probability of the bin where the GT value lies (the bin bounded by \(e_{m}\) and \(e_{m+1}\)), which is assumed as the mode probability (\(P_{m}\), in green).
With the GT value taken as the mean value, the scale parameter, e.g., the standard deviation of a Gaussian distribution (\(\sigma\)), can be solved analytically. Second, following the assumed distribution, the probabilities for other bins can be calculated as the integral of the PDF within each bin and regarded as the reference bin probabilities (in red). Last, the reference bin probabilities (in red) are applied as a constraint for the predicted bin probabilities (in blue) in the form of KL divergence. The red numbers in parentheses refer to the corresponding equations.

where erf stands for the Gaussian error function, written as \[\text{erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}\exp(-t^{2})\text{d}t. \tag{17}\] Then the probability for the bin where the ground truth value lies, i.e., the mode probability \(P_{m}\), can be represented as \[\begin{split} P_{m}&=F(e_{m+1})-F(e_{m})\\ &=\frac{1}{2}(\text{erf}(\frac{e_{m+1}-\tilde{h}}{\sigma\sqrt{2}})-\text{erf}(\frac{e_{m}-\tilde{h}}{\sigma\sqrt{2}})),\end{split} \tag{18}\] where the edges \(e_{m}\) and \(e_{m+1}\) bound the bin around the ground truth value. When the mode probability is taken as the reference, Eqn. 18 has a unique solution for the standard deviation \(\sigma\), and then the distribution is fixed. The above-mentioned equation can be solved numerically by optimization; however, this incurs a large computational burden. Alternatively, to ease the computation and to solve the equation analytically, it is assumed that the ground truth value lies exactly at the bin center. Then Eqn. 18 is simplified as \[P_{m}=\text{erf}(\frac{e_{m+1}-e_{m}}{2\sqrt{2}\sigma}). \tag{19}\] Then \(\sigma\) can be solved as \[\sigma=\frac{e_{m+1}-e_{m}}{2\sqrt{2}\text{ierf}(P_{m})}, \tag{20}\] where ierf is the inverse function of the Gaussian error function erf. Even though the assumption is not exactly true, it is necessary to make the problem tractable; surprisingly, the simplification still brings performance improvements. After the distribution is fixed by solving the standard deviation \(\sigma\), the bin probabilities for the other bins can be easily computed by \[P_{i}=F(e_{i+1})-F(e_{i}). \tag{21}\] Then, these computed probabilities from the assumed distribution are used to supervise the prediction of bin probabilities by the Kullback-Leibler (KL) divergence. The loss can be formulated as \[\mathcal{L}_{dist}=\frac{1}{|\mathbf{P}|}\sum\tilde{\mathbf{P}}\log\frac{\tilde{\mathbf{P}}}{\mathbf{P}}, \tag{22}\] where \(\tilde{\mathbf{P}}\) is the probability map from the assumed underlying distribution, with the probability for each pixel in each bin computed by Eqn. 21. To consider the differences between foreground and background pixels, different distributions are assumed for them. For background pixels, the predicted height values \(h\) are assumed to follow a uniform distribution, with the ground truth value \(\tilde{h}\) as the center point, written as \[h\sim\text{Uniform}(h|a,b)=\frac{1}{b-a}, \tag{23}\] where \(a\) and \(b\) are the lower and upper bounds of the distribution. Similarly, to solve the distribution parameters, the mode probability \(P_{m}\) is set as the reference: \[P_{m}=F(e_{i+1})-F(e_{i})=\frac{e_{i+1}-e_{i}}{b-a}. \tag{24}\] The scale parameter of the distribution, denoted as the width \(w:=b-a\), is derived as \[w=\frac{e_{i+1}-e_{i}}{P_{m}}.
\tag{25}\] Given the distribution is centered at the ground truth height value \(\tilde{h}\), then the parameters can be derived as \[\begin{split} a&{=\tilde{h}-\frac{e_{i+1}-e_{i}}{ 2P_{m}}},\\ b&{=\tilde{h}+\frac{e_{i+1}-e_{i}}{2P_{m}}}.\end{split} \tag{26}\] Finally, the probabilities for other bins can be easily computed using Eqn. 21. To compute the total loss, for a single feature map \(\mathbf{F}_{i}\) of level \(i\), the loss function \(\mathcal{L}_{i}\) is defined as the summation of the aforementioned loss function parts, denoted by \[\mathcal{L}_{i}=\mathcal{L}_{h}+\mu_{1}\mathcal{L}_{b}+\mu_{2}\mathcal{L}_{ htc}+\mu_{3}\mathcal{L}_{dist}, \tag{27}\] where coefficients \(\mu_{1}\), \(\mu_{2}\), and \(\mu_{3}\) are used for balancing between them. To facilitate the multi-level design, the final total loss is the weighted average of total losses for different levels as \[\mathcal{L}=\sum_{i=1}^{n}\lambda_{i}\mathcal{L}_{i}, \tag{28}\] where \(n\) is the number of features that are used to compute the loss function, and \(\{\lambda_{i}|i=1,2,\cdots,n\}\) are coefficients to weight the loss functions of different levels. Normally, the loss functions at later stages should account for more, that is, \(\lambda_{1}<\lambda_{2}<\cdots<\lambda_{n}\). ## IV Experiments ### _Datasets_ To demonstrate the efficacy of the proposed network, experiments are conducted on three datasets, DFC19, GBH, and ISPRS Vaihingen. #### Iv-A1 Dfc19 DFC19 (Data Fusion Contest 19) dataset [53, 54, 55, 56] provides multi-date satellite images and ground truth geometric and semantic labels in Jacksonville, Florida, and Omaha, Nebraska, USA. The images cover around 100 km\({}^{2}\) and date from 2014 to 2016, with a GSD of 1.3 m. The geometric labels are derived from airborne LiDAR data with a nominal pulse spacing of 80 cm. The dataset is delivered as 2783 triplets of images, nDSMs, and semantic maps of size 1024\(\times\)1024, and GSD of 1.3 m. The semantic maps are processed with only building footprints preserved. To conduct the experiments, the patches are cropped into 44,258 smaller patches of size 256\(\times\)256 due to GPU memory limits, randomly split into training, validation, and test set, with 31,152, 4432, and 8944 data samples. #### Iv-A2 Gbh Existing datasets, including the DFC19 dataset, lack either amount or diversity, so a new dataset--global building height (GBH) dataset is proposed and used to demonstrate the efficacy of the proposed network. The GBH dataset is composed of optical remote sensing images from PLANET, height maps in the form of nDSMs, and building footprint maps. The nDSMs are generated by processing open LiDAR point cloud observations from the authorities. First, the point clouds are de-noised, then the height values of all points and the height values of ground points are rasterized into DSMs and digital terrain models (DTMs), respectively. Finally, the normalized height is obtained by simply subtracting DTMs from the corresponding DSMs. Besides, building footprint maps are included in the dataset for testing in this paper. The dataset of the current version covers 19 diverse cities around the world and the period from 2013 to 2021. With a patch size of 256\(\times\)256 and a GSD of 3 m, the dataset is delivered as 20,532 patches, divided into training, validation, and test sets, with 14,971, 3660, and 1901 patches, respectively. 
Apart from the 19 cities, three cities, Los Angeles, Sao Paulo, and Guangzhou, with 5787 patches, 108 patches, and 1006 patches, respectively, are left out for testing only. It should be noted that only the number of floors for each building is available in Guangzhou, which is converted into building-wise height by assuming a 3 m floor height. #### Iv-A3 ISPRS Vaihingen ISPRS Vaihingen dataset [57, 58] contains aerial orthophotos in IRRG bands, nDSMs generated from LiDAR point clouds, and the corresponding semantic labels, in 33 tiles, with a GSD of 0.09 m. Due to GPU memory limits, they are cropped into patches of size 256\(\times\)256, randomly split into training, validation, and test sets, with 1209, 279, and 248 data samples. ### _Evaluation Metrics_ To evaluate and compare the performances of different models, the predictions are evaluated in terms of RMSE, RMSE-M, RMSE-NM, and RMSE-B. While RMSE is the pixel-wise root mean square error for all pixels, RMSE-M measures the RMSE for only building pixels, and RMSE-NM measures the RMSE for only non-building pixels. To further evaluate the capability of the models to generate LoD-1 building models, the building-wise RMSE, denoted by RMSE-B, is computed with building-wise predicted values and building-wise ground truth values, where the building-wise values are defined as the median of the height values for each building instance represented by one connected component in the building footprint maps. ### _Competitors_ The proposed methods are compared to mainstream FCN-based networks, i.e., SegNet [26], FCN [25], U-Net [27], and Efficient U-Net [28]. Methods in this category are mostly designed for semantic segmentation tasks. To adapt them for height estimation, the final activation layers for classification, e.g., Sigmoid or Softmax activation, are removed. The output from the last convolutional layer is taken as the predicted height values. Besides, five networks specifically designed for monocular height estimation are tested for comparison, among which IM2HEIGHT [29], Amirkolaee _et al._[30], and PLNet [32] are FCN networks taking the problem as a regression task; DORN [48, 49] and Sun _et al._[42, 50] convert the regression task into a classification one, with different bin discretization strategies. The architectures of these networks remain unchanged. ### _Implementation Details_ The backbone network is built on U-Net [27] and EfficientNet [59] with [60] as the decoder, which gives decoded features of five levels, \(\{\mathbf{F}_{1},\mathbf{F}_{2},\mathbf{F}_{3},\mathbf{F}_{4},\mathbf{F}_{5}\}\), among which \(\{\mathbf{F}_{2},\mathbf{F}_{3},\mathbf{F}_{4},\mathbf{F}_{5}\}\) are fed into the rest of the network. For the HTC-AdaBins module, the features are divided into patches of size 4, the number of bins is fixed as 256, and 256 tokens are selected to generate foreground and background embeddings, which complies with the output channel number of the convolutional layer in the local branch. For the hybrid regression process, Gaussian distributions are chosen as the reference distributions for foreground pixels, and uniform distributions are assumed for background pixels. For the loss function, the loss components are weighted with the following factors: we set \(\mu_{1}=0.01\), \(\mu_{2}=\mu_{3}=1\), \(\lambda_{1}=0\) (\(\mathbf{F}_{1}\) is discarded), \(\lambda_{2}=0.125\), \(\lambda_{3}=0.25\), \(\lambda_{4}=0.5\), \(\lambda_{5}=1\). The values of the abovementioned hyperparameters are proven to be reasonable through experiments.
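To make the distribution-based constraint of Section III-D4 concrete, the following is a minimal sketch for a single foreground pixel, under the paper's simplifying assumption that the GT value sits at the center of its bin. The function names and toy numbers are our own, not the released code.

```python
import torch

# Sketch of the DC for one foreground pixel (Eqs. 19-22): solve sigma from the
# predicted mode probability, then build the Gaussian reference bin probabilities.
def dc_reference_probs(edges, pred_probs, gt):
    # edges: (N+1,) sorted bin edges; pred_probs: (N,); gt: 0-dim GT height
    idx = torch.searchsorted(edges, gt.reshape(1)).item() - 1  # bin index of GT
    m = max(0, min(idx, len(edges) - 2))
    p_mode = pred_probs[m]
    width = edges[m + 1] - edges[m]
    # Eq. (20); note a small p_mode yields a very large sigma, the failure
    # mode discussed in Section VI.
    sigma = width / (2 * 2**0.5 * torch.special.erfinv(p_mode))
    normal = torch.distributions.Normal(gt, sigma)
    return normal.cdf(edges[1:]) - normal.cdf(edges[:-1])      # Eq. (21)

def dc_loss(pred_probs, ref_probs, eps=1e-8):
    # KL(ref || pred), Eq. (22), written out to avoid API subtleties
    ref, pred = ref_probs.clamp_min(eps), pred_probs.clamp_min(eps)
    return (ref * (ref.log() - pred.log())).sum()

edges = torch.linspace(0.0, 100.0, 33)             # 32 bins
pred_probs = torch.softmax(torch.randn(32), dim=0)
gt = torch.tensor(41.7)
ref = dc_reference_probs(edges, pred_probs, gt)
loss = dc_loss(pred_probs, ref)
```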
The proposed networks are trained with the AdamW optimizer, which is a common optimizer used for training ViTs, while the competitors are trained with the Adam optimizer. For all models, the learning rate is set to 1e-4, and early stopping is applied to avoid overfitting. Practically, if the performance of the network fails to improve for 10 epochs, the training terminates. For more implementation details, please refer to the released code.

## V Experimental Results

Our proposed networks gain better results compared to the existing methods, mostly by large margins. ### _Dfc19_ As shown in Table I, our proposed networks consistently achieve the best metrics on the DFC19 dataset. Among the competitors, Amirkolaee _et al._[30] provides the strongest baseline with the best RMSE on all pixels, U-Net [27] performs the best on building pixels, and DORN [49] demonstrates superior results on building instances. However, they are still behind the results from our proposed HTC-DC Nets. For instance, HTC-DC Net B7 outperforms them by margins of 0.7525 m, 4.2589 m, and 0.9049 m, respectively. In addition to better quantitative results, our HTC-DC Nets exhibit better-preserved minor structures and boundaries and more accurate predictions. As shown in the first qualitative result in Fig. 5, while other networks predict height maps where the canopy textures are highly blurred, the HTC-DC Nets' predictions are the closest to the ground truth map. In the second example, a tall building is shot at an oblique angle, so the facade is captured in the image. The HTC-DC Nets better distinguish between facade surfaces and roofs, producing height maps consistent with the input images and with sharper building boundaries. Besides, the rooftop of the building should be smooth, as seen in the predictions from HTC-DC Nets and the ground truth, while other networks predict different heights for the two parts of the "L"-shaped building, and the heights are either overestimated or underestimated. ### _Gbh_ Generally, larger variabilities are observed on the GBH dataset, which is a more complex and challenging dataset (see Table II). Despite U-Net performing the best on all metrics among the competitors, it is still inferior to our proposed HTC-DC Nets, such as HTC-DC Net B7, by margins of 0.0924 m, 0.2007 m, 0.0507 m, and 0.1714 m, respectively. Note that Sun _et al._[42] fails to deliver reasonable outputs on the GBH dataset with collapsed classification outputs, probably due to the higher complexity of the dataset.

Fig. 5: Qualitative results of different models on DFC19. The maps are scaled to the same range.

As for the test results on unseen cities (see Table III), it is expected that the networks' performances will degrade in cities with significant domain shifts from the training cities. Given that the training cities are mostly located in Europe and North America, the performances on Sao Paulo and Guangzhou are remarkably worse. However, the absolute performance losses of HTC-DC Nets are relatively smaller than those of other networks. In the qualitative results presented in Fig. 6, satisfactory outputs are obtained by all the networks on the test set and Los Angeles. However, HTC-DC Nets excel in preserving the minor structures, such as the shapes of the complex buildings. Furthermore, while other networks tend to underestimate the heights of tall buildings, the predicted height value for the tallest building in the first visualization example by HTC-DC Net B5 closely aligns with the ground truth.
On Sao Paulo and Guangzhou, other networks show severely degraded performances. For example, in the image sample from Guangzhou, FCNs, Eff U-Nets, IM2HEIGHT, Amirkolaee _et al._, and PLNet generate height maps where the buildings are almost indiscernible, but our proposed HTC-DC Nets still perform well. ### _ISPRS Vaihingen_ As shown in Table IV, our proposed networks outperform the existing methods. The best-performing network, HTC-DC Net B7, surpasses the strongest competitor, U-Net, by margins of 0.1981 m, 0.1874 m, 0.1933 m, 0.1544 m, on RMSE, RMSE-M, RMSE-NM, and RMSE-B, respectively. In general, superior performance is observed on the ISPRS Vaihingen dataset in comparison to the DFC19 dataset, and results on the DFC19 dataset, in turn, are expected to be better than those on the GBH dataset. This observation may be attributed to the resolution differences and the resulting complexity changes between the three datasets. From the experiments, it is concluded that our proposed HTC-DC Nets are advantageous on datasets of various GSDs, namely, the GBH dataset of 3 m GSD, the DFC19 dataset of 1.33 m GSD, and the ISPRS Vaihingen dataset of 0.09 m GSD. ## VI Discussion ### _Classification-Regression Paradigm_ The proposed HTC-DC Nets employ the classification-regression paradigm to tackle the monocular height estimation task. The HTC-AdaBins module is responsible for predicting classes and their corresponding probabilities based on the input images. Then, the hybrid regression process combines the predicted classes and class probabilities to obtain the final predictions in the continuous output space. The classification phase differs from ordinary classification tasks in that the classes are quantitatively related to one another and their definitions vary across different images, which poses great challenges. Consequently, using a simple classification head yields suboptimal results. In the regression phase, the weighted average serves as the smoothing of the related classes. Taking only the values from the classification results for output leads to discrete and unrealistic output maps, as observed in the results from DORN. The need for continuous output maps justifies the introduction of the hybrid regression process. Previous works have predominantly followed the regression paradigm, where the height values are directly regressed. Besides, several works employing the classification paradigm convert the regression problem into a classification problem, bringing improvements in performance but often coming with manually introduced artifacts.

Fig. 6: Qualitative results of different models on GBH. The maps are scaled to the same range. Sun _et al._ fails and is, thus, not shown here.

In our experiments, we compare our proposed methods to existing works that follow these paradigms. While almost all the previous works follow the regression paradigm, DORN [49] and Sun _et al._[42] follow the classification paradigm. Our proposed networks outperform them by significant margins, highlighting the effectiveness of the classification-regression paradigm for monocular height estimation. ### _HTC-AdaBins_ Our proposed HTC-DC Nets are built upon U-Net [27] and EfficientNet [59]. By comparing the results of U-Net, Eff U-Net [28], and our proposed HTC-DC Nets, we can demonstrate the efficacy of the HTC-AdaBins module. Notably, our proposed HTC-DC Nets outperform U-Net and Eff U-Nets, particularly for building pixels.
This demonstrates that the adaptation to different input images, as well as the incorporation of local and holistic information, addresses the long-tailed effect, alleviates the underestimation issues, and enhances the performance of monocular height estimation for building pixels. ### _Ablation Studies_ Ablation studies are conducted to show the effectiveness of each design component, with the results reported on the GBH dataset. #### Iv-C1 Multi-level Early Injection Multi-level features are utilized, so height maps of different scales are predicted for supervision, allowing for supervisory signals to occur earlier in the network. In our proposed HTC-DC Nets, EfficientNet [59] serves as the backbone, and the decoder based on [60] generates feature maps at five levels, \(\{\mathbf{F}_{1},\mathbf{F}_{2},\mathbf{F}_{3},\mathbf{F}_{4},\mathbf{F}_{5}\}\). Typically, the low-level features, such as \(\mathbf{F}_{1}\), are too compact for accurate predictions and are discarded. We select features from \(\{\mathbf{F}_{2},\mathbf{F}_{3},\mathbf{F}_{4},\mathbf{F}_{5}\}\) and report their results. The results in Table V show that using features from all four levels yields superior building-related metrics, often ranking among the top two performers. Additionally, it leads to comparable RMSE for all pixels. These findings indicate that fusing features from all stages enhances the networks' performance, especially in improving the accuracy of building height predictions. #### Iv-C2 Head-Tail Cut To mitigate the negative impact of the majority background pixels on building height predictions, an HTC is employed. Here, we compare the results of networks with and without the HTC. The purpose of the HTC is to improve the models' ability to accurately predict building heights, particularly for tall buildings. Table VI illustrates the impact of the HTC on the models' performance. It is evident that the HTC contributes to the improvement in all metrics. Fig. 7 presents the distribution of RMSE based on pixel ground truth heights and building ground truth heights. It demonstrates that the models' performance improves significantly for both pixels and buildings with higher values as a result of the HTC. This indicates that the HTC is beneficial for areas where the ground truth heights are higher, leading to improved overall performance. As a consequence, the error distribution is "squeezed" toward 0 m, leading to a higher proportion of buildings with absolute errors smaller than 5 m. Furthermore, regarding the HTC accuracy, since the HTC involves a nearly balanced binary classification task due to the extreme distribution, the classification accuracy is relatively high, as shown in Table VI. Additionally, Fig. 8 presents some visualization results from the HTC, where the predictions are close to the ground truth foreground maps. In areas with few non-building ground objects, such as vegetation, the predicted foreground map closely matches the corresponding building footprint maps. #### Iv-C3 Distribution Constraints as Supervision From a probabilistic perspective, the weighted average in Eqn. 11 is equivalent to computing the expectation value of the underlying height value distribution. Without any constraints, the distribution could be arbitrary, which means the predicted value can often deviate from the mode of the distribution.
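A toy numerical example (our own numbers, purely illustrative) makes this deviation concrete: with a skewed bin distribution, the expectation used in Eqn. 11 lands far from the most probable bin center.

```python
import torch

# With a skewed bin distribution, expectation and mode disagree badly.
centers = torch.tensor([2.5, 7.5, 12.5, 17.5, 80.0])    # bin centers in meters
probs   = torch.tensor([0.40, 0.25, 0.15, 0.10, 0.10])  # skewed bin probabilities
print(float((probs * centers).sum()))   # expectation (Eq. 11): 14.5 m
print(float(centers[probs.argmax()]))   # mode bin center: 2.5 m
```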
However, when assuming symmetric unimodal distributions, e.g., a Gaussian distribution or a Laplace distribution, the predictions should align precisely with the bins where the ground truth values lie. The choice of distribution assumption depends on which bins are expected to contribute to the final prediction. If the bins closer to the ground truth bins are supposed to be the primary contributors, then symmetric unimodal distributions are the natural assumption. In the case where only the ground truth bin is considered for the final prediction, a Delta distribution is assumed. The choice of distribution also determines the extent to which the bins contribute. Within the family of symmetric unimodal distributions, the main difference lies in the sharpness of the peaks, which represents the margin between the mode probability and the probabilities of the surrounding bins. If all supporting bins are expected to contribute equally, a uniform distribution is assumed. In the ablation study, four different distributions are implemented to demonstrate the optimality of using Gaussian distributions for the foreground and uniform distributions for the background, in terms of selecting bin contributors and determining their contribution amount. The four distributions used are Gaussian, Laplace, Delta, and uniform distributions (refer to Fig. 9). Among these, the Delta distribution has the sharpest and narrowest peak, while the uniform distribution has the smoothest and broadest peak. As a complement to Section III-D4, the equation for the derivation of the scale parameter from the mode probability of a Laplace distribution \(h\sim\mathcal{L}(h|\tilde{h},b)=\frac{1}{2b}\exp(-\frac{|h-\tilde{h}|}{b})\) is provided without proof as \[b=-\frac{e_{m+1}-e_{m}}{2\ln(1-P_{m})}. \tag{29}\] As there is always a compromise between the predictions for building pixels and non-building pixels, the experiments are evaluated in three respects. The results are presented in Table VII for building pixels, Table VIII for non-building pixels, and Table IX for all pixels. The indicator of a well-performing distribution is its ability to consistently bring improvements compared to experiments without any distribution constraint, which is represented by the column "# of Improvements" in each table. Based on the results, the combination of Gaussian for foreground and uniform for background yields the largest number of improvements on building pixels (7) and all pixels (2). Considering the buildings are of greater interest, this combination is selected as the final configuration.

Fig. 8: Results from the head-tail cut and the corresponding building footprint masks. The predicted foreground maps are close to the ground truths. For areas with few non-building ground objects (as in the second example), the predictions comply with the corresponding building footprint maps.

Fig. 7: RMSE distribution shows that the head-tail cut helps mitigate the long-tailed effect. From top to bottom: pixel GT height distribution, pixel RMSE vs. pixel GT height, building GT height distribution, building RMSE vs. building GT height, and building height error distribution.

Fig. 9: Distributions considered in the ablation study. The same mode probability is assumed for comparison.

We argue that the choices of distributions have a great effect on the final performances; however, it is hard to analytically decide which distributions to use. Fig. 10 visualizes the predicted bin probabilities from networks without and with the DC.
Visually, the application of the distribution constraints has a subtle effect on the predicted height maps. However, when examining the bin probability graphs, it is evident that without constraints, the predicted probabilities are relatively small and disorganized, and the predicted values result from a wider range of bin centers. After the constraints are applied, the predicted probabilities are pushed toward the reference bin probabilities derived from the assumed underlying distribution. This indicates that the constraints effectively regularize the bin probabilities and align them with the bins near the ground truth values, as assumed. The bin probability patterns with constraints result in improvements to the hybrid regression results. It is important to note that there are some failure cases when the mode probabilities are relatively small, leading to extremely large derived standard deviations. This causes the assumed distributions to approach uniform distributions. This phenomenon is likely due to domain shifts, as these failure cases occur more frequently in certain cities. ## VII Conclusion We propose HTC-DC Net, a network for predicting heights from single remote sensing images. The proposed network utilizes a classification-regression paradigm with a ViT to incorporate holistic features and local features. The regression phase with hybrid regression acts as a smoothing process for the classification phase conducted by the HTC-AdaBins module. With the DCs, the height predictions are efficiently regularized. Besides, to combat the long-tailed distribution problems, a novel HTC is conducted to separate the foregrounds from the backgrounds for different treatments. Experiments show that our proposed HTC-DC Net achieves state-of-the-art performance. Despite the impressive results given by our proposed HTC-DC Net, the domain shifts between different cities are still challenging for large-scale applications. They lead to performance drops, especially for cities with distinct urban morphologies. Therefore, further works could be done to address the domain shifts by applying domain generalization techniques. ## Acknowledgment This work was supported by the Helmholtz Association's Initiative and Networking Fund on the HAICORE@KIT partition and the HAICORE@FZJ partition. The authors would like to also thank Y. Cao for easier access to building height data in China, and S. Xing for providing the code of PLNet [32].
2309.08874
Generalised Whittaker models as instances of relative Langlands duality
The recent proposal by Ben-Zvi, Sakellaridis and Venkatesh of a duality in the relative Langlands program leads, via the process of quantization of Hamiltonian varieties, to a duality theory of branching problems. This often unexpectedly relates two a priori unrelated branching problems. We examine how the generalised Whittaker (or Gelfand-Graev) models serve as the prototypical example for such branching problems. We give a characterization, for the orthogonal and symplectic groups, of the generalised Whittaker models possibly contained in this duality theory. We then exhibit an infinite family of examples of this duality, which, provably at the local level via the theta correspondence, satisfy the conjectural expectations of duality.
Wee Teck Gan, Bryan Wang Peng Jun
2023-09-16T04:37:03Z
http://arxiv.org/abs/2309.08874v2
# Generalised Whittaker models as instances of relative Langlands duality ###### Abstract. The recent proposal by Ben-Zvi, Sakellaridis and Venkatesh of a duality in the relative Langlands program leads, via the process of quantization of Hamiltonian varieties, to a duality theory of branching problems. This often unexpectedly relates two _a priori_ unrelated branching problems. We examine how the generalised Whittaker (or Gelfand-Graev) models serve as the prototypical example for such branching problems. We give a characterization, for the orthogonal and symplectic groups, of the generalised Whittaker models possibly contained in this duality theory. We then exhibit an infinite family of examples of this duality, which, provably at the local level via the theta correspondence, satisfy the conjectural expectations of duality. ###### Contents

* 1 Introduction
* 2 Nilpotent orbits and generalised Whittaker models
* 3 Hamiltonian spaces and quantization
* 4 Relative Langlands duality
* 5 Hyperspherical Whittaker models
* 6 Theta correspondence
* 7 'Hook-type' partitions
* 8 Duality under symplectic reduction
* 9 Exceptional partitions

## 1. Introduction One of the central themes of the relative Langlands program is to characterize the non-vanishing of certain periods of automorphic forms on a reductive group \(G\) (relative to a subgroup \(H\) of \(G\)), and when the period is non-zero, to relate it to certain (special values of) automorphic L-functions. The corresponding problem at the local level is the \(H\)-distinction problem: the classification of irreducible smooth representations of \(G\) which are \(H\)-distinguished. ### 1.1. Spherical varieties and [SV] More precise predictions for these problems were laid out in [SV] in the case when \(H\) is a spherical subgroup of \(G\), so that \(X=H\backslash G\) is a (\(G\)-homogeneous) _spherical_ variety. The main reason for singling out spherical subgroups is the expectation that the spectral decomposition in question will be multiplicity-free (or at least has finite multiplicities). In the spirit of the Langlands philosophy, [SV] associates to the spherical variety \(X\) the following dual data: * a Langlands dual group \(X^{\vee}\) and a map \[\iota_{X}:X^{\vee}\times\operatorname{SL}_{2}(\mathbb{C})\longrightarrow G^{\vee}.\] * a (graded) finite-dimensional (typically) symplectic representation \(V_{X}\) of \(X^{\vee}\). It was then conjectured that the \(X\)-distinguished representations (of Arthur type) are those whose A-parameters factor through the map \(\iota_{X}\), thus making precise the group from which the Langlands functorial lifting originates. The representation \(V_{X}\) of \(X^{\vee}\) is the main ingredient allowing one to form the automorphic L-function which controls the relevant period. For a more detailed discussion of this, the reader can consult [SV] or the introduction of [GaWa]. There are, however, plenty of important examples which appear to fit into the framework of the relative Langlands program, but which do not arise from the usual class of spherical varieties. One example is the Bessel and Fourier-Jacobi models [GGP], or more generally, generalised Whittaker models with a Whittaker-twisted component arising from a nilpotent orbit in \(G\). Another example is Howe duality (or theta correspondence) [Sa2], concerning the decomposition of the Weil representation which may be thought of as arising from a symplectic vector space.
In each of these problems, one encounters a natural \(G\)-module whose spectral decomposition is multiplicity-free, but the \(G\)-module is not of the form \(C^{\infty}(X)\) with \(X\) a spherical variety. ### Hyperspherical varieties and [BZSV] The above considerations led Ben-Zvi, Sakellaridis and Venkatesh to investigate the natural setting for the relative Langlands program. Partly motivated by the work of Kapustin-Witten [KW] interpreting geometric Langlands duality as an electric-magnetic duality of (four-dimensional) topological quantum field theories (TQFTs) and the work of Gaiotto-Witten [GaWi] on the boundary conditions of these TQFTs, they propose in their recent paper [BZSV] a broader framework for the relative Langlands program. According to their new proposal, the basic objects considered by the relative Langlands program should be a class of _Hamiltonian \(G\)-varieties \(M\)_ called _hyperspherical varieties_. From this point of view, instead of considering spherical varieties \(X\), one should consider the cotangent variety \(M=T^{*}(X)\). By the process of quantization (broadly construed), these hyperspherical \(G\)-varieties give rise to unitary \(G\)-representations whose spectral decomposition is what the relative Langlands program should be concerned with. This point of view reconnects us with the classical philosophy of (geometric) quantization as a means to study the representation theory of Lie groups, a process which itself has its roots in the development of quantum mechanics. We recall the basic definition of a hyperspherical \(G\)-variety in Section 4. A key result shown in [BZSV] is a structure theorem for such varieties. It turns out that any hyperspherical \(G\)-variety can be built out of the following initial data: * a map \[\iota:H\times\operatorname{SL}_{2}\longrightarrow G\] with \(H\subset Z_{G}(\iota(\operatorname{SL}_{2}))\) a spherical subgroup; * a finite-dimensional symplectic representation \(S\) of \(H\). Given these initial data, the corresponding hyperspherical \(G\)-variety \(M\) is built up by the process of 'Whittaker induction' of the symplectic \(H\)-vector space \(S\) along the homomorphism \(\iota\), which is an instance of Hamiltonian reduction a la Marsden-Weinstein. The quantizations of these hyperspherical \(M\)'s capture the aforementioned examples of the space of smooth functions on spherical varieties, the generalised Whittaker models and the Weil representation. In fact, these account for the main prototypical examples of the quantizations of hyperspherical varieties, with the general case built up by the amalgam of these three cases. Thus, this enlarged framework of the relative Langlands program captures all the known examples that one would like to include. ### BZSV Duality Observe that the initial data \[(\iota:H\times\operatorname{SL}_{2}\to G,S)\] used in the construction of a hyperspherical variety is very similar to the key data \[(\iota_{X}:X^{\vee}\times\operatorname{SL}_{2}\to G^{\vee},V_{X})\] used in the formulation of the conjecture of [SV] recalled above. If one were to apply the process of Whittaker induction to the latter data, one would get a hyperspherical \(G^{\vee}\)-variety \(M^{\vee}\) (over \(\mathbb{C}\)). Now another novel realization in [BZSV] is that the conjecture of [SV] on the classification of \(X\)-distinguished representations can be elegantly reformulated in terms of \(M^{\vee}\).
Namely, the \(X\)-distinguished representations (of Arthur type) are those whose A-parameters have nonempty fixed point set on \(M^{\vee}\). This suggests that not only should the basic objects in the relative Langlands program be these hyperspherical \(G\)-varieties \(M\)'s, but the dual objects describing the solution of the spectral problem arising from \(M\) should also be hyperspherical varieties, for the Langlands dual group \(G^{\vee}\). Pursuing this train of thought further, [BZSV] suggested that there should exist an involutive theory of _duality_ of such hyperspherical varieties, \[G\circlearrow M\longleftrightarrow M^{\vee}\circlearrow G^{\vee},\] relating two _a priori_ unrelated instances of the relative Langlands program, namely the spectral problems associated to the quantizations of \(M\) and \(M^{\vee}\). From the viewpoint of the geometric Langlands program, they view this proposed relative Langlands duality as a classical manifestation of a duality of boundary conditions of the TQFTs arising in the work of Kapustin-Witten and Gaiotto-Witten. However, there is as yet no firm definition of this purported theory of duality. As an instance of this purported duality, the trivial \(G\)-space (consisting of a single point) is dual to the Whittaker (twisted) cotangent bundle of \(G^{\vee}\). Another striking instance of duality is that the hyperspherical variety underlying the branching problem occurring in the GGP conjecture [GGP] is dual to that underlying the theta correspondence, which is just a symplectic vector space acted upon by the corresponding reductive dual pair (Remark 7.12). Let us remark also that it is expected in [BZSV] that the duality theory should extend to a much wider class of Hamiltonian spaces, such as non-smooth spaces, or spaces which are not varieties (e.g. stacks, or derived schemes). In this paper, we restrict ourselves to the definition of 'hyperspherical' as given in [BZSV], for which there exists a reasonable structure theory and formulation of expectations for the duality. ### The result of this paper In this paper, we consider mainly the special case of a hyperspherical variety \(M\) whose initial data satisfies \[H=Z_{G}(\iota(\operatorname{SL}_{2}))\quad\text{and}\quad S=0.\] (We also consider \(S\neq 0\) in some cases, namely when the associated nilpotent conjugacy class, as below, is non-even. The choice of \(S\) is largely dictated by the choice of nilpotent conjugacy class.) Such an \(M\) is thus determined by a homomorphism \[\iota:\operatorname{SL}_{2}\longrightarrow G\] which, by the Jacobson-Morozov theorem, is associated to a nilpotent conjugacy class \[e=d\iota\left(\begin{array}{cc}0&1\\ 0&0\end{array}\right)\in\mathfrak{g}=\operatorname{Lie}(G).\] The obtained hyperspherical variety \(M_{e}\) can be described more explicitly as \[M_{e}=((f+\mathfrak{g}^{e})\cap\mathfrak{h}^{\perp})\times^{H}G\] where \[f=d\iota\left(\begin{array}{cc}0&0\\ 1&0\end{array}\right),\qquad\mathfrak{g}^{e}=\operatorname{Ker}(\operatorname{ad}(e))\quad\text{and}\quad\mathfrak{h}=\operatorname{Lie}(H).\] Thus, \(M_{e}\) is built from the Slodowy slice associated to \(e\), and the corresponding quantization \(\Pi_{e}\) of \(M_{e}\) is an instance of the generalized Whittaker (or Gelfand-Graev) models. After this preparation, we can describe the main results of this paper: (a) Our first result gives a characterization of those \(e\)'s for the orthogonal and symplectic groups, which could possibly give rise to hyperspherical varieties.
The precise statements can be found in Theorem 5.5 and Theorem 5.6. In fact, the same upper bound for the possible \(e\)'s holds, regardless of the choice of \(S\). For example, an infinite family of such \(e\)'s consists of those whose corresponding partitions have Young diagrams of hook type. The corresponding generalised Whittaker models are the so-called Bessel and Fourier-Jacobi models that one encounters in the GGP conjecture.

(b) Our second result determines the hyperspherical dual \(M_{e}^{\vee}\) of \(M_{e}\), for some of those \(e\)'s in (a), in particular for all those \(e\)'s of hook type. The precise statements can be found in Theorem 7.2 and Theorem 7.6. For these \(e\)'s of hook type, it turns out that \[M_{e}^{\vee}\cong M_{e^{\vee}}\] for some \(e^{\vee}\) which is also of hook type.

Because there is no formal definition of the duality \(M\longleftrightarrow M^{\vee}\), let us explain what we mean by (b) above. In this paper, when we say that a hyperspherical \(G\)-variety \(M\) with associated data \((\iota:H\times\operatorname{SL}_{2}\to G,S)\) is dual to a hyperspherical \(G^{\vee}\)-variety \(M^{\vee}\) with associated data \((\iota^{\dagger}:H^{\dagger}\times\operatorname{SL}_{2}\to G^{\vee},S^{\dagger})\), we mean that the following two statements hold:

* The irreducible representations of \(G\) of Arthur type which intervene in the spectral decomposition of the quantization \(\Pi_{M}\) of \(M\) have A-parameters factoring through the map \(\iota^{\dagger}\);
* The irreducible representations of \(G^{\vee}\) of Arthur type which intervene in the spectral decomposition of the quantization \(\Pi_{M^{\vee}}\) of \(M^{\vee}\) have A-parameters factoring through the map \(\iota\).

In other words, in establishing the results highlighted in (b) above, we are solving a pair of branching problems and showing that their answers can be described in terms of each other. The main tool used in our proof of (b) above is the theta correspondence, and in particular a result of Gomez and Zhu ([GZ], [Zh]) which relates generalised Whittaker models on the two members of a dual pair via the theta correspondence; this is a manifestation of the more general principle that the theta correspondence often relates a period on one member of a dual pair to a period on the other. Finally, let us remark that there is nothing essential about the choice to focus only on orthogonal and symplectic groups in this paper; one could obtain similar results for the general linear and exceptional groups by similar methods, that is, via the unitary and exceptional theta correspondences respectively.

### Duality and symplectic reduction

The proof of our main results above (using the theorems of Gomez-Zhu) has an underlying geometric interpretation, which is a manifestation of the following principle that we learned from a suggestion of Venkatesh:

_Hyperspherical duality 'commutes' with symplectic reduction._

More precisely, one expects a diagram of the form

\[\begin{array}{ccc}M_{1}^{\prime}&\overset{\text{duality}}{\longleftrightarrow}&M_{2}^{\prime}\\ \Big\downarrow&&\Big\downarrow\\ M_{1}&\overset{\text{duality}}{\longleftrightarrow}&M_{2}\end{array}\]

in which the vertical arrows are given by symplectic reduction (with \(\{0\}\), or more generally with suitable shifts thereof).

As a somewhat trivial illustration of this principle, consider for instance the hyperspherical duality in the 'group case' [BZSV], between the \((G\times G)\)-variety \(T^{*}G\) and the \((G^{\vee}\times G^{\vee})\)-variety \(T^{*}G^{\vee}\) (with one factor twisted by the Chevalley involution).
Now symplectic reduction of \(T^{*}G\) gives the trivial \(G\)-variety, whose dual is the Whittaker cotangent bundle for \(G^{\vee}\), which can be obtained by Whittaker reduction of \(T^{*}G^{\vee}\). As we will explain in Section 8, the proof of our main results above can in fact be interpreted as a quantization of this geometric principle, taking \(M_{1}^{\prime},M_{2}^{\prime}\) as the hyperspherical dual pair associated with the GGP problem and the theta correspondence.

### Organisation of the paper

The paper is organised as follows. In Section 2 we set up the necessary preliminaries on nilpotent orbits and associated objects, and introduce the generalised Whittaker models. After reviewing preliminaries regarding Hamiltonian spaces and quantization in Section 3, in Section 4 we introduce the relative Langlands duality of hyperspherical varieties, and explain how the generalised Whittaker models fit into the framework of hyperspherical varieties. In Section 5 we give a characterization of the generalised Whittaker models which could possibly satisfy the hyperspherical assumption. The rest of the paper is devoted to studying the cases which arise from Section 5: in Section 6, we review the theory and results surrounding the theta correspondence, in preparation for Section 7, where we study the 'hook-type' generalised Whittaker models, which provide a class of examples of hyperspherical dual pairs. In Section 8, we consider related hyperspherical dual pairs (for the group \(G\times H\)) [FU], and in doing so examine how hyperspherical duality interacts with the operation of symplectic reduction. Finally, in Section 9 we make some brief remarks on the exceptional partitions which arise from the characterization of Section 5.

### Notation and conventions

Throughout, let \(F\) be a (non-Archimedean) local field of characteristic \(0\), and fix a non-trivial unitary character \(\psi:F\to\mathbb{C}^{\times}\). \(G\) will always denote a (split) reductive group, and we work with split forms of classical groups, unless stated otherwise. Unless otherwise specified we work throughout with smooth admissible representations. We denote by \(\operatorname{ind}\) and \(\operatorname{Ind}\) the (unnormalized) compact induction and induction of representations respectively. We work over \(\mathbb{C}\) (or an algebraically closed field of characteristic zero) when dealing with hyperspherical varieties and their duals, and work over \(F\) otherwise.

### Acknowledgements

We would like to thank Yiannis Sakellaridis and Akshay Venkatesh for illuminating discussions about [BZSV] during the course of this work and a number of very helpful comments on the paper. The first author thanks Ivan Losev for pointing out the reference [FU] at the Nisyros conference 2023. The second author would also like to thank Nhat Hoang Le for some helpful conversations while the work on this paper was ongoing. W.T. Gan is partially supported by a Singapore government MOE Tier 1 grant R-146-000-320-114 and a Tan Chin Tuan Centennial Professorship.

## 2. Nilpotent orbits and generalised Whittaker models

### Preliminaries on nilpotent orbits

We first review preliminaries concerning nilpotent orbits (Dynkin-Kostant theory), mainly to fix notation and to highlight the similarities between the structure theory of hyperspherical varieties and the formation of generalised Whittaker models. A reference for the relevant theory is [CM]. Fix \(\kappa\), an \(\operatorname{Ad}(G)\)-invariant non-degenerate bilinear form on \(\mathfrak{g}\).
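For concreteness (one standard choice, recorded here only for illustration): if \(G\subseteq\operatorname{GL}(V)\) is an orthogonal or symplectic group, one may take for \(\kappa\) the trace form
\[\kappa(x,y)=\operatorname{tr}(xy),\qquad x,y\in\mathfrak{g}\subset\operatorname{End}(V),\]
which is \(\operatorname{Ad}(G)\)-invariant and non-degenerate in characteristic \(0\).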
Let \(\gamma=\{e,h,f\}\subset\mathfrak{g}\) be an \(\mathfrak{sl}_{2}\)-triple associated to a nilpotent orbit of \(\mathfrak{g}\).

_Remark 2.1_.: By the Jacobson-Morozov theorem (and other results of Kostant) [CM], there is a correspondence between (conjugacy classes of) \(\mathfrak{sl}_{2}\)-triples and nilpotent orbits; as such, in this paper, we will often refer to \(\mathfrak{sl}_{2}\)-triples and their corresponding nilpotent orbits interchangeably, with no confusion to be expected for the reader.

Under the adjoint \(\mathfrak{sl}_{2}\)-action, \(\mathfrak{g}\) decomposes into \(\mathfrak{sl}_{2}\)-weight spaces \[\mathfrak{g}_{j}=\{v\in\mathfrak{g}\mid\operatorname{ad}(h)v=jv\}\] for \(j\in\mathbb{Z}\). We have the parabolic \[\mathfrak{p}=\oplus_{j\geq 0}\mathfrak{g}_{j}=\mathfrak{l}\oplus\mathfrak{u},\] where \(\mathfrak{l}=\mathfrak{g}_{0}\) and \(\mathfrak{u}=\oplus_{j\geq 1}\mathfrak{g}_{j}\). Set \(\mathfrak{u}^{+}:=\oplus_{j\geq 2}\mathfrak{g}_{j}\). We get corresponding subgroups \(P=L\ltimes U\) and \(U^{+}\) of \(G\). Note that \[L=\{l\in G\mid\operatorname{Ad}(l)h=h\}\] is the stabiliser of \(h\). Denote the centraliser of \(\gamma\) by \[M_{\gamma}=\{l\in L\mid\operatorname{Ad}(l)e=e\}=\{g\in G\mid\operatorname{Ad}(g)e=e,\,\operatorname{Ad}(g)f=f,\,\operatorname{Ad}(g)h=h\},\] which is reductive. We define a character \(\chi_{\gamma,\psi}\) on \(U^{+}\) via \[\chi_{\gamma,\psi}(\exp u):=\psi(\kappa(f,u)),\quad\forall\ u\in\mathfrak{u}^{+}. \tag{2.1}\] Denote also \[\kappa_{f}(u):=\kappa(f,u).\] Since \(\psi\) is fixed, in what follows we will drop the subscript and simply write \(\chi_{\gamma}\).

### Classification of nilpotent orbits

Suppose now \(G\) is the isometry group of an \(n\)-dimensional vector space \(V\) equipped with an orthogonal or symplectic form \(B\) over \(F\). From an \(\mathfrak{sl}_{2}\)-triple as above, we obtain an \(\mathfrak{sl}_{2}\)-representation on \(V\) and the decomposition \(V=\oplus_{j=1}^{l}V^{(j)}\), where \[V^{(j)}=W_{j}^{\oplus a_{j}}\cong W_{j}\otimes V_{j}\] is the isotypic component of \(V\) for the irreducible \(j\)-dimensional representation \(W_{j}\) of \(\mathfrak{sl}_{2}\), and \(V_{j}\) is an \(a_{j}\)-dimensional multiplicity space. Recall, from standard \(\mathfrak{sl}_{2}\)-theory, that \(W_{j}\) is symplectic (resp. orthogonal) if \(j\) is even (resp. odd); fix corresponding \(\mathfrak{sl}_{2}\)-invariant forms \(A_{j}\) on \(W_{j}\). The form \(B\) induces a symplectic or orthogonal form \(B_{j}\) on the \(a_{j}\)-dimensional multiplicity space \(V_{j}\): the form \(B_{j}\) is symplectic if \(j\) is even and \(B\) is orthogonal, or if \(j\) is odd and \(B\) is symplectic; otherwise \(B_{j}\) is orthogonal. In fact, \(M_{\gamma}\) is (isomorphic to) the direct product of the isometry groups of the \((V_{j},B_{j})\); we denote \[M_{\gamma}\cong\prod_{j=1}^{l}G(V_{j},B_{j}).\]

The above furnishes a parameterisation of the nilpotent orbits in \(G\), by the datum of:

* the partition \(\lambda=[l^{a_{l}},\dots,1^{a_{1}}]\) of \(n\), and
* the forms on the multiplicity spaces \((V_{j},B_{j})\),

such that the \((V_{j},B_{j})\) are compatible with \(B\) in the above way; to be precise, this means that the \(B_{j}\) must be the forms that would be induced from \(B\) as above, or equivalently that \[\bigoplus_{j}(V_{j},B_{j})\otimes(W_{j},A_{j})\cong(V,B).\] In particular, if \(G\) is an orthogonal group, then even parts must occur with even multiplicity in \(\lambda\), and if \(G\) is symplectic, then odd parts must occur with even multiplicity in \(\lambda\).
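For illustration (a small worked example of this parameterisation): take \(G=\operatorname{Sp}_{4}\), so that \(B\) is symplectic and \(\dim V=4\). The partitions of \(4\) in which odd parts occur with even multiplicity are
\[[4],\quad[2,2],\quad[2,1,1],\quad[1^{4}],\]
corresponding respectively to the regular, subregular, minimal and zero nilpotent orbits (over an algebraic closure; over \(F\) one must further specify the forms \(B_{j}\)). For \(\lambda=[2,1,1]\), one has \(\dim V_{2}=1\) and \(\dim V_{1}=2\); since \(B\) is symplectic, \(B_{2}\) is orthogonal and \(B_{1}\) is symplectic, so that
\[M_{\gamma}\cong\operatorname{O}(V_{2},B_{2})\times\operatorname{Sp}(V_{1},B_{1})\cong\operatorname{O}_{1}\times\operatorname{Sp}_{2}.\]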
### Generalised Whittaker models

We now define the generalised Whittaker representations \(W_{\gamma}\) associated to the nilpotent orbit \(\gamma\) and the associated generalised Whittaker models. These are also called generalised Gelfand-Graev representations and models (in analogy with the finite field case), but we use the name Whittaker in keeping with the rest of the paper.

#### 2.3.1. Even nilpotent orbits

The most typical case we will consider is when \(U=U^{+}\), that is, when the nilpotent orbit under consideration is _even_.

**Definition 2.2**.: In this case, we denote \[W_{\gamma,\psi}:=\operatorname{ind}_{M_{\gamma}U}^{G}\chi_{\gamma} \tag{2.2}\] with the trivial \(M_{\gamma}\)-action on \(\chi_{\gamma}\). Denote also, for \(\pi\in\operatorname{Irr}(G)\), \[W_{\gamma,\psi}(\pi):=\operatorname{Hom}_{G}(\operatorname{ind}_{M_{\gamma}U}^{G}\chi_{\gamma},\pi^{\vee})\cong\operatorname{Hom}_{G}(\pi,\operatorname{Ind}_{M_{\gamma}U}^{G}\chi_{\gamma})\cong\operatorname{Hom}_{M_{\gamma}U}(\pi,\chi_{\gamma}).\] This is called the space of generalised Whittaker functionals of \(\pi\) (associated to \(\gamma\)).

For \(\gamma\) a regular nilpotent orbit, it is easy to check that the character \(\chi_{\gamma}\) of \(U\) is generic (in the sense that its stabiliser is as small as possible, among all unitary characters of \(U\)). In fact \(\chi_{\gamma}\) is generic for all even nilpotent orbits \(\gamma\).

#### 2.3.2. Non-even nilpotent orbits

If \(\mathfrak{g}_{1}\neq 0\), then we have an isomorphism \(\operatorname{ad}(f)|_{\mathfrak{g}_{1}}:\mathfrak{g}_{1}\longrightarrow\mathfrak{g}_{-1}\) coming from \(\mathfrak{sl}_{2}\)-theory, which allows us to transfer the non-degenerate pairing \(\kappa\) between \(\mathfrak{g}_{-1}\) and \(\mathfrak{g}_{1}\) to a symplectic structure \(\kappa_{1}\) on \(\mathfrak{g}_{1}\). Precisely, it is as follows: \[\kappa_{1}(v,w)=\kappa(\operatorname{ad}(f)v,w)=\kappa(f,[v,w]),\qquad\text{for all }v,\,w\in\mathfrak{g}_{1}. \tag{2.3}\] Note that \(\mathfrak{u}/\mathfrak{u}^{+}\) hence carries an \(M_{\gamma}\)-invariant symplectic form \(\kappa_{1}\).

Now consider \(H_{\gamma}\) (not to be confused with the use of \(H\) for a subgroup of \(G\)), the Heisenberg group associated to the symplectic space \((\mathfrak{g}_{1},\kappa_{1})\). That is, \(H_{\gamma}=\mathfrak{g}_{1}\times F\), with \(F\) central, and \((v,0)(w,0)=(v+w,\kappa_{1}(v,w)/2)\) for all \(v\), \(w\in\mathfrak{g}_{1}\). We have a group homomorphism \(\alpha_{\gamma}:U\to H_{\gamma}\) given by \[\alpha_{\gamma}(\exp v\exp u)=(v,\kappa(f,u)),\qquad\text{for all }v\in\mathfrak{g}_{1},\,u\in\mathfrak{u}^{+}.\] Denoting \(U^{\prime}=\exp(\ker(\kappa_{f}|_{\mathfrak{u}^{+}}))=\ker(\alpha_{\gamma})\), we may think of this as exhibiting the structure of \(U/U^{\prime}\) as a Heisenberg group with center \(U^{+}/U^{\prime}\).

If \(\omega_{\psi}\) is the unique smooth, irreducible, unitary representation of \(H_{\gamma}\) on which the center acts by \(\psi\) (unique by the Stone-von Neumann theorem), then note that the action of \(U^{+}/U^{\prime}\) is given by \[(\exp u)v=(0,\kappa(f,u))v=\psi(\kappa(f,u))v=\chi_{\gamma}(\exp u)v\qquad\text{for all }u\in\mathfrak{u}^{+},v\in\omega_{\psi},\] i.e. it acts by the character \(\chi_{\gamma}\).
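As a sketch of the prototypical non-even case (taken up in earnest in Section 7): let \(G=\operatorname{Sp}_{2n}\) and let \(\gamma\) correspond to the hook-type partition \([2,1^{2n-2}]\). Then \(\mathfrak{u}^{+}=\mathfrak{g}_{2}\) is one-dimensional, \((\mathfrak{g}_{1},\kappa_{1})\) is a symplectic space of dimension \(2n-2\), and \(U^{\prime}\) is trivial, so that \(U\) is itself a Heisenberg group; here \(M_{\gamma}\cong\operatorname{O}_{1}\times\operatorname{Sp}_{2n-2}\), and the resulting models are the classical Fourier-Jacobi models.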
Since \(M_{\gamma}\) preserves the symplectic form \(\kappa_{1}\), in a similar manner to how the Weil representation is constructed from the representation \(\omega_{\psi}\) of the Heisenberg group, we have here a representation on \(\omega_{\psi}\) of some central cover of \(M_{\gamma}\), which we denote by \(\tilde{M}_{\gamma}\). It is the pre-image of \(M_{\gamma}\) in \(\operatorname{Mp}(\mathfrak{g}_{1})\). For a genuine representation \(\rho\) of \(\tilde{M}_{\gamma}\) (with \(U\) acting trivially), \(\rho\otimes\omega_{\psi}\) descends to an actual representation of \(M_{\gamma}U\) (if we are working with the metaplectic cover as a double cover).

**Definition 2.3**.: We denote \[W_{\gamma,\rho,\psi}:=\operatorname{ind}_{M_{\gamma}U}^{G}\rho\otimes\omega_{\psi} \tag{2.4}\] and \[W_{\gamma,\rho,\psi}(\pi):=\operatorname{Hom}_{G}(\operatorname{ind}_{M_{\gamma}U}^{G}\rho\otimes\omega_{\psi},\pi^{\vee})\cong\operatorname{Hom}_{G}(\pi,\operatorname{Ind}_{M_{\gamma}U}^{G}\rho\otimes\omega_{\psi})\cong\operatorname{Hom}_{M_{\gamma}U}(\pi,\rho\otimes\omega_{\psi}).\] This is called the generalised Whittaker model of \(\pi\) (associated to \(\gamma\) and \(\rho\)).

More generally, \(\rho\) may be a (genuine) representation of \(\tilde{H}\) for \(H\) a reductive subgroup of \(M_{\gamma}\), and we may form the corresponding \(W_{\gamma,\rho,\psi}\) and \(W_{\gamma,\rho,\psi}(\pi)\) with \(H\) in place of \(M_{\gamma}\). In most applications, \(H\) will be a relatively big (e.g. finite index) subgroup of \(M_{\gamma}\).

Note that in the case of even nilpotent orbits (Section 2.3.1), we have \(\tilde{M}_{\gamma}=M_{\gamma}\) and \(\omega_{\psi}=\chi_{\gamma}\), and we have essentially taken \(\rho\) to be the trivial representation, so that we may henceforth use the same notation for both cases.

_Remark 2.4_.: The preceding discussion suggests that one may omit the representation \(\rho\) of \(\tilde{M}_{\gamma}\) and instead consider the generalised Whittaker model as a representation of \(G\times\tilde{M}_{\gamma}\), and indeed one commonly does so, for instance when dealing with the Bessel and Fourier-Jacobi models. See Remarks 5.10 and 7.12 for further discussion of this aspect.

Finally, since \(\psi\) is fixed throughout this paper, where there is no danger of confusion we will sometimes drop the subscript \({}_{\psi}\).

#### 2.3.3. The choice of \(\rho\)

Observe that in the even orbit case, there is a natural 'canonical' choice of \(\rho\): the trivial one. In the non-even case, one would ideally also like to have a 'canonical' choice of \(\rho\), and hence a 'canonical' choice of generalised Whittaker model. Conceptually, this should be achieved by choosing the 'smallest' possible \(\rho\), in the sense of Gelfand-Kirillov dimension.

It is probably not instructive to make such a choice explicit in full generality, since such a choice will depend, for instance, on whether the induced cover \(\tilde{M}_{\gamma}\) is split or not. Therefore, in this subsection, we will record such a choice in one especially pertinent case to be considered in this paper (the 'hook-type' case for symplectic groups, cf. Section 7). This is precisely the case where \(M_{\gamma}\) (or a subgroup \(H\subseteq M_{\gamma}\)) is isomorphic to the symplectic group \(\operatorname{Sp}(\mathfrak{g}_{1})\).
In this case, \(\rho\) will be taken to be the dual \(\omega_{\psi}^{\vee}\) of the Weil representation, a representation of the metaplectic cover \(\operatorname{Mp}(\mathfrak{g}_{1})\) (which will also allow us to work with the metaplectic cover as an \(S^{1}\)-cover), with again \(U\) acting trivially. One then has:

**Lemma 2.5**.: _As representations of \(M_{\gamma}U\),_ \[\omega_{\psi}^{\vee}\otimes\omega_{\psi}\cong\operatorname{ind}_{M_{\gamma}U^{+}}^{M_{\gamma}U}\chi_{\gamma},\] _with \(M_{\gamma}\) acting trivially on \(\chi_{\gamma}\). The isomorphism is given by formation of matrix coefficients:_ \[s^{\vee}\otimes s\mapsto\bigl{(}mu\mapsto\langle s^{\vee},mu\cdot s\rangle\bigr{)}\] _for all \(s^{\vee}\in\omega_{\psi}^{\vee},s\in\omega_{\psi},mu\in M_{\gamma}U\)._

Proof.: For the action of \(M_{\gamma}\), one simply notes the standard fact [Pr, Remark 2.11] that \[\omega_{\psi}^{\vee}\otimes\omega_{\psi}\cong\mathscr{S}(\mathfrak{g}_{1})\] as representations of \(\operatorname{Sp}(\mathfrak{g}_{1})\), where \(\mathscr{S}(\mathfrak{g}_{1})\) denotes the space of (locally constant) functions on \(\mathfrak{g}_{1}\) with compact support, and \(U/U^{+}\cong\mathfrak{u}/\mathfrak{u}^{+}\cong\mathfrak{g}_{1}\). Similarly, for the action of \(U\), this follows from [Li, Proposition 3.2.11]. It is straightforward to check that the respective isomorphisms for the \(M_{\gamma}\)- and \(U\)-actions are furnished by the same (natural) map of formation of matrix coefficients, as given above.

Therefore in this case we have \[W_{\gamma,\rho,\psi}\cong\operatorname{ind}_{M_{\gamma}U^{+}}^{G}\chi_{\gamma}, \tag{2.5}\] from which the similarity to the even orbit case is immediately apparent.

## 3. Hamiltonian spaces and quantization

In the next two sections, we introduce the theory of duality of hyperspherical varieties as set out in [BZSV]. We begin in this section by recalling the necessary preliminaries from symplectic geometry, as well as the classical philosophy of geometric quantization as a bridge between symplectic geometry and representation theory.

_Remark 3.1_.: We will be dealing with unitary representations in this section only, to illustrate the philosophy of quantization. For the rest of this paper, we will work in the analogous setting of smooth representations (hence all inductions are taken to be smooth, etc.).

### Hamiltonian spaces

We first review some preliminaries from symplectic geometry; one good reference is [CG, Chapter 1].

**Definition 3.2**.: (Hamiltonian \(G\)-spaces) A _Hamiltonian \(G\)-space_ (or \(G\)-variety) is a smooth, symplectic variety \(M\) with a \(G\)-action (from the right, unless otherwise specified) by symplectomorphisms and a \(G\)-equivariant _moment map_ \[\mu:M\to\mathfrak{g}^{*}.\] The moment map \(\mu\) must satisfy the following:

* Each \(X\in\mathfrak{g}\) induces a vector field \(\rho(X)\) on \(M\) by 'differentiating' the \(G\)-action, which further induces a \(1\)-form on \(M\) by contracting with the symplectic form \(\omega\): \[Y\mapsto\omega(\rho(X),Y).\] On the other hand, \(X\) and \(\mu\) also define a \(1\)-form on \(M\) via \[d\bigl{(}m\mapsto(\mu(m))(X)\bigr{)}.\] These two \(1\)-forms must coincide.

**Definition 3.3**.: (Poisson bracket) Given two regular functions \(f_{1},f_{2}\) on \(M\), we define the Poisson bracket \(\{f_{1},f_{2}\}\) as follows: the two \(1\)-forms \(df_{1},df_{2}\) are the contractions with \(\omega\) of some (unique) vector fields \(X_{f_{1}},X_{f_{2}}\) respectively.
Then take \[\{f_{1},f_{2}\}:=\omega(X_{f_{1}},X_{f_{2}}).\] This makes the ring of regular functions on \(M\) a Poisson algebra.

**Example 3.4**.: Any symplectic vector space \((W,\langle-,-\rangle)\) is naturally a Hamiltonian \(\operatorname{Sp}(W)\)-space with the moment map \[\mu:w\mapsto\big{(}X\mapsto\frac{1}{2}\langle Xw,w\rangle\big{)}\quad\text{for }w\in W\text{ and }X\in\mathfrak{sp}(W).\]

**Example 3.5**.: Any cotangent bundle \(T^{*}X\) (for \(X\) a \(G\)-variety) is naturally a symplectic variety with the symplectic form \(\omega=d\lambda\), where \(\lambda\) is the tautological \(1\)-form pairing tangent and cotangent vectors. It is then naturally a Hamiltonian variety, with the moment map \[\mu:p\mapsto\big{(}Y\mapsto-\lambda(\rho(Y))|_{p}\big{)}\quad\text{for }p\in T^{*}X\text{ and }Y\in\mathfrak{g}.\]

### Quantization and examples

According to the classical philosophy of geometric quantization [GuSt], and as explained in [Ga], one may construct from each Hamiltonian \(G\)-space \(M\) a (unitary) representation of \(G\), which we call its quantization.

**Example 3.6**.: (Weil representation) Consider a symplectic vector space \(M:=W\) with \(G:=\operatorname{Sp}(W)\) acting on it. Choosing a polarisation \(W=X\oplus Y\) with \(X,Y\) Lagrangians, the Weil representation, which can be realised on \(L^{2}(Y)\), may be thought of as a quantization of the Hamiltonian \(\operatorname{Sp}(W)\)-space \(W\).

_Remark 3.7_.: (Anomaly) Note that the Weil representation is not a representation of \(\operatorname{Sp}(W)\) but of the metaplectic cover \(\operatorname{Mp}(W)\). In the language of [BZSV], this is because \(W\) has 'anomaly' (which can be detected via Betti or étale cohomology), and anomalous varieties are at present excluded from the expectations of duality of hyperspherical varieties. None of the new examples of hyperspherical duality that we will exhibit in this paper will be anomalous. One should still work with the quantization as a representation of the metaplectic cover where appropriate, focusing on the cases where the quantization descends to an actual representation of an algebraic group, as in the remarks before Definition 2.3.

**Example 3.8**.: (Cotangent bundles) The quantization of a cotangent bundle \(T^{*}X\), for \(X\) a \(G\)-variety, should be the space of functions \(L^{2}(X)\), as a unitary representation of \(G\).

### Symplectic reduction and induction

This philosophy further postulates that many standard operations in symplectic geometry correspond to standard operations in representation theory. We shall review two of the most pertinent ones below: symplectic reduction and symplectic induction, which correspond respectively to the formation of coinvariant spaces (or, more generally, multiplicity spaces) and to the induction of representations.

**Definition 3.9**.: (Symplectic reduction) The symplectic reduction of a Hamiltonian \(G\)-space \(M\) is defined as \[M\times^{G}_{\mathfrak{g}^{*}}\{0\}.\] The notation \(\times_{\mathfrak{g}^{*}}^{G}\) denotes the fiber product of \(M\) and \(\{0\}\) over \(\mathfrak{g}^{*}\) (via the moment maps), modulo the action of \(G\) on \(M\) (assuming the quotient exists as a scheme).

As explained in [10], quantization of symplectic reduction corresponds to taking \(G\)-coinvariant spaces. Furthermore, one may replace the trivial space \(\{0\}\) with a coadjoint orbit \(\mathscr{O}\subset\mathfrak{g}^{*}\) corresponding under quantization to an irreducible representation \(\rho\) of \(G\).
Quantization of the symplectic reduction then corresponds to taking the \(\rho\)-multiplicity space. Extending this idea further, given two Hamiltonian \(G\)-spaces \(M_{1}\) and \(M_{2}\), one may also consider the symplectic reduction of \(M_{1}\times M_{2}^{-}\) (where \(M_{2}^{-}\) is \(M_{2}\) with its symplectic form and moment map negated), which corresponds under quantization to the formation of Hom spaces [11].

**Definition 3.10**.: (Symplectic induction) We define the symplectic induction of a Hamiltonian \(H\)-space \(S\) from \(H\) to \(G\) as \[M:=S\times_{\mathfrak{h}^{*}}^{H}T^{*}G\cong(S\times_{\mathfrak{h}^{*}}\mathfrak{g}^{*})\times^{H}G. \tag{3.1}\]

Some remarks are in order:

* Here \(H\) acts on \(T^{*}G\) from the left, and we have the identification \(T^{*}G\cong\mathfrak{g}^{*}\times G\), where the moment map is the projection onto the first factor.
* The notation \(\times_{\mathfrak{h}^{*}}^{H}\) denotes the fiber product of \(S\) and \(T^{*}G\) over \(\mathfrak{h}^{*}\) (via the moment maps), modulo the relation \((sh,x)\sim(s,hx)\) for \(s\in S,h\in H,x\in T^{*}G\) (assuming the quotient exists as a scheme).
* The moment map for \(M\) is induced by the right (coadjoint) action of \(G\) on \(\mathfrak{g}^{*}\), sending \(\phi\mapsto\operatorname{Ad}(g^{-1})\phi\).

The quantization of symplectic induction corresponds to induction of representations from \(H\) to \(G\).

_Remark 3.11_.: With the operations of symplectic induction and reduction, one may readily formulate further analogues of other standard constructions in representation theory; for instance, symplectic analogues of Frobenius reciprocity were first studied in [12]. See [11] (in particular Theorem 3.4) for a detailed discussion. One may further hope that this can be formalised in the sense of forming a category of Hamiltonian spaces, and this is indeed possible in the setting of _shifted symplectic geometry_ [13], which involves the machinery of derived geometry. For our purposes, the classical constructions of symplectic induction and reduction are sufficient.

We now examine some important examples of quantizations of cotangent bundles to illustrate symplectic reduction and induction.

**Example 3.12**.: (Cotangent bundles) Suppose \(H\subset G\) are groups; then the cotangent bundle \(M:=T^{*}(H\backslash G)\) may be identified with the symplectic induction of the trivial \(H\)-space \(\{0\}\) from \(H\) to \(G\), which is \[\{0\}\times_{\mathfrak{h}^{*}}^{H}T^{*}G.\] Its quantization should be the (unitary) induction \(L^{2}(H\backslash G)=(L^{2}-)\mathrm{Ind}_{H}^{G}\mathbb{C}\). Most often, we consider the case where \(H\backslash G\) is a spherical variety.

The cotangent bundle \(T^{*}(H\backslash G)\) may also be thought of as the symplectic reduction of \(T^{*}G\) with respect to \(H\) (with \(H\) now acting from the right via \(g\mapsto h^{-1}g\)), which corresponds to taking the \(H\)-coinvariant space of the regular representation \(L^{2}(G)\) of \(G\).

**Example 3.13**.: (Twisted cotangent bundles) Suppose now \(H=N\) is a unipotent subgroup of \(G\). By shifting the moment map of the trivial \(N\)-space \(\{0\}\), we obtain twisted cotangent bundles \[M:=(\lambda+\mathfrak{n}^{\perp})\times^{N}G\to N\backslash G,\] which quantize to \[L^{2}(N,\psi\backslash G)=(L^{2}-)\mathrm{Ind}_{N}^{G}\psi=\{f:G\to\mathbb{C}\mid f(ng)=\psi(n)f(g)\}\] (the choice of \(\lambda\) corresponds to the choice of \(\psi\)).
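As a minimal concrete instance of Example 3.13 (a sketch, using the standard basis \(\{e,h,f\}\) of \(\mathfrak{sl}_{2}\)): take \(G=\operatorname{SL}_{2}\), let \(N=\exp(Fe)\) be the upper-triangular unipotent subgroup, and identify \(\mathfrak{g}^{*}\cong\mathfrak{g}\) via the trace form. Then \(\mathfrak{n}^{\perp}=Fe\oplus Fh\), and taking \(\lambda=f\) yields the twisted cotangent bundle
\[(f+\mathfrak{n}^{\perp})\times^{N}\operatorname{SL}_{2},\]
whose quantization is the classical Whittaker representation \(L^{2}(N,\psi\backslash\operatorname{SL}_{2})\).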
In general, a representation induced from a character can be thought of as the quantization of some twisted cotangent bundle. This example includes the usual Whittaker case, when \(N\) is a maximal unipotent subgroup of \(G\).

## 4. Relative Langlands duality

In this section, we introduce the theory of duality of hyperspherical varieties as set out in [BZSV].

### Hyperspherical varieties

The central objects of study in [BZSV] are a class of Hamiltonian \(G\)-varieties defined over \(\mathbb{C}\) (or an algebraically closed field of characteristic zero), called _hyperspherical_ varieties.

**Definition 4.1**.: (Hyperspherical varieties) A _hyperspherical variety_ is a smooth Hamiltonian \(G\)-variety equipped with a grading (that is, a commuting \(\mathbb{G}_{m}\)-action), such that:

* it is affine;
* it satisfies the _multiplicity-free, or coisotropic_, condition: the ring of \(G\)-invariant functions on \(M\) is Poisson-commutative (cf. Definition 3.3).

One also requires \(M\) to satisfy several technical conditions: its generic stabiliser is connected, its moment map image meets the nilcone, and the \(\mathbb{G}_{m}\)-action is 'neutral' (which we will not define here). However, the most important condition is the multiplicity-free condition, and in what follows we will often (without loss of generality) ignore the other technical conditions, cf. the remarks after Theorem 4.4. Under the philosophy of quantization as explained in Section 3, the multiplicity-free condition corresponds to the multiplicity-free property for representations [13].

### Whittaker induction

Continue the notation of Section 2.

**Definition 4.2**.: (Whittaker induction) Consider any reductive subgroup \(H\) of \(G\) and a commuting \(\operatorname{SL}_{2}\) (giving rise to a homomorphism \(H\times\operatorname{SL}_{2}\to G\)). Let \(S\) be a symplectic \(H\)-vector space (or more generally a Hamiltonian \(H\)-space, but we do not need this). The Whittaker induction of \(S\) along \(H\times\operatorname{SL}_{2}\to G\) is defined as follows.

Let \(\gamma\) be the \(\mathfrak{sl}_{2}\)-triple corresponding to the \(\operatorname{SL}_{2}\) factor; we have seen that \(\mathfrak{u}/\mathfrak{u}^{+}\) carries an \(M_{\gamma}\)-invariant symplectic form \(\kappa_{1}\), and hence can be naturally considered as a Hamiltonian \(H\)-space (since \(H\) centralises \(\gamma\)) via the adjoint action of \(H\). We in fact consider it as a Hamiltonian \(HU\)-space, where \(U\) acts additively via the identification \(U/U^{+}\cong\mathfrak{u}/\mathfrak{u}^{+}\), and the moment map \(\mu_{U}:\mathfrak{u}/\mathfrak{u}^{+}\to\mathfrak{u}^{*}\) is shifted by \(\kappa_{f}\); that is, \(\mu_{U}(u)=\kappa_{1}(u)+\kappa_{f}\), where \(\kappa_{1}:\mathfrak{u}/\mathfrak{u}^{+}\to(\mathfrak{u}/\mathfrak{u}^{+})^{*}\) is the identification via the symplectic form.
Then the Whittaker induction of \(S\) is defined to be the symplectic induction of \(S\times(\mathfrak{u}/\mathfrak{u}^{+})\) from \(HU\) to \(G\): \[(S\times(\mathfrak{u}/\mathfrak{u}^{+}))\times^{HU}_{(\mathfrak{h}+\mathfrak{u})^{*}}T^{*}G\cong((S\times(\mathfrak{u}/\mathfrak{u}^{+}))\times_{(\mathfrak{h}+\mathfrak{u})^{*}}\mathfrak{g}^{*})\times^{HU}G. \tag{4.1}\]

Comparing Section 2.3.2 and Definition 4.2, we see that, since \(H\) is a subgroup of the centraliser \(M_{\gamma}\) of the \(\operatorname{SL}_{2}\) factor, under the philosophy of quantization, Whittaker induction corresponds precisely to the formation of the generalised Whittaker representations \(W_{\gamma,\rho,\psi}\) as in Section 2.3.2:

* \(S\) corresponds to \(\rho\);
* \(\mathfrak{u}/\mathfrak{u}^{+}\) corresponds to the oscillator representation \(\omega_{\psi}\) of \(U\) with the associated (Weil) representation of \(H\);
* the symplectic induction from \(HU\) to \(G\) corresponds to the induction of representations from \(HU\) to \(G\).

It is this case that we will be working with throughout this paper. In particular, the choice of \(S\) should correspond to the canonical choice of \(\rho\) as described in Section 2.3.3.

_Remark 4.3_.: (Grading) When \(S\) has a grading (i.e. a commuting \(\mathbb{G}_{m}\)-action), the Whittaker induction of \(S\) can also be given a natural grading. However, since we do not make essential use of the grading in this paper, we omit the details (which are rather lengthy and technical). It will suffice to mention that every symplectic \(H\)-vector space \(S\) is naturally graded via linear scaling, and so every Whittaker-induced space from a symplectic vector space also carries a corresponding natural grading.

#### 4.2.1. Simplifying the Whittaker induction

The definition (4.1) of Whittaker induction, while corresponding nicely under quantization to the formation of generalised Whittaker representations, is geometrically unwieldy. It is possible, via the theory of Slodowy slices, to simplify the Whittaker induction somewhat. In particular, [GaGi, Lemma 2.1] states that one has an isomorphism \[U\times(f+\mathfrak{g}^{e})\to f+\mathfrak{u}^{+,\perp} \tag{4.2}\] given by the action map of \(U\) on \((f+\mathfrak{g}^{e})\), where \(\mathfrak{g}^{e}\) is the centraliser of \(e\) (considered as a subspace of \(\mathfrak{g}^{*}\) via \(\kappa\)).

Now note that \[(S\times(\mathfrak{u}/\mathfrak{u}^{+}))\times_{(\mathfrak{h}+\mathfrak{u})^{*}}\mathfrak{g}^{*} \tag{4.3}\] may be identified with the set of pairs \((s,x)\) for \(s\in S\) and \(x\in\mathfrak{g}^{*}\), such that

* the restrictions of \(\mu(s)\) and \(x\) to \(\mathfrak{h}\) are equal (\(\mu\) is the moment map for \(S\)), and
* \(x\) restricts to \(f\) on \(\mathfrak{u}^{+}\), that is, \(x\in f+\mathfrak{u}^{+,\perp}\) (noting that we have used the \(f\)- or \(\kappa_{f}\)-shifted moment map for \(\mathfrak{u}/\mathfrak{u}^{+}\)),

since then the corresponding element of \(\mathfrak{u}/\mathfrak{u}^{+}\) is uniquely determined by \((s,x)\).
Combining (4.2) and (4.3), one therefore sees that we have an (\(H\)-equivariant) isomorphism \[(S\times(\mathfrak{u}/\mathfrak{u}^{+}))\times_{(\mathfrak{h}+\mathfrak{u})^{*}}\mathfrak{g}^{*}\cong(S\times_{\mathfrak{h}^{*}}(f+\mathfrak{g}^{e}))\times U.\] Hence the Whittaker induction can be written as \[(S\times_{\mathfrak{h}^{*}}(f+\mathfrak{g}^{e}))\times^{H}G. \tag{4.4}\] We remark that when \(S\) is trivial, we have \(S\times_{\mathfrak{h}^{*}}(f+\mathfrak{g}^{e})=\mathfrak{h}^{\perp}\cap(f+\mathfrak{g}^{e})\).

Such a rewriting as in (4.4) makes clear the geometric meaning of the Whittaker induction \(M\): it is always an (affine) bundle over \(H\backslash G\). In other words, it is the subgroup \(H\) which plays the biggest role in the geometry of \(M\). Also, since we will later work with the full orthogonal groups (rather than the special orthogonal groups), (4.4) also makes clear that one can formulate the analogous hyperspherical dual pairs for special orthogonal groups without much issue.

### Structure theorem

The main structure theorem proved in [BZSV] is as follows:

**Theorem 4.4**.: _Suppose \(M\) is a hyperspherical \(G\)-variety. Then there is a reductive subgroup \(H\) of \(G\) and a commuting \(\operatorname{SL}_{2}\) (giving rise to a homomorphism \(H\times\operatorname{SL}_{2}\to G\)), and a symplectic \(H\)-vector space \(S\), such that \(M\) is the Whittaker induction of \(S\) along \(H\times\operatorname{SL}_{2}\to G\)._

Conversely, to check that the Whittaker induction of \(S\) along \(H\times\operatorname{SL}_{2}\to G\) is hyperspherical, it suffices to check the multiplicity-free condition (and that the generic stabiliser is connected). We have seen from (4.4) that the Whittaker induction is automatically affine, and it is shown in [BZSV] that the other technical conditions of Definition 4.1 are also satisfied.

_Remark 4.5_.: (On connectedness) Note that in [BZSV] the group \(G\) is usually assumed connected; for us, we do not require that \(G\) be connected, since we will work with the orthogonal group rather than the special orthogonal group. Consequently, the condition that the generic stabiliser be connected will have correspondingly less significance for us. However, since this condition is of importance in certain contexts and is a condition that needs to be independently checked, let us remark that it will be possible, using (4.4) and the results of [GL] for instance, to verify that the generic stabilisers are connected in the specific cases that we will consider in this paper, after replacing the groups involved with their identity components. While we have not verified this in every case, we do not have any reason to expect otherwise.

_Remark 4.6_.: (Rationality) With the structure theorem in hand, one now defines (forms of) hyperspherical varieties over non-algebraically closed fields (the obvious one being our local field \(F\)), via the algebraic datum \[H\times\operatorname{SL}_{2}\to G,\quad H\to\operatorname{Sp}_{2n}\] over \(F\). (To be more precise, \(H,G\) are reductive group schemes over \(F\) and \(H\to G\) is a closed immersion.) The expectation is that, for each \(M\) (over \(\mathbb{C}\)), there will be a distinguished 'split form' of \(M\) defined over arithmetic fields \(F\) (and there is an expected construction of such a form in the untwisted case). For our purposes in working with generalised Whittaker models, this is the most reasonable and natural point of view, and it resolves potential issues to do with rationality.
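As a basic illustration of (4.4) and the structure theorem (a sketch): for the regular nilpotent orbit \(\gamma\), the centraliser \(M_{\gamma}\) is the centre of \(G\), so, taking \(H\) trivial and \(S=0\), the Whittaker induction is
\[M=(f+\mathfrak{g}^{e})\times G,\]
with \(f+\mathfrak{g}^{e}\) the Kostant section. By (4.2), this recovers the Whittaker twisted cotangent bundle \((f+\mathfrak{n}^{\perp})\times^{N}G\) of Example 3.13, whose quantization is the usual Whittaker model.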
### Duality

The key point of the theory is that there is expected to exist a _duality_ \[G\curvearrowright M\longleftrightarrow M^{\vee}\curvearrowleft G^{\vee}\] of (anomaly-free, cf. Remark 3.7) hyperspherical varieties (over \(\mathbb{C}\)). Each such hyperspherical dual pair \((M,M^{\vee})\) encodes an instance of the relative Langlands program, with corresponding (conjectural) statements at the local and global levels. However, there is at present no definition of the duality \(M\leftrightarrow M^{\vee}\) (which can be proved to satisfy the desiderata of the duality theory). Hence, for a given pair \((M,M^{\vee})\), the expected duality of \(M\) and \(M^{\vee}\) should be verified via the conjectures it entails. Here we focus on a (smooth) local incarnation or counterpart of the conjectures.

**Expectation 4.7**.: _The irreducible representations of \(G\) of Arthur type which occur as quotients of a quantization of \(M\) belong to Arthur packets whose Arthur parameters factor through the morphism defining the hyperspherical variety \(M^{\vee}\):_ \[X^{\vee}(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{C})\to G^{\vee}(\mathbb{C}).\]

_Remark 4.8_.: Because the theory is a _duality_, one in fact expects a pair of statements arising from the dual pair \((M,M^{\vee})\) by exchanging the roles of \(M\) and \(M^{\vee}\), often relating two _a priori_ unrelated (branching) problems: the irreducible representations of \(G^{\vee}\) of Arthur type which occur as quotients of a quantization of \(M^{\vee}\) belong to Arthur packets whose Arthur parameters factor through the morphism defining the hyperspherical variety \(M\): \[H(\mathbb{C})\times\operatorname{SL}_{2}(\mathbb{C})\to G(\mathbb{C}).\] We may refer to this as the 'dual problem', and this is the key feature of the duality theory.

The conjecture hence characterizes the spectral decomposition of the quantization of \(M\) as the image of a certain Langlands functorial lifting. In the smooth setting, it is known that there are additional subtleties, and such a formulation in terms of Arthur parameters is only true as a guiding principle, or up to an approximation. Nonetheless, one expects a natural map realising a lifting of irreducible representations, and in this paper, such a lifting will be facilitated by the theta correspondence (Section 6). It is an interesting future problem to investigate the precise statements in the local \(L^{2}\)-setting and even the global setting. This will involve at a minimum the computation of the relevant local relative characters in the unramified setting (as was first done in [1] and later also in [2]).

## 5. Hyperspherical Whittaker models

In this section, we determine an upper bound for the possible generalised Whittaker models for the orthogonal and symplectic groups which arise from hyperspherical varieties (over \(\mathbb{C}\)), cf. Section 4.2. This provides an effective upper bound for the possible generalised Whittaker models which may be contained in the conjectural class admitting a duality theory. To do so, we first need an effective criterion for hypersphericality. Continuing the notation of Section 4, we record:

**Proposition 5.1**.: _If \(M\) is hyperspherical, then \(H\backslash L\) is a (smooth, affine) spherical \(L\)-variety, where \(L\) is the Levi factor of the parabolic \(P=LU\) associated to the \(\mathfrak{sl}_{2}\)-triple \(\gamma\).
In particular, \(H\) is a spherical subgroup of \(M_{\gamma}\), and \(M_{\gamma}\) is a spherical subgroup of \(L\)._

Proof.: It is shown in [BZSV] that if \(M\) is coisotropic, then \(HU\backslash G\) is spherical. (We remark that it is known from the theory of spherical varieties that a twisted cotangent bundle \(M=T^{*}(X,\psi)\) is coisotropic if and only if \(X\) is a spherical variety. The proof in [BZSV] essentially extends this to general Whittaker-induced \(M\).) Now, again from the theory of spherical varieties (cf. the theory of Whittaker-type induction [2]), we may view \(HU\backslash G\) as the parabolic induction from \(H\backslash L\) along the parabolic \(P=LU\), with then \(H\backslash L\) a (smooth, affine) spherical \(L\)-variety.

(A direct proof of the key spherical property is as follows: let \(B_{L},B\) be Borel subgroups of \(L,G\) with \(B_{L}=B\cap L\) and \(P\supseteq B\supseteq U\); we have the natural embedding \[B_{L}\backslash L\hookrightarrow B\backslash G.\] Now every spherical variety has only finitely many orbits under a Borel subgroup [1], so \(B\backslash G\), and hence too \(B_{L}\backslash L\), has finitely many \(HU\)-orbits, and hence finitely many \(H\)-orbits. One of these orbits must hence be dense, which means \(H\backslash L\) is spherical, as desired.)

_Remark 5.2_.: It is in fact shown in [BZSV] that \(M\) is coisotropic if and only if \(HU\backslash G\) is spherical and \((S\times(\mathfrak{u}/\mathfrak{u}^{+}))\) is coisotropic for the generic stabiliser of \(T^{*}(HU\backslash G)\) (in particular for \(H\)). This latter condition can be checked for each given case using the tables of [KVS]. Therefore, modulo the choice of \(S\) and checking that the generic stabiliser of \(M\) is connected (which in practice also corresponds roughly to the exclusion of type N spherical roots [BZSV]), the upper bound we obtain in this section is relatively close to a precise classification of the nilpotent orbits that give rise to hyperspherical varieties.

In view of the above proposition, and since we are interested in determining the possible \(\gamma\) which give rise to hyperspherical varieties, let us without loss of generality assume for the rest of this section that \(H=M_{\gamma}\) is the centraliser of the corresponding \(\mathfrak{sl}_{2}\)-triple \(\gamma\). Further, for simplicity, let us first consider the case where the corresponding nilpotent orbit is even.

_Remark 5.3_.: We briefly consider the case when the nilpotent orbit is not even at the end of this section, in Proposition 5.8. Given the relative complexity of the characterization we obtain, it is likely that a cleaner classification in full generality can only be achieved after a fuller theory of combinatorial data for general hyperspherical varieties is developed (in line with that for spherical varieties), which is one of the main open questions arising from [BZSV].

In what follows we will also make use of the following dimensional consideration:

**Lemma 5.4**.: _If \(H\backslash L\) is spherical, then \(\dim L-\dim H\leq\dim B\), where \(B\) is a Borel subgroup of \(L\)._

Proof.: This is an immediate consequence of the fact that \(B\) has an open dense orbit on \(H\backslash L\).

### Orthogonal groups

Suppose now \(G\) is the orthogonal group \(\mathrm{O}_{n}\), acting on an \(n\)-dimensional vector space \(V\) equipped with an orthogonal form \(B\).
Recall from Section 2.2 that the nilpotent orbits in \(G\) are parameterised by the datum of the partition \(\lambda=[l^{a_{l}},\dots,1^{a_{1}}]\) of \(n\), together with the forms on the multiplicity spaces \((V_{j},B_{j})\), where even parts must occur with even multiplicity in \(\lambda\). For even nilpotent orbits, all parts of the partition \(\lambda\) have the same parity [CM]. We have seen also that \[H=M_{\gamma}\cong\prod_{j=1}^{l}G(V_{j},B_{j}).\] Let \[\lambda^{t}=[a_{l}+\dots+a_{1},\dots,a_{l}]=[c_{1},\dots,c_{m}]=[h^{b_{h}},\dots,1^{b_{1}}]\] denote the transpose partition of \(\lambda\).

If \(\lambda\) has all even parts, then \[H\cong\operatorname{Sp}_{a_{l}}\times\cdots\times\operatorname{Sp}_{a_{2}},\qquad L\cong\operatorname{GL}_{h}^{\times b_{h}/2}\times\cdots\times\operatorname{GL}_{1}^{\times b_{1}/2}.\] If \(\lambda\) has all odd parts, then \[H\cong\operatorname{O}_{a_{l}}\times\cdots\times\operatorname{O}_{a_{1}},\qquad L\cong\operatorname{O}_{h}\times\operatorname{GL}_{h}^{\times(b_{h}-1)/2}\times\cdots\times\operatorname{GL}_{1}^{\times b_{1}/2}.\]

In other words, representing \(\lambda\) by a Young tableau \(d\) (in the usual way), the factors of \(H\) correspond to groups of rows of the tableau with the same length, while the factors of \(L\) correspond to (pairs of) columns of the tableau with the same length. See Figure 1 for an example.

Figure 1. An illustration of the factors of \(H\) and \(L\) in one example where \(\lambda\) has even parts.

Furthermore, \(H\) is embedded in \(L\) such that, for a factor \(G(V_{j},B_{j})\) of \(H\) and a factor \(\operatorname{O}_{i}\) or \(\operatorname{GL}_{i}\) of \(L\), the composite projection \(G(V_{j},B_{j})\hookrightarrow H\hookrightarrow L\twoheadrightarrow\operatorname{O}_{i}\) or \(\operatorname{GL}_{i}\) is non-zero if and only if the corresponding rows and columns in \(d\) share common cells.

Now we may use the classification of smooth affine spherical varieties in [KVS] to characterize the allowable nilpotent orbits.

**Theorem 5.5**.: _Let \(G\) be the orthogonal group \(\operatorname{O}_{n}\) and \(M\) be a hyperspherical \(G\)-variety. By Theorem 4.4, it is obtained as the Whittaker induction along a map \(H\times\operatorname{SL}_{2}\to G\). Let \(\gamma\) be the nilpotent orbit determined by the \(\operatorname{SL}_{2}\) factor. If \(\gamma\) is even, then it corresponds to a partition \(\lambda\) of the form_

* \([2^{a_{2}}]\) _(Shalika)_
* \([n-a_{1},1^{a_{1}}]\) _(hook-type, corresponding to Bessel models [GGP])_
* _finitely many low-rank exceptions_ \[[3,3],[4,4],[6,6].\]

Proof.: The classification of smooth affine spherical varieties \(H^{\prime}\backslash G^{\prime}\) is given in Tables 4 and 5 of [KVS]. There, all the possible (indecomposable) pairs \(\mathfrak{h}^{\prime}=\mathfrak{h}^{\prime}_{1}\oplus\cdots\oplus\mathfrak{h}^{\prime}_{s}\), \(\mathfrak{g}^{\prime}=\mathfrak{g}^{\prime}_{1}\oplus\cdots\oplus\mathfrak{g}^{\prime}_{r}\) are listed, together with the combinatorial datum of a graph \(\Gamma\) with vertices corresponding to the factors \(\mathfrak{h}^{\prime}_{1},\ldots,\mathfrak{h}^{\prime}_{s}\) and \(\mathfrak{g}^{\prime}_{1},\ldots,\mathfrak{g}^{\prime}_{r}\), and edges indicating whether the composite projection \(\mathfrak{h}^{\prime}_{j}\hookrightarrow\mathfrak{h}^{\prime}\hookrightarrow\mathfrak{g}^{\prime}\twoheadrightarrow\mathfrak{g}^{\prime}_{i}\) is non-zero. Here we always take \(\mathfrak{h}^{\prime},\mathfrak{g}^{\prime}\) to be the commutator subalgebras of the Lie algebras of \(H^{\prime},G^{\prime}\).
In our case, we take \(H^{\prime}=H\) and \(G^{\prime}=L\), in view of Proposition 5.1.

First, if \(\mathfrak{h}\) is trivial, then \(\mathfrak{l}\) must be trivial or some copies of \(\mathfrak{sl}_{2}\). In the former case, it is easy to see that one can only have the regular nilpotent orbit. In the latter case, a simple dimensional consideration shows that we can have at most one copy of \(\mathfrak{sl}_{2}\), and then that we can only have the partitions \([n-2,1,1]\), \([2,2]\), or \([3,3]\). Therefore assume henceforth that \(H\) has a factor \(H_{j}=G(V_{j},B_{j})\) with non-trivial \(\mathfrak{h}_{j}\), corresponding to \(a_{j}\) rows of \(d\) with the same length \(j\).

(a) If all parts of \(\lambda\) are even: recall that even parts of \(\lambda\) occur with even multiplicity. Then if \(j\geq 4\), we must have in the graph \(\Gamma\) a vertex corresponding to an \(\mathfrak{sp}(a_{j})\) factor, connected to two or more vertices, corresponding to \(\mathfrak{sl}(a_{j}+r_{1})\) and \(\mathfrak{sl}(a_{j}+r_{2})\) factors for some even \(r_{1},r_{2}\geq 0\). An inspection of Table 5 of [KVS] shows that this is only possible when \(a_{j}=2\) and \(r_{1}=r_{2}=0\) (that is, there are no other rows in \(d\)), and then only when \(j\leq 6\). So there are only finitely many low-rank exceptions here, namely \([4,4]\) and \([6,6]\).

Hence \(j=2\). Now if there are \(r\geq 1\) rows of length \(>2\) in \(d\), then we must have \(r\geq 2\) (because \(r\) is even). Then one must have in \(\Gamma\) a vertex corresponding to an \(\mathfrak{sp}(a_{2})\) factor, connected to a vertex corresponding to an \(\mathfrak{sl}(a_{2}+r)\) factor with \(r\geq 2\). An inspection of Tables 4 and 5 of [KVS] shows that there is only one possibility, corresponding to the partition \([4,4,2,2]\). However, in this case it is not possible for \(H\backslash L\) to be spherical, by a dimensional consideration. Otherwise, we are left with the partition \(\lambda=[2^{a_{2}}]\) (which in fact corresponds to the Shalika case).

(b) If all parts of \(\lambda\) are odd: note that then \(a_{j}\geq 3\) (since \(\mathfrak{h}_{j}=\mathfrak{so}(a_{j})\) is non-trivial). If \(j\geq 3\), we must have in the graph \(\Gamma\) a vertex corresponding to an \(\mathfrak{so}(a_{j})\) factor, connected to two or more vertices, one corresponding to an \(\mathfrak{so}(a_{j}+r)\) factor and one corresponding to an \(\mathfrak{sl}(a_{j}+r^{\prime})\) factor for \(r\geq r^{\prime}\geq 0\). An inspection of Table 5 of [KVS] shows that this is not possible. (Note that the partition \([3,3,3]\) appears to be a possible low-rank exception, but is not, due to the difference in embedding between \(\mathfrak{so}(3)\hookrightarrow\mathfrak{sl}(3)\) and \(\mathfrak{sl}(2)\hookrightarrow\mathfrak{sl}(3)\).)

Hence \(j=1\). Now if there are \(r\geq 2\) other rows of length \(>1\) in \(d\), then one must have in \(\Gamma\) a vertex corresponding to an \(\mathfrak{so}(a_{1})\) factor, connected to a vertex corresponding to an \(\mathfrak{so}(a_{1}+r)\) factor with \(r\geq 2\). When \(r=2\), the tables in [KVS] do not preclude this, but it is readily checked that \(H\backslash L\) cannot be spherical (because \(\mathrm{O}(n)\backslash\mathrm{O}(n+2)\) is not spherical for \(n\geq 1\)), so in fact \(r\geq 3\). An inspection of Tables 4 and 5 of [KVS] shows that this is not possible. So there is at most one other row of length \(>1\) in \(d\), and the corresponding partitions are \([1^{a_{1}}]\) (the trivial partition) or \([n-a_{1},1^{a_{1}}]\) (hook-type).
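To illustrate the hook-type case of Theorem 5.5 concretely (a sketch, in the notation above): take \(G=\operatorname{O}_{7}\) and \(\lambda=[5,1,1]\), so that \(\lambda^{t}=[3,1^{4}]\). Then
\[H\cong\operatorname{O}_{1}\times\operatorname{O}_{2},\qquad L\cong\operatorname{O}_{3}\times\operatorname{GL}_{1}\times\operatorname{GL}_{1},\]
and \(H\backslash L\) is indeed spherical (the factor \(\operatorname{O}_{2}\backslash\operatorname{O}_{3}\) is spherical, and the remaining factors are tori); the associated generalised Whittaker models are the Bessel models for \(\operatorname{O}_{7}\) appearing in the GGP conjecture.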
### Symplectic groups

Suppose now \(G\) is the symplectic group \(\operatorname{Sp}_{2n}\), acting on a \(2n\)-dimensional vector space \(V\) equipped with a symplectic form \(B\). Again from Section 2.2, the nilpotent orbits in \(G\) are parameterised by the datum of the partition \(\lambda=[l^{a_{l}},\ldots,1^{a_{1}}]\) of \(2n\), together with the forms on the multiplicity spaces \((V_{j},B_{j})\), where odd parts must occur with even multiplicity in \(\lambda\). For even nilpotent orbits, all parts of the partition \(\lambda\) have the same parity. We have seen also that \[H=M_{\gamma}\cong\prod_{j=1}^{l}G(V_{j},B_{j}).\] Let \[\lambda^{t}=[a_{l}+\cdots+a_{1},\ldots,a_{l}]=[c_{1},\ldots,c_{m}]=[h^{b_{h}},\ldots,1^{b_{1}}]\] denote the transpose partition of \(\lambda\).

If \(\lambda\) has all even parts, then \[H\cong\operatorname{O}_{a_{l}}\times\cdots\times\operatorname{O}_{a_{2}},\qquad L\cong\operatorname{GL}_{h}^{\times b_{h}/2}\times\cdots\times\operatorname{GL}_{1}^{\times b_{1}/2}.\] If \(\lambda\) has all odd parts, then \[H\cong\operatorname{Sp}_{a_{l}}\times\cdots\times\operatorname{Sp}_{a_{1}},\qquad L\cong\operatorname{Sp}_{h}\times\operatorname{GL}_{h}^{\times(b_{h}-1)/2}\times\cdots\times\operatorname{GL}_{1}^{\times b_{1}/2}.\]

In other words, representing \(\lambda\) by a Young tableau \(d\) (in the usual way), the factors of \(H\) correspond to groups of rows of the tableau with the same length, while the factors of \(L\) correspond to (pairs of) columns of the tableau with the same length. Furthermore, \(H\) is embedded in \(L\) such that, for a factor \(G(V_{j},B_{j})\) of \(H\) and a factor \(\operatorname{Sp}_{i}\) or \(\operatorname{GL}_{i}\) of \(L\), the composite projection \(G(V_{j},B_{j})\hookrightarrow H\hookrightarrow L\twoheadrightarrow\operatorname{Sp}_{i}\) or \(\operatorname{GL}_{i}\) is non-zero if and only if the corresponding rows and columns in \(d\) share common cells.

Now we have the analogous result to Theorem 5.5.

**Theorem 5.6**.: _Let \(G\) be the symplectic group \(\operatorname{Sp}_{2n}\) and \(M\) be a hyperspherical \(G\)-variety. By Theorem 4.4, it is obtained as the Whittaker induction along a map \(H\times\operatorname{SL}_{2}\to G\). Let \(\gamma\) be the nilpotent orbit determined by the \(\operatorname{SL}_{2}\) factor. If \(\gamma\) is even, then it corresponds to a partition \(\lambda\) of the form_

* \([2^{a_{2}}]\) _(Shalika)_
* _the 'exceptional' partitions_ \[[3,3,1^{2a}],[5,5,1^{2a}]\] _for_ \(a\geq 0\)
* \([1^{2n}]\) _(trivial orbit) or_ \([2n]\) _(regular orbit)._

_Remark 5.7_.: The hook-type partitions are not included for the symplectic group as they correspond to non-even nilpotent orbits; they correspond to Fourier-Jacobi models [GGP] and will also be studied later in Section 7, cf. Remark 7.9.

Proof.: Keep the notation of the proof of Theorem 5.5. First, if \(\mathfrak{h}\) is trivial, then \(\mathfrak{l}\) must be trivial or some copies of \(\mathfrak{sl}_{2}\). In the former case, it is easy to see that one can only have the regular nilpotent orbit. In the latter case, a simple dimensional consideration shows that we can have at most one copy of \(\mathfrak{sl}_{2}\), and then that we can only have the partition \([2,2]\). Therefore assume henceforth that \(H\) has a factor \(H_{j}=G(V_{j},B_{j})\) with non-trivial \(\mathfrak{h}_{j}\), corresponding to \(a_{j}\) rows of \(d\) with the same length \(j\).

(a) If all parts of \(\lambda\) are even: note that then \(a_{j}\geq 3\) (since \(\mathfrak{h}_{j}=\mathfrak{so}(a_{j})\) is non-trivial).
If \(j\geq 4\), we must have in the graph \(\Gamma\) a vertex corresponding to an \(\mathfrak{so}(a_{j})\) factor, connected to two or more vertices, corresponding to \(\mathfrak{sl}(a_{j}+r_{1})\) and \(\mathfrak{sl}(a_{j}+r_{2})\) factors for \(r_{1},r_{2}\geq 0\). An inspection of Table 5 of [KVS] shows that this is not possible. Hence \(j=2\). Now if there are \(r\geq 2\) other rows of length \(>1\) in \(d\), then one must have in \(\Gamma\) a vertex corresponding to an \(\mathfrak{so}(a_{1})\) factor, connected to a vertex corresponding to an \(\mathfrak{sl}(a_{1}+r)\) factor with \(r\geq 1\). An inspection of Tables 4 and 5 of [KVS] shows that this is not possible. So we are left with the partition \(\lambda=[2^{a_{2}}]\) (which in fact corresponds to the Shalika case).

(b) If all parts of \(\lambda\) are odd: note that then all parts of \(\lambda^{t}\) are even. It follows that all factors of \(\mathfrak{h}\) and \(\mathfrak{l}\) are non-trivial (they are all \(\mathfrak{sp}\) and \(\mathfrak{sl}\) factors). Considering the possible graph structures of \(\Gamma\) and examining Table 5 of [KVS], one then sees that there are only the following possible partitions: \[[3,3],[5,5],[3,3,1^{2a}],[5,5,1^{2a}].\]

### Non-even orbits

We end this section by considering the case of non-even nilpotent orbits. Continuing the hypotheses of Theorems 5.5 and 5.6, we have:

**Proposition 5.8**.: _(Non-even nilpotent orbits) Let \(G\) be either the orthogonal or the symplectic group, and \(M\) be a hyperspherical \(G\)-variety. By Theorem 4.4, it is obtained as the Whittaker induction along a map \(H\times\mathrm{SL}_{2}\to G\). Let \(\gamma\) be the nilpotent orbit determined by the \(\mathrm{SL}_{2}\) factor. Suppose \(\gamma\) is not necessarily even, and has corresponding partition \(\lambda\)._

_Let \(\lambda_{even}\) (resp. \(\lambda_{odd}\)) be the partition formed by taking only the even (resp. odd) parts of \(\lambda\)._

_Then \(\lambda_{even}\) and \(\lambda_{odd}\) must each be of a form listed in Theorem 5.5 or 5.6 respectively (according as \(G\) is orthogonal or symplectic)._

Proof.: Note that \(H\backslash L\) decomposes as a direct product \(H_{even}\backslash L_{even}\times H_{odd}\backslash L_{odd}\), corresponding to the even and odd parts respectively of \(\lambda\), and that \(H\backslash L\) is spherical if and only if \(H_{even}\backslash L_{even}\) and \(H_{odd}\backslash L_{odd}\) are both spherical. It follows that the even and odd parts of \(\lambda\) must be of the forms listed in Theorems 5.5 or 5.6.

**Example 5.9**.: (Hook-type for symplectic groups) For instance, the hook-type partition \(\lambda=[2a,1^{2b}]\) for the symplectic groups has even part \(\lambda_{even}=[2a]\) and odd part \(\lambda_{odd}=[1^{2b}]\), both of which are listed in Theorem 5.6. We will see several more examples in Section 9, cf. Expectation 9.9.

### The \(G\times M_{\gamma}\) case

_Remark 5.10_.: In [FU], a similar classification of nilpotent orbits is given in the case of hyperspherical \(G\times M_{\gamma}\)-varieties; see Remarks 2.4 and 7.12. That is, they essentially work with the datum \[M_{\gamma}\times\operatorname{SL}_{2}\to G\times M_{\gamma}\quad\text{and}\quad S=0,\] with \(M_{\gamma}\) diagonally embedded, and classify the nilpotent orbits which give rise to hyperspherical varieties.
This is a slightly stronger condition than the case we are considering in this paper; in the language of our proof, one would require that \(H^{\Delta}\backslash(L\times H)\) be a spherical \((L\times H)\)-variety, and one could then obtain a classification along the same lines as in our proof. Their proof proceeds via a simple dimensionality criterion for hypersphericality, which, to the best of our knowledge, is not as readily applicable in our case, since the varieties in their case take the (relatively) simple form \(G\times(f+\mathfrak{g}^{e})\). Furthermore, our proof (by the results of [BZSV] used in Proposition 5.1) does not place any restriction on the choice of \(S\) (which would also expand the number of possible nilpotent orbits). Heuristically, since the coisotropic property corresponds to the multiplicity-one property for representations, if one does not have multiplicity one for an \((H,\rho)\)-isotypic subspace of the generalised Whittaker representation \(W_{\gamma,\psi}\) (which is the case we are considering in this paper), then one should not expect multiplicity one for \(W_{\gamma,\psi}\) as a \(G\times H\)-module either. This explains the sense in which the condition in [FU] is stronger. Notably, our classification includes the Shalika case as well as several larger 'exceptional' cases, whereas the classification in [FU] essentially gives the 'hook-type' case and several smaller 'exceptional' cases (for which the corresponding generalised Whittaker models have been studied in [WZ]). See Remark 7.12 for some remarks on the 'hook-type' case in this situation. ## 6. Theta correspondence In this section, we review the theory and results surrounding the theta correspondence, in preparation for the next section, where the expected functorial lifting of representations will be facilitated by the theta lift. Throughout this paper, we have fixed a non-trivial unitary character \(\psi:F\to\mathbb{C}^{\times}\). ### Howe duality Suppose \((G_{1},G_{2})\) is a type I reductive dual pair; for our purposes we will take \(G_{1},G_{2}\) to be the isometry groups of a split orthogonal vector space \((V_{1},B_{1})\) and a symplectic vector space \((V_{2},B_{2})\) (or vice versa). If \(\dim V_{1}\) is odd, then in what follows it is understood that we will have to work with representations of \(\operatorname{Mp}(V_{2})\) instead of \(\operatorname{Sp}(V_{2})\). We suppose also that \(G_{1}\) is the smaller group of the two. One may restrict the Weil representation \(\omega_{\psi}\) of \(\operatorname{Mp}(V_{1}\otimes V_{2})\) (corresponding to the character \(\psi\)) to \(G_{1}\times G_{2}\). For each \(\pi\in\operatorname{Irr}(G_{1})\), define the _big theta lift_ \(\Theta_{\psi}(\pi)\) of \(\pi\) by \[\Theta_{\psi}(\pi):=(\omega_{\psi}\otimes\pi^{\vee})_{G_{1}},\] the maximal \(G_{1}\)-invariant quotient of \(\omega_{\psi}\otimes\pi^{\vee}\). Since \(\psi\) is fixed, in what follows we shall sometimes drop the subscript \({}_{\psi}\) where there is no danger of confusion. It is known that \(\Theta(\pi)\) is either zero or has finite length with a unique irreducible quotient, which we denote \(\theta(\pi)\) and call the _small theta lift_ of \(\pi\).
In fact we summarise the key result as follows: **Theorem 6.1**.: _(Howe duality) Let_ \[C=\{(\pi_{1},\pi_{2})\in\operatorname{Irr}(G_{1})\times\operatorname{Irr}(G_{2})\mid\pi_{1}\otimes\pi_{2}\text{ is a quotient of }\omega_{\psi}\}.\] _Then \(C\) is the graph of a bijective (partially-defined) function between \(\operatorname{Irr}(G_{1})\) and \(\operatorname{Irr}(G_{2})\)._ _Furthermore, we have_ \[\dim\operatorname{Hom}(\omega_{\psi},\pi_{1}\otimes\pi_{2})\leq 1\] _for all \(\pi_{1}\in\operatorname{Irr}(G_{1}),\pi_{2}\in\operatorname{Irr}(G_{2})\)._ We note also that by results of [Sa2], Howe duality may also be formulated in the \(L^{2}\)-setting. It is shown there in particular that the spectral support of \(\widehat{\omega}_{\psi}\) (the unitary completion of \(\omega_{\psi}\)) is contained in the tempered dual of \(G_{1}\). ### Functoriality As mentioned, the theta lift is, for us, a means to realise the expected functorial lifting of representations from \(G_{1}\) to \(G_{2}\). We now make more precise what this means. Recall that we are assuming that \(G_{1}\) is the smaller group of the two, and that we are working with split orthogonal vector spaces throughout. The central result in this regard is Adams' conjecture; by the recent results of [BH], we have **Theorem 6.2**.: _(Adams' conjecture) Suppose \(\dim V_{1}\) and \(\dim V_{2}\) are even. Suppose \(\pi\in\operatorname{Irr}(G_{1})\) has corresponding A-parameter \(\phi\). Then for all sufficiently large \(\dim V_{2}\), the theta lift \(\theta_{\psi}(\pi)\in\operatorname{Irr}(G_{2})\) (is non-zero and) has corresponding A-parameter_ \[\phi\oplus W_{\dim V_{2}-\dim V_{1}-1}\] _where \(W_{\dim V_{2}-\dim V_{1}-1}\) is the irreducible \(\operatorname{SL}_{2}\)-representation of dimension \((\dim V_{2}-\dim V_{1}-1)\)._ Here \(\dim V_{2}\) is _sufficiently large_ in the sense that it is at least the larger of the two "first occurrence indices" attached to the dual pair and \(\pi\) (and \(\psi\)); to avoid excessive technicality, we do not define this precisely here, but suffice it to note that this will not pose any problems in the cases we will consider in Section 7, in particular for generic representations \(\pi\). We refer the reader to [BH] (or other treatments of the standard theory of the theta correspondence, such as [Ga]) for details on the first occurrence indices. Finally, we note here that although Adams' conjecture is proven in [BH] in the case where \(\dim V_{1},\dim V_{2}\) are even, it is expected in [BH] that the same results will hold in all cases. ### Transfer of nilpotent orbits One would like to use the theta correspondence to relate two generalised Whittaker models on each member of a dual pair. To do so, one needs a correspondence of nilpotent orbits, and this is facilitated by a double fibration via moment maps. We refer to [GZ], [Zh] for the details (cf. also [DKP]) and state only the result here. For the sake of simplifying notation, let us henceforth replace \(G_{1}\) and \(G_{2}\) with \(G\) and \(G^{\prime}\) respectively, and similarly for all other notation.
**Proposition 6.3**.: _(Transfer of nilpotent orbits via moment map)_ _One has moment maps_ \[\mathfrak{g}\stackrel{{\phi}}{{\leftarrow}}\operatorname{Hom}(V^{\prime},V)\stackrel{{\phi^{\prime}}}{{\rightarrow}}\mathfrak{g}^{\prime}\] _defined by_ \[\phi(f)=ff^{*},\] \[\phi^{\prime}(f)=f^{*}f,\] _where \(f^{*}\) denotes the adjoint of the linear map \(f\)._ _Given a nilpotent element \(e\) in the image of \(\phi\) corresponding to an \(\mathfrak{sl}_{2}\)-triple \(\gamma\), one may uniquely define a nilpotent orbit/conjugacy class of \(\mathfrak{sl}_{2}\)-triples \(\gamma^{\prime}\) of \(\mathfrak{g}^{\prime}\) (with corresponding nilpotent element \(e^{\prime}\in\mathfrak{g}^{\prime}\)) such that:_ * \(e,e^{\prime}\) _are the images of some common element_ \(f\in\operatorname{Hom}(V^{\prime},V)\)_;_ * _the form on_ \(V^{\prime}\) _restricts to a nondegenerate form on_ \(\ker f\) _(including if_ \(\ker f=0\)_),_ * _and_ \(f\) _sends the_ \(k\)_-weight space of_ \(V^{\prime}\) _to the_ \((k+1)\)_-weight space of_ \(V\) _for all_ \(k\in\mathbb{Z}\) _(here the weight spaces are under the_ \(\mathfrak{sl}_{2}\) _action coming from_ \(\gamma,\gamma^{\prime}\)_)._ _The partitions corresponding to \(\gamma,\gamma^{\prime}\) are related in the following way: suppose their corresponding Young tableaux are \(d,d^{\prime}\) respectively. Then one removes the first column of \(d\) and adds suitably many rows of length 1, to obtain \(d^{\prime}\)._ _Furthermore, recall from Section 2.2 that the nilpotent orbits of \(\mathfrak{g},\mathfrak{g}^{\prime}\) are parameterised also by (symplectic or orthogonal) forms \(B_{j},B_{j}^{\prime}\) on the multiplicity spaces \(V_{j},V_{j}^{\prime}\). To obtain the corresponding forms \(B_{j}^{\prime}\) for \(\gamma^{\prime}\), the forms \(B_{j}\) from \(\gamma\) are left unchanged (and the form \(B_{1}^{\prime}\) corresponding to the rows of length 1 in \(d^{\prime}\) is determined by the compatibility condition of Section 2.2)._ _In other words, one has_ \[(V_{j}^{\prime},B_{j}^{\prime})=(V_{j+1},B_{j+1})\quad\text{for }j\geq 2\] _and_ \[V_{1}^{\prime}=V_{2}\oplus V_{new}\] _for a subspace \(V_{new}\) corresponding to the newly added rows of length 1 in \(d^{\prime}\). In fact, \(V_{new}=\ker f\)._ _See Figure 2 for an illustration._ One verifies that the regular nilpotent orbit of \(\mathfrak{g}\) is in the image of the moment map \(\phi\). ### Results of Gomez-Zhu We now come to the result of Gomez and Zhu [GZ], [Zh] which relates two generalised Whittaker models via the theta correspondence. Retain the notation of the previous sections.

Figure 2. An illustration of the transfer of nilpotent orbits via the moment maps, in terms of the Young tableaux \(d\) and \(d^{\prime}\); \(*\) indicates the newly added rows of length 1 in \(d^{\prime}\).

For simplicity also assume that the nilpotent orbit defined by \(\gamma\) is in the image of the moment map \(\phi\). Recall from Section 2 that \[M_{\gamma}\cong\prod_{k=1}^{l}G(V_{k},B_{k})\] and \[M_{\gamma^{\prime}}\cong\prod_{k=1}^{l^{\prime}}G^{\prime}(V_{k}^{\prime},B_{k}^{\prime}).\] In particular \(M_{\gamma}\) and \(M_{\gamma^{\prime}}\) contain respectively factors \[G(V_{1},B_{1})\] and \[G^{\prime}(V_{1}^{\prime},B_{1}^{\prime}),\] corresponding respectively to the rows of length \(1\) in \(d\) and \(d^{\prime}\).
Furthermore \(G^{\prime}(V_{1}^{\prime},B_{1}^{\prime})\) contains a subgroup \(G^{\prime}(V_{new})\), which is an isometry group of the subspace \(V_{new}\subseteq V_{1}^{\prime}\) corresponding to the _newly added_ rows of length \(1\) in \(d^{\prime}\) (cf. Proposition 6.3). In almost all of the cases we will work with, \(d\) has no rows of length \(2\), so that in fact \(G^{\prime}(V_{new})=G^{\prime}(V_{1}^{\prime},B_{1}^{\prime})\). The only exception is when \(d\) is simply the partition \([2]\), corresponding to the regular nilpotent orbit of \(\mathfrak{sp}_{2}\). We have that \(G(V_{1},B_{1})\) and \(G^{\prime}(V_{new})\) form a reductive dual pair (inside \(\operatorname{Sp}(V_{1}\otimes V_{new})\), and of the same type as \(G\) and \(G^{\prime}\) respectively). **Proposition 6.4**.: _([GZ, Theorem 3.7], [Zh, Theorem 6.2]) For any \(\pi\in\operatorname{Irr}(G^{\prime})\), and for any genuine representation \(\tau\in\operatorname{Irr}(G(V_{1},B_{1}))\), one has_ \[W_{\gamma,\tau,\psi}(\Theta_{\psi}(\pi))\cong W_{\gamma^{\prime},\Theta(\tau)^{\vee},\psi}(\pi^{\vee}).\] _Here:_ * \(\Theta(\pi)\) _is the big theta lift for the dual pair_ \((G,G^{\prime})\)_;_ * \(\Theta(\tau)^{\vee}\) _is the (dual of) the big theta lift for the dual pair_ \((G(V_{1},B_{1}),G^{\prime}(V_{new}))\)_._ * _Note that the isomorphism is also as_ \(\big(G(V_{2},B_{2})\times\cdots\times G(V_{j},B_{j})\big)\)_-modules._ _Remark 6.5_.: If the nilpotent orbit defined by \(\gamma\) is _not_ in the image of the moment map \(\phi\), then one has that \[W_{\gamma,\tau,\psi}(\Theta_{\psi}(\pi))=0\] for all \(\pi\in\operatorname{Irr}(G^{\prime})\). ## 7. 'Hook-type' partitions In this section, we determine the expected hyperspherical dual for the hook-type partitions \([n-a_{1},1^{a_{1}}]\) of \(\operatorname{O}_{n}\). As laid out in Section 4.4, we show that the expectations at the (smooth) local level are satisfied. ### Even orthogonal groups Suppose \(n=2k\) is even. Then we must have \(a_{1}=2a+1\) odd. **Theorem\({}^{\prime}\) 7.1**.: _The hyperspherical varieties \(M_{1},M_{2}\) defined respectively by_ * _the datum_ \(\mathrm{O}_{2a+1}\times\mathrm{SL}_{2}\to\mathrm{O}_{2k}\)_, corresponding to the nilpotent orbit with partition_ \([2k-2a-1,1^{2a+1}]\)_, and trivial_ \(S\)_;_ * _and the datum_ \(\mathrm{O}_{2k-2a+1}\times\mathrm{SL}_{2}\to\mathrm{O}_{2k}\)_, corresponding to the nilpotent orbit with partition_ \([2a-1,1^{2k-2a+1}]\)_, and trivial_ \(S\)_,_ _are dual under the expected [BZSV]-duality of hyperspherical varieties._ We refer to this statement as "Theorem\({}^{\prime}\)", since there is as yet no formal definition of the statement "\(M_{1}\) and \(M_{2}\) are hyperspherical dual". Rather, what one has is a list of expected properties, as in Expectation 4.7, and the meaning of the assertion in Theorem\({}^{\prime}\) 7.1 is given by the precise Theorem 7.2. Similar remarks apply for the subsequent sections. Recalling the setup of Section 6.3, let \(\gamma_{1}\) be the nilpotent orbit of \(\mathfrak{so}_{2k}\) corresponding under the moment map (as in Proposition 6.3) to a regular nilpotent orbit \(\gamma_{r,1}\) of \(\mathfrak{sp}_{2k-2a}\); we know that it corresponds to the partition \([2k-2a-1,1^{2a+1}]\). Similarly, let \(\gamma_{2}\) be the nilpotent orbit of \(\mathfrak{so}_{2k}\) corresponding under the moment map to a regular nilpotent orbit \(\gamma_{r,2}\) of \(\mathfrak{sp}_{2a}\); we know that it corresponds to the partition \([2a-1,1^{2k-2a+1}]\).

Figure 3. The Young tableaux corresponding to \(\gamma_{r,1}\) and \(\gamma_{1}\) respectively.
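Concretely, in the notation of Proposition 6.3, these two transfers are the tableau computations illustrated in Figures 3 and 4: one removes the first column of the regular partition on the symplectic side and pads with rows of length 1 up to total size \(2k\), \[[2k-2a]\ \longmapsto\ [2k-2a-1,1^{2a+1}],\qquad[2a]\ \longmapsto\ [2a-1,1^{2k-2a+1}].\]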
Furthermore, as we have seen in Section 4, \(M_{1}\) and \(M_{2}\) have quantizations which are respectively the generalised Whittaker representations \(W_{\gamma_{1},\operatorname{triv}_{1},\psi}\) and \(W_{\gamma_{2},\operatorname{triv}_{2},\psi}\). Here \(\operatorname{triv}_{1},\operatorname{triv}_{2}\) are the trivial representations for the subgroups \(H_{1}=\operatorname{O}_{2a+1}\) and \(H_{2}=\operatorname{O}_{2k-2a+1}\) respectively. **Theorem 7.2**.: _We have:_ * _If_ \(\pi\) _is an irreducible representation of_ \(\operatorname{O}_{2k}\) _which occurs as a quotient of_ \(W_{\gamma_{1},\operatorname{triv}_{1},\psi}\)_, then_ \(\pi=\theta_{\psi}(\sigma)\) _for_ \(\sigma\) _an irreducible representation of_ \(\operatorname{Sp}_{2k-2a}\)_._ _Conversely, if_ \(\sigma\) _is an irreducible_ \(\psi\)_-generic representation of_ \(\operatorname{Sp}_{2k-2a}\)_, then_ \(\pi:=\theta_{\psi}(\sigma)\) _is an irreducible representation of_ \(\operatorname{O}_{2k}\) _which occurs as a quotient of_ \(W_{\gamma_{1},\operatorname{triv}_{1},\psi}\)_._ * _If_ \(\pi\) _is an irreducible representation of_ \(\operatorname{O}_{2k}\) _which occurs as a quotient of_ \(W_{\gamma_{2},\operatorname{triv}_{2},\psi}\)_, then_ \(\pi=\theta_{\psi}(\sigma)\) _for_ \(\sigma\) _an irreducible representation of_ \(\operatorname{Sp}_{2a}\)_._ _Conversely, if_ \(\sigma\) _is an irreducible_ \(\psi\)_-generic representation of_ \(\operatorname{Sp}_{2a}\)_, then_ \(\pi:=\theta_{\psi}(\sigma)\) _is an irreducible representation of_ \(\operatorname{O}_{2k}\) _which occurs as a quotient of_ \(W_{\gamma_{2},\operatorname{triv}_{2},\psi}\)_._ Proof.: We show the first statement; the second (the 'dual statement') is similar. The main result of [Zh] (cf. also [GZ]), as stated in Proposition 6.4, implies that \[W_{\gamma_{r,1},\operatorname{triv},\psi}(\Theta_{\psi}(\pi))\cong W_{\gamma_{1},\operatorname{triv}_{1},\psi}(\pi^{\vee})\] for all \(\pi\in\operatorname{Irr}(\operatorname{O}_{2k})\). (Here \(\operatorname{triv}\) is the trivial representation for the trivial subgroup of \(M_{\gamma_{r,1}}\). Henceforth we shall drop the \(\operatorname{triv}\) subscripts for ease of reading.) On one hand, if \(\pi\) occurs as a quotient of \(W_{\gamma_{1}}\), then \(W_{\gamma_{1}}(\pi^{\vee})\neq 0\), hence \(W_{\gamma_{r,1}}(\Theta(\pi))\neq 0\); in particular \(\Theta(\pi)\neq 0\), hence \(\theta(\pi)\neq 0\). In view of Theorem 6.1, \(\pi\) is the (small) theta lift of an irreducible representation of \(\operatorname{Sp}_{2k-2a}\). Note that if \(\Theta(\pi)\) is itself already irreducible (i.e. equal to \(\theta(\pi)\)), which one expects to be true most of the time, then we have shown that \(\pi\) is the (small) theta lift of an irreducible \(\psi\)-_generic_ representation of \(\operatorname{Sp}_{2k-2a}\).

Figure 4. The Young tableaux corresponding to \(\gamma_{r,2}\) and \(\gamma_{2}\) respectively.

On the other hand, let \(\sigma\) be an irreducible \(\psi\)-generic (tempered) representation of \(\operatorname{Sp}_{2k-2a}\), so \(W_{\gamma_{r,1}}(\sigma)\neq 0\). As in the remarks after Theorem 4.1 of [Ba], since \(1\leq a\leq k-1\) and \(\sigma\) is generic, one has \(\theta(\sigma)\neq 0\). We want to show that \(W_{\gamma_{1}}(\theta(\sigma))\neq 0\).
Suppose otherwise that \(W_{\gamma_{1}}(\theta(\sigma))=0\); then \(W_{\gamma_{r,1}}(\Theta(\theta(\sigma)))=0\); but this means that \(\sigma\), as a quotient of \(\Theta(\theta(\sigma))\) (cf. Theorem 6.1), is not generic, a contradiction. Note that, by the enhanced form of Shahidi's conjecture [HLL] for symplectic groups, any irreducible generic tempered representation lives in exactly one A-packet, and an A-packet is tempered if and only if it has a generic member. Since the theta-lift realises the respective functorial lifting via the maps \(\operatorname{O}_{2k-2a+1}\times\operatorname{SL}_{2}\to\operatorname{O}_{2k}\) and \(\operatorname{O}_{2a+1}\times\operatorname{SL}_{2}\to\operatorname{O}_{2k}\) (cf. Theorem 6.2), we see that Theorem 7.2 realises the expected functorial lifting of Expectation 4.7. In other words, Theorem 7.2 shows Theorem\({}^{\prime}\) 7.1 at the smooth local level. _Remark 7.3_.: From the proof, we see also that the multiplicity-one property should hold for \(W_{\gamma_{1}}\) (and \(W_{\gamma_{2}}\)), that is, \[\dim\operatorname{Hom}(W_{\gamma_{1}},\pi)\leq 1\] for \(\pi\in\operatorname{Irr}(\operatorname{O}_{2k})\), as long as \(\Theta(\pi)\) is irreducible (i.e. equal to \(\theta(\pi)\)). _Remark 7.4_.: One has \(1\leq a\leq k-1\). When \(a=k-1\), the corresponding nilpotent orbit is trivial, and we recover (in principle) the case of the spherical variety \(\operatorname{O}_{2k-1}\backslash\operatorname{O}_{2k}\), which was studied in [GaWa]. ### Odd orthogonal groups Suppose \(n=2k+1\) is odd. Then we must have \(a_{1}=2a\) even. In this case, the dual group \(G^{\vee}\) is a symplectic group, so there are slight differences from the previous section. **Theorem\({}^{\prime}\) 7.5**.: _The hyperspherical varieties \(M_{1},M_{2}\) defined respectively by_ * _the datum_ \(\operatorname{O}_{2a}\times\operatorname{SL}_{2}\to\operatorname{O}_{2k+1}\)_, corresponding to the nilpotent orbit with partition_ \([n-2a,1^{2a}]\)_, and trivial_ \(S\)_;_ * _and the datum_ \(\operatorname{Sp}_{2k-2a+2}\times\operatorname{SL}_{2}\to\operatorname{Sp}_{2k}\)_, corresponding to the nilpotent orbit with partition_ \([2a-2,1^{2k-2a+2}]\)_, and_ \(S\) _the standard symplectic representation of_ \(\operatorname{Sp}_{2k-2a+2}\)_,_ _are dual under the expected [BZSV]-duality of hyperspherical varieties._ Recalling the setup of Section 6.3, let \(\gamma_{1}\) be the nilpotent orbit of \(\mathfrak{so}_{2k+1}\) corresponding under the moment map to a regular nilpotent orbit \(\gamma_{r,1}\) of \(\mathfrak{sp}_{2k-2a+2}\); we know that it corresponds to the partition \([2k-2a+1,1^{2a}]\). Similarly, let \(\gamma_{2}\) be the nilpotent orbit of \(\mathfrak{sp}_{2k}\) corresponding under the moment map to a regular nilpotent orbit \(\gamma_{r,2}\) of \(\mathfrak{so}_{2a}\); we know that it corresponds to the partition \([2a-2,1^{2k-2a+2}]\). Note that, in this case, there is some ambiguity in the choice of regular nilpotent orbit \(\gamma_{r,2}\) of \(\mathfrak{so}_{2a}\) (since the corresponding partition is \([2a-1,1]\) and not \([2a]\)). We fix a choice with the discriminant of the (orthogonal) form \(B_{1}\) on the multiplicity space \(V_{1}\) being trivial in \(F^{\times}/F^{\times 2}\). The centraliser of \(\gamma_{2}\), \(M_{\gamma_{2}}\), is isomorphic to the direct product of \(\mu_{2}\) and \(H\), the isometry group of a \((2k-2a+2)\)-dimensional symplectic vector space over \(F\).
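Again, in the notation of Proposition 6.3, the two transfers here are the tableau computations illustrated in Figures 5 and 6 below: \[[2k-2a+2]\ \longmapsto\ [2k-2a+1,1^{2a}],\qquad[2a-1,1]\ \longmapsto\ [2a-2,1^{2k-2a+2}],\] obtained by removing the first column and padding with rows of length 1 up to total size \(2k+1\) and \(2k\) respectively.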
As in subsection 2.3.3, let \(\rho\) be the dual of the Weil representation (associated to \(\psi\)) of the metaplectic cover of \(H\), which is also isomorphic to the Weil representation associated to \(\bar{\psi}(x):=\psi(-x)\). Furthermore, as we have seen in Section 4, \(M_{1}\) and \(M_{2}\) have quantizations which are respectively the generalised Whittaker representations \(W_{\gamma_{1},\operatorname{triv}_{1},\psi}\) and \(W_{\gamma_{2},\rho,\psi}\). If working with the hyperspherical datum over \(F\) (cf. Remark 4.6), we negate the symplectic form on \(S\), so that its quantization is the Weil representation associated to \(\bar{\psi}\), which is \(\rho\).

Figure 5. The Young tableaux corresponding to \(\gamma_{r,1}\) and \(\gamma_{1}\) respectively.

Figure 6. The Young tableaux corresponding to \(\gamma_{r,2}\) and \(\gamma_{2}\) respectively.

**Theorem 7.6**.: _We have:_ * _If_ \(\pi\) _is an irreducible representation of_ \(\mathrm{O}_{2k+1}\) _which occurs as a quotient of_ \(W_{\gamma_{1},\mathrm{triv}_{1},\psi}\)_, then_ \(\pi=\theta_{\psi}(\sigma)\) _for_ \(\sigma\) _an irreducible representation of_ \(\mathrm{Mp}_{2k-2a+2}\)_._ _Conversely, if_ \(\sigma\) _is an irreducible_ \(\psi\)_-generic representation of_ \(\mathrm{Mp}_{2k-2a+2}\)_, then if_ \(\pi:=\theta_{\psi}(\sigma)\neq 0\)_, it is an irreducible representation of_ \(\mathrm{O}_{2k+1}\) _which occurs as a quotient of_ \(W_{\gamma_{1},\mathrm{triv}_{1},\psi}\)_._ * _If_ \(\pi\) _is an irreducible representation of_ \(\mathrm{Sp}_{2k}\) _which occurs as a quotient of_ \(W_{\gamma_{2},\rho,\psi}\)_, then_ \(\pi=\theta_{\psi}(\sigma)\) _for_ \(\sigma\) _an irreducible representation of_ \(\mathrm{O}_{2a}\)_._ _Conversely, if_ \(\sigma\) _is an irreducible_ \(\psi\)_-generic_\({}^{2}\) _representation of_ \(\mathrm{O}_{2a}\)_, then_ \(\pi:=\theta_{\psi}(\sigma)\) _is an irreducible representation of_ \(\mathrm{Sp}_{2k}\) _which occurs as a quotient of_ \(W_{\gamma_{2},\rho,\psi}\)_._ Footnote 2: One should think of the dependence not as being on \(\psi\), but rather on the choice of regular nilpotent orbit \(\gamma_{r,2}\) of \(\mathfrak{so}_{2a}\) (as above), as is explained in [GGP, Section 12]. Proof.: Since the proof will be largely the same as that of Theorem 7.2, we do not reproduce it here. Note here that, by the local Shimura correspondence [GaSa], irreducible representations of the metaplectic group are parameterised by L-parameters into the symplectic group. Now since the theta-lift is expected to realise the respective functorial lifting via the maps \(\mathrm{Sp}_{2k-2a+2}\times\mathrm{SL}_{2}\to\mathrm{Sp}_{2k}\) and \(\mathrm{O}_{2a}\times\mathrm{SL}_{2}\to\mathrm{O}_{2k+1}\) (cf. Theorem 6.2), we see that Theorem 7.6 should realise the expected functorial lifting of Expectation 4.7. In other words, Theorem 7.6 shows Theorem\({}^{\prime}\) 7.5 at the smooth local level. _Remark 7.7_.: From the proof, we see also that the multiplicity-one property should hold for \(W_{\gamma_{1}}\) (and \(W_{\gamma_{2},\rho}\)), that is, \[\dim\mathrm{Hom}(W_{\gamma_{1}},\pi)\leq 1\] for \(\pi\in\mathrm{Irr}(\mathrm{O}_{2k+1})\), as long as \(\Theta(\pi)\) is irreducible (i.e. equal to \(\theta(\pi)\)). _Remark 7.8_.: One has \(2\leq a\leq k\). When \(a=k\), the corresponding nilpotent orbit is trivial, and we recover (in principle) the case of the spherical variety \(\mathrm{O}_{2k}\backslash\mathrm{O}_{2k+1}\), which was studied in [GaWa]. ### Closing remarks We close this section with several remarks.
_Remark 7.9_.: It is worth noting that we do also exhaust all the possible hook-type partitions for the group \(\mathrm{Sp}_{2n}\), modulo the occurrence of the Weil representation, or the non-trivial \(S\). When \(S\) is trivial, this corresponds to trivial \(\rho\), so that (cf. the discussion of Section 2.3.2) we should be working with representations of the metaplectic cover \(\mathrm{Mp}_{2n}\) in its quantization. In the language of [BZSV], the corresponding \(M\) has anomaly, cf. Remark 3.7, and is currently excluded from the expectations of duality. The choice of \(S\), or of \(\rho\) as above, corresponds to the canonical choice explained in subsection 2.3.3. In fact \(S\) is the dual of the symplectic space \((\mathfrak{u}/\mathfrak{u}^{+})\), and the resulting Hamiltonian variety \(M_{2}\) is a (twisted) cotangent bundle over \(HU^{+}\backslash G\), whose quantization is an induction from a character, as in (2.5). So we obtain a fairly complete picture of the duality in the case of hook-type partitions for the orthogonal and symplectic groups. _Remark 7.10_.: The main result of Gomez and Zhu [GZ], [Zh] used in the proof has natural global analogues, special cases of which have been studied, for example, in [10]. It is possible that this will play a role in the study of the analogous global problem. _Remark 7.11_.: As alluded to in §1.4(b), our results on the hook-type partitions in this section give rise to a correspondence \[e\leftrightarrow e^{\vee}\] between nilpotent orbits of \(G\) and \(G^{\vee}\), which resembles a duality theory of nilpotent orbits. It is very natural to ask if this correspondence coincides with, for instance, the Barbasch-Vogan duality of nilpotent orbits [11]. Unfortunately, this does not appear to be the case. For instance, we have computed that the nilpotent orbit with partition \([3,1,1,1,1]\) in type \(B_{3}\) corresponds under Barbasch-Vogan duality to the nilpotent orbit with partition \([4,2]\) in type \(C_{3}\), which is not of hook-type. _Remark 7.12_.: As mentioned in Remarks 2.4 and 5.10, one may also consider the generalised Whittaker models as representations of \(G\times\tilde{M}_{\gamma}\), especially when dealing with the Bessel and Fourier-Jacobi models (i.e. hook-type partitions). However, the situation in this case (where we are dealing with the Bessel and Fourier-Jacobi models) as regards hyperspherical duality is in fact somewhat 'simpler' than the situation we deal with, as the spherical varieties thus obtained are _strongly tempered_, in the sense that the dual datum satisfies \(X^{\vee}=G^{\vee}\times\tilde{M}_{\gamma}^{\vee}\) [11]. The dual in this case is essentially known [12] to be the standard symplectic vector space (acted upon by the corresponding dual groups \(G^{\vee}\) and \(\tilde{M}_{\gamma}^{\vee}\), which form a reductive dual pair), quantizing to the Weil representation (or theta correspondence / Howe duality, as studied in [14]). The dual problem concerning the spectral decomposition of the Weil representation is essentially precisely Adams' conjecture (Theorem 6.2)! We note that, for instance, corresponding problems related to the Bessel models have been studied by Liu [22], and for the Fourier-Jacobi models by Xue [23]. ## 8. Duality under symplectic reduction Following a suggestion of Venkatesh, let us now examine how duality in the \(G\times M_{\gamma}\) case (Remark 7.12, [12]) may be seen to be related to duality in the \(G\) case.
In doing so, we will also see how hyperspherical duality is expected to relate to the operation of symplectic reduction, via the following guiding principle: _Hyperspherical duality 'commutes' with symplectic reduction._ We will now illustrate this using the case where \(G\) is the even orthogonal group, that is, the case in Section 7.1. Retain all notation of Section 7.1. From (4.4), one easily sees that the hyperspherical variety \(M_{1}\) is obtained by the symplectic reduction (Definition 3.9), with respect to \(H_{1}=\operatorname{O}_{2a+1}\), of the corresponding hyperspherical variety in the \(G\times M_{\gamma}\) case, that is, the hyperspherical variety with datum \[\operatorname{O}_{2a+1}^{\Delta}\times\operatorname{SL}_{2}\to\operatorname{O}_{2k}\times\operatorname{O}_{2a+1}\quad\text{and}\quad S=0.\] Denote this hyperspherical variety by \(M_{1}^{\prime}\). We may thus also view \(M_{1}\) as the symplectic reduction of \(M_{1}^{\prime}\) with the trivial space \(\{0\}\) for \(\operatorname{O}_{2a+1}\). We have seen above that the dual of \(M_{1}^{\prime}\) is just the symplectic vector space \(M_{2}^{\prime}=\mathbb{C}^{2k}\otimes\mathbb{C}^{2a}\) given by the tensor product of the defining representations of the dual group \(\operatorname{O}_{2k}\times\operatorname{Sp}_{2a}\). Now, since the dual of the trivial \(\operatorname{O}_{2a+1}\)-space \(\{0\}\) is the Whittaker cotangent bundle \(T^{*}(N,\psi\backslash\operatorname{Sp}_{2a})\) for \(N\) a maximal unipotent subgroup of \(\operatorname{Sp}_{2a}\), one might expect, following a suggestion of Venkatesh, that the dual of \(M_{1}\) is obtained by the symplectic reduction of \(M_{2}^{\prime}\) with the Whittaker cotangent bundle \(T^{*}(N,\psi\backslash\operatorname{Sp}_{2a})\) for \(\operatorname{Sp}_{2a}\) (see Definition 3.9 and the remarks after it): \[(M_{2}^{\prime}\times_{\mathfrak{sp}_{2a}^{*}}T^{*}(N,\psi\backslash\operatorname{Sp}_{2a}))/\operatorname{Sp}_{2a}.\] There are several ways to view such a reduction with the Whittaker cotangent bundle, which we will also call a 'Whittaker reduction'. On one hand, writing \(T^{*}(N,\psi\backslash\operatorname{Sp}_{2a})\) in the usual form \((\lambda+\mathfrak{n}^{\perp})\times^{N}\operatorname{Sp}_{2a}\) (as in Example 3.13), one may easily view this reduction of \(M_{2}^{\prime}\) as a 'reduction with respect to \((N,\psi)\)'. (In fact, viewing the Whittaker cotangent bundle as a symplectic induction from \(N\) to \(\operatorname{Sp}_{2a}\), this is an instance of the symplectic analogue of Frobenius reciprocity, as in Remark 3.11 and [2, Theorem 3.4].) This form is most familiar to us from the representation-theoretic viewpoint, where it corresponds to the formation of \((N,\psi)\)-coinvariants. On the other hand, one may again use (4.4), but for the Whittaker cotangent bundle, to write \[T^{*}(N,\psi\backslash\operatorname{Sp}_{2a})\cong(f_{r,2}+\mathfrak{sp}_{2a}^{e_{r,2}})\times\operatorname{Sp}_{2a}, \tag{8.1}\] where \((f_{r,2}+\mathfrak{sp}_{2a}^{e_{r,2}})\) is the principal Slodowy slice for the regular nilpotent orbit \(\gamma_{r,2}\) of \(\mathfrak{sp}_{2a}\) (recall that we are continuing the notation of Section 7.1). Now one sees readily that the desired reduction of \(M_{2}^{\prime}\) is then nothing but the pre-image, under the moment map \(M_{2}^{\prime}\to\mathfrak{sp}_{2a}^{*}\), of the principal Slodowy slice \((f_{r,2}+\mathfrak{sp}_{2a}^{e_{r,2}})\) of \(\mathfrak{sp}_{2a}\).
See [CR, Proposition 3.9] for a detailed proof, where such a reduction is also called a _Poisson slice_ of \(M_{2}^{\prime}\) (with respect to \(\operatorname{Sp}_{2a}\)). Therefore one expects that the dual \(M_{2}\), which is the cotangent bundle \(T^{*}(H_{2}U_{2}\backslash\operatorname{O}_{2k})\) obtained by the Whittaker induction from the hook-type nilpotent orbit \(\gamma_{2}\) of \(\operatorname{O}_{2k}\), is precisely obtained by taking such a Poisson slice of the symplectic vector space \(M_{2}^{\prime}\). We summarise the above discussion in the diagram at the end of this section. **Proposition 8.1**.: _Retain the notation of the preceding discussion. Assume that the Poisson slice, or Whittaker reduction, of \(M_{2}^{\prime}\) is hyperspherical. Then this Poisson slice is isomorphic to the dual \(M_{2}\)._ Proof.: First, one may use [CR, Corollary 3.4] to verify that the desired dimensions coincide: both have dimension \(4ka-2a^{2}\). Recall that the nilpotent orbit \(\gamma_{2}\) is precisely obtained by a transfer, via moment maps (Section 6.3), from the regular nilpotent orbit \(\gamma_{r,2}\) (cf. Figure 4). Now, note that the moment maps in Section 6.3 are precisely the moment maps for the symplectic vector space \(M_{2}^{\prime}\) (after choosing some invariant identifications)! Therefore, retaining the notation of Section 6.3, by definition of the transfer, one has an element \(f\in M_{2}^{\prime}\), which maps under the moment maps to the nilpotent elements \(f_{2}\) of \(\mathfrak{so}_{2k}^{*}\) (corresponding to the nilpotent orbit \(\gamma_{2}\)), and \(f_{r,2}\) of \(\mathfrak{sp}_{2a}^{*}\) (corresponding to \(\gamma_{r,2}\)) respectively. In particular, since its moment map image is \(f_{r,2}\), \(f\) lives in the Poisson slice of \(M_{2}^{\prime}\). Now since the moment map to \(\mathfrak{sp}_{2a}^{*}\) is a quotient map for \(\mathrm{O}_{2k}\), it is also readily verified that \(f\) is in the (unique) closed \(\mathrm{O}_{2k}\times\mathbb{G}_{m}\)-orbit of the Poisson slice, which is the pre-image of \(f_{r,2}\) in \(M_{2}^{\prime}\). Here the \(\mathbb{G}_{m}\)-action is the grading (cf. Remark 4.3) on the Poisson slice, which comes from the grading on \(M_{2}^{\prime}\) and the Whittaker cotangent bundle. Under the identification of (8.1), \(\mathbb{G}_{m}\) acts on \(\mathfrak{sp}_{2a}^{e_{r,2}}\) so as to leave \(0\in\mathfrak{sp}_{2a}^{e_{r,2}}\) fixed. Furthermore, by the definition of \(f\) as a linear map between vector spaces satisfying the conditions of Proposition 6.3, one has that \(\ker f=V_{new}\) is \((2k-2a+1)\)-dimensional (and carries a non-degenerate orthogonal form). Now it is easy to verify that the stabiliser of \(f\) is precisely \[H_{2}=\mathrm{O}_{2k-2a+1}\] (cf. also [Zh, Lemma 3.4]). Now, since the moment map image of \(f\) is \(f_{2}\in\mathfrak{so}_{2k}^{*}\), one may employ the structure theorem of [BZSV] to deduce that the Poisson slice is indeed given by Whittaker induction along the datum \[\mathrm{O}_{2k-2a+1}\times\mathrm{SL}_{2}\to\mathrm{O}_{2k}\] corresponding to the nilpotent orbit \(\gamma_{2}\), with the element \(f\) as basepoint. The fact that the inducing symplectic vector space \(S\) is trivial follows from the dimension comparison at the beginning. _Remark 8.2_.: This isomorphism is the geometric analogue of the result of Gomez-Zhu (by the above interpretation of taking \((N,\psi)\)-coinvariants).
By the above proof, we also obtain a very conceptual geometric explanation of the involvement of the moment map transfer of nilpotent orbits in the result of Gomez and Zhu. Now, since \(M_{2}\) is also a hook-type hyperspherical variety, one may repeat the above discussion with the roles of \(M_{1}\) and \(M_{2}\) reversed. In all, one has the following summarising diagram: \[\begin{array}{ccc}\operatorname{O}_{2k}\times\operatorname{O}_{2a+1}\circlearrowright M_{1}^{\prime}&\overset{\text{duality}}{\longleftrightarrow}&M_{2}^{\prime}=\mathbb{C}^{2k}\otimes\mathbb{C}^{2a}\circlearrowleft\operatorname{O}_{2k}\times\operatorname{Sp}_{2a}\\ \downarrow{\scriptstyle\text{symplectic reduction with }\{0\}}&&\downarrow{\scriptstyle\text{Whittaker reduction / Poisson slice}}\\ \operatorname{O}_{2k}\circlearrowright M_{1}&\overset{\text{duality}}{\longleftrightarrow}&M_{2}\circlearrowleft\operatorname{O}_{2k}\\ \uparrow{\scriptstyle\text{Whittaker reduction / Poisson slice}}&&\uparrow{\scriptstyle\text{symplectic reduction with }\{0\}}\\ \operatorname{O}_{2k}\times\operatorname{Sp}_{2k-2a}\circlearrowright M_{1}^{\prime\prime}=\mathbb{C}^{2k}\otimes\mathbb{C}^{2k-2a}&\overset{\text{duality}}{\longleftrightarrow}&M_{2}^{\prime\prime}\circlearrowleft\operatorname{O}_{2k}\times\operatorname{O}_{2k-2a+1}\end{array}\] ## 9. Exceptional partitions In this section, we examine the 'exceptional' partitions that feature in Theorems 5.5 and 5.6. In particular, we determine in many cases their expected hyperspherical duals and explain how the local Expectation 4.7, as well as that for the 'dual' problem (Remark 4.8), may be seen to be satisfied using (exceptional) theta correspondences. We also summarise some further expectations for hyperspherical duality at the end of the section; see Expectation 9.9. In the following, we will take the liberty of working with different isogenous types of the groups involved, or with similitude groups. Note here that one advantage of the proofs of Theorems 5.5 and 5.6 (and the classification of spherical varieties in [KVS]) is that they go through the commutator subalgebras of the Lie algebras (so that the proofs carry over to isogenous or similitude groups without too much difficulty). For simplicity, we will thus not make much essential distinction between isogenous types or similitude groups. ### Orthogonal groups Recall from Theorem 5.5 that the exceptional partitions which occur for the orthogonal group \(\text{O}_{n}\) are \[[3,3],[4,4],[6,6].\] We shall consider each of these in turn. #### 9.1.1. \([3,3]\) partition Because \(A_{3}=D_{3}\), we are essentially considering the general linear group \(\operatorname{GL}_{4}\) in this case. Working in the context of \(\operatorname{GL}_{4}\), we are looking at the nilpotent orbit with hook-type partition \([3,1]\). In fact we may replace \(\operatorname{GL}_{4}\) with any \(\operatorname{GL}_{n}\) (\(n\geq 2\)), and a nilpotent orbit with partition \([n-1,1]\). The hyperspherical dual is then expected to be (the cotangent bundle of) the standard representation of \(\operatorname{GL}_{n}\), by the standard Jacquet-Shalika theory of integral representations of the standard L-function (applied to \(\operatorname{GL}_{n}\times\operatorname{GL}_{1}\), with the trivial character on \(\operatorname{GL}_{1}\)).
More precisely, we have the following meta-theorem: **Theorem\({}^{\prime}\) 9.1**.: _Let_ * \(M_{1}\) _be the hyperspherical variety associated with the datum_ \(\operatorname{GL}_{1}\times\operatorname{SL}_{2}\to\operatorname{GL}_{n}\) _(corresponding to the nilpotent orbit_ \(\gamma\) _of_ \(\operatorname{GL}_{n}\) _with partition_ \([n-1,1]\)_) and trivial_ \(S\)_;_ * \(M_{2}\) _be the hyperspherical variety associated with the datum_ \(\operatorname{GL}_{n}\times\operatorname{SL}_{2}\to\operatorname{GL}_{n}\) _(corresponding to the trivial nilpotent orbit) and_ \(S=\operatorname{std}\oplus\operatorname{std}^{*}\)_, where_ \(\operatorname{std}\) _is the standard representation of_ \(\operatorname{GL}_{n}\)_._ _Then \(M_{1}\) and \(M_{2}\) are dual under the expected [BZSV]-duality._ Let us now explicate the precise meaning of the above meta-theorem. As we have seen in Section 4, the quantization of \(M_{1}\) is the generalised Whittaker representation \(W_{\gamma,\psi}\). For \(M_{2}\), its quantization is (cf. Example 3.6) the pullback (after fixing a splitting) of the Weil representation \(\omega_{\psi}\) of \(\operatorname{Sp}_{2n}\) to the Levi factor \(\operatorname{GL}_{n}\) of its Siegel parabolic subgroup. Then the mathematical content of Theorem\({}^{\prime}\) 9.1 is: **Theorem 9.2**.: * _The irreducible representations of_ \(\operatorname{GL}_{n}\) _which occur as quotients of_ \(W_{\gamma,\psi}\) _are precisely the generic representations of_ \(\operatorname{GL}_{n}\)_._ * _The irreducible representations of_ \(\operatorname{GL}_{n}\) _(of Arthur type) which occur as quotients of_ \(\omega_{\psi}\) _have Arthur parameters factoring through the morphism_ \(\operatorname{GL}_{1}\times\operatorname{SL}_{2}\to\operatorname{GL}_{n}\) _defining_ \(M_{1}\)_._ Proof.: Let us in fact view \(\operatorname{GL}_{n}(F)\) as the unitary group \(\operatorname{U}(V)\) for a Hermitian space \(V\) of dimension \(n\) over \(E=F\times F\). In the unitary group case, almost all the results of Section 6 have corresponding analogues (though we did not state them for simplicity); for details, we point the reader to [GZ] and [Zh]. One may then, exactly as in Section 7, show that the irreducible representations of \(\operatorname{GL}_{n}\) which occur as quotients of the corresponding generalised Whittaker model are precisely the \(\psi\)-generic representations of \(\operatorname{GL}_{n}\). Note that the theta-lift for the (unitary) dual pair \(\operatorname{GL}_{n}\times\operatorname{GL}_{n}\) in this case is precisely the identity \(\pi\mapsto\pi\), cf. the remarks in [Ga2, Section 3.1]. For the second statement (the dual problem), one may view the situation as concerning the theta correspondence for the (unitary) dual pair \(\operatorname{U}_{1}\times\operatorname{U}_{n}\) in \(\operatorname{Sp}_{2n}\), since the theta correspondence in this case involves the restriction of the Weil representation \(\omega_{\psi}\) of \(\operatorname{Sp}_{2n}\) to \(\operatorname{U}_{1}\times\operatorname{U}_{n}\), and \(\operatorname{U}_{n}\cong\operatorname{GL}_{n}\) is the group we are interested in. (Recall that irreducible representations of \(\operatorname{U}_{1}\cong\operatorname{GL}_{1}\) are 1-dimensional.) The dual problem is then essentially just Adams' conjecture (cf. Theorem 6.2) for the dual pair \(\operatorname{U}_{1}\times\operatorname{U}_{n}\)!
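In terms of the unitary-group analogue of Proposition 6.3 mentioned in the proof, the transfer of nilpotent orbits underlying the first statement, for the dual pair \(\operatorname{GL}_{n}\times\operatorname{GL}_{n}\), is simply the tableau computation \[[n]\ \longmapsto\ [n-1,1]:\] removing the first column of the regular partition \([n]\) gives \([n-1]\), and padding with a single row of length 1 recovers the hook-type partition \([n-1,1]\).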
#### 9.1.2. \([4,4]\) partition In this case, we shall take \(G=\operatorname{PGSO}_{8}\) to be the adjoint group, so that \(G^{\vee}=\operatorname{Spin}_{8}\) is simply connected. The phenomenon of triality has some interesting structural implications in this context, which we shall first explicate. * First, there are 3 non-conjugate homomorphisms \[f_{j}:\operatorname{SO}_{8}\longrightarrow G=\operatorname{PGSO}_{8}\quad\text{and likewise}\quad p_{j}:G^{\vee}=\operatorname{Spin}_{8}\longrightarrow\operatorname{SO}_{8}.\] The \(f_{j}\)'s and \(p_{j}\)'s are permuted cyclically by a triality automorphism \(\theta\) of \(\operatorname{PGSO}_{8}\) and \(\operatorname{Spin}_{8}\) respectively. Now the description of nilpotent orbits in type \(D_{4}\) by partitions of 8 makes sense only if one is working with \(\operatorname{SO}_{8}\) with its standard representation. In the present setting, this signifies that we have distinguished one of the 3 conjugacy classes of maps, say \(f_{1}\) and \(p_{1}\), as the standard one. Thus, \(p_{1}:\operatorname{Spin}_{8}\to\operatorname{SO}_{8}\) is considered the standard representation, whereas \(p_{2}\) and \(p_{3}\) are considered the half-spin representations of \(\operatorname{Spin}_{8}\). * If one denotes by \(\operatorname{SO}_{7}\) the stabilizer in \(\operatorname{SO}_{8}\) of a unit vector in the standard representation, then set \[\operatorname{Spin}_{7}^{[j]}:=p_{j}^{-1}(\operatorname{SO}_{7})\subset\operatorname{Spin}_{8}.\] This gives 3 distinct conjugacy classes of embeddings \(\operatorname{Spin}_{7}\to\operatorname{Spin}_{8}\) and 3 spherical varieties \[X_{j}:=\operatorname{Spin}_{7}^{[j]}\backslash\operatorname{Spin}_{8}.\] Having fixed \(p_{1}:\operatorname{Spin}_{8}\to\operatorname{SO}_{8}\) as the standard representation of \(\operatorname{Spin}_{8}\), the restriction of \(p_{2}\) or \(p_{3}\) to \(\operatorname{Spin}_{7}^{[1]}\) is then the irreducible spin representation of \(\operatorname{Spin}_{7}\). Moreover, one has \[\operatorname{Spin}_{7}^{[i]}\cap\operatorname{Spin}_{7}^{[j]}\cong G_{2}\quad\text{if }i\neq j,\] where \(G_{2}\) is the exceptional group of rank 2. Likewise, the restriction of \(f_{j}\) to \(\operatorname{SO}_{7}\) gives 3 conjugacy classes of embeddings \(\operatorname{SO}_{7}\longrightarrow\operatorname{PGSO}_{8}\) permuted by triality. * If one starts with a nilpotent orbit of type \([5,1,1,1]\) relative to \((f_{1},p_{1})\) and applies the triality automorphism to it, one obtains a nilpotent orbit of type \([4,4]\) relative to \((f_{1},p_{1})\) (cf. [CM, Example 5.3.7]). After the above preliminaries on triality, we can now formulate the following meta-theorem. **Theorem\({}^{\prime}\) 9.3**.: _Let_ * \(M_{1}\) _be the hyperspherical variety associated to the datum corresponding to a nilpotent orbit of_ \(\mathrm{PGSO}_{8}\) _associated to a partition_ \([4,4]\) _(relative to_ \(f_{1}\)_);_ * \(M_{2}\) _be the cotangent bundle of the spherical variety_ \[X_{2}=\mathrm{Spin}_{7}^{[2]}\backslash\mathrm{Spin}_{8}.\] _Then \(M_{1}\) and \(M_{2}\) are dual under the expected [BZSV]-duality._ As before, the precise mathematical meaning of this meta-theorem involves resolving two branching problems. Indeed, we shall see that the desired result can be deduced by an application of a triality automorphism \(\theta\) to the results of Section 7.1 in the case \(a=3\) and \(k=4\), i.e.
for the duality between the nilpotent orbit of type \([5,1,1,1]\) and \(\mathrm{SO}_{7}\backslash\mathrm{SO}_{8}\cong\mathrm{Spin}_{7}^{[1]}\backslash\mathrm{Spin}_{8}=X_{1}\) (with the isomorphism induced by \(p_{1}\)). More precisely, since a triality automorphism \(\theta\) carries \(\mathrm{Spin}_{7}^{[1]}\) to \(\mathrm{Spin}_{7}^{[2]}\), it carries the irreducible quotients of \(C_{c}^{\infty}(X_{1})\) to those of \(C_{c}^{\infty}(X_{2})\), via: \[C_{c}^{\infty}(X_{2})\cong C_{c}^{\infty}(X_{1})^{\theta}.\] By Theorem 7.2, the irreducible quotients of \(C_{c}^{\infty}(X_{1})\) are described in terms of the dual data \[\mathrm{SO}_{3}\times\mathrm{SL}_{2}\longrightarrow\mathrm{SO}_{8}\longrightarrow\mathrm{PGSO}_{8}\] whose restriction to \(\mathrm{SL}_{2}\) corresponds to the partition \([5,1,1,1]\) (relative to \(f_{1}\)). In view of (c), the composition of this map with \(\theta\) gives a new morphism whose restriction to \(\mathrm{SL}_{2}\) corresponds to the partition \([4,4]\) (relative to \(f_{1}\)). This implies that the solution to the branching problem for the quantization of \(M_{2}\) (i.e. \(C_{c}^{\infty}(X_{2})\)) is given in terms of the initial data defining \(M_{1}\). For the dual problem, the branching problem for the quantization of \(M_{1}\) is the generalised Whittaker model attached to the partition \([4,4]\). By (c), this is carried by the triality automorphism \(\theta\) to the branching problem for the generalised Whittaker model attached to \([5,1,1,1]\). This latter branching problem has been addressed in Theorem 7.2 and its solution is given in terms of the dual data \[\mathrm{Spin}_{7}^{[1]}\longrightarrow\mathrm{Spin}_{8}.\] (More precisely, the irreducible quotients of the generalised Whittaker model attached to the partition \([5,1,1,1]\) are classical theta lifts of generic representations of \(\mathrm{PGSp}_{6}\).) The composition of the above dual data with \(\theta\) produces \[\mathrm{Spin}_{7}^{[2]}\longrightarrow\mathrm{Spin}_{8}.\] Thus the solution of the branching problem for the quantization of \(M_{1}\) is given by the above map, which is the initial data for \(M_{2}\). #### 9.1.3. \([6,6]\) partition One expects the following, by the results of [WZ]: **Theorem\({}^{\prime}\) 9.4**.: _Let_ * \(M_{1}\) _be the hyperspherical variety associated with the datum corresponding to a nilpotent orbit_ \(\gamma\) _of_ \(\operatorname{PGSO}_{12}\) _with partition_ \([6,6]\)_;_ * \(M_{2}\) _be the (multiplicity-free [Kn2]) half-spin representation_ \(S\) _of_ \(\operatorname{Spin}_{12}\)_._ _Then \(M_{1}\) and \(M_{2}\) are dual under the expected [BZSV]-duality._ As before, the mathematical content of this meta-theorem involves resolving the branching problems associated to the quantizations of \(M_{1}\) and \(M_{2}\), in terms of the data defining the other. Let us sketch how this can be done in this case. The problem of determining the irreducible quotients of the quantization of \(M_{1}\), or of the generalised Whittaker representation \(W_{\gamma,\psi}\) arising from the \([6,6]\) partition, is essentially resolved by the results in [WZ, Section 9], in which the local multiplicity for this model is studied.
This is an instance of the "strongly tempered" case, in conformity with the fact that the datum defining \(M_{2}\) is the pair \[(\operatorname{id}:\operatorname{Spin}_{12}\longrightarrow\operatorname{Spin}_{12},\,S).\] Therefore what remains is to investigate the dual problem, which concerns the quantization of \(M_{2}\), or of the half-spin representation of \(\operatorname{Spin}_{12}\). This is the pullback, via the half-spin representation, of the Weil representation \(\omega_{\psi}\) of \(\operatorname{Mp}_{32}\). (Note that the metaplectic cover splits over \(\operatorname{Spin}_{12}\), a manifestation of the fact that \(M_{2}\) is anomaly-free (Remark 3.7).) Let us continue to denote this representation of \(\operatorname{Spin}_{12}\) by \(\omega_{\psi}\). To analyse the irreducible quotients of \(\omega_{\psi}\) as a \(\operatorname{Spin}_{12}\)-module, we will make use of the dual pair \((\operatorname{SL}_{2},H):=(\operatorname{SL}_{2},\operatorname{Spin}_{12})\) in the (simply-connected) exceptional group of type \(E_{7}\), where \(H\) is the derived subgroup of the Levi factor \(L\) of a Heisenberg parabolic \(P=LU\) of \(E_{7}\). The unipotent radical \(U\) is a Heisenberg group corresponding to a 32-dimensional symplectic space, on which \(H\) acts via the half-spin representation. From the description [Ru, Proposition 43] of the mixed model of the minimal representation \(\Pi\) of \(E_{7}\), in terms of the Weil representation \(\omega_{\psi}\) obtained on \(H\), one has: as a representation of \(\operatorname{Spin}_{12}\), \[\omega_{\psi}\cong\Pi_{N,\psi},\] for \(N\) a maximal unipotent subgroup of \(\operatorname{SL}_{2}\). Hence we see that the irreducible quotients of \(\omega_{\psi}\cong\Pi_{N,\psi}\) consist of lifts (via this exceptional theta correspondence) from \(\psi\)-generic representations of \(\operatorname{SL}_{2}\), which exhibits the desired lifting of Expectation 4.7. It remains to check the \(\operatorname{SL}_{2}\)-type of the lifted representations (they should correspond to the \([6,6]\) partition). That this is the case follows from [GaSa2, Proof of Prop. 4.4, Pg. 1239, line -4]. ### Symplectic groups Recall from Theorem 5.6 that the exceptional partitions which occur for the symplectic group \(\mathrm{Sp}_{2n}\) are \[[3,3],[5,5],[3,3,1^{2a}],[5,5,1^{2a}].\] #### 9.2.1. The models related to the partitions \[[3,3,1^{2a}],[5,5,1^{2a}]\] for \(a>0\) are closely related to those for the partitions \[[4,4],[6,6]\] in the orthogonal group case, by the theta correspondence and the results of Gomez and Zhu [GZ], [Zh] (as in Section 6); see the tableau computation below. In particular, once one understands the models for \([4,4],[6,6]\), then one also understands the models for \([3,3,1^{2a}],[5,5,1^{2a}]\). This allows one to easily formulate conjectural expectations for their hyperspherical duals; some such examples may be found in Expectation 9.9 below. However, the corresponding dual problems seem to be much more difficult, and we are not yet aware of any way to resolve this case in complete generality.
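In the notation of Proposition 6.3, the relation is the tableau computation \[[4,4]\ \longmapsto\ [3,3,1^{2a}],\qquad[6,6]\ \longmapsto\ [5,5,1^{2a}]:\] one removes the first column of the orthogonal partition and pads with rows of length 1 on the symplectic side.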
#### 9.2.2. \([3,3]\) partition Similar to the orthogonal group case above in subsection 9.1.2, one has: **Theorem\({}^{\prime}\) 9.5**.: _Let_ * \(M_{1}\) _be the hyperspherical variety for_ \(\mathrm{PGSp}_{6}\) _associated to the datum corresponding to a nilpotent orbit_ \(\gamma\) _of_ \(\mathrm{PGSp}_{6}\) _with partition_ \([3,3]\)_;_ * \(M_{2}\) _be the cotangent bundle of the spherical variety_ \[G_{2}\backslash\mathrm{Spin}_{7}.\] _Then \(M_{1}\) and \(M_{2}\) are duals of each other under the expected [BZSV]-duality._ The hyperspherical variety \(M_{1}\) has a quantization which is the generalised Whittaker representation \(W_{\gamma,\psi}\). Working with the group \(\mathrm{PGSp}_{6}\) instead, let us consider the dual pair \((G_{2},\mathrm{PGSp}_{6})\) in the (adjoint) exceptional group of type \(E_{7}\). Let \(\Pi\) be the minimal representation of \(E_{7}\), and \((N,\psi)\) a Whittaker datum for the group of type \(G_{2}\). By [GaSa3, Proposition 11.5], we see that \[W_{\gamma,\psi}\cong\Pi_{N,\psi}.\] Therefore, since Howe duality (and the functorial properties of the theta-lift) has been shown for this dual pair \((G_{2},\mathrm{PGSp}_{6})\) in [GaSa3], one sees that the irreducible quotients of the generalised Whittaker representation \(W_{\gamma,\psi}\cong\Pi_{N,\psi}\) corresponding to the \([3,3]\) partition consist of lifts (via this exceptional theta correspondence) of generic representations of \(G_{2}\). In other words, the L-parameters of these irreducible quotients factor through the inclusion \[G_{2}\longrightarrow\mathrm{Spin}_{7},\] which is the datum defining \(M_{2}\). For the dual problem, the quantization of \(M_{2}=T^{*}(G_{2}\backslash\mathrm{Spin}_{7})\) is \(C_{c}^{\infty}(G_{2}\backslash\mathrm{Spin}_{7})\). One needs to show that the irreducible quotients of this representation have A-parameters which factor through \[\mathrm{SO}_{3}\times\mathrm{SL}_{2}\longrightarrow\mathrm{SO}_{3}\times\mathrm{SO}_{3}\longrightarrow\mathrm{PGSp}_{6}.\] This means that these irreducible representations of \(\mathrm{Spin}_{7}\) should be functorial lifts from \(\mathrm{SL}_{2}\). This expectation can in fact be shown by the classical theta correspondence for \(\mathrm{SL}_{2}\times\mathrm{SO}_{8}\). More precisely, as we saw in subsection 9.1.2, the theta correspondence for \(\mathrm{SL}_{2}\times\mathrm{SO}_{8}\) shows that the irreducible quotients of \(C_{c}^{\infty}(\mathrm{SO}_{7}\backslash\mathrm{SO}_{8})\) are given by theta lifts from generic representations of \(\mathrm{SL}_{2}\). On the other hand, by item (b) in §9.1.2, we have an embedding \[p_{2}:\mathrm{Spin}_{7}\hookrightarrow\mathrm{SO}_{8}\] inducing \[G_{2}\backslash\mathrm{Spin}_{7}\cong\mathrm{SO}_{7}\backslash\mathrm{SO}_{8}.\] Hence the irreducible constituents of \[C_{c}^{\infty}(G_{2}\backslash\mathrm{Spin}_{7})\cong C_{c}^{\infty}(\mathrm{SO}_{7}\backslash\mathrm{SO}_{8})\] should be the irreducible representations of \(\mathrm{Spin}_{7}\) obtained by restriction, via \(p_{2}\), of the theta lifts of irreducible generic representations of \(\mathrm{SL}_{2}\). _Remark 9.6_.: This \([3,3]\) case is the only exceptional case for the orthogonal or symplectic groups (and one of two exceptional cases in general) which features in the classification of [FU].
Similar to Section 8, one has that the dual of their \(\mathrm{PGSp}_{6}\times\mathrm{PGL}_{2}\)-variety \(M_{1}^{\prime}\) is the 16-dimensional symplectic vector space \(M_{2}^{\prime}\) for \(\mathrm{SO}_{8}\times\mathrm{SL}_{2}\), restricted to \(\mathrm{Spin}_{7}\times\mathrm{SL}_{2}\). As in Proposition 8.1, one has that the \(\mathrm{SL}_{2}\)-Whittaker reduction, or Poisson slice, of \(M_{2}^{\prime}\) is isomorphic to the cotangent bundle of \(\mathrm{SO}_{7}\backslash\mathrm{SO}_{8}\), which is in turn isomorphic to the cotangent bundle of \(G_{2}\backslash\mathrm{Spin}_{7}\), which is \(M_{2}\). Finally, let us remark also in passing that the final exceptional case of [FU] is a duality between \(M_{1}^{\prime}\), the \(G_{2}\times\mathrm{SL}_{2}\)-variety corresponding to the 8-dimensional nilpotent orbit of \(G_{2}\), and \(M_{2}^{\prime}\), the 14-dimensional symplectic vector space for \(\mathrm{SO}_{7}\times\mathrm{SL}_{2}\), restricted to \(G_{2}\times\mathrm{SL}_{2}\) (which is in fact anomalous). Again, as in Section 8 and the previous paragraph, one has an expected duality between * \(M_{1}\) the hyperspherical variety for \(G_{2}\) associated to the datum corresponding to the 8-dimensional nilpotent orbit \(\gamma\) of \(G_{2}\); * \(M_{2}\) the cotangent bundle of the spherical variety \[\mathrm{SL}_{3}\backslash G_{2}\ (\cong\mathrm{SO}_{6}\backslash\mathrm{SO}_{7}).\] The corresponding branching problems can be shown exactly as for the \([3,3]\) case in this subsection, using the exceptional theta correspondence for the dual pair \(\mathrm{PGL}_{3}\times G_{2}\) (in \(E_{6}\)), and the classical theta correspondence for \(\widetilde{\mathrm{SL}_{2}}\times\mathrm{SO}_{7}\) (restricted to \(\widetilde{\mathrm{SL}_{2}}\times G_{2}\)), respectively. The former has been completely understood by [GaSa3], and the latter by [GaGu]. #### 9.2.3. \([5,5]\) partition Similar to the orthogonal group case above in subsection 9.1.3, one has: **Theorem\({}^{\prime}\) 9.7**.: _Let_ * \(M_{1}\) _be the hyperspherical variety associated with a nilpotent orbit_ \(\gamma\) _of_ \(\operatorname{PGSp}_{10}\) _with partition_ \([5,5]\)_;_ * \(M_{2}\) _be the (multiplicity-free [Kn2]) spin representation_ \(S\) _of_ \(\operatorname{Spin}_{11}\)_._ _Then \(M_{1}\) and \(M_{2}\) are duals of each other under the expected [BZSV]-duality._ Similar to subsection 9.1.3, the problem of determining the irreducible quotients of the generalised Whittaker representation \(W_{\gamma,\psi}\) arising from the \([5,5]\) partition is essentially resolved by the results in [WZ, Section 9], in which the local multiplicity for this model is studied. This is an instance of the "strongly tempered" case, whose solution is expressed in terms of the dual datum \[(\operatorname{id}:\operatorname{Spin}_{11}\to\operatorname{Spin}_{11},\,S).\] Therefore what remains is to study the dual problem, which concerns the quantization of the spin representation \(S\) of \(\operatorname{Spin}_{11}\). But since the spin representation of \(\operatorname{Spin}_{11}\) is the half-spin representation for \(\operatorname{Spin}_{12}\) restricted to \(\operatorname{Spin}_{11}\), this quantization is just the Weil representation of \(\operatorname{Mp}_{32}\) pulled back to \(\operatorname{Spin}_{12}\) (which we studied above in §9.1.3) and then further to \(\operatorname{Spin}_{11}\). ### A non-even example Let us give another example of the [BZSV]-duality which involves the case of a non-even nilpotent orbit.
**Theorem\({}^{\prime}\) 9.8**.: _Let_

* \(M_{1}\) _be the hyperspherical variety associated with a (non-even) nilpotent orbit_ \(\gamma\) _of_ \(\operatorname{PGSO}_{8}\) _with partition_ \([2,2,1,1,1,1]\)_;_
* \(M_{2}\) _be the cotangent bundle of the spherical variety_ \[G_{2}\backslash\operatorname{Spin}_{8}.\]

_Then \(M_{1}\) and \(M_{2}\) are dual under the expected [BZSV] duality._

To justify this meta-theorem, we first consider the branching problem associated to the quantization of \(M_{1}\), which is a generalized Whittaker representation \(W_{\gamma,\operatorname{triv},\psi}\). By the similitude theta correspondence for \(\operatorname{PGSO}_{8}\times\operatorname{PGSp}_{6}\), and applying the result of Gomez-Zhu for this dual pair, one sees that the irreducible quotients of \(W_{\gamma,\operatorname{triv},\psi}\) are theta lifts of the irreducible quotients of the generalized Whittaker representation of \(\operatorname{PGSp}_{6}\) associated with a nilpotent orbit with partition \([3,3]\). As we have alluded to in subsection 9.2.2, the latter irreducible quotients are themselves theta lifts of generic representations of \(G_{2}\), via the exceptional theta correspondence for \(G_{2}\times\operatorname{PGSp}_{6}\). This shows that the irreducible quotients of the quantization of \(M_{1}\) are those whose A-parameters factor through the map

\[G_{2}\longrightarrow\operatorname{Spin}_{8},\]

which is the datum defining \(M_{2}\).

Next we consider the branching problem arising from the quantization of \(M_{2}\), so that we are interested in determining the irreducible quotients of \(C_{c}^{\infty}(G_{2}\backslash\operatorname{Spin}_{8})\). For this, we shall make use of the exceptional theta correspondence for the dual pair

\[\operatorname{SL}_{2}^{3}/\mu_{2}^{\Delta}\times\operatorname{Spin}_{8}\]

in the adjoint group of type \(E_{7}\). As discussed in [GaGo], the irreducible quotients of \(C_{c}^{\infty}(G_{2}\backslash\operatorname{Spin}_{8})\) are theta lifts of generic representations of \(\operatorname{SL}_{2}^{3}\). A study of this theta correspondence should show that the A-parameters of these theta lifts factor through the map

\[H\times\operatorname{SL}_{2}=(\operatorname{SL}_{2}\times_{\mu_{2}}\operatorname{SL}_{2}\times_{\mu_{2}}\operatorname{SL}_{2})\times\operatorname{SL}_{2}\longrightarrow\operatorname{PGSO}_{8}.\]

This is the datum defining \(M_{1}\).

### Some speculations

We conclude this paper with some further speculations about the [BZSV]-duality, arranging some examples in families.
**Expectation 9.9**.: _One has the following tables of examples of hyperspherical dual pairs, ignoring (for clarity) all issues of isogeny and center:_

\begin{tabular}{|c|c|c|c|}
\hline
\(M\) & \(\operatorname{PGSO}_{8}\), [tableau] & \(\operatorname{PGSp}_{8}\), [tableau] & \(\operatorname{PGSO}_{10}\), [tableau] \\
\hline
\(M^{\vee}\) & \(T^{*}(\operatorname{Spin}_{7}\backslash\operatorname{Spin}_{8})\) & \(T^{*}(\operatorname{Spin}_{7}\backslash\operatorname{Spin}_{9})\) & \(T^{*}(\operatorname{Spin}_{7}\backslash\operatorname{Spin}_{10})\) \\
\hline
\end{tabular}

[The remaining tables are garbled in this version; the visible entries involve \(\operatorname{PGSO}_{10}\) and \(\operatorname{Spin}_{10}\), with Young tableaux (lost in extraction) specifying the relevant nilpotent orbits. The last three of these examples are recovered as the \(j=0,1,2\) members of the family in §9.4.1 below.]

_(The occurrence of a Young tableau indicates the Whittaker induction or generalised Whittaker model corresponding to the nilpotent orbit with that Young tableau.)_

_Remark 9.10_.: Several remarks:

* Within each table, the models are related by the result of Gomez-Zhu (Proposition 6.4), moving from left to right. The choice of \(S\) should be dictated according to this result.
* Note that the example involving the (multiplicity-free) spin representation for \(\operatorname{SO}_{10}\) comes from the work of Ginzburg [Gi] studying the Spin L-function of \(\operatorname{GSO}_{10}\).
* Finally, the above tables account for essentially all of the 'exceptional' spherical varieties (to do with low-rank orthogonal groups) which occur in the classification of spherical varieties [KVS].

#### 9.4.1. A family of dual pairs from spin representations

Let us now examine how the examples in the last three tables of Expectation 9.9 can in fact be seen to fit into a larger family of examples. For \(0\leq j\leq 5\), let \(M_{j}^{\vee}\) denote the restriction of the half-spin representation of \(\operatorname{Spin}_{12}\) to its subgroup

\[G_{j}^{\vee}:=\operatorname{Spin}_{12-j}\times\operatorname{Spin}_{j}.\]

Note that \(M_{j}^{\vee}\) is hence either the tensor product of spin representations (when \(j\) is odd), or the (sum of) tensor products of half-spin representations (when \(j\) is even). Now we have the following expectation, or meta-theorem:

**Theorem\({}^{\prime}\) 9.11**.: _When \(j\) is even, \(G_{j}=\operatorname{PGSO}_{12-j}\times\operatorname{PGSO}_{j}\)._

_Let \(\gamma\) be the nilpotent orbit of \(\operatorname{PGSO}_{12-j}\) with partition \([6-j,6-j,1^{j}]\); then \(M_{\gamma}=(\operatorname{GSp}_{2}\times\operatorname{GSO}_{j})^{\text{sim}}/\mathbb{G}_{m}\)._
_Here \((\operatorname{GSp}_{2}\times\operatorname{GSO}_{j})^{\text{sim}}\) consists of the pairs of elements of \(\operatorname{GSp}_{2}\) and \(\operatorname{GSO}_{j}\) with the same similitude character._

_Then the hyperspherical dual \(M_{j}\) is defined by the hyperspherical datum_

\[(\operatorname{GSp}_{2}\times\operatorname{GSO}_{j})^{\text{sim}}/\mathbb{G}_{m}\times\operatorname{SL}_{2}\to\operatorname{PGSO}_{12-j}\times\operatorname{PGSO}_{j}\]

_(and trivial \(S\)), where the map into the \(\operatorname{PGSO}_{12-j}\) factor is as usual (defined by \(\gamma\)) while the map into the \(\operatorname{PGSO}_{j}\) factor is by projection._

_Similarly, when \(j\) is odd, \(G_{j}=\operatorname{PGSp}_{11-j}\times\operatorname{PGSp}_{j-1}\)._

_Let \(\gamma\) be the nilpotent orbit of \(\operatorname{PGSp}_{11-j}\) with partition \([6-j,6-j,1^{j-1}]\); then \(M_{\gamma}=(\operatorname{GSp}_{2}\times\operatorname{GSp}_{j-1})^{\text{sim}}/\mathbb{G}_{m}\)._

_Then the hyperspherical dual \(M_{j}\) is defined by the hyperspherical datum_

\[(\operatorname{GSp}_{2}\times\operatorname{GSp}_{j-1})^{\text{sim}}/\mathbb{G}_{m}\times\operatorname{SL}_{2}\to\operatorname{PGSp}_{11-j}\times\operatorname{PGSp}_{j-1}\]

_(and trivial \(S\)), where the map into the \(\operatorname{PGSp}_{11-j}\) factor is as usual (defined by \(\gamma\)) while the map into the \(\operatorname{PGSp}_{j-1}\) factor is by projection._

_(In informal terms, while the cases in [FU] consider the residual action of the entire \(M_{\gamma}\), and the cases we have mainly considered so far have 'reduced away' this residual action of \(M_{\gamma}\), here we are considering the 'partial' residual action of a factor of \(M_{\gamma}\).)_

Note that when \(j=3\) or \(4\), we are essentially considering the nilpotent orbits with partitions \([3,3,1,1]\) and \([2,2,1,1,1,1]\) respectively, which have already featured in some of our other examples above. When \(j=0,1,2\), we of course essentially recover the last three examples of Expectation 9.9 respectively. We will mention the \(j=5\) case below.

On one hand, the branching problems associated to the quantization of \(M_{j}^{\vee}\) can in principle be resolved by the exceptional theta correspondence for the dual pair

\[(\operatorname{SL}_{2}\times_{\mu_{2}}\operatorname{Spin}_{j},\ \operatorname{Spin}_{12-j})\]

in the exceptional group of type \(E_{7}\). Indeed, continuing the notation of §9.1.3, the quantization of \(M_{j}^{\vee}\) is the representation \(\omega_{\psi}\) further restricted to \(\operatorname{Spin}_{12-j}\times\operatorname{Spin}_{j}\). As in §9.1.3, one has, as a representation of \(\operatorname{Spin}_{12-j}\times\operatorname{Spin}_{j}\),

\[\omega_{\psi}\cong\Pi_{N,\psi},\]

for \(N\) a maximal unipotent subgroup of \(\operatorname{SL}_{2}\). Suppose now that Howe duality holds for the dual pair \((\operatorname{SL}_{2}\times_{\mu_{2}}\operatorname{Spin}_{j},\operatorname{Spin}_{12-j})\); then the irreducible quotients of \(\Pi\) are in general of the form

\[\pi\boxtimes\sigma\boxtimes\Theta(\pi\boxtimes\sigma)\]

for \(\pi\in\operatorname{Irr}(\operatorname{SL}_{2})\) and \(\sigma\in\operatorname{Irr}(\operatorname{Spin}_{j})\).
It then follows that the irreducible quotients of \(\omega_{\psi}\cong\Pi_{N,\psi}\) are representations of \(\operatorname{Spin}_{j}\times\operatorname{Spin}_{12-j}\) consisting in general of lifts from \(\operatorname{SL}_{2}\times_{\mu_{2}}\operatorname{Spin}_{j}\), whose A-parameters should factor through the defining datum of \(M_{j}\)

\[(\operatorname{GSp}_{2}\times\operatorname{GSO}_{j})^{\text{sim}}/\mathbb{G}_{m}\times\operatorname{SL}_{2}\to\operatorname{PGSO}_{12-j}\times\operatorname{PGSO}_{j}\]

(if \(j\) is even, and similarly if \(j\) is odd), recalling also how this map was defined above.

On the other hand, the branching problems associated to the quantization of \(M_{j}\) are of strongly-tempered type. As mentioned, we have already considered several cases above. Of particular note is the extreme case \(j=5\), in which case the corresponding nilpotent orbit \(\gamma\) is trivial, and \(M_{j}\) would be (the cotangent bundle of)

\[\big((\operatorname{GSp}_{2}\times\operatorname{GSp}_{4})^{\text{sim}}/\mathbb{G}_{m}\big)\backslash\big(\operatorname{PGSp}_{6}\times\operatorname{PGSp}_{4}\big),\]

which is a strongly tempered spherical variety [WZ].
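As a small piece of dimension bookkeeping for this family (our own check, not from the text): the half-spin representation of \(\operatorname{Spin}_{12}\) has dimension \(2^{\frac{12}{2}-1}=32\), and the restrictions \(M_{j}^{\vee}\) match this count. For odd \(j\),

\[\dim\big(\text{spin of }\operatorname{Spin}_{12-j}\big)\cdot\dim\big(\text{spin of }\operatorname{Spin}_{j}\big)=2^{\frac{11-j}{2}}\cdot 2^{\frac{j-1}{2}}=2^{5}=32,\]

while for even \(j\geq 2\) the two tensor products of half-spin representations contribute

\[2\cdot\Big(2^{\frac{12-j}{2}-1}\cdot 2^{\frac{j}{2}-1}\Big)=2\cdot 2^{4}=32.\]

In particular, for \(j=1\) one recovers the \(32\)-dimensional spin representation \(S\) of \(\operatorname{Spin}_{11}\) of Theorem\({}^{\prime}\) 9.7, whose quantization is the Weil representation of \(\operatorname{Mp}_{32}\) as discussed in §9.2.3.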
2305.19474
Ethical Considerations for Machine Translation of Indigenous Languages: Giving a Voice to the Speakers
In recent years machine translation has become very successful for high-resource language pairs. This has also sparked new interest in research on the automatic translation of low-resource languages, including Indigenous languages. However, the latter are deeply related to the ethnic and cultural groups that speak (or used to speak) them. The data collection, modeling and deploying machine translation systems thus result in new ethical questions that must be addressed. Motivated by this, we first survey the existing literature on ethical considerations for the documentation, translation, and general natural language processing for Indigenous languages. Afterward, we conduct and analyze an interview study to shed light on the positions of community leaders, teachers, and language activists regarding ethical concerns for the automatic translation of their languages. Our results show that the inclusion, at different degrees, of native speakers and community members is vital to performing better and more ethical research on Indigenous languages.
Manuel Mager, Elisabeth Mager, Katharina Kann, Ngoc Thang Vu
2023-05-31T01:04:20Z
http://arxiv.org/abs/2305.19474v1
# Ethical Considerations for Machine Translation of Indigenous Languages: Giving a Voice to the Speakers

###### Abstract

In recent years machine translation has become very successful for high-resource language pairs. This has also sparked new interest in research on the automatic translation of low-resource languages, including Indigenous languages. However, the latter are deeply related to the ethnic and cultural groups that speak (or used to speak) them. The data collection, modeling and deploying machine translation systems thus result in new ethical questions that must be addressed. Motivated by this, we first survey the existing literature on ethical considerations for the documentation, translation, and general natural language processing for Indigenous languages. Afterward, we conduct and analyze an interview study to shed light on the positions of community leaders, teachers, and language activists regarding ethical concerns for the automatic translation of their languages. Our results show that the inclusion, at different degrees, of native speakers and community members is vital to performing better and more ethical research on Indigenous languages.

## 1 Introduction

With the advancement of data-driven machine translation (MT) systems, it has become possible, with varying degrees of quality, to translate between any pair of languages. The only precondition is the availability of enough monolingual (Lample et al., 2018; Artetxe et al., 2018) or parallel data (Vaswani et al., 2017; Bahdanau et al., 2015). There are many advantages to having high-performing MT systems. For example, they increase access to information for speakers of indigenous languages (Mager et al., 2018) and can assist revitalization efforts for these languages (Zhang et al., 2022). Research on machine translation as well as natural language processing (NLP) more generally is moving towards low-resourced setups and multilingual models. Thus, the NLP community needs to open the discussion of repercussions and best practices for research on indigenous languages (which in most cases are also low-resourced), since non-artificial languages cannot exist without a community of people that use (or have traditionally used) them to communicate. Indigenous languages further differ from more widely used ones in a crucial way: they are commonly spoken by small communities, many communities use their language (besides other features) as a delimiter to define their own identity (Palacios, 2008; Enriquez, 2019), and in many cases these languages are endangered to some degree. Furthermore, in some cases, highly sensitive information - such as secret aspects of their religion - has been encoded with the help of their language (Barron-Romero et al., 2016). This is why, in recent years, discussions around ethical approaches to studying endangered languages have been started (Smith, 2021; Liu et al., 2022). When we consider the past (and present) of some of the communities that speak these languages, we find a colonial history, and research is no exception (Bird, 2020). Therefore, it is possible to trespass on ethical limits when using typical NLP and data collection methodologies (Dwyer, 2006). In this work, we explore the basic concepts of ethics related to MT of endangered languages with a special focus on Indigenous communities, surveying previous work on the topic.
To better understand the expectations and concerns related to the development of MT systems for Indigenous communities, we then conducted an interview study with 22 language activists, language teachers, and community leaders who are members of Indigenous communities from the Americas. Additionally, we also performed 1:1 dialogues with two study participants to deepen our understanding of the matter. The goal is to answer the following research questions: _How do community members want to be involved in the MT process, and why?_ _Are there sensitive topics that are not ethical to translate, model, or collect data on without the community's explicit permission? How can we collect data in an ethical way?_

Surprisingly, most survey participants view MT for their languages positively. However, they believe research on their languages should be done in close collaboration with community members. Open access to research discoveries and resources is also valued highly, as is a high quality of the resulting translations. The personal interviews also confirmed this. Thus, our most important finding is that it is crucial to work closely with the communities to understand delicate ethical topics when developing MT systems for endangered languages. A Spanish translation of this paper is included in Appendix C. This translation aims to share our findings with all study participants and their communities and facilitate access to a broader audience in the Americas.

## 2 Defining "Endangered Language"

Terms frequently used in NLP are _low-resource language_, _resource-poor language_, and _low-resource setting_. These terms do not highlight the fact that many low-resource languages are also endangered (Liu et al., 2022). Instead, they emphasize the critical machine learning problem of getting a data-driven approach to perform well with a smaller-than-ideal amount of available data (or simply less data than has been used for other languages). In this case, algorithmic or technological innovations are needed to close the performance gap between high-resource and resource-poor languages. This further implies that being low-resourced is not a property of a language but a term that only makes sense in the context of a particular task or tasks. In contrast, the term _endangered language_ refers to a language whose continued existence is at some degree of risk.1 Endangered languages are relevant for our study, as most Indigenous languages are also endangered (Hale, 1992). According to the UNESCO classification (Moseley, 2010), languages can be sorted into the following categories:

Footnote 1: In this paper, we will discuss only non-artificially created languages.

* _safe_: spoken by all generations;
* _vulnerable_: restricted just to a certain domain (e.g., inside the family);
* _definitely endangered_: no children speak the language anymore;
* _severely endangered_: only elderly people speak it;
* _critically endangered_: only speakers with partial knowledge are left, and they use it infrequently;
* _extinct_: no one is able to speak the language anymore.

Languages can become endangered for social, cultural, and political reasons, most commonly conquests and wars, economic pressures, language policies of political powers, assimilation into the dominant culture, discrimination, and language standardization (Austin and Sallabank, 2013). As we can see, the question of how a language becomes endangered involves factors that must be addressed in the ethical approach of any study.
On the machine learning side, an additional challenge arises: data for endangered languages is not easily available (or, in fact, available at all), as these languages have limited media production (TV shows, literature, internet blogs; Hamalainen, 2021). One possible source of data for these languages is already existing documents in the form of books, records, and archives (Bustamante et al., 2020).

## 3 Ethics and MT

### Ethics and Data

The study of endangered languages in indigenous communities has a long history, with the most prominent questions focused mainly on the challenge of data collection (Smith, 2021). A common approach is normative ethics (deontology). Examples of relevant guidelines include those from The Australian Institute of Aboriginal and Torres Strait Islander Studies;2 the Ethical statement of the Linguistic Society of America;3 and the DOBES code of conduct.4 These lists are the results of broad discussions which have taken place over decades. In this debate, Indigenous voices inside academia were also raised (Smith, 2021). But why do we have so many attempts to set up an ethical code for linguistic fieldwork? When it comes to working with human societies, there are no easy solutions for the ethical dilemmas that arise (Dwyer, 2006). Every situation requires a unique treatment and compromise. This is why, in addition to the creation of a framework which is as general as possible, the concrete application of such principles involves continued discussion. Dwyer (2006) suggests documenting the ethical issues and concerns which arise during the course of a research project and the way these issues are addressed, such that other researchers can learn from the experience. While a code of conduct or principles is good, it runs the risk of introducing either overly complicated - or even inadequate - regulations, sidelining this needed discussion. Overall, we can summarize the principles that appear in all suggested lists under three big themes:

* _Consultation, Negotiation and Mutual Understanding_. The right of Indigenous people to consultation is stipulated in Convention 169 of the International Labour Organization (ILO, 1989), which states that they "have the right to preserve and develop their own institutions, languages, and cultures". Therefore, informing the community about the planned research, negotiating a possible outcome, and reaching a mutual agreement on the directions and details of the project should happen in all cases.
* Researchers - as well as any governing organizations interested in the project - should be familiar with the history and traditions of the community. It is also recommended that local researchers, speakers, or internal governments be involved in the project.
* _Sharing and distribution of data and research_. The product of the research should be available for use by the community, so they can take advantage of the generated materials, like papers, books, or data.

Some of these commonly agreed-on principles need to be adapted to concrete situations, which might not be easy to do via a general approach. For instance, the documentation process will create data, and the ownership of this data is a major source of discussion (cf. Sections 4, 5). Here, the traditional views of the communities might contradict the juridical system of a country (Daes, 1993). This problem does not have a simple solution and needs to be carefully considered when collecting data.
An additional call from these sources is to decolonize research and to stop viewing Indigenous communities as mere sources of data, seeing them instead as people with their own history (Smith, 2021). The current divorce between researchers and the cultural units of the communities can reinforce the colonial legacy (Leonard, 2020). As a final remark, we want to discuss the common assumption that any ethical discussion must end with a normative setup for a field. Such a setup reduces the collective character of Indigenous institutions to norms that allow an individual approach to the matter (Meza Salcedo, 2017). It would also prevent understanding the ethical questions within the communities' own communal cosmovision (Salcedo, 2016). Therefore, in this text, we aim to open the ethical debate on MT to NLP researchers and Indigenous communities, based on inclusion and dialog.

### Ethics and _Human_ Translation

For a successful translation, the inclusion of all participants is important, requiring their equal, informal, and understanding-oriented participation (Nissing and Muller, 2009). For Rachels and Rachels (1986), the minimum conception of morality is that we give "equal weight to the interests of each individual affected by one's decision." The question is how authors' intentions relate to the source culture's otherness, with its culturally-specific values (Chesterman, 2001). According to Doherty (2016), "the translation process studies emerged to focus on the translator and the process of translation rather than on the end product," incorporating mixed-method designs to get objective observations.

A well-documented example of the unethical misuse of translation is its application as an instrument of colonial domination. The main aim of this colonialist vision was to "civilize the savages" (Ludescher, 2001). For example, the Summer Institute of Linguistics (SIL International)5 was used for this goal during the 20th century in countries with Indigenous cultures, translating the Bible and trying to provoke a cultural change6 in these communities (DelValls, 1978; Errington, 2001; Carey, 2010). Of course, these practices are not new and can be found throughout history (Gilmour, 2007). It is essential to notice that unethical research can still deliver useful material and knowledge, e.g., for language revitalization (Premsirat and Malone, 2003), but might inflict harm on the targeted community.

Footnote 6: The role of SIL is controversial and cannot be summarized with one single statement. In our approach, we only refer to its role related to cultural change. In many cases, the communities that got religious texts translated were already Christian, given previous colonization actions. However, there are also cases where non-Christian communities had Bibles and other religious texts translated into their language with missionary aims. This triggered community divisions. For example, the translation of religious texts into Wixarika (Fernandez, 2022). This also happened in the Community of Zoupian (in the Mexican state of Nayarit), where Christians, using the SIL-translated Bible, triggered an internal conflict in the community (the first author is part of this community). For the interested reader, we also recommend Dobrin's (2009) introductory article.

### Ethics and _Machine_ Translation

In the context of NLP research, the speakers are not directly involved when a model is trained (Pavlick et al., 2014).
In contrast, the data collection processes (Fort et al., 2011) and human evaluation (Couillault et al., 2014) directly interact with the speakers and, therefore, have central importance regarding ethics. This is also true for the final translation service, which will interact with the broad public. Data collection is the first and most evident issue when it comes to translation. Modern neural MT systems require a large amount of parallel data to be trained optimally (Junczys-Dowmunt, 2019). One way to obtain data is from crowd-sourcing (Fort et al., 2011). However, this kind of job can be ill-paid and might constitute a problem for the living conditions of the workers (Schmidt, 2013). Also, data privacy is not trivial to handle. Systems must be able to filter sensitive information. The problem of encoding biases, like gender bias (Stanovsky et al., 2019), is also an ethical concern for MT. It is also necessary to disclose the limitations and issues with certain systems (Leidner and Plachouras, 2017).

NLP research can also be used as a political instrument of power, where we can observe mutual relationships between language, society, and the individual that "are also the source for the societal impact factors of NLP" (Horvath et al., 2017). In this way, NLP translation can be applied as an instrument to change the culture of minorities, as in traditional translation (cf. Section 3.2): colonizers used translation as a means of imperial control and expropriation (Cheyfitz, 1997; Niranjana, 1992). The asymmetry of power is the cause of domination, and the flooding of subaltern cultures with "foreign materials and foreign language impositions" is a real danger for minority cultures (Tymoczko, 2006). Schwartz (2022) discusses the need to decolonize the scientific approach of the NLP community as a whole, expressing the need for researchers to be cognizant of the history and the cultural aspects of the communities which use the languages they are working with. Additionally, he proposes that our research should have an obligation to provide some benefit from our studies to the communities, an obligation of accountability (and therefore to be in direct contact with their governing organizations), and an obligation of non-maleficence. The fact that many translation systems nowadays are multilingual8 also results in more multi-cultural challenges (Hershcovich et al., 2022).

Footnote 8: Multilingual systems refer in NLP to systems capable of translating a set of languages from and to English. In some cases, they are also able to translate between languages where English is not involved.

Finally, we also want to highlight the importance of discussing MT systems in a text-to-text setup. The usage of text is constrained to certain topics and varies from community to community. For instance, Wixarika and Quechua, languages that are spoken across all generations, are used in a written fashion mostly in private messaging apps (like WhatsApp) but also show a prolific production of memes and Facebook posts. Even if a certain community does not widely adopt the written tradition, there are, at a minimum, legal obligations of the states towards indigenous languages.
For example, some constitutions recognize indigenous languages as national languages (e.g., Mexico and Bolivia), binding the state to the responsibility of translating all official pages, documents, laws, etc., into indigenous languages. This has not been implemented, and it is a highly valuable application case for machine translation assisting human translation. However, our findings also apply to speech-to-text and speech-to-speech translation tasks, which would cover all languages, even those with no written tradition.

## 4 The Speakers' Opinions

It is important to include the opinion and vision of speakers of endangered languages in NLP research, especially for topics such as MT. Therefore, we conduct a survey study with 22 language activists, teachers, and community leaders from the Americas. Importantly, our primary goal is not only to gather quantitative input on the ethical questions regarding MT for their languages but also to collect qualitative input by asking them to expand on their answers. Additionally, we also perform an interview with a subset of two participants of the initial interview study.

### Study Design

We focus our study on the Americas,10 selecting the following communities: Aymara, Chatino, Maya, Mazatec, Mixe, Nahua, Otomi, Quechua, Tenek, Tepehuano, Kichwa of Otavalo, and Zapotec. We want to note that our study does not aim to represent a general opinion of all Indigenous tribes, nor is it a final general statement on the issue. It is a case study that surfaces the opinions of specific groups of speakers of Indigenous languages. Furthermore, the views of the interviewed individuals are their own and do not necessarily represent the view of their tribes, nations, or communities.

Footnote 10: Different parts of the world have very different levels of wariness, not just from colonial history but precisely due to interactions with field workers.

**Quantitative and Qualitative Aspects** For the quantitative part of the study, we used a survey. Surveys are a well-established technique for working with Indigenous communities, with an extensive history; they were used and documented by classic scholars like Edward Tylor, Anthony Wallace, and Lewis Henry Morgan. This is also true for well-recognized Mexican (Indigenous-engaged) social anthropologists (Jimenez and Ramos, 1985; Alfredo and Alberto, 1978). For the qualitative part, we revisit existing position papers and articles of Indigenous researchers and activists. Additionally, we use open questions in the survey, extending the purely quantitative view to a qualitative one. Finally, we performed two 1-to-1 interviews with an activist (Mixe) and a linguist (Chatino).

**Participant Recruitment** We contact potential participants online in three ways. Our first approach is to establish communication through the potential participants' official project websites or public online accounts. This includes e-mail, Twitter, Facebook, and Instagram pages. Our second approach is to directly contact people in our target group with whom at least one of the co-authors has already established a relationship. Finally, we also published a call for participation on social media and checked whether the volunteers belong to our target group. The goals of our research, as well as the scope and data handling, are explained directly to each participant and are also included in the final form. We do not gather any personal information about the participants, like name, gender, age, etc. All study participants are volunteers.

**Questionnaire** Our study consists of 12 questions.
The first three questions are rather general: they ask for the tribe, nation, or Indigenous people the participant belongs to, whether they self-identify as an activist, community leader, or teacher, and for their fluency in their language. The remaining questions target data policies, inclusion policies, benefits and dangers of MT systems, and best research practices. The full questionnaire is available in the appendix. The questions are available in English and Spanish; only one form was filled out in English, while the rest were completed in Spanish. Therefore, the authors have automatically translated all comments shown in this paper.

### Results

The results of the study can be seen in Figure 1. Additionally, we also discuss the open answers to each question to provide more insight.

**Inclusion of Native Speakers and Permissions to Study the Language** Figure 1(a) shows that 77.3% of the participants report that their community has no restrictions regarding the sharing of their language with outside people. The comments for this question show that many participants are proud of their language and heritage: "We are supportive and share our roots. Proud of who visits us". We even find stronger statements against the prohibition to share: "No one has the right to restrict the spread of the language". However, there also do exist communities with restrictions. Thus, we conclude that researchers cannot assume by default that all Indigenous groups would agree to share information about their language or would be happy about research on it.

**Benefits and Dangers of MT Systems** Figure 1(b) shows that a strong majority of our participants think that an MT system for their language would be beneficial. However, there is also an important number of people who see at least some degree of danger. In this case, we need to look at the participants' comments to understand their worries. First, we find that a main concern for the participants is the translation quality. The fear of inadequate translations of cultural terms is also important. In Table 2, we can see a set of comments that illustrate these fears. One interesting comment refers to the fear of standardization of the participant's language, which could lead to a loss of diversity. In the same table, we can also see the benefits the participants expect, mostly in education and in elevating the status and usefulness of their languages.

Figure 1: Study performed on 22 participants that are members of Indigenous communities from the Americas.

Table 1 shows some answers to the open question on possible topics that might cause damage to the community. Most answers could not identify any possible topic that could be dangerous. However, the second most frequent answer was related to religion. Some answers worried that ancient ceremonial secrets could be revealed. Others showed worries about the influence of Western religions. This brings us to the question of whether the Bible (Christodouloupoulos and Steedman, 2015; McCarthy et al., 2020; Agic and Vulic, 2019) is suitable as our default corpus for MT when an indigenous language is involved. Finally, a few answers expressed that the usage of indigenous languages in the internal organization of the community could be endangered by MT systems. In contrast, Figure 1(c) shows the topics that registered the most positive evaluations: everyday talks (15), science and education (14), culture and traditions (14), and medicine and health (14).
**Can you think of any dangers to the language and culture, if so, which?**
* There are cultural linguistic concepts that are only understood in our native language.
* The existence of so many variants would make the project little or not profitable and would lead the "experts" to an attempt to standardize the language, which would be a tremendous mistake.
* There are cultural elements that must be taken into account.
* They could undoubtedly distort the proper use of the language.

**What advantages would you see with an automatic translation system?**
* The use of automatic translators in spaces such as hospitals, government offices, etc.
* Perhaps a contribution of modernity to the community, preservation of the native language.
* It would contribute to the status of indigenous languages.
* It would contribute to the social use of our language.
* It would facilitate teaching because you would have many support tools.

Table 2: Open answers of speakers to questions on dangers and benefits of MT systems for their communities.

**Participation of Members of Indigenous Communities in Research** Figure 1(d) shows that our study participants think it is important to include people from the targeted communities in research projects. This confirms the experience in linguistics, where a similar pattern was found (Smith, 2021) (see §3.1). It is important to note that only one answer stated that official permission is needed to perform the study. In the comments, the right to consultation was mentioned, together with the advantages of involving community members in research: "It is preferable [to integrate people from the community] to obtain a good system, and not just to have approximations, because only the members of the culture know how the language is used."; "So that the vocabulary is enriched and some words that do not exist are not imposed."; "Carry out activities where the community can be involved, win-win."

**Data Usage and Translation Quality** Regarding data ownership and accessibility, we find a diverse set of responses. First, Figure 1(e) shows many different opinions. Overall, we can say that a strong feeling exists that data should be publicly available. However, when it comes to the ownership of the data, opinions are more diverse. Surprisingly, an important number of participants (\(17\%\)) think that the external research group should own the data. Nevertheless, a higher number of participants think that the data should be owned by the community (\(29.4\%\)), and 20.6% think it should be owned by the speakers who participate in the research. This is a difficult topic, as traditional norms and modern law systems interact (cf. Section 3.1). In the comments, we find sad examples of mistrust in academic institutions. For example, one comment talks about previous problems of their tribe, as recordings and other material taken by linguists are not accessible to them: "Wary of academic institutions since we currently have issues accessing recordings that belong to academics and libraries and are not publicly accessible." However, in general, we see a wide range of opinions: "The work of the few who take linguistic identity seriously should be valued", "It could be public but always with the endorsement and consent of the community." This diversity demonstrates that there is a need for researchers to have a close relationship with the communities to understand the background and the aims of each particular case.
As discussed above, the quality of the final system is an important concern for many participants.

**What would you see as damaging topics that should not be machine translated?**
* Anything ceremonial.
* Laws, medicine and health, science, mercantile matters, religion and sacred songs.
* Issues that threaten organic life.
* Western religion.
* Political situations and religions unless it is in the interest of the person.
* Sacred songs, like those of a healer.

Table 1: Some answers to the open question on possible dangers of MT for indigenous languages.

In Figure 1(f) we can see that publishing an experimental MT system is also controversial. The possibility of using an experimental system is liked by \(54.8\%\) of our participants, which is slightly higher than the number of participants who are against it (\(45.5\%\)). Some opinions against it are in line with earlier worries about incorrect translations of cultural content: "Something that is devoid of structure and cultural objectivity cannot be made available to the public" and "...damage would be caused to the language and its representatives since the learners would learn in the wrong way." Most people with a positive opinion agree that an initially poor system could be improved over time: "If it could be improved and corrected, that would be excellent."

## 5 Discussion

In Section 3 we surveyed the ongoing debate on ethics in documentation, translation, and MT, before presenting an interview study in Section 4. Now we discuss some of the most important issues we have identified in the last section in more depth.

**Need for Consultations with Communities** Previous experiences (Bird, 2020; Liu et al., 2022) as well as our study highlight the need for consultation with Indigenous communities when performing research involving their languages.11 In some cases, the minimal expressed requirement is to inform speakers about new technological advances. Feedback and quality checks are also crucial for MT systems and important to the members of the communities. This consultation should include intercultural dialog, as it has been a central instrument in the decision-making of indigenous communities (Beauclair, 2010). We recommend doing this by integrating community members into the loop while, of course, giving them the credit they deserve.

Footnote 11: An example of community-engaged fieldwork is Czaykowska-Higgins (2009).

**Legal Systems vs. Traditional Views of Communal Knowledge Ownership** Legal systems and, with them, copyright laws vary by country. However, legal rules are sometimes in conflict with the traditional views of Indigenous people (Dwyer, 2006).
Thus, when working with Indigenous communities, we recommend discussing and agreeing upon ownership rights with annotators or study participants prior to starting the work to find an arrangement everyone is happy with. We would also like to point out that, according to our case study, a general feeling is that data and research results need to be accessible to the community speaking the language. This contradicts the practice of some documentation efforts that close the collected data to the public and even to the speakers of the community (Avelino, 2021). Some participants in our study even suggest the usage of Creative Commons (CC)12 licenses for data. However, CC might not be the best licensing option, as it is not designed specifically for the needs of Indigenous communities. Finally, whenever collected data is used commercially, special agreements involving financial aspects are crucial.

Footnote 12: https://creativecommons.org/licenses/

**Permissions** Some communities require that a permit from their governing entity be obtained when someone who is not a member wants to study their language. This might be difficult as sometimes there is no central authority. Figuring out from whom to get permission can be challenging in such scenarios. However, as we see in this study, many communities do not require this permission. A promising project that aims to simplify this topic is the TK (Traditional Knowledge) Labels.13 It is a set of labels that communities can use to express their permissions and willingness to cooperate with researchers and external projects.

Footnote 13: https://localcontexts.org/labels/traditional-knowledge-labels/

**Personal Data** From the free-text answers, we further learn that, for many speakers, using their own language in their daily environment helps them protect their privacy: their conversations can only be understood by their family or close environment. This concern about data handling is, however, also valid for other languages.

**Concerns about Private Information of the Community** The previous point can further be extended to assemblies and other organizational meetings, where the language barrier is used to keep decisions or strategies private. This is one worry that the communities have with MT, and one of the possible topics that might be harmful for them. Some communities also have general concerns about sharing their language with people that do not belong to them (e.g., the Hopi Dictionary controversy; Hill, 2002). In such cases, it is important not to approach the issue from a Western legal point of view, but rather through traditional internal governance practices and norms and consultation with the communities.

**Religion and the Bible** Regarding problematic domains for MT, multiple survey participants mentioned religion. This is quite relevant for the NLP community, as the broadest resource currently available for minority languages is the Bible. As seen in Section 3.2, the colonial usage of the translation of religious texts [21] is precisely the origin of this distrust. Thus, we recommend that NLP and MT researchers use the Bible carefully, through a consultation process, and consider its impacts. Without a close relationship with each community (e.g., in a massive multilingual MT experiment), the recommendation is to avoid using the Bible.

**Technology and Data Sovereignty** Having technology for their own languages is viewed positively by most study participants.
However, we also find a strong wish to participate directly in the development of MT systems. This requires more inclusion of Indigenous researchers in NLP. Therefore, training Indigenous researchers and engineers is an important task that we recommend should be valued more highly by the NLP and MT communities. We are aware that existing inequalities cannot be removed immediately or in isolation, but everyone can be supportive.14 The creation of a collaborative process is a proposal emerging from the communities themselves: "Technology as Tequio; technological creation and innovation as a common good" [1]. However, it is not possible to build contemporary data-driven NLP technologies without data, and this opens the discussion regarding data sovereignty. First, it is important to mention that the communities have the right to self-determination, and this includes the data that they create. Applying this sovereignty to data refers to having control over the data, knowledge15 and cultural expressions that are created by these communities. As discussed in this paper, it is important to reach agreements with the communities through consultations and direct collaborations. This includes the licensing and ownership of the final data products.

Footnote 14: Tech sovereignty is a central topic of the Natives in Tech conference in 2022: https://nativesintech.org/conferences/2022

Footnote 15: See https://indigenoususinrovate.org/downloads/indigenous-knowledges-and-data-governance-protocol_may-2021.pdf

**Our Findings and Previous Work** Finally, we want to relate our findings to similar discussions in prior work. Most previous concerns and suggestions related to including and consulting people from the communities [1, 13] are aligned with the wishes and desires of the participants in our study. The inclusion of community members as co-authors [13] should not be an artificial mechanism but rather part of a broad inclusion process, including data and technology sovereignty. This is also aligned with the community building aimed at by Zhang et al. (2022). Additionally, we should consider that there might exist problematic topics, and we should not underestimate the importance of high-quality translations.

## 6 Conclusion

In this work, which is focused on ethical challenges for MT of Indigenous languages, we first provided an overview of relevant ethical approaches, ethical challenges for translation in general, and more specific challenges for MT. Afterward, we conducted a case study, for which we interviewed \(22\) Indigenous language activists, language teachers, and community leaders from the Americas. Our findings aligned with previous findings regarding the need for inclusion and consultation with communities when working with language data. Additionally, our participants expressed a surprisingly strong interest in having MT systems for their languages, but also concerns regarding commercial usage, cultural and religious misuse, and data and technological sovereignty. We ended with specific recommendations for the NLP and MT communities and, even more importantly, an open discussion framework for the Indigenous communities.

## Acknowledgments

We want to thank all community members, linguists, and language activists who participated in our study. We also thank the reviewers for their valuable comments and Heriberto Avelino for his useful insights.
This project has benefited from financial support to Manuel Mager through a DAAD Doctoral Research Grant.

## Limitations

This study is restricted to the Americas; therefore, the results from this paper cannot be generalized, as different indigenous communities or nations might have different pasts. Also, all opinions expressed by the interviewed people are exclusively personal and should not be interpreted as the general stance of the communities. As discussed in the paper, the main aim of this work is not to provide a set of norms for MT researchers. We rather provide a set of questions and open topics that should be considered when performing MT work with indigenous languages. Nevertheless, we also provide general and broad non-normative recommendations that should be carefully applied to the concrete case of each community.

## Ethical statement

To ensure the ethics of this work, we followed well-recognized ethical codes: that of The Australian Institute of Aboriginal and Torres Strait Islander Studies (AIATSIS)16 and the DOBES code of conduct.17 As a result, all participants were well informed about the intent of this work, our aims, and the complete anonymization of their answers. Moreover, this work was done with indigenous leadership (as suggested by AIATSIS).

Footnote 16: https://aiatsis.gov.au/sites/default/files/2020-10/aiatsis-code-ethics.pdf

Footnote 17: https://dobes.mpi.nl/ethical_legal_aspects/DOBES-coc-v2.pdf

Here we list the ethical issues we encountered while carrying out this work and how we tried to minimize their impact. First, we were concerned with the data protection of the participants in this study. Since no personal data is required for this study, we decided to remove any questions containing information that could reveal the identity of the participants. Second, as our study aims to get substantial input from the communities, we decided to leave as many open questions as possible and to consider the comments section available for each question. All participants were informed about the goals of this project and participated in a free and informed way. To give proper recognition to the participants of this study, we offer an option to be included in the acknowledgment section.
2309.04096
Kinetic description of scalar conservation laws with Markovian data
We derive a kinetic equation to describe the statistical structure of solutions $\rho$ to scalar conservation laws $\rho_t=H(x,t,\rho )_x$, with certain Markov initial conditions. When the Hamiltonian function is convex and increasing in $\rho$, we show that the solution $\rho(x,t)$ is a Markov process in $x$ (respectively $t$) with $t$ (respectively $x$) fixed. Two classes of Markov conditions are considered in this article. In the first class, the initial data is characterize by a drift $b$ which satisfies a linear PDE, and a jump density $f$ which satisfies a kinetic equation as time varies. In the second class, the initial data is a concatenation of fundamental solutions that are characterized by a parameter $y$, which is a Markov jump process with a jump density $g$ satisfying a kinetic equation. When $H$ is not increasing in $\rho$, the restriction of $\rho$ to a line in $(x,t)$ plane is a Markov process of the same type, provided that the slope of the line satisfies an inequality.
Fraydoun Rezakhanlou
2023-09-08T03:17:28Z
http://arxiv.org/abs/2309.04096v1
# Kinetic description of scalar conservation laws with Markovian data

###### Abstract

We derive a kinetic equation to describe the statistical structure of solutions \(\rho\) to scalar conservation laws \(\rho_{t}=H(x,t,\rho)_{x}\), with certain Markov initial conditions. When the Hamiltonian function is convex and increasing in \(\rho\), we show that the solution \(\rho(x,t)\) is a Markov process in \(x\) (respectively \(t\)) with \(t\) (respectively \(x\)) fixed. Two classes of Markov conditions are considered in this article. In the first class, the initial data is characterized by a drift \(b\) which satisfies a linear PDE, and a jump density \(f\) which satisfies a kinetic equation as time varies. In the second class, the initial data is a concatenation of fundamental solutions that are characterized by a parameter \(y\), which is a Markov jump process with a jump density \(g\) satisfying a kinetic equation. When \(H\) is not increasing in \(\rho\), the restriction of \(\rho\) to a line in the \((x,t)\) plane is a Markov process of the same type, provided that the slope of the line satisfies an inequality.

## 1 Introduction

The Hamilton-Jacobi equation (HJE) is one of the most popular and well-studied PDEs, enjoying vast applications in numerous areas of science. Originally, HJEs were formulated in connection with the completely integrable Hamiltonian ODEs of celestial mechanics. They have also been used to study the evolution of value functions in control and differential game theory. Several growth models in physics and biology are described by HJEs. In these models, a random interface separates regions associated with different phases, and the interface can be locally approximated by the graph of a solution to a HJE. To make up for the lack of exact information and/or the presence of impurities, it is common to assume that the Hamiltonian function which appears in our HJE is random. Naturally, we would like to understand how the randomness affects the solutions and how the statistics of solutions are propagated with time. In dimension one, the differentiated version of a Hamilton-Jacobi equation becomes a scalar conservation law for the inclination of the one-dimensional interface, and may be used to model a one-dimensional fluid. In the context of fluids, we wish to obtain some qualitative information about the structure of shocks and their fluctuations.

The primary purpose of this article is to derive an evolution equation for the statistics of solutions to a HJE in dimension one. We achieve this by utilizing a kinetic description for the shock densities of piecewise smooth solutions. Given a \(C^{2}\) Hamiltonian function \(H:\mathbb{R}\times[0,\infty)\times\mathbb{R}\to\mathbb{R}\), we consider the HJE

\[u_{t}=H(x,t,u_{x}),\qquad t\geq t_{0}, \tag{1.1}\]

or the corresponding scalar conservation law

\[\rho_{t}=H(x,t,\rho)_{x},\qquad t\geq t_{0}. \tag{1.2}\]

We assume that the Hamiltonian function \(H(x,t,\rho)\) is convex in the _momentum_ variable \(\rho\). As our main goal, we show that the statistics of \(\rho(x,t)\) admits an exact kinetic description when the initial data \(\rho^{0}(x)=\rho(x,t_{0})\) is an inhomogeneous Markov process.
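For the reader's convenience (a standard observation, made explicit here rather than an addition to the results): differentiating (1.1) in \(x\) and setting \(\rho:=u_{x}\) gives

\[\rho_{t}=(u_{t})_{x}=\big(H(x,t,u_{x})\big)_{x}=H(x,t,\rho)_{x},\]

so smooth solutions of the HJE (1.1) correspond to solutions of the scalar conservation law (1.2).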
### Main result I

For our first result, we assume that the initial data \(\rho^{0}=\rho^{0}(x)\) is a piecewise-deterministic inhomogeneous Markov process (PDMP) determined by a generator \(\mathcal{A}_{x}^{0}=\mathcal{A}_{x,t_{0}}\) acting on test functions \(\psi(\rho)\) according to

\[(\mathcal{A}_{x}^{0}\psi)(\rho)=b^{0}(x,\rho)\psi^{\prime}(\rho)+\int_{\rho}^{\infty}\big(\psi(\rho_{*})-\psi(\rho)\big)\,f^{0}(x,\rho,\rho_{*})\ d\rho_{*}. \tag{1.3}\]

The random path \(\rho^{0}(x)\) may be constructed by solving (deterministically) the ODE \(d\rho^{0}/dx=b^{0}(x,\rho^{0})\), interrupted by jumps which occur stochastically: the rate density at which \(\rho^{0}\) makes a jump at \(x\) is \(f^{0}(x,\rho^{0}(x),\rho_{*})\). As our main result, we show that the process \(x\mapsto\rho(x,t)\) (for fixed \(t>t_{0}\)) is again a PDMP, with generator

\[\big(\mathcal{A}_{x,t}\psi\big)(\rho)=b(x,t,\rho)\psi^{\prime}(\rho)+\int_{\rho}^{\infty}\big(\psi(\rho_{*})-\psi(\rho)\big)\,f(x,t,\rho,\rho_{*})\,d\rho_{*}. \tag{1.4}\]

Here \(b(x,t,\rho)\) and \(f(x,t,\rho_{-},\rho_{+})\) are obtained from their initial (\(t=t_{0}\)) conditions

\[b(x,t_{0},\rho)=b^{0}(x,\rho),\qquad f(x,t_{0},\rho_{-},\rho_{+})=f^{0}(x,\rho_{-},\rho_{+}), \tag{1.5}\]

by solving a semi-linear PDE,

\[b_{t}+H_{x}b_{\rho}-H_{\rho}b_{x}=H_{\rho\rho}b^{2}+2H_{\rho x}b+H_{xx}, \tag{1.6}\]

and a kinetic (integro-)PDE

\[f_{t}-(vf)_{x}-C(f)=Q(f), \tag{1.7}\]

where

\[v(x,t,\rho_{-},\rho_{+}):=\frac{H(x,t,\rho_{-})-H(x,t,\rho_{+})}{\rho_{-}-\rho_{+}}, \tag{1.8}\]

\(Q(f)=Q^{+}(f)-Q^{-}(f)\) is a coagulation-like collision operator, and \(C(f)=C^{+}(f)+C^{-}(f)\) is a linear first order differential operator. More precisely,

**(i)** \(Q^{+}\) is a quadratic operator and \(Q^{+}(f)(x,t,\rho_{-},\rho_{+})\) is defined as

\[\int_{\rho_{-}}^{\rho_{+}}\big(v(x,t,\rho_{*},\rho_{+})-v(x,t,\rho_{-},\rho_{*})\big)f(x,t,\rho_{-},\rho_{*})f(x,t,\rho_{*},\rho_{+})\ d\rho_{*}. \tag{1.9}\]

**(ii)** The quadratic operator \(Q^{-}\) is of the form \(Q^{-}(f)=fJf\), for a linear operator \(J\). Given \(f\), the function \((Jf)(x,t,\rho_{-},\rho_{+})\) is defined as

\[A(vf)(x,t,\rho_{+})-A(vf)(x,t,\rho_{-})-v(x,t,\rho_{-},\rho_{+})\big((Af)(x,t,\rho_{+})-(Af)(x,t,\rho_{-})\big), \tag{1.10}\]

for the linear operator \(A\) defined by

\[Ah(x,t,\rho_{-})=\int_{\rho_{-}}^{\infty}h(x,t,\rho_{-},\rho_{+})\ d\rho_{+}.\]

**(iii)** Given a \(C^{1}\) kernel \(f\),

\[\big(C^{+}f\big)(x,t,\rho_{-},\rho_{+})=\left[K(x,t,\rho_{+},\rho_{-})f(x,t,\rho_{-},\rho_{+})\right]_{\rho_{+}}, \tag{1.11}\]

where

\[K(x,t,\rho_{+},\rho_{-})=b(x,t,\rho_{+})v(x,t,\rho_{-},\rho_{+})-\beta(x,t,\rho_{+}),\quad\text{with}\quad\beta(x,t,\rho)=\big(H_{x}+bH_{\rho}\big)(x,t,\rho).\]

Here and below, by the expression \(X_{a}\) we mean the partial derivative of \(X\) with respect to the variable \(a\). For example, the right-hand side of (1.11) represents the partial derivative of the expression inside the brackets with respect to \(\rho_{+}\).
**(iv)** Given a \(C^{1}\) kernel \(f\), \[\big{(}C^{-}f\big{)}(x,t,\rho_{-},\rho_{+})= b(x,t,\rho_{-})(vf)_{\rho_{-}}(x,t,\rho_{-},\rho_{+})-\beta(x,t,\rho_{-}) f_{\rho_{-}}(x,t,\rho_{-},\rho_{+}) \tag{1.12}\] \[= b(x,t,\rho_{-})\big{(}v_{\rho_{-}}f\big{)}(x,t,\rho_{-},\rho_{+}) +K(x,t,\rho_{-},\rho_{+})f_{\rho_{-}}(x,t,\rho_{-},\rho_{+}).\]

**Remark 1.1** For a more compact reformulation of our equations (1.6) and (1.7), let us write \[x_{1}=x,\quad x_{2}=t,\quad f^{1}=f,\quad f^{2}=vf^{1},\quad b^{1}=b,\quad b^{2 }=\beta. \tag{1.13}\] Recall \(Ag(\rho)=A(g)(\rho)=\int g(\rho,\rho_{*})\ d\rho_{*}\), and define \[(g\otimes k)(\rho_{-},\rho_{+})=g(\rho_{-},\rho_{+})k(\rho_{+}),\quad\ (k\otimes g)(\rho_{-},\rho_{+})=k(\rho_{-})g(\rho_{-},\rho_{+}). \tag{1.14}\] A more symmetric rewriting of the equations (1.6) and (1.7) reads as \[b^{1}_{x_{2}}-b^{2}_{x_{1}}=b^{1}b^{2}_{\rho}-b^{2}b^{1}_{\rho},\ \ \ \ f^{1}_{x_{2}}-f^{2}_{x_{1}}=\mathcal{Q}(f^{1},f^{2})-\mathcal{Q}(f^{2},f^{1}), \tag{1.15}\] where \[\mathcal{Q}(f^{j},f^{i})=f^{j}*f^{i}-A(f^{j})\otimes f^{i}-f^{j}\otimes A(f^{ i})+b^{j}\otimes f^{i}_{\rho_{-}}-(f^{j}\otimes b^{i})_{\rho_{+}}, \tag{1.16}\] where \[(f^{j}*f^{i})(\rho_{-},\rho_{+})=\int f^{j}(\rho_{-},\rho_{*})f^{i}(\rho_{*}, \rho_{+})\ d\rho_{*}.\] \(\square\)

We now formulate our assumptions on the initial drift \(b^{0}\), the initial jump rate kernel \(f^{0}\), and the Hamiltonian function \(H(x,t,\rho)\).

**Hypothesis 1.1(i)** The Hamiltonian function \(H:\mathbb{R}\times[t_{0},T]\times[P_{-},P_{+}]\to\mathbb{R}\) is a \(C^{2}\) function. Additionally, \(H\) is increasing and convex in \(\rho\).

**(ii)** The PDE (1.6) has a bounded \(C^{1}\) solution \(b\leq 0\) for \(t\in[t_{0},T]\). We set \(b^{0}(x,\rho):=b(x,t_{0},\rho)\).

**(iii)** The PDE (1.7) has a solution \(f:\hat{\Lambda}\to[0,\infty)\), where \(\hat{\Lambda}:=\mathbb{R}\times[t_{0},T]\times\Lambda(P_{-},P_{+})\), with \[\Lambda(P_{-},P_{+}):=\Lambda\cap[P_{-},P_{+}]^{2}:=\big{\{}(\rho_{-},\rho_{+}):\ P_{-}\leq\rho_{-}\leq\rho_{+}\leq P_{+}\big{\}}.\] We assume that \(f\) is \(C^{1}\) in the interior of \(\hat{\Lambda}\), and that \(f\) is continuous in \(\hat{\Lambda}\). Moreover, \(f(x,t,\rho_{-},\rho_{+})>0\), when \(P_{-}<\rho_{-}<\rho_{+}<P_{+}\), and \(f(x,t,\rho_{-},\rho_{+})=0\), whenever \(\rho_{-}\) or \(\rho_{+}\notin(P_{-},P_{+})\). To ease our notation, we extend the domain of the definition of \(f\) to \(\mathbb{R}\times[t_{0},T]\times\mathbb{R}^{2}\), by setting \(f(x,t,\rho_{-},\rho_{+})=0\), whenever \((\rho_{-},\rho_{+})\notin\Lambda(P_{-},P_{+})\). We also write \(f^{0}(x,\rho_{-},\rho_{+})\) for \(f(x,t_{0},\rho_{-},\rho_{+})\).

**(iv)** We assume that \(\rho(x,t)\) is an entropy solution of (1.2), and that its initial condition \(\rho^{0}(x):=\rho(x,t_{0})\) is \(0\) for \(x<a_{-}\), and is a Markov process for \(x\geq a_{-}\) that starts at \(\rho^{0}(a_{-})=m_{0}\). This Markov process has an infinitesimal generator in the form (1.3) for a drift \(b^{0}\) and a jump rate density \(f^{0}\). \(\Box\)

Our statistical description consists of a one-dimensional marginal, a drift, and a rate kernel generating the rest of the path. The evolution of the drift and the rate kernel are given by (1.6) and the kinetic equation (1.7). Evolution of the marginal will be described in terms of the solutions to these equations. We continue with some definitions.
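Before turning to the definitions, here is a minimal simulation sketch of the PDMP construction described above. The drift \(b^{0}\), the exponential jump kernel \(f^{0}\), and the Euler discretization are our own illustrative choices, not part of the theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative ingredients: b0 <= 0 as in Hypothesis 1.1(ii), and the kernel
# f0(x, rho, rho*) = exp(-(rho* - rho)) for rho* > rho, whose total jump rate
# lambda(x, rho) = \int_rho^infty f0 d(rho*) equals 1.
def b0(x, rho):
    return -0.5 * rho

def simulate_pdmp(a_minus, a_plus, m0, dx=1e-3):
    """Euler scheme for d(rho)/dx = b0(x, rho), interrupted by upward jumps."""
    xs = np.arange(a_minus, a_plus, dx)
    rhos = np.empty_like(xs)
    rhos[0] = m0
    for i in range(1, len(xs)):
        rho = rhos[i - 1]
        if rng.random() < 1.0 * dx:                  # jump with rate lambda = 1
            rhos[i] = rho + rng.exponential(1.0)     # increment law matches f0
        else:
            rhos[i] = rho + b0(xs[i - 1], rho) * dx  # deterministic drift
    return xs, rhos

xs, rhos = simulate_pdmp(0.0, 5.0, m0=1.0)
```

Note that the jumps are upward only, matching the integration range \(\int_{\rho}^{\infty}\) in (1.3).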
**Definition 1.1(i)** We define the linear operator \(\mathcal{A}^{i}\) by \[\big{(}\mathcal{A}^{i}_{x,t}\psi\big{)}(\rho)=\big{(}\mathcal{A}^{i}\psi \big{)}(\rho)=b^{i}(x,t,\rho)\psi^{\prime}(\rho)+\int_{\rho}^{\infty}f^{i}(x,t, \rho,\rho_{+})\big{(}\psi(\rho_{+})-\psi(\rho)\big{)}\,d\rho_{+}, \tag{1.17}\] for \(i=1,2\). Note that \(\mathcal{A}^{1}=\mathcal{A}\) of (1.4), and \(f^{i}\) was defined in (1.13). We write \(\mathcal{A}^{i*}\) for the adjoint of the operator \(\mathcal{A}^{i}\) which acts on measures. When the measure \(\nu\) is absolutely continuous with respect to the Lebesgue measure with a \(C^{1}\) Radon-Nikodym derivative, then \(\mathcal{A}^{i*}\nu\) is also absolutely continuous with respect to the Lebesgue measure. The action of the operator \(\mathcal{A}^{i*}\) on \(\nu\) can be described in terms of its action on the corresponding Radon-Nikodym derivative. By a slight abuse of notation, we write \(\mathcal{A}^{i*}\) for the corresponding operator that now acts on \(C^{1}\) functions. More precisely, for a probability density \(\nu\), we have \[\big{(}\mathcal{A}^{i*}_{x,t}\nu\big{)}(\rho)= \left[\int_{-\infty}^{\rho}f^{i}(x,t,\rho_{*},\rho)\ \nu(\rho_{*})\ d\rho_{*}\right]-A(f^{i})(x,t,\rho)\nu(\rho)-\big{(}b^{i}(x,t, \rho)\nu(\rho)\big{)}_{\rho}.\]

**(ii)** We write \(\mathcal{M}\) for the set of measures and \(\mathcal{M}_{1}\) for the set of probability measures. \(\Box\)

**Theorem 1.1**: _Given a \(C^{1}\) rate \(f\), and \(m_{0}\in\mathbb{R}\), assume \(\ell:[t_{0},\infty)\to\mathcal{M}_{1}\) satisfies \(\ell(t_{0},d\rho_{0})=\delta_{m_{0}}(d\rho_{0})\), and_ \[\frac{d\ell}{dt}=\mathcal{A}^{2*}_{a_{-},t}\ell,\ \ \ \ t>t_{0}. \tag{1.18}\] _When Hypothesis 1.1 holds, the entropy solution \(\rho\) to (1.2) for each fixed \(t>t_{0}\) has \(x=a_{-}\) marginal given by \(\ell(t,d\rho_{0})\) and for \(a_{-}<x<\infty\) evolves according to a Markov process with the generator \(\mathcal{A}^{1}_{x,t}\). Moreover, the process \(t\mapsto\rho(a,t)\) is a Markov process with generator \(\mathcal{A}^{2}_{a,t}\), for every \(a\geq a_{-}\)._

**Remark 1.2(i)** According to Hypothesis 1.1, the function \(H\) is increasing. This condition is needed to guarantee that \(f^{2}\geq 0\), which in turn guarantees that \({\cal A}^{2}\) is a generator of a Markov process. This restriction on \(H\) can be relaxed almost completely. The main role of the condition \(H_{\rho}>0\) is that all shock discontinuities of \(\rho\) travel with negative velocities so that they cross any fixed location, say \(x=a\), eventually. This allows us to assert that if \(\rho(a,t)\) is known, then the law of \(\rho(x,t)\) can be determined uniquely for all \(x>a\). In general, we may try to determine \(\rho(x,t)\) for \(x>a(t)\), provided that \(\rho(a(t),t)\) is specified. The condition \(H_{\rho}>0\) allows us to choose \(a(t)\) constant. If instead we can find a negative constant \(c\) such that \(H_{\rho}>c\), then \(\hat{\rho}(x,t):=\rho(x-ct,t)\) satisfies \[\hat{\rho}_{t}=\hat{H}(x,t,\hat{\rho})_{x},\] for \(\hat{H}(x,t,\rho)=H(x-ct,t,\rho)-c\rho\), which is increasing. Hence, the process \(t\mapsto\hat{\rho}(x,t)=\rho(x-ct,t)\) is now Markovian with a generator \(\hat{\cal A}^{2}\) which is obtained from \({\cal A}^{2}\) by replacing \(H\) with \(\hat{H}\). Even an upper bound on \(H_{\rho}\) can lead to a result similar to Theorem 1.1. For example if \(H_{\rho}<0\), then \(x\mapsto\rho(x,t)\) is a Markov process but now as we decrease \(x\).
**(ii)** To guarantee the existence of a solution to (1.6) in an interval \([t_{0},T]\), let us assume that \(H_{\rho x}\) and \(H_{xx}\) are uniformly bounded, and that \(H_{xx}\leq 0\) in this interval. Under such assumptions, we claim that if initially at time \(t=t_{0}\) the drift is nonpositive and bounded, then the no blow-up condition of Hypothesis **1.1(ii)** is met because \(b\) remains bounded and nonpositive. To see this, assume that the function \(b\) solves the equation (1.6), and write \(\Theta^{t}_{s}(a,m)\) for the flow of the Hamiltonian ODE \[\dot{x}=-H_{\rho}(x,t,\rho),\ \ \ \ \ \dot{\rho}=H_{x}(x,t,\rho). \tag{1.19}\] In other words, \((x(t),\rho(t))=\Theta^{t}_{s}(a,m)\) solves (1.19), subject to the initial conditions \(x(s)=a\), and \(\rho(s)=m\). To ease the notation, we write \(b(x,\rho,t)\) and \(H(x,\rho,t)\) for \(b(x,t,\rho)\), and \(H(x,t,\rho)\) respectively. Evidently, \(\hat{b}(x,\rho,t)=b\big{(}\Theta^{t}_{s}(x,\rho),t\big{)}\) satisfies \[\hat{b}_{t}=A\hat{b}^{2}+2B\hat{b}+C, \tag{1.20}\] where \[A(x,\rho,t):=H_{\rho\rho}\big{(}\Theta^{t}_{s}(x,\rho),t\big{)},\ \ \ B(x,\rho,t):=H_{\rho x}\big{(}\Theta^{t}_{s}(x,\rho),t\big{)},\ \ \ C(x,\rho,t):=H_{xx}\big{(}\Theta^{t}_{s}(x,\rho),t\big{)}.\] Since the right-hand side of (1.20) is nonpositive when \(\hat{b}=0\), we deduce that \(\hat{b}(t)=\hat{b}(x,\rho,t)\) remains nonpositive for \(t\in[t_{0},T]\), if this is true initially at \(t=t_{0}\). Note that since \(\hat{b}_{t}\geq 2B\hat{b}+C\), with \(B\) and \(C\) bounded, \(b\) is also bounded from below in \([t_{0},T]\), if this is so initially.

**(iii)** The existence of classical solutions to (1.7) and (1.18) can be found in [KR2] and [OR] when \(H\) is independent of \((x,t)\), and \(b\) is either constant, or \(f\) is independent of \((x,t)\). The same type of arguments can be worked out in our setting.

**(iv)** As a consequence of Hypothesis 1.1**(iii)**, the density \(\rho(x,t)\in[P_{-},P_{+}]\) almost surely. This restriction will be needed in Sections 2 and 3 when we derive a forward equation for the law of \(\rho(\cdot,t)\). The boundedness of \(\rho(x,t)\) is needed only when we restrict \(\rho\) to a bounded set of the form \(\Lambda:=[a_{-},a_{+}]\times[t_{0},T]\) (see Theorem 2.1 below). Note however that for a \(C^{1}\) drift \(b\), the density is always bounded below in \(\Lambda\), because the random jump only increases the density. So we only need to require an upper bound on the density, and the requirement \(P_{-}\leq\rho_{-}\) is redundant. In other words, we can find \(P_{-}\) that depends on \(b\), and a lower bound of \(\rho^{0}\), such that the condition \(P_{-}\leq\rho_{-}\) holds in \(\Lambda\). In Theorem 1.3 below, we will learn how to relax the boundedness requirement on the density. \(\Box\)

### Main result II

Our Hypothesis 1.1**(ii)** is a rather stringent requirement because the right-hand side of the PDE (1.6) is quadratic in \(b\). Our main Theorem 1.1 applies only when no new shock discontinuity is created in the time interval \([t_{0},T]\). Indeed a blow-up of the drift occurs exactly when a new jump discontinuity is formed for a local continuous solution that is represented by the ODE \(\rho_{x}=b(x,t,\rho)\). In Remark 1.2**(ii)** we stated conditions that would prevent a blow-up, but these conditions exclude many important stochastic growth models that are governed by HJEs associated with random Hamiltonians (see Remark 1.4 below).
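To make the blow-up mechanism concrete, consider the simplest spatially homogeneous case; the following one-line computation is only an illustration of the dichotomy just described. For \(H(x,t,\rho)=\rho^{2}/2\), the PDE (1.6) reduces to \(b_{t}-\rho b_{x}=b^{2}\), and along a characteristic \(\dot{x}=-\rho\) (with \(\rho\) frozen, since \(H_{x}=0\)) the drift solves the Riccati equation \(\hat{b}_{t}=\hat{b}^{2}\), so that \[\hat{b}(t)=\frac{\hat{b}(t_{0})}{1-\hat{b}(t_{0})(t-t_{0})}.\] This stays in \([\hat{b}(t_{0}),0]\) for all \(t\geq t_{0}\) when \(\hat{b}(t_{0})\leq 0\), and blows up at time \(t_{0}+\hat{b}(t_{0})^{-1}\) (a new shock forms) when \(\hat{b}(t_{0})>0\).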
We emphasize that Theorem 1.1 offers a kinetic description for the interaction between the existing shock discontinuities, not those which are created after the initial time. To go beyond what is offered by Theorem 1.1, we need to enlarge the class of Markovian profiles that has been used so far. We offer a way to achieve this by considering profiles that are Markovian concatenations of _fundamental solutions_ of (1.2).

**Definition 1.2** Given \(z=(y,s)\in\mathbb{R}^{2}\), by a _fundamental solution_ \(W(\cdot;z):\mathbb{R}\times(s,\infty)\to\mathbb{R}\) associated with \(z\) we mean \[W(x,t;z)=\sup\left\{\int_{s}^{t}L\big{(}\xi(\theta),\theta,\dot{\xi}(\theta) \big{)}\ d\theta:\ \xi\in C^{1}\big{(}[s,t];\mathbb{R}\big{)},\ \xi(s)=y,\ \xi(t)=x\right\}, \tag{1.21}\] where \(L\) is the Legendre transform of \(H\) in the \(p\)-variable: \[L(x,t,v)=\inf_{p}\big{(}p\cdot v+H(x,t,p)\big{)},\ \ \ \ H(x,t,p)=\sup_{v} \big{(}L(x,t,v)-p\cdot v\big{)}.\] We also set \(M(x,t;z)=W_{x}(x,t;z)\) for the \(x\)-derivative of \(W\). \(\Box\)

Under our conditions on \(H\), the function \(W\) is a Lipschitz function of \((x,t)\) for \(t>s\), and \(M(x,t)=M(x,t;z)\) is well-defined a.e. A representation of \(M\) is given as follows. For each \((x,t)\), we may find a maximizing piecewise \(C^{1}\) path \(\xi(\theta)=\xi(\theta;x,t;z)\) that is differentiable at time \(\theta=t\). The function \(M\) is continuous at \((x,t)\) if and only if the maximizing path is unique. When this is the case, we simply have \[M(x,t)=L_{v}\big{(}\xi(t),t,\dot{\xi}(t)\big{)}=L_{v}\big{(}x,t,\dot{\xi}(t) \big{)}. \tag{1.22}\] In general \(M(x,t)\) could be multi-valued; for each maximizing path, the right-hand side of (1.22) offers a possible value for \(M(x,t)\). The Cauchy problem associated with (1.1) has a representation of the form \[u(x,t)=\sup_{y}\big{(}u^{0}(y)+W(x,t;y,t_{0})\big{)}. \tag{1.23}\] In other words, \(u\) given by (1.23) satisfies (1.1) in the viscosity sense for \(t>t_{0}\), and \(u(x,t_{0})=u^{0}(x)\). The type of stochastic solutions we will be able to describe kinetically would look like \[u(x,t)=\sup_{i\in I}\big{(}g_{i}+W(x,t;z_{i})\big{)},\] where \(\big{\{}(z_{i},g_{i}):\ i\in I\big{\}}\) is a discrete set. Since our Markovian process is \(\rho=u_{x}\), we consider profiles of the form \[\rho(x,t)=\sum_{i\in I}M(x,t;z_{i})\ 1\!\!1\big{(}x\in[x_{i},x_{i+1})\big{)},\] for a discrete set \(\big{\{}q_{i}=(x_{i},z_{i}):\ i\in I\big{\}}\). (Note that because of the type of results we have in mind, we switched from \((g_{i},z_{i})\) to \((x_{i},z_{i})\).) We now give the definition of the Markov processes we will work with in this subsection.

**Definition 1.3(i)** Given \(s,T\), with \(s<T\), let \(g(x,t;y_{-},y_{+})\) be a \(C^{1}\) nonnegative (kernel) function that is defined for \(x\in\mathbb{R},\ t\in[s,T],\ y_{+}\in(y_{-},\infty)\). We also write \[x_{1}=x,\quad x_{2}=t,\quad g^{1}=g,\quad g^{2}=\hat{v}g,\] where \[\hat{v}(x,t,y_{-},y_{+})=\frac{H\big{(}x,t,M(x,t;y_{+},s)\big{)}-H\big{(}x,t,M(x,t;y_{-},s)\big{)}}{M(x,t;y_{+},s)-M(x,t;y_{-},s)}. \tag{1.24}\] We write \(\mathcal{B}^{i}_{x,t}\) for the operator \[\mathcal{B}^{i}_{x_{1},x_{2}}F(y)=\int_{y}^{\infty}\big{(}F(y_{*})-F(y)\big{)} \ g^{i}(x_{1},x_{2};y,y_{*})\ dy_{*}.\]

**(ii)** \(\mathcal{B}^{1}\) is the infinitesimal generator of an _inhomogeneous Markov jump_ process \(\mathbf{y}(x_{1})\). When \(\hat{v}>0\), the operator \(\mathcal{B}^{2}\) also generates a Markovian jump process.
**(iii)** We write \(\mathcal{B}^{i*}\) for the adjoint of the operator \(\mathcal{B}^{i}\) which acts on measures. As before, when the measure \(\nu\) is absolutely continuous with respect to the Lebesgue measure with a \(C^{1}\) Radon-Nikodym derivative, then \(\mathcal{B}^{i*}\nu\) is also absolutely continuous with respect to the Lebesgue measure. By a slight abuse of notation, we write \(\mathcal{B}^{i*}\) for the corresponding operator that now acts on \(C^{1}\) functions. More precisely, for a probability density \(\nu\), we have \[\big{(}\mathcal{B}^{i*}_{x,t}\nu\big{)}(y)=\left[\int_{-\infty}^{y}g^{i}(x,t,y_{ *},y)\nu(y_{*})\ dy_{*}\right]-\hat{A}(g^{i})(x,t,y)\nu(y),\] where \[\hat{A}(g)(y)=\int_{y}^{\infty}g(y,y_{*})\ dy_{*}.\]

**(iv)** Given \(\mathbf{y}:[a_{-},a_{+}]\to\mathbb{R}\), we define \[\rho(x,t;\mathbf{y},s):=M\big{(}x,t;\mathbf{y}(x),s\big{)}.\] According to our second main result, if \(\rho=u_{x}\) solves (1.2) with an initial condition which comes from a Markov process associated with a kernel \(g^{0}\), then at later times \(x\mapsto\rho(x,t)\) also comes from a Markov process associated with a kernel \(g\) which satisfies a kinetic equation of the form \[g_{t}-(\hat{v}g)_{x}=\hat{Q}(g)=\hat{Q}^{+}(g)-\hat{Q}^{-}(g)=\hat{Q}^{+}(g)-g \hat{J}(g), \tag{1.25}\] where \[\hat{Q}^{+}(g)(y_{-},y_{+})=\int_{y_{-}}^{y_{+}}\big{(}\hat{v}(y_ {*},y_{+})-\hat{v}(y_{-},y_{*})\big{)}g(y_{-},y_{*})g(y_{*},y_{+})\ dy_{*},\] \[\hat{J}(g)(y_{-},y_{+})=\big{(}\hat{A}(\hat{v}g)(y_{+})-\hat{A}( \hat{v}g)(y_{-})\big{)}-\hat{v}(y_{-},y_{+})\big{(}\hat{A}(g)(y_{+})-\hat{A}( g)(y_{-})\big{)}.\] Here we have not displayed the dependence of our functions on \((x,t)\) for a compact notation. We are now ready to state our hypotheses and the second main result.

**Hypothesis 1.2(i)** The Lagrangian function \(L\) is a \(C^{2}\) function that is strictly concave in \(v\). Moreover, there are positive constants \(c_{0},c_{1}\) and \(c_{2}\) such that \[-c_{0}+c_{2}v^{2}\leq-L(x,t,v)\leq c_{0}+c_{1}v^{2},\] \[-c_{0}+c_{2}|v|\leq|L_{v}(x,t,v)|\leq c_{0}+c_{1}|v|,\] \[\qquad|L_{xv}(x,t,v)|+|L_{xx}(x,t,v)|\leq c_{1},\] \[\qquad|H_{x}(x,t,\rho)|+|H_{x\rho}(x,t,\rho)|\leq c_{1}.\]

**(ii)** The rate kernel \(g(x,t,y_{-},y_{+})\) is a continuous nonnegative solution of (1.25) which is \(C^{1}\) in the \((x,t)\)-variable, and is supported on \[\big{\{}(x,t,y_{-},y_{+}):\ x\in\mathbb{R},\ t\in[t_{0},T],\ Y_{-}\leq y_{-} \leq y_{+}\leq Y_{+}\big{\}},\] for some constants \(Y_{\pm}\). We write \(g^{0}(x,y_{-},y_{+})\) for \(g(x,t_{0},y_{-},y_{+})\).

**(iii)** \(H_{\rho}(a_{-},t,\rho)>0\), for every \(t\in[t_{0},T]\) and \(\rho\in[M_{-},M_{+}]\), where \[M_{+}=\sup_{t\in[t_{0},T]}M(a_{-},t;Y_{+},s),\ \ \ \ M_{-}=\inf_{t\in[t_{0},T]}M(a_{-},t;Y _{-},s).\]

**(iv)** Given \(s\) and \(t_{0}\), with \(t_{0}>s\), the initial condition \(\rho^{0}(x)=M(x,t_{0};y^{0},s)\) for \(x<a_{-}\), and \(\rho(x,t_{0})=\rho(x,t_{0};{\bf y}_{t_{0}},s)\) for \(x\geq a_{-}\), where \({\bf y}_{t_{0}}\) is a Markov process which starts at \({\bf y}_{t_{0}}(a_{-})=y^{0}>a_{-}\), and has an infinitesimal generator \({\cal B}^{1}_{x,t_{0}}\), associated with a kernel \(g^{0}(x,y_{-},y_{+})=g(x,t_{0},y_{-},y_{+})\).

**(v)** Assume that \(\ell:[t_{0},\infty)\to{\cal M}_{1}\) satisfies \(\ell(t_{0},dy_{0})=\delta_{y^{0}}(dy_{0})\), and \[\frac{d\ell}{dt}={\cal B}^{2}_{a_{-},t}\ell. \tag{1.26}\] \(\Box\)
**Theorem 1.2**: _When Hypothesis 1.2 holds, the entropy solution \(\rho\) to (1.2) for each fixed \(t\in[t_{0},T]\) has \(x=a_{-}\) marginal given by \(M(a_{-},t;y_{0},s)\), with \(y_{0}\) distributed according to \(\ell(t,dy_{0})\), and for \(a_{-}<x\) evolves as \(\rho(x,t)=\rho(x,t;{\bf y}_{t},s)\), with \({\bf y}_{t}\) a Markov process with the generator \({\cal B}^{1}_{x,t}\)._

**Remark 1.3(i)** Observe that the finiteness of the Lagrangian \(L\) implies that the Hamiltonian function \(H\) cannot be monotone in \(\rho\). As a consequence, the velocity \(\hat{v}\) can take both negative and positive values, and the process \(t\mapsto\rho(x,t)\) may not be a Markov process for every \(x\). However, when \(x=a_{-}\), our Hypothesis 1.2**(iii)** would guarantee that the process \(t\mapsto\rho(a_{-},t)\) is Markovian. Indeed Hypothesis 1.2**(iii)** is designed to guarantee that no shock discontinuity can cross \(a_{-}\) from left to right. This assumption though can be relaxed at the price of replacing the boundary line segment \(\{(a_{-},t):\ t\in[t_{0},T]\}\) with a suitable line segment which is tilted to the right. In other words, part **(i)** of Remark 1.2 is applicable. Moreover, part **(iii)** of Remark 1.2 is also applicable to the kernel \(g\) satisfying (1.25).

**(ii)** As we will see in Proposition 5.2**(iii)** in Section 5, there exist positive constants \(C_{0}\) and \(C_{1}\) such that \(M(x,t;y,s)\geq-C_{1}x\) for \(x\leq-C_{0}\). Our condition \(|L_{v}|\leq c_{1}(1+|v|)\) in Hypothesis 1.2**(i)** means \[|\rho|\leq c_{1}\big{(}1+|H_{\rho}(x,t,\rho)|\big{)}.\] Since \(H_{\rho}(x,t,\rho)\) is an increasing function of \(\rho\), we deduce that \(H_{\rho}(x,t,\rho)\to\pm\infty\) as \(\rho\to\pm\infty\). From this we learn that there exists a positive constant \(C_{2}\) such that \(H_{\rho}(a_{-},t,\rho)>0\) whenever \(\rho\geq C_{2}\). As a consequence, \(H_{\rho}(a_{-},t,\rho)>0\) for \(\rho\in[M_{-},M_{+}]\), provided that \(a_{-}\leq-C_{2}C_{1}^{-1}\). This means that Hypothesis 1.2**(iii)** is automatically satisfied when \(a_{-}\leq-C_{2}C_{1}^{-1}\).

**(iii)** As a concrete example, when \(H(x,t,\rho)=\rho^{2}/2\), then \(L(x,t,v)=-v^{2}/2\), and \(M(x,t;y,s)=(y-x)/(t-s)\). In this case Hypothesis 1.2**(iii)** holds if and only if \(a_{-}<Y_{-}\). \(\Box\)

**Example 1.1** When \(H\) does not depend on \((x,t)\), then \[W(x,t;y,s)=(t-s)L\left(\frac{x-y}{t-s}\right),\ \ \ \ M(x,t;y,s)=L^{\prime} \left(\frac{x-y}{t-s}\right).\]

**Remark 1.4** As an example for a stochastic growth model, we may consider \(H(x,t,\rho)=H_{0}(\rho)-V(x,t)\), with \(H_{0}\) convex, and the potential \(V\) given formally as \[V(x,t)=\sum_{i\in I}\delta_{s_{i}}(t)1\!\!1(x=a_{i}), \tag{1.27}\] where \(\omega=\big{\{}(a_{i},s_{i}):\ i\in I\big{\}}\) is a realization of a _Poisson Point Process_ in \(\mathbb{R}^{2}\). In practice, we may approximate \(V\) by \[V_{\varepsilon}(x,t)=\sum_{i\in I}\varepsilon\zeta\left(\frac{t-s_{i}}{ \varepsilon}\right)\eta\left(\frac{x-a_{i}}{\delta(\varepsilon)}\right),\] where \(\delta(\varepsilon)\to 0\) in the small-\(\varepsilon\) limit, and \(\eta\) and \(\zeta\) are two smooth functions of compact support such that \(\int\zeta(t)\ dt=1\), and \(\eta(x)=1\) in a neighborhood of the origin. Replacing \(V\) with \(V_{\varepsilon}\) yields a Hamiltonian function \(H^{\varepsilon}\) for which the equation (1.1) is well-defined and its solution \(u^{\varepsilon}\) has a limit \(u\) as \(\varepsilon\to 0\).
A variational representation as in (1.21) for \(u^{\varepsilon}\) would yield a variational representation for \(u\) as well. Indeed the corresponding \(W\) still has the form (1.21), where \(L(x,t,v)=L_{0}(v)-V(x,t)\), with \(L_{0}\) a concave function given by \[L_{0}(v)=\inf_{p}\big{(}p\cdot v+H_{0}(p)\big{)}.\] It is not hard to show that the maximizing path \(\xi\) of the variational problem (1.21) is a concatenation of line segments between Poisson points of \(\omega\). In other words, \[W(x,t;y,s)=W(x,t;y,s;\omega)=\sup\left(N({\bf z})+\sum_{i=0}^{N({\bf z})}(s_{i +1}-s_{i})L_{0}\left(\frac{a_{i+1}-a_{i}}{s_{i+1}-s_{i}}\right)\right), \tag{1.28}\] where the supremum is over sequences \({\bf z}=\big{(}(a_{0},s_{0}),(a_{1},s_{1}),\ldots,(a_{n},s_{n}),(a_{n+1},s_{n+1}) \big{)}\), such that \(N({\bf z})=n\), and \[s_{0}<s_{1}<\cdots<s_{n+1},\quad(a_{0},s_{0})=(y,s),\quad(a_{n+1},s_{n+1})=(x,t),\quad(a_{1},s_{1}),\ldots,(a_{n},s_{n})\in\omega.\] This model was defined and studied in Bakhtin [B] and Bakhtin et al. [BCK] when \(H_{0}(p)=p^{2}/2\) (which leads to \(L_{0}(v)=-v^{2}/2\)). If \(H_{0}(p)=|p|\), then \(L_{0}(v)=-\infty\ 1\!\!1(|v|>1)\). In this case, \[W(x,t;y,s)=W(x,t;y,s;\omega)=\sup N({\bf z}),\] where the supremum is over sequences \({\bf z}\) as in (1.28), with the additional requirement \[|a_{i+1}-a_{i}|\leq s_{i+1}-s_{i}.\] The corresponding \(u(x,t)\) is a stochastic growth model that is known as _Polynuclear Growth_ (we refer to [PS] for more details). Our Theorem 1.2 does not directly apply to this model because Hypothesis 1.2**(i)** fails. Also for Hypothesis 1.2**(ii)** to hold, we need to assume that the intensity of \(\omega\) is \(0\) outside \([a_{-},\infty)\times{\mathbb{R}}\). Nonetheless, our method of proof can be adapted to treat this model as well. For this model however, it is more natural to consider a concatenation of fundamental solutions \(M(x,t;y_{i},\theta_{i})\), where \(\{(y_{i},\theta_{i}):\ i\in I\}\) is selected randomly. This extension requires developing new techniques and goes beyond the scope of the present article. \(\Box\)

### Unbounded density

In Theorem 1.1 (respectively 1.2), we assumed that the density \(\rho\) (respectively \(y\)) is bounded. This assumption is technically convenient for the derivation of the forward equation that is carried out in Section 3, and is at the heart of our proofs of Theorems 1.1 and 1.2. Unfortunately it excludes many important models encountered in statistical mechanics, especially when we study stochastic growth models. As an example, for the Burgers equation with white noise initial data, the density at later times would be an unbounded Markov jump process (see [Gr], [MS], and [OR]). In this subsection we explain how one can relax this restriction with the aid of an approximation that is related to Doob's \(h\)-transform. We carry out this idea in the case of Theorem 1.2 only, though our method of proof is also applicable to the setting of Theorem 1.1. Imagine that we have a kernel \(g\) which satisfies the kinetic equation (1.25), and the arguments \(y_{\pm}\) are not restricted to a bounded interval as in Hypothesis 1.2**(ii)**.
**Hypothesis 1.3** We assume that parts **(i)** and **(iii)**-**(iv)** of Hypothesis 1.2 hold, but in part **(ii)**, we allow \(Y_{+}=\infty\), and assume that \(g(x,t,y_{-},y_{+})\) is a continuous kernel such that the Markov process \({\bf y}_{t_{0}}\) associated with the generator \({\cal B}^{1}_{x,t_{0}}\) satisfies \[\limsup_{x\to\infty}x^{-1}y_{t_{0}}(x)<1, \tag{1.29}\] almost surely. \(\Box\)

**Theorem 1.3**: _The conclusion of Theorem 1.2 holds even when \(g\) is a kernel which satisfies Hypothesis 1.3._

Our strategy for proving Theorem 1.3 is to approximate the kernel \(g\) with a sequence of kernels \(g^{n}\) for which Theorem 1.2 is applicable. We cannot simply restrict \(g\) to a large bounded interval, because the resulting kernel does not satisfy the kinetic equation. However, if \({\bf y}\) is a Markov process with the jump kernel density \(g\) (associated with the generator \({\cal B}^{1}_{x,t}\) as in Definition 1.3**(i)**), we may condition \({\bf y}\) to remain in a bounded interval. The resulting process is again a Markov process for which the jump kernel \(\hat{g}\) is related to \(g\) via a Doob \(h\)-transform. In other words, there exists a suitable function \(h(x,t,y)\) such that \[\hat{g}(x,t,y_{-},y_{+})=\frac{h(x,t,y_{+})}{h(x,t,y_{-})}g(x,t,y_{-},y_{+})=: \eta(x,t,y_{-},y_{+})g(x,t,y_{-},y_{+}). \tag{1.30}\] Indeed the resulting kernel is again a solution to the kinetic equation, as the following result confirms:

**Proposition 1.1**: _Assume \(g\) satisfies (1.25) and \(h:[a_{-},a_{+}]\times[t_{0},T]\times\mathbb{R}\to\mathbb{R}\) is a \(C^{1}\) function such that_ \[h_{x}+{\cal B}^{1}_{x,t}h=0,\ \ \ \ h_{t}+{\cal B}^{2}_{x,t}h=0. \tag{1.31}\] _Then \(\hat{g}\) given by (1.30) also satisfies (1.25)._

As we will see in Subsection 1.4 below, the two equations that appear in (1.31) are compatible whenever \(g\) satisfies the equation (1.25). This means that one of these equations is redundant:

**Proposition 1.2**: _Assume that \(g\) and \(h\) are bounded, and \(C^{1}\) in the \((x,t)\)-variable, and that \(g\) satisfies (1.25). Also assume that \(h\) is uniformly positive, satisfies \(h_{x}+{\cal B}^{1}_{x,t}h=0\), and_ \[h_{t}(a_{-},t,y)+({\cal B}^{2}_{a_{-},t}h)(a_{-},t,y)=0. \tag{1.32}\] _(In other words, the second equation in (1.31) holds at \(x=a_{-}\).) Then the second equation in (1.31) holds in \([a_{-},a_{+}]\)._

The proof of Proposition 1.2 is similar to the proof of Proposition 4.1 of [OR], and is omitted.
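To make the concatenated profiles of Definition 1.3**(iv)** concrete before turning to the heuristics, here is a minimal simulation sketch in the Burgers setting of Remark 1.3**(iii)**, where \(M(x,t;y,s)=(y-x)/(t-s)\). The constant-rate jump kernel below is our own illustrative stand-in for \(g\), not the kernel of the theorems.

```python
import numpy as np

rng = np.random.default_rng(1)

def M(x, t, y, s=0.0):
    """Fundamental-solution derivative for H(rho) = rho^2/2 (Remark 1.3(iii))."""
    return (y - x) / (t - s)

def sample_y(xs, y0, rate=1.0):
    """Markov jump process y(x): constant total jump rate, Exp(1) upward jumps.
    An illustrative stand-in for the kernel g of Definition 1.3(i)."""
    ys = np.empty_like(xs)
    ys[0] = y0
    for i in range(1, len(xs)):
        dx = xs[i] - xs[i - 1]
        ys[i] = ys[i - 1]
        if rng.random() < rate * dx:
            ys[i] += rng.exponential(1.0)
    return ys

t = 1.0                            # any fixed t > s = 0
xs = np.linspace(1.0, 5.0, 2000)   # a_- = 1.0, with a_- < Y_- = 2.0 below
ys = sample_y(xs, y0=2.0)
rho = M(xs, t, ys)                 # the profile rho(x, t; y, s) of Definition 1.3(iv)
```

Since \(M\) is increasing in \(y\), each jump of \({\bf y}\) produces an upward jump of \(\rho\), consistent with the entropy condition \(\rho_{-}<\rho_{+}\) at shocks.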
### Heuristics

According to Theorem 1.1, the process \(x_{i}\mapsto\rho(x_{1},x_{2})\) is a Markov process with the generator \[\mathcal{A}^{i}\psi(\rho)=\mathcal{A}^{i}_{x_{1},x_{2}}\psi(\rho)=b^{i}(x_{1},x_{2},\rho)\psi^{\prime}(\rho)+\int_{\rho}^{\infty}f^{i}(x_{1},x_{2},\rho,\rho_{*})\big{(}\psi(\rho_{*})- \psi(\rho)\big{)}\ d\rho_{*}.\] Hence, if \(\ell(x_{1},x_{2},\rho)\) denotes the probability density of \(\rho(x_{1},x_{2})\), then \(\ell\) must satisfy the _forward equation_ \[\ell_{x_{i}}=\mathcal{A}^{i*}\ell=\ell*f^{i}-A(f^{i})\ell-(b^{i}\ell)_{\rho}= \ell*f^{i}-A(\ell\otimes f^{i})-(b^{i}\ell)_{\rho},\ \ \ \ i=1,2,\] where \[(\ell*f^{i})(\rho)=\int\ell(\rho_{*})f^{i}(\rho_{*},\rho)\ d\rho_{*}.\] Differentiating both sides, we learn \[\ell_{x_{1}x_{2}}= \mathcal{A}^{1*}(\ell_{x_{2}})+\ell*f^{1}_{x_{2}}-A(f^{1})_{x_{2 }}\ \ell-(b^{1}_{x_{2}}\ell)_{\rho}=\mathcal{A}^{1*}\mathcal{A}^{2*}\ell+\ell*f^{1 }_{x_{2}}-A(f^{1})_{x_{2}}\ \ell-(b^{1}_{x_{2}}\ell)_{\rho},\] \[\ell_{x_{2}x_{1}}= \mathcal{A}^{2*}(\ell_{x_{1}})+\ell*f^{2}_{x_{1}}-A(f^{2})_{x_{1 }}\ \ell-(b^{2}_{x_{1}}\ell)_{\rho}=\mathcal{A}^{2*}\mathcal{A}^{1*}\ell+\ell*f^{2 }_{x_{1}}-A(f^{2})_{x_{1}}\ \ell-(b^{2}_{x_{1}}\ell)_{\rho}.\] On the other hand, \[\mathcal{A}^{i*}\mathcal{A}^{j*}\ell= \left(\ell*f^{j}-A(f^{j})\ell-(b^{j}\ell)_{\rho}\right)*f^{i}-A(f^ {i})\left(\ell*f^{j}-A(f^{j})\ell-(b^{j}\ell)_{\rho}\right)\] \[-\left[b^{i}\left(\ell*f^{j}-A(f^{j})\ell-(b^{j}\ell)_{\rho} \right)\right]_{\rho}\] \[= \ell*\left[f^{j}*f^{i}-A(f^{j})\otimes f^{i}-f^{j}\otimes A(f^{i} )+b^{j}\otimes f^{i}_{\rho_{-}}-(f^{j}\otimes b^{i})_{\rho_{+}}\right]\] \[+\ell\left[A(f^{i})A(f^{j})+A(f^{i})b^{j}_{\rho}+(A(f^{j})b^{i})_ {\rho}\right]\] \[+\ell_{\rho}\left[A(f^{i})b^{j}+A(f^{j})b^{i}\right]+\left(b^{i}b ^{j}_{\rho}\ell\right)_{\rho}+\left(b^{i}b^{j}\ell_{\rho}\right)_{\rho}.\] (Here, we have performed an integration by parts to replace \((b^{j}\ell)_{\rho}*f^{i}\) with \(\ell*\left(b^{j}\otimes f^{i}_{\rho_{-}}\right)\).) As a result \[\mathcal{A}^{1*}\mathcal{A}^{2*}\ell-\mathcal{A}^{2*}\mathcal{A}^ {1*}\ell= \ell*\left[\mathcal{Q}(f^{2},f^{1})-\mathcal{Q}(f^{1},f^{2})\right] +\left[\left(b^{1}b^{2}_{\rho}-b^{2}b^{1}_{\rho}\right)\ell\right]_{\rho}\] \[+\ell\left[A(f^{2})_{\rho}\ b^{1}-A(f^{1})_{\rho}\ b^{2}\right],\] where \(\mathcal{Q}(f^{j},f^{i})\) is given by (1.16). Hence, \[\ell_{x_{1}x_{2}}-\ell_{x_{2}x_{1}}= \ell*\left[\mathcal{Q}(f^{2},f^{1})-\mathcal{Q}(f^{1},f^{2})+f^{ 1}_{x_{2}}-f^{2}_{x_{1}}\right]\] \[-\ell\left[A(f^{1})_{x_{2}}-A(f^{2})_{x_{1}}+A(f^{1})_{\rho}\ b^{2 }-A(f^{2})_{\rho}\ b^{1}\right]\] \[+\left[\left(b^{2}_{x_{1}}-b^{1}_{x_{2}}+b^{1}b^{2}_{\rho}-b^{2}b^ {1}_{\rho}\right)\ell\right]_{\rho}. \tag{1.33}\] It is rather straightforward to show \[A\left(\mathcal{Q}(f^{j},f^{i})\right)=-A(f^{j})A(f^{i})+b^{j}A(f^{i})_{\rho}.\] As a consequence, \[A\left(R(f^{1},f^{2})\right)=A(f^{1})_{x_{2}}-A(f^{2})_{x_{1}}+A(f^{1})_{\rho} \ b^{2}-A(f^{2})_{\rho}\ b^{1}, \tag{1.34}\] where \[R=R(f^{1},f^{2})=f^{1}_{x_{2}}-f^{2}_{x_{1}}+\mathcal{Q}(f^{2},f^{1})-\mathcal{ Q}(f^{1},f^{2}).\] From (1.33) and (1.34) we deduce \[\ell_{x_{1}x_{2}}-\ell_{x_{2}x_{1}}=\ell*R-A(R)\ell+(S\ell)_{\rho},\] where \[S=S(b^{1},b^{2})=b^{2}_{x_{1}}-b^{1}_{x_{2}}+b^{1}b^{2}_{\rho}-b^{2}b^{1}_{ \rho}.\] Clearly \(R=S=0\) implies the compatibility of the equations \(\ell_{x_{i}}=\mathcal{A}^{i*}\ell\), for \(i=1,2\). Observe that \(R=0\) and \(S=0\) are exactly our equations (1.7) and (1.6).
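As a sanity check on the algebra, the fact that \(S(b^{1},b^{2})=0\) is precisely (1.6) can be verified symbolically. The following sketch is our own illustration (assuming sympy is available), expanding both expressions and confirming that they cancel.

```python
import sympy as sp

x, t, r = sp.symbols('x t rho')
H = sp.Function('H')(x, t, r)
b = sp.Function('b')(x, t, r)

beta = sp.diff(H, x) + b * sp.diff(H, r)   # b^2 = beta = H_x + b H_rho

# S(b^1, b^2) with b^1 = b and (x_1, x_2) = (x, t):
S = sp.diff(beta, x) - sp.diff(b, t) + b * sp.diff(beta, r) - beta * sp.diff(b, r)

# Left-hand side minus right-hand side of (1.6):
eq16 = (sp.diff(b, t) + sp.diff(H, x) * sp.diff(b, r) - sp.diff(H, r) * sp.diff(b, x)
        - sp.diff(H, r, 2) * b**2 - 2 * sp.diff(H, r, x) * b - sp.diff(H, x, 2))

print(sp.expand(S + eq16))   # prints 0: S = 0 is exactly (1.6)
```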
Various terms in the kinetic equation can be readily explained in terms of the underlying particle system that represents the dynamics of the shock discontinuities of a solution to the PDE (1.1).

**(1)** According to the generator (1.4), the process \(x\mapsto\rho(x,t)\) satisfies the ODE \[\rho_{x}(x,t)=b\big{(}x,t,\rho(x,t)\big{)}, \tag{1.35}\] in between shock discontinuities. The PDE (1.6), governing the evolution of the drift \(b\), follows from the consistency of (1.35) with (1.2); differentiating these equations with respect to \(t\) and \(x\) respectively leads to \[\rho_{xt} =b_{t}+b_{\rho}\ H(x,t,\rho)_{x}=b_{t}+b_{\rho}\big{(}H_{x}+H_{ \rho}b\big{)}=b^{1}_{x_{2}}+b^{2}b^{1}_{\rho},\] \[\rho_{tx} =H(x,t,\rho)_{xx}=\big{(}H_{x}+H_{\rho}b\big{)}_{x}=H_{xx}+2H_{x \rho}b+H_{\rho\rho}b^{2}+H_{\rho}b_{x}+H_{\rho}b_{\rho}b=b^{2}_{x_{1}}+b^{1}b ^{2}_{\rho}.\] Matching these two equations yields (1.6). This calculation is simply a repetition of the derivation of the equation \(S(b^{1},b^{2})=0\).

**(2)** If a shock discontinuity occurs at a location \(x(t)\) with \(\rho_{\pm}(t)=\rho\big{(}x(t)\pm,t\big{)}\), then by the classical Rankine-Hugoniot equation \[\dot{x}(t)=-v\big{(}x(t),t,\rho_{-}(t),\rho_{+}(t)\big{)}, \tag{1.36}\] where \(v\) was defined in (1.8). This equation is responsible for the occurrence of the term \(-(vf)_{x}\) in (1.7).

**(3)** Since \(\rho(x,t)\) solves (1.2) classically away from the jump discontinuities, we have \[\dot{\rho}_{+}(t) =-\rho_{x}\big{(}x(t)+,t\big{)}v\big{(}x(t),t,\rho_{-}(t),\rho_{+}( t)\big{)}+\rho_{t}\big{(}x(t)+,t\big{)}\] \[=-b\big{(}x(t),t,\rho_{+}(t)\big{)}v\big{(}x(t),t,\rho_{-}(t),\rho _{+}(t)\big{)}+(H_{x}+bH_{\rho})(x(t),t,\rho_{+}(t)) \tag{1.37}\] \[=-K\big{(}x(t),t,\rho_{+}(t),\rho_{-}(t)\big{)}.\] As in **(2)**, this equation is responsible for the occurrence of \(-C^{+}f\) in (1.7) (see (1.11) for the definition of \(C^{+}\)).

**(4)** A repetition of our calculation in **(3)** yields \[\dot{\rho}_{-}(t)=-K\big{(}x(t),t,\rho_{-}(t),\rho_{+}(t)\big{)}. \tag{1.38}\] Based on this, we are tempted to guess that \(C^{-}f\) is \(\big{[}K\big{(}x,t,\rho_{-},\rho_{+}\big{)}f\big{(}x,t,\rho_{-},\rho_{+}\big{)} \big{]}_{\rho_{-}}.\) This is not what we have in (1.12). The reason behind this has to do with the fact that we regard \(\rho(x,t)\) as a Markov process in \(x\) as we increase \(x\). As a result, the roles of \(\rho_{-}\) and \(\rho_{+}\) cannot be interchanged. In order to explain the form of \(C^{-}f\) in (1.12), we fix \(a\in\mathbb{R}\), and assume that \(x(t)\) is the first discontinuity which occurs to the right of \(a\). Now, if we set \(\rho_{0}(t)=\rho(a,t)\), and write \[\rho(x)=\phi_{a}^{x}(m_{0};t),\] for the flow of the ODE (1.35) (in other words \(\rho(x)\) solves (1.35) subject to the initial condition \(\rho(a)=m_{0}\)), then \[\rho_{-}(t)=\phi_{a}^{x(t)}\big{(}\rho_{0}(t);t\big{)}.\] Since \(\rho_{0}(t)\) satisfies \(\dot{\rho}_{0}=\beta(a,t,\rho_{0})\), its law \(\ell(t,\rho_{0})\) obeys the equation \[\ell_{t}(t,\rho_{0})+\big{(}\beta(a,t,\rho_{0})\ell(t,\rho_{0})\big{)}_{\rho_ {0}}=0,\] away from the shock discontinuity.
As it turns out, the function \[k(x,t,\rho_{0},\rho_{+}):=\ell(t,\rho_{0})f\big{(}x,t,\phi_{a}^{x}(\rho_{0};t ),\rho_{+}\big{)},\] satisfies the identity \[k_{t}-(wk)_{x}-(\beta k)_{\rho_{0}}=\ell\big{(}f_{t}-(vf)_{x}-C^{-}f\big{)},\] where \[w(x,t,\rho_{0},\rho_{+})=v\big{(}x,t,\phi_{a}^{x}(\rho_{0};t),\rho_{+}\big{)}.\]

**(5)** Observe that if a solution \(\rho\) has two jump discontinuities at \(x=x(t)\) and \(y=y(t)\), with \(x<y\), and \[\rho_{-}=\rho(x-,t),\ \ \ \ \rho_{*}=\rho(x+,t),\ \ \ \ \rho_{*}^{\prime}=\rho(y-,t),\ \ \ \ \rho_{+}=\rho(y+,t),\] then the relative velocity of these two discontinuities is exactly \[v\big{(}x,t,\rho_{-},\rho_{*}\big{)}-v\big{(}y,t,\rho_{*}^{\prime},\rho_{+}\big{)}.\] As \(y(t)\) catches up with \(x(t)\), \(\rho_{*}^{\prime}\) converges to \(\rho_{*}\) and the relative velocity becomes \[v\big{(}x,t,\rho_{-},\rho_{*}\big{)}-v\big{(}x,t,\rho_{*},\rho_{+}\big{)}.\] This explains the form of \(Q^{+}\) in (1.9). \(\square\)

### Bibliography and the outline of the paper

Most of the earlier works on stochastic solutions of Hamilton-Jacobi PDEs have been carried out in the Burgers context. For example, Groeneboom [Gr] determined the statistics of solutions to the Burgers equation (\(H(p)=p^{2}/2\), \(d=1\)) with white noise initial data. Recently Ouaki [O] has extended this result to an arbitrary convex Hamiltonian function \(H\). The special cases of \(H(p)=\infty\,1\!\!1(p\notin[-1,1])\), and \(H(p)=p^{+}\) were already studied in the references Abramson-Evans [AE], Evans-Ouaki [EO], and Pitman-Tang [PW]. Carraro and Duchon [CD1-2] considered _statistical_ solutions, which need not coincide with genuine (entropy) solutions, but realized in this context that Lévy process initial data should interact nicely with the Burgers equation. Bertoin [Be] showed this intuition was correct on the level of entropy solutions, arguing in a Lagrangian style. Developing an alternative treatment to that given by Bertoin, which relies less on particulars of the Burgers equation and happens to be more Eulerian, was among the goals of Menon and Srinivasan [MS]. Most notably, [MS] formulates an interesting conjecture for the evolution of the infinitesimal generator of the solution \(\rho(\cdot,t)\) which is equivalent to our kinetic equation (1.7) when \(H\) is independent of \((x,t)\). When the initial data \(\rho(x,0)\) is allowed to assume values only in a fixed, finite set of states, the infinitesimal generators of the processes \(x\mapsto\rho(x,t)\) and \(t\mapsto\rho(x,t)\) can be represented by triangular matrices. The integrability of this matrix evolution has been investigated by Menon [M2] and Li [Li]. For generic matrices (where the genericity assumptions unfortunately exclude the triangular case), this evolution is completely integrable in the Liouville sense. The full treatment of Menon and Srinivasan's conjecture was achieved in the papers [KR1] and [KR2] (we also refer to [R] for an overview). The work of [KR1] has been recently extended to higher dimensions in [OR1]. In [OR2], the main result of [KR2] has been used to give a new proof of Groeneboom's results [Gr].

We continue with an outline of the paper:

**(i)** In Section 2, we show that the evolution of the PDE (1.2) for piecewise smooth solutions is equivalent to a particle system in \({\mathbb{R}}\times[P_{-},P_{+}]\). We restrict this particle system to a large finite interval \([a_{-},a_{+}]\) and introduce a stochastic boundary condition at \(a_{+}\).
This restriction allows us to reduce Theorem 1.1 to a finite system; the precise statement can be found in Theorem 2.1 of Section 2.

**(ii)** The strategy of the proof of Theorem 2.1 will be described in Section 3. Our strategy is similar to the one that was utilized in our previous work [KR1-2]: Since we have a candidate for the generator of the process \(x\mapsto\rho(x,t)\), we have a candidate measure, say \(\mu(\cdot,t)\), for the law of \(\rho(\cdot,t)\). We establish Theorem 2.1 by showing that this candidate measure satisfies the _forward equation_ associated with the Markovian dynamics of the underlying particle system (see the equation (3.2) in Section 3). The particle system has a deterministic evolution inside the interval and a stochastic (Markovian) dynamics at the right end boundary point. The rigorous derivation of the forward equation will be carried out in Section 3.

**(iii)** Section 4 is devoted to the proof of Theorem 2.1.

**(iv)** Section 5 is devoted to the proof of Theorem 1.2.

**(v)** In Section 6, we establish Theorem 1.3 and Proposition 1.1. \(\Box\)

## 2 Particle System

We assume that the initial condition \(\rho^{0}\) in the PDE (1.2) is of the following form:

* \(\rho^{0}(x)=m_{0}\) for \(x\leq x_{0}=a_{-}\).
* There exists a discrete set \(I^{0}=\{x_{i}:i\in\mathbb{N}\}\), with \(a_{-}<x_{1}<\cdots<x_{i}<\dots\) such that for every \(x>a_{-}\) with \(x\notin I^{0}\), we have \(\rho^{0}_{x}(x)=b^{0}(x,\rho^{0}(x))\).
* If \(\rho^{\pm}_{i}=\rho^{0}(x_{i}\pm)\) denote the right and left values of \(\rho^{0}\) at \(x_{i}\), then \(\rho^{-}_{i}<\rho^{+}_{i}\).

Now if \(\rho\) is an entropic solution of (1.2) with initial data \(\rho^{0}\), then we may apply the _method of characteristics_ to show that for each \(t\geq t_{0}\), the function \(\rho(\cdot,t)\) has a similar form. To explain this, consider the ODE \[\frac{d}{dx}\rho(x)=b(x,t,\rho(x)), \tag{2.1}\] where \(b\) is the solution to (1.6), subject to the initial condition \(b(x,t_{0},\rho)=b^{0}(x,\rho)\). Recall that we write \(\phi^{z}_{a}(m;t)\) for the flow of the ODE (2.1). In other words, if \(\rho(x)=\phi^{x}_{a}(m;t)\), then (2.1) holds, and \(\rho(a)=m\). Then there are pairs \(\mathbf{q}(t)=\big{(}(x_{i}(t),\rho_{i}(t)):\ i=0,1,\dots\big{)}\), with \[a_{-}=x_{0}(t)<x_{1}(t)<\cdots<x_{i}(t)<\dots,\] such that for \(x\geq a_{-}\), we can write \[\rho\big{(}x,t\big{)}=\sum_{i=0}^{\infty}\phi^{x}_{x_{i}(t)}\big{(}\rho_{i}(t );t\big{)}\,1\!\!1\big{(}x_{i}(t)\leq x<x_{i+1}(t)\big{)}. \tag{2.2}\] Note that \(\rho(x_{i}(t)+,t)=\rho_{i}(t)\), and the data \({\bf q}(t)\) determines \(\rho(\cdot,t)\) completely. Because of this, we can fully describe the evolution of \(\rho(\cdot,t)\) by describing the evolution of the particle system \({\bf q}(t)\). Indeed from the PDE (1.2) and the Rankine-Hugoniot Formula, we have \(\dot{\rho}_{0}(t)=\beta(a_{-},t,\rho_{0}(t))\), \(\rho_{0}(t_{0})=m_{0}\), and \[\dot{x}_{i}(t)=-v(x_{i}(t),t,\hat{\rho}_{i-1}(t),\rho_{i}(t)),\ \ \ \ \ \dot{\rho}_{i}(t)=-K(x_{i}(t),t,\rho_{i}(t),\hat{\rho}_{i-1}(t)), \tag{2.3}\] for \(i\in\mathbb{N}\), where \(\hat{\rho}_{i-1}(t)=\phi_{x_{i-1}(t)}^{x_{i}(t)}(\rho_{i-1}(t),t)\) (we refer to Subsection 1.4, especially (1.36) and (1.37), for an explanation). Here (2.3) gives a complete description of \({\bf q}\) in an inductive fashion; once \((x_{i-1},\rho_{i-1})\) is determined, then we use (2.3) to write a system of two equations for the pair \((x_{i},\rho_{i})\). Moreover (2.3) holds so long as the \(x_{i}\)'s do not collide.
When there is a collision between \(x_{i}\) and \(x_{i+1}\), for some \(i=0,1,\dots\), we remove \(x_{i+1}\) from the system, replace \(\rho_{i}\) with \(\rho_{i+1}\), and relabel \((x_{j},\rho_{j})\) as \((x_{j-1},\rho_{j-1})\) for \(j>i+1\). As we will see shortly, the function \(\rho(x,t)\), defined by equation (2.2), with \({\bf q}(t)\) evolving as above, is the unique entropy solution of (1.2).

According to Theorem 1.1, if \(\rho(\cdot,t_{0})\) is a PDMP with drift \(b^{0}\) and jump rate \(f^{0}\), then \(\rho(\cdot,t)\) is also a PDMP with drift \(b(x,t,\cdot)\) and jump rate \(f(x,t,\cdot,\cdot)\). We may translate this into a statement about the law of our particle system \({\bf q}(t)\). However, since the dynamics of \({\bf q}\) involves an infinite number of particles, we may take advantage of the finiteness of the propagation speed in (1.2) and reduce Theorem 1.1 to an analogous claim for a finite interval \([a_{-},a_{+}]\). Since \(H_{\rho}>0\) by Hypothesis 1.1**(i)**, all particles travel to the left. Because of this, we need to choose appropriate boundary dynamics at the right boundary \(a_{+}\) only. The analysis involved will all pertain to the following result.

**Theorem 2.1**: _Assume Hypothesis 1.1. For any fixed \(a_{+}>a_{-}\), consider the scalar conservation law (1.2) in \([a_{-},a_{+}]\times[t_{0},T)\) with initial condition \(\rho(x,t_{0})=\rho^{0}(x)\) (restricted to \([a_{-},a_{+}]\)), open boundary at \(x=a_{-}\), and random boundary \(\zeta\) at \(x=a_{+}\). Suppose the process \(\zeta\) has initial condition \(\zeta(t_{0})=\rho^{0}(a_{+})\), and evolves according to the time-dependent rate kernel \(f^{2}(a_{+},t,\rho,\rho_{+})\) and drift \(b^{2}(a_{+},t,\rho)\), independently of \(\rho^{0}\). Then for all \(t>t_{0}\) the law of \(\big{(}\rho(x,t):\ x\in[a_{-},a_{+}]\big{)}\) is as follows:_

**(i)**: _The \(x=a_{-}\) marginal is \(\ell(t,d\rho_{0})\), given by \(\dot{\ell}={\cal A}_{a_{-},t}^{2*}\ell\)._

**(ii)**: _The rest of the path is a PDMP with generator \({\cal A}_{x,t}^{1}\) (rate kernel \(f(x,t,\rho_{-},\rho_{+})\) and drift \(b(x,t,\rho)\))._

To prove our main result Theorem 1.1, we can send \(a_{+}\to\infty\), applying Theorem 2.1 on each \([a_{-},a_{+}]\), and use the bounded speed of propagation. The argument is straightforward and can be found in [12]. We prove Theorem 2.1 by showing that the particle system \({\bf q}(t)\) restricted to the interval \([a_{-},a_{+}]\) has the correct law predicted by this theorem. We now give a precise description for the evolution of \({\bf q}\) restricted to \([a_{-},a_{+}]\). First we make some definitions.

**Definition 2.1(i)** The configuration space for our particle system \({\bf q}\) is the set \(\Delta=\cup_{n=0}^{\infty}\bar{\Delta}_{n}\), where \(\bar{\Delta}_{n}\) is the topological closure of \(\Delta_{n}\), with \(\Delta_{n}\) denoting the set \[\big{\{}{\bf q}=\big{(}(x_{i},\rho_{i}):i=0,1,\ldots,n\big{)}:\ x_{0}=a_{-}<x_{ 1}<\cdots<x_{n}<x_{n+1}=a_{+},\quad\rho_{0},\ldots,\rho_{n}\in\mathbb{R}\big{\}}.\] We write \({\bf n}({\bf q})\) for the number of particles, i.e., \({\bf n}({\bf q})=n\) means that \({\bf q}\in\Delta_{n}\). What we have in mind is that \(\rho_{i}(t)=\rho(x_{i}(t)+,t)\) with \(x_{1},\ldots,x_{n}\) denoting the locations of all shocks in \((a_{-},a_{+})\).
**(ii)** Given a realization \({\bf q}=\big{(}x_{0},\rho_{0},x_{1},\rho_{1},\ldots,x_{n},\rho_{n}\big{)}\in \bar{\Delta}_{n}\), we define \[\rho\big{(}x,t;{\bf q}\big{)}=R_{t}({\bf q})(x)=\sum_{i=0}^{n}\phi_{x_{i}}^{ x}\big{(}\rho_{i};t\big{)}1\!\!1\big{(}x_{i}\leq x<x_{i+1}\big{)}.\]

**(iii)** The process \({\bf q}(t)\) evolves according to the following rules:

**(1)** So long as \(x_{i}\) remains in \((x_{i-1},x_{i+1})\), for some \(i\geq 1\), it satisfies \(\dot{x}_{i}=-v(x_{i},t,\hat{\rho}_{i-1},\rho_{i})\) with \(\hat{\rho}_{i-1}(t)=\phi_{x_{i-1}(t)}^{x_{i}(t)}\big{(}\rho_{i-1}(t);t\big{)}\).

**(2)** We have \(\dot{\rho}_{0}=\beta(x_{0},t,\rho_{0})\), and for \(i>0\), we have \(\dot{\rho}_{i}=-K(x_{i},t,\rho_{i},\hat{\rho}_{i-1})\).

**(3)** With rate \(f^{2}\big{(}a_{+},t,\hat{\rho}_{n},\rho_{n+1})\), the configuration \({\bf q}\) gains a new particle \((x_{n+1},\rho_{n+1})\), with \(x_{n+1}=a_{+}\). This new configuration is denoted by \({\bf q}(\rho_{n+1})\).

**(4)** When \(x_{1}\) reaches \(a_{-}\), we relabel the particles \((x_{i},\rho_{i}),\ i\geq 1\), as \((x_{i-1},\rho_{i-1})\).

**(5)** When \(x_{i+1}-x_{i}\) becomes \(0\) for some \(i\geq 1\), then \({\bf q}(t)\) becomes \({\bf q}^{i}(t)\), which is obtained from \({\bf q}(t)\) by omitting \((x_{i},\rho_{i})\) and relabeling the particles to the right of the \(i\)-th particle. \(\Box\)

As we mentioned before, the function \(\rho(x,t;{\bf q}(t))\) is indeed an entropic solution of (1.2). We also need a stability inequality for our constructed solutions.

**Proposition 2.1**: **(i)** _The function \(\rho(x,t)=\rho(x,t;{\bf q}(t))\), with \({\bf q}(t)\) evolving as above, is an entropy solution of \(\rho_{t}=H(x,t,\rho)_{x}\) in \((a_{-},a_{+})\times(t_{0},T)\)._

**(ii)** _The process \(t\mapsto m(t):=\rho(a_{+},t)=\rho(a_{+},t;{\bf q}(t))\) is a Markov process with generator \({\cal A}^{2}_{a_{+},t}\)._

**(iii)** _Suppose \(\rho,\rho^{\prime}:[a_{-},a_{+}]\times[t_{0},T)\to[P_{-},P_{+}]\) are two piecewise \(C^{1}\) entropy solutions of \(\rho_{t}=H(x,t,\rho)_{x}\). If \(t_{0}\leq s\leq t<T\), then_ \[\int_{a_{-}}^{a_{+}}|\rho^{\prime}(x,t)-\rho(x,t)|\ dx\leq e^{C_{0}(t-s)}\int_{a_{-}}^{a_{+}}|\rho^{\prime}(x,s)-\rho(x,s)|\ dx \tag{2.4}\] \[+e^{C_{0}(t-s)}\int_{s}^{t}\big{|}H(a_{+},\theta,\rho^{\prime}(a_ {+},\theta))-H(a_{+},\theta,\rho(a_{+},\theta))\big{|}\ d\theta,\] _where_ \[C_{0}=\max_{x\in[a_{-},a_{+}]}\max_{t\in[t_{0},T]}\max_{\rho\in[P_{-},P_{+}]}|H_{x \rho}(x,t,\rho)|.\]

**Remark 2.1** From Proposition 2.1**(iii)** we learn that two solutions \(\rho\) and \(\rho^{\prime}\) are equal in \([a_{-},a_{+}]\times[t_{0},T]\) if they coincide at \(t=t_{0}\) and at \(x=a_{+}\). This confirms the fact that under the assumption \(H_{\rho}>0\), the boundary \(a_{-}\) is free. In particular, \(\rho(x,t;{\bf q}(t))\) is the unique solution which satisfies the stochastic boundary condition at \(x=a_{+}\). \(\Box\)

The proof of Proposition 2.1 will be given at the end of this section. We continue with a precise description for the PDMP \(\rho(\cdot,t)\) in terms of \({\bf q}(t)\) and some preparatory steps toward the proof of Proposition 2.1 and Theorem 2.1.

**Definition 2.2(i)** To ease the notation, we write \(\lambda(x,t,\rho)\) and \(A(x,t,\rho)\) for \((Af^{1})(x,t,\rho)\) and \((Af^{2})(x,t,\rho)\) respectively.
Given \({\bf q}\in\Delta_{n}\), we also set \[\Gamma(x,y,t,\rho) =\int_{x}^{y}\lambda(z,t,\phi_{x}^{z}(\rho;t))\ dz,\] \[\Gamma({\bf q},t) =\int_{a_{-}}^{a_{+}}\lambda\big{(}y,t,\rho\big{(}y,t;{\bf q} \big{)}\big{)}\ dy=\sum_{i=0}^{n}\Gamma(x_{i},x_{i+1},t,\rho_{i}).\]

**(ii)** We define a measure \(\mu(d{\bf q},t)\) on the set \(\Delta\) that is our candidate for the law of \({\bf q}(t)\). The restriction of \(\mu\) to \(\Delta_{n}\) is denoted by \(\mu^{n}(d{\bf q},t)\). This measure is explicitly given by \[\ell(t,d\rho_{0})\exp\left\{-\Gamma({\bf q},t)\right\}\prod_{i=1}^{n}\ f\big{(}x_{i},t,\phi_{x_{i-1}}^{x_{i}}( \rho_{i-1};t),\rho_{i})\ dx_{i}d\rho_{i},\] where \(f\) solves (1.7) and \(\ell\) solves (1.18). Note that if \(\rho(x,t)=R_{t}({\bf q}(t))(x)\), with \(R\) as in Definition **2.1(ii)**, then the process \(x\mapsto\rho(x,t)\), \(x\geq a_{-}\), is a Markov process associated with the generator \({\cal A}^{1}_{x,t}\), and an initial law \(\ell(t,\cdot)\).

**(iii)** Let us write \(T_{x}^{y}g(\rho)=g(\phi_{x}^{y}(\rho;t))\) and \(({\cal D}_{x}g)(\rho)=b(x,t,\rho)g^{\prime}(\rho)\) for its generator (to simplify the notation, we do not display the dependence of \(T_{x}^{y}\) and \({\cal D}_{x}\) on \(t\)). It is straightforward to show \[T_{x}^{y}\circ T_{y}^{z}=T_{x}^{z},\qquad\frac{dT_{x}^{y}}{dy}=T_{x}^{y}\circ{ \cal D}_{y},\qquad\frac{dT_{x}^{y}}{dx}=-{\cal D}_{x}\circ T_{x}^{y}. \tag{2.5}\] Indeed \[T_{x}^{y+\delta}g =T_{x}^{y}\big{(}g\circ\phi_{y}^{y+\delta}\big{)}=T_{x}^{y}\big{(} g+\delta{\cal D}_{y}g+o(\delta)\big{)}=T_{x}^{y}g+\delta\big{(}T_{x}^{y}\circ{ \cal D}_{y}\big{)}g+o(\delta),\] \[T_{x-\delta}^{y}g =T_{x-\delta}^{x}\big{(}T_{x}^{y}g\big{)}=\big{(}T_{x}^{y}g\big{)} \circ\phi_{x-\delta}^{x}=T_{x}^{y}g-\delta\big{(}{\cal D}_{x}\circ T_{x}^{y} \big{)}g+o(\delta).\] In the following lemma, we derive several identities that we will use for the proof of Proposition 2.1 and Theorem 2.1.

**Lemma 2.1**: _The following identities are true:_ \[b(x,t,\rho)\Gamma_{\rho}(x,y,t,\rho)=-\Gamma_{x}(x,y,t,\rho)+ \lambda(x,t,\rho)=-\int_{x}^{y}\big{[}\lambda\big{(}z,t,\phi_{x}^{z}(\rho;t) \big{)}\big{]}_{x}\ dz, \tag{2.6}\] \[b_{t}=\beta_{x}+b\beta_{\rho}-b_{\rho}\beta, \tag{2.7}\] \[\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{t}=\beta(y,t,\phi_{x}^{y}(\rho ;t))-\beta\big{(}x,t,\rho\big{)}\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{\rho}, \tag{2.8}\] \[\lambda_{t}(x,t,\rho)+\beta(x,t,\rho)\lambda_{\rho}(x,t,\rho)=b(x,t,\rho)A_{\rho}(x,t,\rho)+A_{x}(x,t,\rho), \tag{2.9}\] \[\Gamma_{t}(x,y,t,\rho)+\beta(x,t,\rho)\Gamma_{\rho}(x,y,t,\rho)=A \big{(}y,t,\phi_{x}^{y}(\rho;t)\big{)}-A(x,t,\rho), \tag{2.10}\] \[\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{x}+b(x,t,\rho)\big{[}\phi_{x}^ {y}(\rho;t)\big{]}_{\rho}=0. \tag{2.11}\]
**Proof** For the proof of (2.6) use the definition of \(\Gamma\) and (2.5) to assert that the left-hand side of (2.6) equals \[\int_{x}^{y}b(x,t,\rho)\left[\lambda\big{(}z,t,\phi_{x}^{z}(\rho; t)\big{)}\right]_{\rho}\ dz= -\int_{x}^{y}\big{[}\lambda\big{(}z,t,\phi_{x}^{z}(\rho;t)\big{)} \big{]}_{x}\ dz\] \[=-\left[\int_{x}^{y}\lambda\big{(}z,t,\phi_{x}^{z}(\rho;t)\big{)} \ dz\right]_{x}+\lambda(x,t,\rho)\] \[=-\Gamma_{x}(x,y,t,\rho)+\lambda(x,t,\rho).\]

For (2.7) observe that by (1.6), \[\beta_{x}+b\beta_{\rho}-b_{\rho}\beta =\big{(}bH_{\rho}+H_{x}\big{)}_{x}+b\big{(}bH_{\rho}+H_{x}\big{)} _{\rho}-bb_{\rho}H_{\rho}-b_{\rho}H_{x}\] \[=bH_{\rho x}+b_{x}H_{\rho}+H_{xx}+bb_{\rho}H_{\rho}+b^{2}H_{\rho \rho}+bH_{\rho x}-bb_{\rho}H_{\rho}-b_{\rho}H_{x}\] \[=2bH_{\rho x}+b_{x}H_{\rho}+H_{xx}+b^{2}H_{\rho\rho}-b_{\rho}H_{x} =b_{t}.\]

We now turn to the proof of (2.8). Set \[X(x,y,t,\rho):=\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{t}-\beta(y,t,\phi_{x}^{y}( \rho;t))+\beta\big{(}x,t,\rho\big{)}\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{\rho}.\] We wish to show that \(X(x,y,t,\rho)=0\) for all \((x,y,t,\rho)\). This is trivially true when \(x=y\). On the other hand, \[X_{y}(x,y,t,\rho)= \big{[}b(y,t,\phi_{x}^{y}(\rho;t))\big{]}_{t}-(\beta_{y}+b\beta_{ \rho})(y,t,\phi_{x}^{y}(\rho;t))+\beta\big{(}x,t,\rho\big{)}\big{[}b(y,t,\phi_ {x}^{y}(\rho;t))\big{]}_{\rho}\] \[= b_{t}(y,t,\phi_{x}^{y}(\rho;t))+b_{\rho}(y,t,\phi_{x}^{y}(\rho;t ))\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{t}-(\beta_{y}+b\beta_{\rho})(y,t,\phi_ {x}^{y}(\rho;t))\] \[+\beta\big{(}x,t,\rho)b_{\rho}(y,t,\phi_{x}^{y}(\rho;t))\big{[} \phi_{x}^{y}(\rho;t)\big{]}_{\rho}\] \[= b_{\rho}(y,t,\phi_{x}^{y}(\rho;t))X(x,y,t,\rho),\] where we used (2.7) for the third equality. As a result, \[X(x,y,t,\rho)=X(x,x,t,\rho)\exp\left[\int_{x}^{y}b_{\rho}\big{(}z,t,\phi_{x}^{z}( \rho;t)\big{)}\ dz\right]=0.\] This completes the proof of (2.8).
For (2.9), we first observe \[\int_{\rho}^{\infty}Q(f)(x,t,\rho,\rho_{+})\ d\rho_{+}=0, \tag{2.12}\] because the left-hand side equals \[\int_{\rho}^{\infty}\int 1\!\!1\big{(}\rho_{*}\in(\rho,\rho_{+}) \big{)}\big{(}v(x,t,\rho_{*},\rho_{+})-v(x,t,\rho,\rho_{*})\big{)}f(x,t,\rho, \rho_{*})f(x,t,\rho_{*},\rho_{+})\ d\rho_{*}d\rho_{+}\] \[\quad-\int_{\rho}^{\infty}\big{[}A(x,t,\rho_{+})-A(x,t,\rho)-v(x, t,\rho,\rho_{+})\big{(}\lambda(x,t,\rho_{+})-\lambda(x,t,\rho)\big{)}\big{]}\ f(x,t,\rho,\rho_{+})\ d\rho_{+}\] \[\quad=\int_{\rho}^{\infty}\big{(}A(x,t,\rho)-v(x,t,\rho,\rho_{+}) \lambda(x,t,\rho)\big{)}f(x,t,\rho,\rho_{+})\ d\rho_{+}=0.\] We next integrate both sides of (1.7) with respect to \(\rho_{+}\) and use (2.12) to assert \[\lambda_{t}(x,t,\rho) =\int_{\rho}^{\infty}\big{\{}(Cf)(x,t,\rho,\rho_{+})+\big{[}(vf)( x,t,\rho,\rho_{+})\big{]}_{x}\big{\}}\ d\rho_{+} \tag{2.13}\] \[=\int_{\rho}^{\infty}(Cf)(x,t,\rho,\rho_{+})\ d\rho_{+}+A_{x}(x,t,\rho).\] On the other hand, \[\int\big{(}C^{-}f\big{)}(x,t,\rho,\rho_{+})\ d\rho_{+}= b(x,t,\rho)\int_{\rho}^{\infty}(vf)_{\rho}(x,t,\rho,\rho_{+})\ d\rho_{+}-\beta(x,t,\rho)\int_{\rho}^{\infty}f_{ \rho}(x,t,\rho,\rho_{+})\ d\rho_{+}\] \[= (bA_{\rho}-\beta\lambda_{\rho})(x,t,\rho)+(bH_{\rho}-\beta)(x,t, \rho)f(x,t,\rho,\rho),\] \[\int\big{(}C^{+}f\big{)}(x,t,\rho,\rho_{+})\ d\rho_{+}= \int_{\rho}^{\infty}\big{[}K(x,t,\rho_{+},\rho)f(x,t,\rho,\rho_{+}) \big{]}_{\rho_{+}}\ d\rho_{+}\] \[= (\beta-bH_{\rho})(x,t,\rho)f(x,t,\rho,\rho),\] because \(f(x,t,\rho,\infty)=0\), and \[\lim_{\rho_{+}\to\rho}v(x,t,\rho,\rho_{+})=H_{\rho}(x,t,\rho).\] From this and (2.13), we deduce (2.9).

We now turn to the proof of (2.10). We rewrite (2.10) as \[\int_{x}^{y}\left[\lambda(z,t,\phi_{x}^{z}(\rho;t))\right]_{t}\,dz+\beta(x,t, \rho)\int_{x}^{y}\left[\lambda(z,t,\phi_{x}^{z}(\rho;t))\right]_{\rho}\,dz= \int_{x}^{y}\left[A\big{(}z,t,\phi_{x}^{z}(\rho;t)\big{)}\right]_{z}\,dz.\] For this, it suffices to check \[\big{[}\lambda(z,t,\phi_{x}^{z}(\rho;t))\big{]}_{t}+\beta(x,t,\rho)\,\left[ \lambda(z,t,\phi_{x}^{z}(\rho;t))\right]_{\rho}=\big{[}A\big{(}z,t,\phi_{x}^{z }(\rho;t)\big{)}\big{]}_{z}, \tag{2.14}\] for every \((x,z,t,\rho)\). By (2.8), the identity (2.14) is equivalent to \[\lambda_{t}(z,t,\phi_{x}^{z}(\rho;t))+\beta(z,t,\phi_{x}^{z}(\rho;t))\lambda_ {\rho}(z,t,\phi_{x}^{z}(\rho;t))=\big{[}A\big{(}z,t,\phi_{x}^{z}(\rho;t)\big{)} \big{]}_{z}.\] This is an immediate consequence of (2.9) because \[\big{[}A\big{(}z,t,\phi_{x}^{z}(\rho;t)\big{)}\big{]}_{z}=b(z,t,\phi_{x}^{z}( \rho;t))A_{\rho}\big{(}z,t,\phi_{x}^{z}(\rho;t)\big{)}+A_{z}\big{(}z,t,\phi_{ x}^{z}(\rho;t)\big{)}.\] The proof of (2.10) is complete. Finally, (2.11) is simply the third equation of (2.5) applied to the function \(g(\rho)=\rho\). \(\square\)

We are now ready to establish Proposition 2.1.

**Proof of Proposition 2.1(i)** We first show that \(\rho\) solves (1.2) classically away from the shock curves. For this, take a point \((x,t)\) such that \(x\in\big{(}x_{i}(t),x_{i+1}(t)\big{)}\), for some nonnegative integer \(i\). Let us write \(\hat{\phi}_{x}^{y}(\rho;t)\) and \(\tilde{\phi}_{x}^{y}(\rho;t)\) for the partial derivatives \(\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{\rho}\) and \(\big{[}\phi_{x}^{y}(\rho;t)\big{]}_{x}\) respectively.
From \(\rho(x,t)=\phi_{x_{i}(t)}^{x}(\rho_{i}(t);t)\), we learn \[\begin{split}\rho_{t}(x,t)=&-v\big{(}x_{i}(t),t, \hat{\rho}_{i-1}(t),\rho_{i}(t)\big{)}\tilde{\phi}_{x_{i}(t)}^{x}(\rho_{i}(t); t)\\ &+\beta(x,t,\rho(x,t))-\beta\big{(}x_{i}(t),t,\rho_{i}(t)\big{)} \,\,\hat{\phi}_{x_{i}(t)}^{x}(\rho_{i}(t);t)\\ &-K\big{(}x_{i}(t),t,\rho_{i}(t),\hat{\rho}_{i-1}(t)\big{)}\hat{ \phi}_{x_{i}(t)}^{x}(\rho_{i}(t);t)\\ =&-v\big{(}x_{i}(t),t,\hat{\rho}_{i-1}(t),\rho_{i}(t )\big{)}\tilde{\phi}_{x_{i}(t)}^{x}(\rho_{i}(t);t)+\beta(x,t,\rho(x,t))\\ &-v\big{(}x_{i}(t),t,\hat{\rho}_{i-1}(t),\rho_{i}(t)\big{)}b\big{(} x_{i}(t),t,\rho_{i}(t)\big{)}\hat{\phi}_{x_{i}(t)}^{x}(\rho_{i}(t);t)\\ =&\beta(x,t,\rho(x,t))=H(x,t,\rho(x,t))_{x},\end{split} \tag{2.15}\] as desired. Here we used (2.8) for the first equality, and (2.11) for the third equality. Since the Rankine-Hugoniot Formula is valid at shock curves by our construction, and (1.2) holds classically away from the shock curves, we deduce that \(\rho\) is a weak solution of (1.2). On the other hand, since \(\rho(x_{i}(t)-,t)<\rho(x_{i}(t)+,t)\) by construction, we deduce that \(\rho\) is an entropy solution in \((a_{-},a_{+})\times(t_{0},T)\).

**(ii)** From the way the boundary dynamics is described in **(3)**, the process \(m(t)\) depends on the particle system to the left of \(a_{+}\). Nonetheless we show that if the process \(\bar{m}(t)\) is a Markov process with generator \({\cal A}^{2}_{a_{+},t}\), and initial state \(m_{0}=\rho^{0}(a_{+})\), then \(m(t)=\bar{m}(t)\). To verify this, let us construct the process \(t\mapsto\bar{m}(t)\) with the aid of a sequence of independent standard exponential random variables \(\big{(}\tau_{i}:\ i\in\mathbb{N}\big{)}\). Let us write \(\gamma^{t}_{s}(\rho)\) for the flow of the ODE associated with the speed \(\beta(a_{+},t,\rho)\), and define \[\eta(t,m)=\int_{m}^{\infty}f^{2}(a_{+},t,m,\rho_{+})\ d\rho_{+}. \tag{2.16}\] Now construct a sequence \({\bf z}=\big{(}(\sigma_{i},m_{i}):\ i=0,1,\dots\big{)}\) inductively by the following recipe: \(\sigma_{0}=t_{0}\), and given \((\sigma_{i},m_{i})\), we set \[\sigma_{i+1}=\min\left\{s>\sigma_{i}:\ \int_{\sigma_{i}}^{s}\eta\big{(}\theta, \gamma_{\sigma_{i}}^{\theta}(m_{i})\big{)}\ d\theta\geq\tau_{i+1}\right\}, \ \ \ \ \hat{m}_{i}=\gamma_{\sigma_{i}}^{\sigma_{i+1}}(m_{i}),\] and select \(m_{i+1}\) randomly according to the probability measure \[\eta\big{(}\sigma_{i+1},\hat{m}_{i}\big{)}^{-1}\ f^{2}\big{(}a_{+},\sigma_{i +1},\hat{m}_{i},m_{i+1}\big{)}\ dm_{i+1}.\] Using our sequence \({\bf z}\), we construct \(\bar{m}(t)\) by \[\bar{m}(t)=\sum_{i=0}^{\infty}\gamma_{\sigma_{i}}^{t}(m_{i})1\!\!1\big{(}t\in[ \sigma_{i},\sigma_{i+1})\big{)}.\] By the very construction of the processes \(m(t)\) and \(\bar{m}(t)\), the desired equality \(m(t)=\bar{m}(t),\ t\geq t_{0}\), would follow if we can show that \(m(t)=\bar{m}(t)\) for \(t\in(\sigma_{i-1},\sigma_{i})\) for every \(i\in\mathbb{N}\). This can be checked by induction on \(i\). If there are exactly \(n\) particles to the left of \(a_{+}\), and we already know that \(\hat{\rho}_{n}(\sigma_{i})=\hat{m}_{i-1}\), then we can guarantee that \(\rho_{n+1}(\sigma_{i})=m_{i}\). Moreover, \(\hat{\rho}_{n+1}(t)=\gamma_{\sigma_{i}}^{t}(m_{i})\) for \(t\in(\sigma_{i},\sigma_{i+1})\), because the function \(\zeta(t)=\phi_{x_{n}(t)}^{a_{+}}(\rho_{n}(t);t)\) satisfies \[\dot{\zeta}(t)=\beta(a_{+},t,\zeta(t)), \tag{2.17}\] by (2.15) in the case of \(x=a_{+}\). This completes the proof of part **(ii)**.
**(iii)** The proof of (2.4) is a standard application of the celebrated Kruzhkov inequality [K], and we only sketch it. It is not hard to show that the piecewise \(C^{1}\) entropy solutions \(\rho\) and \(\rho^{\prime}\) can be extended to entropy solutions that are defined on a larger domain \((b_{-},b_{+})\times[t_{0},T)\), with \(b_{-}<a_{-}<a_{+}<b_{+}\). With a slight abuse of notation, we write \(\rho\) and \(\rho^{\prime}\) for these extensions. Given an arbitrary constant \(c\), the following Kruzhkov entropy inequalities hold weakly in \((b_{-},b_{+})\times[t_{0},T)\): \[|\rho(x,t)-c|_{t} \leq|H(x,t,\rho(x,t))-H(x,t,c)|_{x}+\ sgn\big{(}\rho(x,t)-c\big{)}H_{x}(x,t,c),\] \[|\rho^{\prime}(x,t)-c|_{t} \leq|H(x,t,\rho^{\prime}(x,t))-H(x,t,c)|_{x}+\ sgn\big{(}\rho^{\prime}(x,t)-c\big{)}H_{x}(x,t,c).\] This allows us to use Kruzhkov's standard arguments as in [K] to deduce \[|\rho(x,t)-\rho^{\prime}(x,t)|_{t}\leq |H(x,t,\rho(x,t))-H(x,t,\rho^{\prime}(x,t))|_{x}\] \[-\ sgn\big{(}\rho(x,t)-\rho^{\prime}(x,t)\big{)}\big{(}H_{x}(x,t,\rho(x,t))-H_{x}(x,t,\rho^{\prime}(x,t))\big{)}\] \[\leq |H(x,t,\rho(x,t))-H(x,t,\rho^{\prime}(x,t))|_{x}+C_{0}|\rho(x,t)-\rho^{\prime}(x,t)|,\] weakly in \((b_{-},b_{+})\times[t_{0},T)\). From this, we can readily deduce \[\big{[}e^{-C_{0}t}|\rho(x,t)-\rho^{\prime}(x,t)|\big{]}_{t}\leq e^{-C_{0}t}|H(x,t,\rho(x,t))-H(x,t,\rho^{\prime}(x,t))|_{x}, \tag{2.18}\] weakly in \((b_{-},b_{+})\times[t_{0},T)\). We wish to integrate both sides of (2.18) with respect to \(x\) from \(a_{-}\) to \(a_{+}\). To perform this integration rigorously, we take a smooth function \(\gamma\) of compact support with \(\int\gamma\ dx=1\), rescale it as \(\gamma_{\varepsilon}(x)=\varepsilon^{-1}\gamma(x/\varepsilon)\), and choose \(\tau_{\varepsilon}(x)\) so that \(\tau_{\varepsilon}\geq 0\), \(\tau_{\varepsilon}^{\prime}(x)=\gamma_{\varepsilon}(x-a_{-})-\gamma_{\varepsilon}(x-a_{+})\), and \(\tau_{\varepsilon}(b_{-})=0\). For small \(\varepsilon\), the function \(\tau_{\varepsilon}\) is supported in \((b_{-},b_{+})\). We can now integrate both sides of (2.18) against \(\tau_{\varepsilon}\) to deduce that weakly \[\left[e^{-C_{0}t}\int|\rho(x,t)-\rho^{\prime}(x,t)|\tau_{\varepsilon}(x)\ dx\right]_{t}\leq-\int e^{-C_{0}t}|H(x,t,\rho(x,t))-H(x,t,\rho^{\prime}(x,t))|\ \tau_{\varepsilon}^{\prime}(x)\ dx.\] Sending \(\varepsilon\) to \(0\), we arrive at \[\left[e^{-C_{0}t}\int_{a_{-}}^{a_{+}}|\rho(x,t)-\rho^{\prime}(x,t)|\ dx\right]_{t}\leq e^{-C_{0}t}|H(a_{+},t,\rho(a_{+},t))-H(a_{+},t,\rho^{\prime}(a_{+},t))|\] \[-e^{-C_{0}t}|H(a_{-},t,\rho(a_{-},t))-H(a_{-},t,\rho^{\prime}(a_{-},t))|.\] Integrating both sides over the time interval \([s,t]\) yields \[e^{-C_{0}t}\int_{a_{-}}^{a_{+}}|\rho(x,t)-\rho^{\prime}(x,t)|\ dx\leq e^{-C_{0}s}\int_{a_{-}}^{a_{+}}|\rho(x,s)-\rho^{\prime}(x,s)|\ dx\] \[+\int_{s}^{t}e^{-C_{0}\theta}|H(a_{+},\theta,\rho(a_{+},\theta))-H(a_{+},\theta,\rho^{\prime}(a_{+},\theta))|\ d\theta\] \[-\int_{s}^{t}e^{-C_{0}\theta}|H(a_{-},\theta,\rho(a_{-},\theta))-H(a_{-},\theta,\rho^{\prime}(a_{-},\theta))|\ d\theta\] \[\leq e^{-C_{0}s}\int_{a_{-}}^{a_{+}}|\rho(x,s)-\rho^{\prime}(x,s)|\ dx\] \[+\int_{s}^{t}e^{-C_{0}\theta}|H(a_{+},\theta,\rho(a_{+},\theta))-H(a_{+},\theta,\rho^{\prime}(a_{+},\theta))|\ d\theta.\] This evidently implies (2.4). \(\square\) ## 3 Forward Equation As a preliminary step for establishing Theorem 2.1, we derive a Kolmogorov-type forward equation for the measure \(\mu(d{\bf q},t)\). We first introduce some notation for the particle dynamics.
**Definition 3.1(i)** For \(0\leq s\leq t\) and \({\bf q}\in\Delta\), we write \(\psi_{s}^{t}{\bf q}\) for the deterministic evolution from time \(s\) to \(t\) of the configuration \({\bf q}\) according to the annihilating particle dynamics of Definition 2.1**(iii)**, _without_ random entry dynamics at \(x=a_{+}\). **(ii)** Given a configuration \({\bf q}=\big{(}(x_{0},\rho_{0}),\ldots,(x_{n},\rho_{n})\big{)}\) and \(\rho_{+}\in\mathbb{R}\), write \(\epsilon_{\rho_{+}}{\bf q}\) for the configuration \(\big{(}(x_{0},\rho_{0}),\ldots,(x_{n},\rho_{n}),(a_{+},\rho_{+})\big{)}\). **(iii)** Write \(\Psi_{s}^{t}{\bf q}\) for the _random_ evolution of the configuration according to deterministic particle dynamics interrupted with random entries at \(x=a_{+}\) according to the boundary process as in **(3)** in Definition 2.1**(iii)**, where the latter has been started at time \(s\) with value \(\phi_{x_{n}}^{a_{+}}(\rho_{n};s)\). In particular, if the jumps between times \(s\) and \(t\) occur at times \(\tau_{1}<\cdots<\tau_{k}\) with values \(m_{1},\cdots,m_{k}\), then \[\Psi_{s}^{t}{\bf q}=\psi_{\tau_{k}}^{t}\epsilon_{m_{k}}\psi_{\tau_{k-1}}^{\tau_{k}}\epsilon_{m_{k-1}}\cdots\psi_{\tau_{1}}^{\tau_{2}}\epsilon_{m_{1}}\psi_{s}^{\tau_{1}}{\bf q}. \tag{3.1}\] **(iv)** For \(n\geq 1\), and \(i\in\{0,\ldots,n-1\}\), we write \(\partial_{i}\Delta_{n}\) for the portion of the boundary of \(\Delta_{n}\) on which \(x_{i}=x_{i+1}\). Note that \({\bf q}(t)\) reaches the boundary set \(\partial_{0}\Delta_{n}\) at time \(\tau\) if at this time \(x_{1}(\tau)=a_{-}\). For time \(t\) immediately after \(\tau\), the configuration \({\bf q}(t)\) belongs to \(\Delta_{n-1}\) with \(\rho_{0}\) taking a new value. Similarly \({\bf q}(t)\) reaches the boundary set \(\partial_{i}\Delta_{n}\) for some \(i>0\) at time \(\tau\) if at this time \(x_{i+1}\) collides with \(x_{i}\). For time \(t\) immediately after \(\tau\), the configuration \({\bf q}(t)\) belongs to \(\Delta_{n-1}\). We also set \[\hat{\partial}\Delta_{n}=\cup_{i=0}^{n}\partial_{i}\Delta_{n}.\] **(v)** We write \(\partial_{n+1}\Delta_{n+1}\) for the set of points \({\bf q}\in\Delta_{n+1}\) with \(x_{n+1}=a_{+}\). When \({\bf q}\in\Delta_{n}\), and a new particle is created at \(a_{+}\) at time \(\tau\) by the stochastic boundary dynamics, the configuration \({\bf q}(\tau+)\) is regarded as a boundary point in \(\partial_{n+1}\Delta_{n+1}\). **(vi)** Given a function \(G:\Delta\to\mathbb{R}\), we write \(G^{n}\) for the restriction of the function \(G\) to the set \(\Delta_{n}\). Also, given a measure \(\nu\) on \(\Delta\), we write \(\nu^{n}\) for its restriction to \(\Delta_{n}\). **(vii)** We write \(\mathcal{L}=\mathcal{L}^{t}\) for the generator of the (inhomogeneous Markov) process \({\bf q}(t)\). This generator can be expressed as \(\mathcal{L}=\mathcal{L}_{0}+\mathcal{L}_{b}\), where \(\mathcal{L}_{0}\) is the generator of the deterministic part of the dynamics, and \(\mathcal{L}_{b}\) represents the Markovian boundary dynamics. The deterministic and stochastic dynamics restricted to \(\Delta_{n}\) have generators that are denoted by \(\mathcal{L}_{0n}\) and \(\mathcal{L}_{bn}\) respectively. While \({\bf q}(t)\) remains in \(\Delta_{n}\), its evolution is governed by an ODE of the form \[\frac{d{\bf q}}{dt}(t)={\bf b}\big{(}{\bf q}(t),t\big{)},\] with \({\bf b}={\bf b}_{n}:\Delta_{n}\to\mathbb{R}^{2n+1}\), which can be easily described with the aid of rules **(1)** and **(2)** of Definition 2.1**(iii)**, and (2.3).
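Before writing this generator down explicitly, it may help to see the two objects side by side in code: the ODE \(d{\bf q}/dt={\bf b}({\bf q},t)\), and the directional derivative \({\bf b}\cdot\nabla F\) that defines \(\mathcal{L}_{0n}\) in the display below. This is a schematic sketch only; the drift `b` is a hypothetical stand-in for the vector field determined by rules **(1)** and **(2)**, which we do not reproduce.

```python
# A schematic sketch: `b(q, t)` is a hypothetical stand-in for the drift b_n
# of Definition 3.1(vii).  We step dq/dt = b(q, t) by explicit Euler, and
# approximate (L_{0n} F)(q) = b(q, t) . grad F(q) by central differences.
import numpy as np

def flow_step(q, t, b, dt):
    """One explicit Euler step of dq/dt = b(q, t) while q stays in Delta_n."""
    return np.asarray(q, dtype=float) + dt * np.asarray(b(q, t))

def generator_L0n(F, q, t, b, h=1e-6):
    """Finite-difference approximation of (L_{0n} F)(q, t) = b . grad F."""
    q = np.asarray(q, dtype=float)
    grad = np.empty_like(q)
    for k in range(q.size):
        e = np.zeros_like(q)
        e[k] = h
        grad[k] = (F(q + e) - F(q - e)) / (2.0 * h)
    return float(np.asarray(b(q, t)) @ grad)
```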
Given this vector field, the generator \(\mathcal{L}_{0n}\) is given by \[\mathcal{L}_{0n}F={\bf b}\cdot\nabla F,\] where \(\nabla F\) is the full gradient of \(F\) with respect to the variables \(\big{(}\rho_{0},x_{1},\rho_{1},\ldots,x_{n},\rho_{n}\big{)}\). We also write \(\mathcal{L}_{0n}^{*}\) for the adjoint of \(\mathcal{L}_{0n}\) with respect to the Lebesgue measure: \[\mathcal{L}_{0n}^{*}\mu=-\nabla\cdot(\mu{\bf b}).\] \(\square\) It is not hard to show that \(t\mapsto\Psi_{s}^{t}{\bf q},\ t\geq s\) is indeed a strong Markov process. This is rather straightforward, and we refer to Davis [D] for details. We establish Theorem 2.1 by verifying the forward equation \(\dot{\mu}=\mathcal{L}^{*}\mu\), or equivalently \[\dot{\mu}^{n}=\big{(}\mathcal{L}^{*}\mu\big{)}^{n}, \tag{3.2}\] for all \(n\geq 0\), where \(\mu\) was defined in Definition 2.2**(ii)**, and \(\mathcal{L}^{*}\) is the adjoint of the operator \(\mathcal{L}\). To explain this, observe that Theorem 2.1 offers a candidate for the law of \({\bf q}(t)\), namely the measure \(\mu(d{\bf q},t)\). Hence for our Theorem 2.1, it suffices to show \[\int G\big{(}{\bf q},t\big{)}\ \mu\big{(}d{\bf q},t\big{)}=\mathbb{E}\int G\big{(}\Psi_{0}^{t}{\bf q},t\big{)}\ \mu\big{(}d{\bf q},0\big{)}, \tag{3.3}\] for every function \(G\) of the form \[G({\bf q},t)=\exp\left(i\int_{a_{-}}^{a_{+}}\rho(x,t;{\bf q})\varphi(x)\ dx\right), \tag{3.4}\] for some smooth function \(\varphi\) (we refer to the beginning of Section 3 of [KR1] for more details). Here and below, we write \(\mathbb{P}\) and \(\mathbb{E}\) for the probability and the expected value for the randomness associated with the boundary dynamics. To ease the notation, we set \(\hat{G}({\bf q},s)=\mathbb{E}\ G(\Psi_{s}^{t}{\bf q},t)\). We establish (3.3) by verifying \[\frac{d}{ds}\int\hat{G}({\bf q},s)\ \mu(d{\bf q},s)=0, \tag{3.5}\] for \(t_{0}<s<t\). The differentiation of \(\mu(d{\bf q},s)\) can be carried out directly and poses no difficulty. As for the contribution of \(G(\Psi_{s}^{t}{\bf q},t)\) to the \(s\)-derivative, we wish to show \[\int\hat{G}_{s}({\bf q},s)\ \mu(d{\bf q},s)=-\int\big{(}{\cal L}^{s}\hat{G}\big{)}({\bf q},s)\ \mu(d{\bf q},s). \tag{3.6}\] Since the deterministic part of the evolution is discontinuous in time, the justification of (3.6) requires some work. Additionally, to make sense of the right-hand side, we need \(\hat{G}\) to be in the domain of the definition of \({\cal L}^{s}_{0}\). We expect \(\hat{G}\) to be weakly differentiable with respect to \({\bf q}\). To avoid the differentiability question of \(\hat{G}\), we would formally apply an integration by parts to the right-hand side of (3.6), so that the differentiation operator would act on the density of \(\mu\), which is differentiable. We also have a boundary contribution that corresponds to the collisions between particles. We establish the following variant of the forward equation (3.6).
**Theorem 3.1**: _We have_ \[\begin{split}\lim_{s^{\prime}\uparrow s}\int_{\Delta_{n}}\frac{\hat{G}^{n}({\bf q},s^{\prime})-\hat{G}^{n}({\bf q},s)}{s-s^{\prime}}\ \mu^{n}({\bf q},s)\ d{\bf q}=&\int_{\Delta_{n}}({\cal L}^{s}_{b}\hat{G})^{n}({\bf q},s)\mu^{n}({\bf q},s)\ d{\bf q}\\ &+\int_{\Delta_{n}}\hat{G}^{n}({\bf q},s)\big{(}{\cal L}^{s*}_{0n}\mu^{n}\big{)}({\bf q},s)\ d{\bf q}\\ &+\int_{\hat{\partial}\Delta_{n+1}}\hat{G}^{n+1}({\bf q},s)\mu^{n+1}({\bf q},s)({\bf b}_{n+1}\cdot{\bf N}_{n+1})\ \sigma(d{\bf q}),\end{split} \tag{3.7}\] _where \({\bf N}_{n+1}\) is the outer unit normal vector of \(\hat{\partial}\Delta_{n+1}\), and \(\sigma(d{\bf q})\) is the surface measure of \(\hat{\partial}\Delta_{n+1}\)._ Note that for the differentiation in (3.7) we will need to compare \(\hat{G}({\bf q},s)\) and \(\hat{G}({\bf q},s^{\prime})\) for \(t_{0}<s^{\prime}<s\leq t\). As a warm-up we verify the Lipschitzness of the function \(s\mapsto\hat{G}({\bf q},s)\). **Lemma 3.1**: _Fix \(t>t_{0}\). There exists a constant \(C_{1}=C_{1}(\varphi,H,f)\) such that_ \[\big{|}\hat{G}({\bf q},s^{\prime})-\hat{G}({\bf q},s)\big{|}\leq C_{1}(n+1)|s^{\prime}-s|, \tag{3.8}\] _for all \({\bf q}\in\Delta_{n}\) and \(s,s^{\prime}\in[t_{0},t]\)._ The proof follows from the \(L^{1}\)-stability (2.4) and a coupling argument for the stochastic boundary dynamics. We skip the proof of Lemma 3.1 because it is very similar to the proof of the analogous Lemma 3.1 that appeared in [KR2]. Armed with (3.8), we are now ready for the proof of (3.7). **Proof of Theorem 3.1**_(Step 1)_ Let \(t_{0}<s^{\prime}<s\leq t\). We first show that we can separate the deterministic and stochastic portions of the dynamics over the time interval \([s^{\prime},s]\), when \(s-s^{\prime}\) is small. Write \(\tau=\tau({\bf q},s^{\prime})\) for the first time a jump occurs at \(x=a_{+}\) after time \(s^{\prime}\), and let \(E\) denote the event that \(\tau\in(s^{\prime},s)\). We also write \[\hat{\rho}_{n}=R_{s}({\bf q})(a_{+})=\phi_{x_{n}}^{a_{+}}(\rho_{n};s),\ \ \ \ \hat{\rho}_{n}^{\prime}=R_{s^{\prime}}({\bf q})(a_{+})=\phi_{x_{n}}^{a_{+}}(\rho_{n};s^{\prime}).\] By the Lipschitz regularity of \(b\), we can show that \(\hat{\rho}_{n}-\hat{\rho}_{n}^{\prime}=O(s-s^{\prime})\) (see also (3.14) below). Recall that \(\gamma\) denotes the flow associated with the ODE (2.17), and \(\eta\) was defined in (2.16). Observe that by the Lipschitz regularity of \(\eta\) (which is a consequence of the Lipschitz regularity of \(v\) and \(f\)), \[\mathbb{P}\big{(}E\big{)}=\int_{s^{\prime}}^{s}\eta\big{(}\theta,\gamma_{s^{\prime}}^{\theta}(\hat{\rho}_{n}^{\prime})\big{)}\ d\theta+O((s-s^{\prime})^{2})=(s-s^{\prime})\eta\big{(}s,\hat{\rho}_{n}\big{)}+O((s-s^{\prime})^{2}), \tag{3.9}\] with both errors bounded uniformly over \({\bf q}\). We claim that there exists a constant \(c_{1}\) so that for \({\bf q}\in\Delta_{n}\), \[\begin{split}\hat{G}({\bf q},s^{\prime})=&\ (s-s^{\prime})\int_{\hat{\rho}_{n}}^{\infty}\Big{(}\mathbb{E}\left[\hat{G}\big{(}\epsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}\ \big{|}\ E\right]-\hat{G}(\psi_{s^{\prime}}^{s}{\bf q},s)\Big{)}\,f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\ d\rho_{+}\\ &+\hat{G}(\psi_{s^{\prime}}^{s}{\bf q},s)+(s-s^{\prime})^{2}R({\bf q},s^{\prime},s),\end{split} \tag{3.10}\] where the remainder satisfies \(|R({\bf q},s^{\prime},s)|\leq c_{1}(n+1)\). By (3.8), \[\Big{|}\mathbb{E}\ \hat{G}\big{(}\epsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},\tau\big{)}\mbox{$1\!\!1$}_{E}-\mathbb{E}\ \hat{G}\big{(}\epsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},s\big{)}\mbox{$1\!\!1$}_{E}\Big{|}\leq C_{1}(n+1)(s-s^{\prime})\mathbb{P}(E).
\tag{3.13}\] Next we modify the distribution from which \(\rho_{+}\) is selected; at present, \(\rho_{+}\) is selected according to a random measure with density \[\hat{f}^{2}\big{(}a_{+},\tau,\tilde{\rho}_{n},\rho_{+}\big{)}:=\eta\big{(}\tau,\tilde{\rho}_{n}\big{)}^{-1}f^{2}\big{(}a_{+},\tau,\tilde{\rho}_{n},\rho_{+}\big{)},\] where \(\tilde{\rho}_{n}:=\gamma_{s^{\prime}}^{\tau}(\hat{\rho}_{n}^{\prime})\). From \(H\in C^{2}\), and the Lipschitzness of \(b^{i}(x,s,\rho)\) for \(i=1,2\), it is not hard to show that there exists a constant \(c_{2}\) such that \[\big{|}\hat{\rho}_{n}^{\prime}-\hat{\rho}_{n}\big{|}\leq c_{2}|s^{\prime}-s|,\ \ \ \ \big{|}\hat{\rho}_{n}^{\prime}-\tilde{\rho}_{n}\big{|}\leq c_{2}|s^{\prime}-s|. \tag{3.14}\] Let us write \(\hat{\rho}_{+}\) for an independent random variable distributed as \(\hat{f}^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\ d\rho_{+}\). Observe \[\eta(\theta,m)=\int_{m}^{\infty}f^{2}(a_{+},\theta,m,\rho_{+})\ d\rho_{+}\geq\left[\min_{\theta^{\prime}\in[t_{0},T]}H_{\rho}(a_{+},\theta^{\prime},P_{-})\right]\int_{m}^{\infty}f(a_{+},\theta,m,\rho_{+})\ d\rho_{+}.\] From this, (3.14), and the Lipschitzness of \(f^{2}=vf\) we can readily show \[\big{|}\hat{f}^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})-\hat{f}^{2}\big{(}a_{+},\tau,\tilde{\rho}_{n},\rho_{+}\big{)}\big{|}\leq c_{3}|s^{\prime}-s|, \tag{3.15}\] for a constant \(c_{3}\). We then use (3.14) and (3.15) to assert that there exists a constant \(c_{4}\) such that the expression \[\Big{|}\mathbb{E}\ \left[\hat{G}\big{(}\epsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},s\big{)}-\hat{G}\big{(}\epsilon_{\hat{\rho}_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},s\big{)}\right]\mbox{$1\!\!1$}_{E}\Big{|}\] is bounded above by \[\bigg{|}\mathbb{E}\ \mbox{$1\!\!1$}_{E}\int_{\tilde{\rho}_{n}\vee\hat{\rho}_{n}}^{\infty}\hat{G}\big{(}\epsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},s\big{)}\big{(}\hat{f}^{2}\big{(}a_{+},\tau,\tilde{\rho}_{n},\rho_{+}\big{)}-\hat{f}^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\big{)}\ d\rho_{+}\bigg{|}+R_{1}+R_{2},\] where \(R_{1}\) and \(R_{2}\) are bounded by a constant multiple of \(n+1\). This and (3.11) complete the proof of (3.10). _(Step 2)_ We wish to establish (3.7) with the aid of (3.10). Observe that \(\mu(d{\bf q},s)\) is the law of a Markov process with bounded jump rates. For such a Markov process, we can readily show that if \({\bf n}({\bf q})\) denotes the number of jumps/particles of \({\bf q}\) in the interval \([a_{-},a_{+}]\), then \[\sup_{s\in[t_{0},T]}\int{\bf n}({\bf q})^{k}\ \mu(d{\bf q},s)<\infty, \tag{3.16}\] for every \(k\in\mathbb{N}\). Indeed if we choose \(\delta_{0}\) so that \(\lambda(x,t_{0},\rho)\geq\delta_{0}\) for all \((x,\rho)\in[a_{-},a_{+}]\times[P_{-},P_{+}]\), then there exists a Poisson random variable \(N_{\delta_{0}}\) of intensity \(\delta_{0}^{-1}(a_{+}-a_{-})\) such that \({\bf n}({\bf q})\leq N_{\delta_{0}}\) almost surely.
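The domination \({\bf n}({\bf q})\leq N_{\delta_{0}}\) is the usual thinning coupling for point processes. As a purely illustrative sketch (with a hypothetical intensity `rate`), one can realize a point process on \([a_{-},a_{+}]\) of spatial intensity at most \(1/\delta_{0}\) by thinning a homogeneous Poisson process of rate \(1/\delta_{0}\); the accepted points are then a subset of the candidates, which is exactly the almost-sure domination used above.

```python
# Thinning coupling behind (3.16): candidates form a Poisson process of rate
# rate_max = 1/delta_0; each candidate is kept with probability
# rate(x)/rate_max, so the kept count (the jumps of q) can never exceed the
# Poisson candidate count.  `rate` is a hypothetical stand-in intensity.
import random

def thinned_counts(a_minus, a_plus, rate, rate_max):
    """Return (accepted, candidates); accepted <= candidates always holds."""
    accepted = candidates = 0
    x = a_minus
    while True:
        x += random.expovariate(rate_max)   # next candidate point
        if x > a_plus:
            return accepted, candidates
        candidates += 1
        if random.random() < rate(x) / rate_max:
            accepted += 1                   # this candidate is a jump of q
```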
From (3.16) and (3.10), we can write \[(s-s^{\prime})^{-1}\big{(}\hat{G}({\bf q},s^{\prime})-\hat{G}({\bf q},s)\big{)}=\sum_{r=1}^{5}\Omega_{r}(s^{\prime},s),\] where \[\Omega_{1}(s^{\prime},s) =\int_{\hat{\rho}_{n}}^{\infty}\Big{(}\mathbb{E}\left[\hat{G}\big{(}\epsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}\ \big{|}\ E\right]-\hat{G}\big{(}\epsilon_{\rho_{+}}{\bf q},s\big{)}\Big{)}\,f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\ d\rho_{+}\] \[=\int_{\hat{\rho}_{\bf n}({\bf q})}^{\infty}\mathbb{E}\left[\hat{G}\big{(}\varepsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}-\hat{G}\big{(}\varepsilon_{\rho_{+}}{\bf q},s\big{)}\big{|}E\right]f^{2}(a_{+},s,\hat{\rho}_{\bf n}({\bf q}),\rho_{+})\ d\rho_{+},\] \[\Omega_{2}(s^{\prime},s) =\int_{\hat{\rho}_{n}}^{\infty}\Big{(}\hat{G}\big{(}\epsilon_{\rho_{+}}{\bf q},s\big{)}-\hat{G}({\bf q},s)\Big{)}\,f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\ d\rho_{+}=({\cal L}_{bn}^{s}\hat{G})({\bf q},s),\] \[\Omega_{3}(s^{\prime},s) =\int_{\hat{\rho}_{n}}^{\infty}\Big{(}\hat{G}({\bf q},s)-\hat{G}\big{(}\psi_{s^{\prime}}^{s}{\bf q},s\big{)}\Big{)}\,f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\ d\rho_{+}=\Big{(}\hat{G}({\bf q},s)-\hat{G}\big{(}\psi_{s^{\prime}}^{s}{\bf q},s\big{)}\Big{)}\,\eta(s,\hat{\rho}_{n}),\] \[\Omega_{4}(s^{\prime},s)= (s-s^{\prime})^{-1}\left(\hat{G}\big{(}\psi_{s^{\prime}}^{s}{\bf q},s\big{)}-\hat{G}({\bf q},s)\right),\] and \(|\Omega_{5}(s^{\prime},s)|=(s-s^{\prime})|R({\bf q},s^{\prime},s)|\leq c_{1}(n+1)(s-s^{\prime})\). (For the second equality, we used the fact that the event \(E\) depends only on the stochastic boundary, which is independent of the law of \(\rho_{+}\).) By (3.16), \[\int\big{|}\Omega_{5}(s^{\prime},s)\big{|}\ \ \mu(d{\bf q},s)=O(s-s^{\prime}).\] As a result, (3.7) would follow if we can show \[\lim_{s^{\prime}\uparrow s}\int\Omega_{1}(s^{\prime},s)\ \mu(d{\bf q},s)=0, \tag{3.17}\] \[\lim_{s^{\prime}\uparrow s}\int\Omega_{3}(s^{\prime},s)\ \mu(d{\bf q},s)=0, \tag{3.18}\] and that the limit \[\lim_{s^{\prime}\uparrow s}\ \int\Omega_{4}(s^{\prime},s)\ \mu(d{\bf q},s), \tag{3.19}\] equals the sum of the last two terms on the right-hand side of (3.7). The proof of this will be carried out in the last step. A slight modification of this proof can be carried out to establish (3.18). _(Step 3)_ We turn our attention to (3.17). Recall that \(\mu(d{\bf q},s)\) is the law of a Markov process \((\rho(x,s):\ x\in[a_{-},a_{+}])\) with generator \({\cal A}^{1}_{x,s}\). For our proof we will need a lower bound on the density \(f\). Since \(f(x,s,\rho_{-},\rho_{+})>0\) only when \((\rho_{-},\rho_{+})\) is in the interior of \(\Lambda(P_{-},P_{+})\), we wish to estimate the probability of the set \(B(\delta,s)\) consisting of those \({\bf q}\) such that for some \(x\in[a_{-},a_{+}]\), we have either \(\rho(x,s)=R_{s}({\bf q})(x)\in[P_{+}-\delta,P_{+}]\), or \(\rho(x+,s)-\rho(x-,s)\in(0,\delta)\).
If we write \({\mathbb{P}}^{m}_{s}\) for the law of our Markov process \(\rho(x,s)\) associated with the generator \({\cal A}^{1}_{x,s}\), and the initial condition \(\rho(a_{-},s)=m\), and if \(m<P_{+}-\delta\), then it is not hard to show \[{\mathbb{P}}^{m}_{s}\big{(}B(\delta,s)\big{)}= {\mathbb{E}}^{m}_{s}\int_{a_{-}}^{a_{+}}\left[\int_{\rho(x,s)}^{\rho(x,s)+\delta}+\int_{\rho(x,s)\vee(P_{+}-\delta)}^{P_{+}}\right]f(x,s,\rho(x,s),\rho_{+})\ d\rho_{+}\ dx\leq c_{5}\delta.\] From this, we learn that (3.17) would follow if we can show \[\lim_{s^{\prime}\uparrow s}\int\Omega_{1}(s^{\prime},s)\ \hat{\mu}(d{\bf q},s)=0, \tag{3.20}\] where \[\hat{\mu}(d{\bf q},s)=1\!\!1\big{(}{\bf q}\notin B(\delta,s)\big{)}\ \mu(d{\bf q},s).\] _(Step 4)_ To verify (3.20), write \(\sigma({\bf q},s^{\prime})\) for the first time \(\sigma>s^{\prime}\) at which \(\psi^{\sigma}_{s^{\prime}}{\bf q}\) experiences a collision between particles of \({\bf q}\). We claim \[\int 1\!\!1\big{(}\sigma({\bf q},s^{\prime})\leq s\big{)}\ \mu(d{\bf q},s)\leq c_{5}(s-s^{\prime})\int{\bf n}({\bf q})\ \mu(d{\bf q},s)\leq c_{6}(s-s^{\prime}), \tag{3.21}\] for constants \(c_{5}\) and \(c_{6}\). This is an immediate consequence of (3.16) and the following fact: If \({\bf q}=(x_{0},\rho_{0},x_{1},\rho_{1},\ldots,x_{n},\rho_{n})\), and \(\sigma({\bf q},s^{\prime})\leq s\), then for some \(i\), we have \(|x_{i}-x_{i+1}|\leq 2c_{7}|s-s^{\prime}|\), where \(c_{7}\) is an upper bound on the speed of particles. Because of (3.21), the claim (3.20) is equivalent to \[\lim_{s^{\prime}\uparrow s}\left|\sum_{n=0}^{\infty}X_{n}(s^{\prime})\right|=0, \tag{3.22}\] where \(X_{n}(s^{\prime})\) is the expression \[\int\int_{\hat{\rho}_{n}}^{\infty}\mathbb{E}\ \big{[}\hat{G}\big{(}\varepsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},s\big{)}-\hat{G}\big{(}\varepsilon_{\rho_{+}}\mathbf{q},s\big{)}|\ E\big{]}\mbox{1$\!$1}\big{(}\sigma(\mathbf{q},s^{\prime})>s\big{)}\ f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\hat{\mu}^{n}(\mathbf{q},s)\ d\rho_{+}d\mathbf{q}.\] On account of (3.9), the claim (3.22) would follow if we can show \[\lim_{s^{\prime}\uparrow s}(s-s^{\prime})^{-1}\left|\sum_{n=0}^{\infty}Y_{n}(s^{\prime})\right|=0, \tag{3.23}\] where \(Y_{n}(s^{\prime})=Y_{n}^{+}(s^{\prime})-Y_{n}^{-}(s^{\prime})\), with \[Y_{n}^{+}(s^{\prime}) =\int\int_{\hat{\rho}_{n}}^{\infty}\mathbb{E}\ \hat{G}\big{(}\varepsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},s\big{)}\mbox{1$\!$1}\big{(}\sigma(\mathbf{q},s^{\prime})>s>\tau(\mathbf{q},s^{\prime})\big{)}\ f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\hat{\mu}^{n}(\mathbf{q},s)\ d\rho_{+}d\mathbf{q},\] \[Y_{n}^{-}(s^{\prime}) =\int\int_{\hat{\rho}_{n}}^{\infty}\mathbb{E}\ \hat{G}\big{(}\varepsilon_{\rho_{+}}\mathbf{q},s\big{)}\mbox{1$\!$1}\big{(}\sigma(\mathbf{q},s^{\prime})>s>\tau(\mathbf{q},s^{\prime})\big{)}\ f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\hat{\mu}^{n}(\mathbf{q},s)\ d\rho_{+}d\mathbf{q}.\] _(Step 5)_ The expected value in the definition of \(Y_{n}^{\pm}\) is for the random variable \(\tau=\tau(\mathbf{q},s^{\prime})\). As was explained in the proof of Proposition 2.1**(ii)**, the variable \(\tau\) can be expressed in terms of \(\hat{\rho}_{n}\) and a standard exponential random variable.
More precisely, \[\tau=\tau(\mathbf{q},s^{\prime})=\ell(r,\hat{\rho}_{n},s^{\prime}),\] with \(r>0\) a random variable with distribution \(e^{-r}\ dr\), and \(\ell(r,\hat{\rho}_{n},s^{\prime})\) denoting the inverse of the map \[\tau\mapsto r=\int_{s^{\prime}}^{\tau}\eta\big{(}\theta,\gamma_{s^{\prime}}^{\theta}(\hat{\rho}_{n})\big{)}\ d\theta,\ \ \ \ \tau\in(s^{\prime},\infty).\] As a result, we may replace the expected values in (3.23) with an integration with respect to \(e^{-r}\ dr\). On the other hand, \[\mbox{1$\!$1}(r>0)\ e^{-r}\ dr=\mbox{1$\!$1}(\tau>s^{\prime})\ e^{-r}\eta\big{(}\tau,\gamma_{s^{\prime}}^{\tau}(\hat{\rho}_{n})\big{)}\ d\tau=\mbox{1$\!$1}(\tau>s^{\prime})\ \big{(}\eta\big{(}s^{\prime},\hat{\rho}_{n}\big{)}+O(\tau-s^{\prime})\big{)}\ d\tau,\] by the Lipschitz regularity of \(\eta\). Because of this, (3.23) would follow if we can show \[\lim_{s^{\prime}\uparrow s}(s-s^{\prime})^{-1}\left|\sum_{n=0}^{\infty}Z_{n}(s^{\prime})\right|=0, \tag{3.24}\] where \(Z_{n}(s^{\prime})=Z_{n}^{+}(s^{\prime})-Z_{n}^{-}(s^{\prime})\), with \[Z_{n}^{+}(s^{\prime}) =\int\int_{\hat{\rho}_{n}}^{\infty}\int_{s^{\prime}}^{s}\hat{G}\big{(}\varepsilon_{\rho_{+}}\psi_{s^{\prime}}^{\tau}\mathbf{q},s\big{)}\mbox{1$\!$1}\big{(}\sigma(\mathbf{q},s^{\prime})>s\big{)}\eta\big{(}s^{\prime},\hat{\rho}_{n}\big{)}f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\hat{\mu}^{n}(\mathbf{q},s)\ d\tau d\rho_{+}d\mathbf{q},\] \[Z_{n}^{-}(s^{\prime}) =\int\int_{\hat{\rho}_{n}}^{\infty}\int_{s^{\prime}}^{s}\hat{G}\big{(}\varepsilon_{\rho_{+}}\mathbf{q},s\big{)}\mbox{1$\!$1}\big{(}\sigma(\mathbf{q},s^{\prime})>s\big{)}\eta\big{(}s^{\prime},\hat{\rho}_{n}\big{)}f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\hat{\mu}^{n}(\mathbf{q},s)\ d\tau d\rho_{+}d\mathbf{q}.\] To prove (3.24), we carry out the \(d{\bf q}\) integration first. Fix \(\tau>0\) and \(\rho_{+}\), and make a change of variables \({\bf q}^{\prime}=\psi_{s^{\prime}}^{\tau}{\bf q}\) for the \(d{\bf q}\) integration in \(Z_{n}^{+}(s^{\prime})\). For this, we wish to replace \(\hat{\mu}^{n}({\bf q},s)\) with \(\hat{\mu}^{n}\big{(}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}\). Observe that by Hypothesis 1.1**(iii)**, the kernel \(f(x,s,\rho_{-},\rho_{+})>0\) in the interior of \(\Lambda(P_{-},P_{+})\). As a result, we can find \(\delta_{1}>0\) such that \[{\bf q}=(x_{0},\rho_{0},\ldots,x_{n},\rho_{n})\notin B(\delta,s)\quad\implies\quad f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\geq\delta_{1}. \tag{3.25}\] Since \(\hat{\mu}^{n}\) is supported on the complement of the event \(B(\delta,s)\), we use \[\big{[}\log\hat{\mu}^{n}\big{(}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}\big{]}_{\tau}={\bf b}\big{(}\psi_{s^{\prime}}^{\tau}{\bf q},\tau\big{)}\cdot\nabla\left[\log\hat{\mu}^{n}\big{(}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}\right],\] our assumption \(f\in C^{1}\), and (3.25) to assert \[\big{[}\log\hat{\mu}^{n}\big{(}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}\big{]}_{\tau}=O(n),\] which in turn implies \[\hat{\mu}^{n}\big{(}\psi_{s^{\prime}}^{\tau}{\bf q},s\big{)}=\mu^{n}({\bf q},s)\big{(}1+(s^{\prime}-s)O(n)\big{)}.
\tag{3.26}\] Since the map \({\bf q}\mapsto\psi_{s^{\prime}}^{\tau}{\bf q}\) is the flow of the ODE associated with the vector field \({\bf b}\), its Jacobian has the expansion \[1+(\tau-s^{\prime})div({\bf b})+{\bf n}({\bf q})\ o(\tau-s^{\prime}).\] Since \(div({\bf b})=O({\bf n}({\bf q}))\), a change of variables \({\bf q}^{\prime}=\psi_{s^{\prime}}^{\tau}{\bf q}\) causes a Jacobian factor of the form \[1+{\bf n}({\bf q})O(\tau-s^{\prime})=1+{\bf n}({\bf q})O(s-s^{\prime}).\] From this, (3.26), and (3.14) we learn \[\eta\big{(}s^{\prime},\hat{\rho}_{n}\big{)}f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\mu^{n}({\bf q},s)\ d{\bf q}=\eta\big{(}s^{\prime},\hat{\rho}_{n}^{\prime}\big{)}f^{2}(a_{+},s,\hat{\rho}_{n}^{\prime},\rho_{+})\mu^{n}({\bf q}^{\prime},s)\big{(}1+nO(s-s^{\prime})\big{)}\ d{\bf q}^{\prime}.\] From all this we deduce that \(Z_{n}^{+}(s^{\prime})=\hat{Z}_{n}^{+}(s^{\prime})+R_{n}\), where \(\hat{Z}_{n}^{+}(s^{\prime})\) is given by \[\int\int\int_{\hat{\rho}_{n}}^{\infty}\int_{s^{\prime}}^{s}\hat{G}\big{(}\varepsilon_{\rho_{+}}{\bf q}^{\prime},s\big{)}1\!\!1\big{(}\sigma(\psi_{\tau}^{s^{\prime}}{\bf q}^{\prime},s^{\prime})>s\big{)}\eta\big{(}s^{\prime},\hat{\rho}_{n}^{\prime}\big{)}f^{2}(a_{+},s,\hat{\rho}_{n}^{\prime},\rho_{+})\hat{\mu}^{n}({\bf q}^{\prime},s)\ d\tau d\rho_{+}d{\bf q}^{\prime},\] and \(R_{n}\) is an error term that satisfies \[\sum_{n=0}^{\infty}|R_{n}|\leq c_{2}(s-s^{\prime})^{2}\int{\bf n}({\bf q})^{2}\ \mu(d{\bf q},s)=c_{3}(s-s^{\prime})^{2}.\] By \(\psi_{\tau}^{s^{\prime}}\) we mean the inverse of \(\psi_{s^{\prime}}^{\tau}\). After renaming \({\bf q}^{\prime}\) as \({\bf q}\) and comparing \(\hat{Z}_{n}^{+}(s^{\prime})\) with \(Z_{n}^{-}(s^{\prime})\), we learn that \(\hat{Z}_{n}^{+}(s^{\prime})-Z_{n}^{-}(s^{\prime})\) equals \[\int\int\int_{\hat{\rho}_{n}}^{\infty}\int_{s^{\prime}}^{s}\hat{G}\big{(}\varepsilon_{\rho_{+}}{\bf q},s\big{)}\chi({\bf q};s^{\prime},\tau,s)\eta\big{(}s^{\prime},\hat{\rho}_{n}\big{)}f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\mu^{n}({\bf q},s)\ d\tau d\rho_{+}d{\bf q},\] where \(\chi({\bf q};s^{\prime},\tau,s)=\big{|}1\!\!1\big{(}\sigma(\psi_{\tau}^{s^{\prime}}{\bf q},s^{\prime})>s\big{)}-1\!\!1\big{(}\sigma({\bf q},s^{\prime})>s\big{)}\big{|}\). After replacing \(\hat{G}\) with an upper bound, and carrying out the \(d\rho_{+}\) integration, we obtain \[\sum_{n=0}^{\infty}\big{|}\hat{Z}_{n}^{+}(s^{\prime})-Z_{n}^{-}(s^{\prime})\big{|}\leq c_{4}\int_{s^{\prime}}^{s}\int\chi({\bf q};s^{\prime},\tau,s)\ \mu(d{\bf q},s)d\tau.\] Finally, since \(\chi({\bf q};s,\tau,s)=0\), we can readily show \[\lim_{s^{\prime}\uparrow s}(s-s^{\prime})^{-1}\sum_{n=0}^{\infty}\big{(}\hat{Z}_{n}^{+}(s^{\prime})-Z_{n}^{-}(s^{\prime})\big{)}=0,\] completing the proof of (3.24), which in turn completes the proof of (3.17). _(Final Step)_ It remains to find the limit in (3.19). The proof we present is very general, and works whenever \(\hat{G}^{n}\) is continuous, \(\mu^{n}\) is \(C^{1}\), the vector field \({\bf b}={\bf b}_{n}\) is \(C^{1}\), and the boundary of \(\Delta_{n}\) is piecewise \(C^{1}\).
Fix \(s\) and for \(s^{\prime}<s\) we write \[\Delta_{n}(s^{\prime},s)=\big{\{}{\bf q}\in\Delta_{n}:\ \psi_{s^{\prime}}^{s}{\bf q}\in\Delta_{n}\big{\}},\ \ \ \ \hat{\Delta}_{n}(s^{\prime},s)=\psi_{s^{\prime}}^{s}\big{(}\Delta_{n}(s^{\prime},s)\big{)}.\] We make a change of variables to write \[\begin{split}\int_{\Delta_{n}}\hat{G}^{n}\big{(}\psi_{s^{\prime}}^{s}{\bf q},s\big{)}\mu^{n}({\bf q},s)\ d{\bf q}=&\int_{\Delta_{n}\setminus\Delta_{n}(s^{\prime},s)}\hat{G}^{n}\big{(}\psi_{s^{\prime}}^{s}{\bf q},s\big{)}\mu^{n}({\bf q},s)\ d{\bf q}\\ &+\int_{\hat{\Delta}_{n}(s^{\prime},s)}\hat{G}^{n}\big{(}{\bf q},s\big{)}\mu^{n}(\psi_{s}^{s^{\prime}}{\bf q},s)\ \det\big{(}D\psi_{s}^{s^{\prime}}\big{)}({\bf q})\ d{\bf q},\end{split} \tag{3.27}\] where \(\psi_{s}^{s^{\prime}}\) denotes the inverse of the function \(\psi_{s^{\prime}}^{s}\). For \(s-s^{\prime}\) small, the volume \(|\Delta_{n}\setminus\Delta_{n}(s^{\prime},s)|\) is of order \(O(n(s-s^{\prime}))\). From this, and the continuity of \(\hat{G}\) we learn \[\begin{split}\int_{\Delta_{n}\setminus\Delta_{n}(s^{\prime},s)}\hat{G}^{n}\big{(}\psi_{s^{\prime}}^{s}{\bf q},s\big{)}\mu^{n}({\bf q},s)\ d{\bf q}=&\int_{\Delta_{n}\setminus\Delta_{n}(s^{\prime},s)}\big{(}\hat{G}^{n}\mu^{n}\big{)}\big{(}{\bf q},s\big{)}\ d{\bf q}+o(n(s-s^{\prime}))\\ =&(s-s^{\prime})\int_{\partial\Delta_{n}}\big{(}\hat{G}^{n}\mu^{n}\big{)}\big{(}{\bf q}^{\prime},s\big{)}\big{(}{\bf N}_{n}({\bf q}^{\prime})\cdot{\bf b}_{n}({\bf q}^{\prime},s)\big{)}\ \sigma(d{\bf q}^{\prime})\\ &+o(n(s-s^{\prime})).\end{split} \tag{3.28}\] Here we have used the fact that we may parametrize the set \(\Delta_{n}\setminus\Delta_{n}(s^{\prime},s)\) by the map \[\zeta:\partial\Delta_{n}\times[s^{\prime},s]\to\Delta_{n}\setminus\Delta_{n}(s^{\prime},s),\ \ \ \ \zeta({\bf q}^{\prime},\theta)=\psi_{s}^{\theta}{\bf q}^{\prime},\] with \(1\!\!1\big{(}{\bf q}\in\Delta_{n}\setminus\Delta_{n}(s^{\prime},s)\big{)}\ d{\bf q}\) equal to \[1\!\!1\big{(}({\bf q}^{\prime},\theta)\in\partial\Delta_{n}\times[s^{\prime},s]\big{)}\left(1+(s-s^{\prime})\big{(}{\bf N}_{n}({\bf q}^{\prime})\cdot{\bf b}_{n}({\bf q}^{\prime},s)\big{)}+o(n(s-s^{\prime}))\right)\ \sigma(d{\bf q}^{\prime})d\theta.\] The map \(\zeta\) is one-to-one if \(\partial\Delta_{n}\) is \(C^{1}\), and \(s-s^{\prime}\) is sufficiently small. This is no longer the case when \(\partial\Delta_{n}\) is only piecewise \(C^{1}\). However, the set of \({\bf q}\) for which \(\zeta^{-1}({\bf q})\) is multivalued is of volume \(O(n(s-s^{\prime})^{2})\) (this is the set of \({\bf q}\) such that for some \(i\), we have \(x_{i+1}-x_{i},x_{i}-x_{i-1}=O(s-s^{\prime})\)).
As for the second term on the right-hand side of (3.27), we use \[\mu^{n}(\psi_{s}^{s^{\prime}}{\bf q},s)= \mu^{n}({\bf q},s)+(s^{\prime}-s)\big{(}{\bf b}_{n}\cdot\nabla\mu^{n}\big{)}({\bf q},s)+o(n(s-s^{\prime})),\] \[\det\big{(}D\psi_{s}^{s^{\prime}}\big{)}({\bf q})= 1+(s^{\prime}-s)\big{(}div\ {\bf b}_{n}\big{)}({\bf q},s)+o(n(s-s^{\prime})),\] to assert \[\mu^{n}(\psi_{s}^{s^{\prime}}{\bf q},s)\ \det\big{(}D\psi_{s}^{s^{\prime}}\big{)}({\bf q})= \mu^{n}({\bf q},s)+(s^{\prime}-s)\big{(}\mu^{n}div\ {\bf b}_{n}\big{)}({\bf q},s)\] \[+(s^{\prime}-s)\big{(}{\bf b}_{n}\cdot\nabla\mu^{n}\big{)}({\bf q},s)+o(n(s-s^{\prime}))\] \[= \mu^{n}({\bf q},s)+(s-s^{\prime})\big{(}{\cal L}_{0n}^{s*}\mu^{n}\big{)}({\bf q},s)+o(n(s-s^{\prime})).\] From this and (3.28) we deduce that the second term on the right-hand side of (3.27) equals \[\int_{\hat{\Delta}_{n}(s^{\prime},s)}\hat{G}^{n}\big{(}{\bf q},s\big{)}\big{[}\mu^{n}({\bf q},s)+(s-s^{\prime})\big{(}{\cal L}_{0n}^{s*}\mu^{n}\big{)}({\bf q},s)\big{]}\ d{\bf q}+o(n(s-s^{\prime}))\] \[= \int_{\Delta_{n}}\hat{G}^{n}\big{(}{\bf q},s\big{)}\big{[}\mu^{n}({\bf q},s)+(s-s^{\prime})\big{(}{\cal L}_{0n}^{s*}\mu^{n}\big{)}({\bf q},s)\big{]}\ d{\bf q}+o(n(s-s^{\prime})).\] This, (3.27), and (3.28) complete the proof. \(\square\) ## 4 Proof of Theorem 2.1 The proof of Theorem 2.1 is carried out in five steps. _(Step 1)_ As we explained in Section 3, we only need to prove (3.5). For this, it suffices to show \[\lim_{s^{\prime}\uparrow s}(s-s^{\prime})^{-1}\big{(}{\mathbb{G}}_{n}(s)-{\mathbb{G}}_{n}(s^{\prime})\big{)}=0, \tag{4.1}\] where \({\mathbb{G}}_{n}(s)=\int\hat{G}^{n}({\bf q},s)\ \mu^{n}(d{\bf q},s)\). Evidently \[(s-s^{\prime})^{-1}\big{(}{\mathbb{G}}_{n}(s)-{\mathbb{G}}_{n}(s^{\prime})\big{)}=\Omega_{1}(s^{\prime})+\Omega_{2}(s^{\prime})-\Omega_{3}(s^{\prime}),\] where \[\Omega_{1}(s^{\prime}) =(s-s^{\prime})^{-1}\int\big{(}\hat{G}^{n}(\mathbf{q},s)-\hat{G}^{n}(\mathbf{q},s^{\prime})\big{)}\mu^{n}(d\mathbf{q},s)\] \[\Omega_{2}(s^{\prime}) =(s-s^{\prime})^{-1}\int\hat{G}^{n}(\mathbf{q},s)\ \big{(}\mu^{n}(d\mathbf{q},s)-\mu^{n}(d\mathbf{q},s^{\prime})\big{)}\] \[\Omega_{3}(s^{\prime}) =(s-s^{\prime})^{-1}\int\big{(}\hat{G}^{n}(\mathbf{q},s)-\hat{G}^{n}(\mathbf{q},s^{\prime})\big{)}\ \big{(}\mu^{n}(d\mathbf{q},s)-\mu^{n}(d\mathbf{q},s^{\prime})\big{)}.\] We claim \[\lim_{s^{\prime}\uparrow s}\big{|}\Omega_{3}(s^{\prime})\big{|}=0. \tag{4.2}\] By Lemma 3.1, \[\limsup_{s^{\prime}\uparrow s}\big{|}\Omega_{3}(s^{\prime})\big{|}\leq C_{0}(n+1)\limsup_{s^{\prime}\uparrow s}\int\big{|}\mu^{n}(d\mathbf{q},s)-\mu^{n}(d\mathbf{q},s^{\prime})\big{|}\,. \tag{4.3}\] As we will see in Step 2 below, \(\mu_{s}^{n}=X^{n}\mu^{n}\) for a term \(X^{n}\) that is explicit. From this and (4.3), it is not hard to deduce (4.2). From (4.2) and Theorem 3.1 we deduce that (4.1) would follow if we can show \[\begin{split}\int_{\Delta_{n}}\hat{G}^{n}(\mathbf{q},s)\ \mu_{s}^{n}(d\mathbf{q},s)=&\int_{\Delta_{n}}(\mathcal{L}_{b}^{s}\hat{G})^{n}(\mathbf{q},s)\mu^{n}(\mathbf{q},s)\ d\mathbf{q}+\int_{\Delta_{n}}\hat{G}^{n}(\mathbf{q},s)\big{(}\mathcal{L}_{0n}^{s*}\mu^{n}\big{)}(\mathbf{q},s)\ d\mathbf{q}\\ &+\int_{\hat{\partial}\Delta_{n+1}}\hat{G}^{n+1}(\mathbf{q},s)\mu^{n+1}(\mathbf{q},s)(\mathbf{b}_{n+1}\cdot\mathbf{N}_{n+1})\ \sigma(d\mathbf{q}).\end{split} \tag{4.4}\] _(Step 2)_ To simplify our presentation, we assume that \(\ell\) has a density with respect to the Lebesgue measure.
With a slight abuse of notation, we write \(\ell(s,\rho)\) for this density: \(\ell(s,d\rho)=\ell(s,\rho)\ d\rho\). To verify (4.4), we start with finding a tractable expression for the left-hand side. We claim \[\mu_{s}^{n}=X^{n}\mu^{n}=\left(X_{1}+X_{2}+\sum_{i=3}^{11}X_{i}^{n}\right)\mu^{n}, \tag{4.5}\] where \[X_{1} =\frac{(\ell*f^{2})(x_{0},s,\rho_{0})}{\ell(s,\rho_{0})},\qquad X_{2} =-\frac{\big{(}\beta(x_{0},s,\rho_{0})\ell(s,\rho_{0})\big{)}_{\rho_{0}}}{\ell(s,\rho_{0})},\] \[X_{3}^{n} =\sum_{i=1}^{n}\frac{Q^{+}(f)(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})},\qquad X_{4}^{n} =\sum_{i=1}^{n}\frac{f_{x}^{2}\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}}{f\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}}\] \[X_{5}^{n} =\beta(x_{0},s,\rho_{0})\Gamma_{\rho}(x_{0},x_{1},s,\rho_{0})-A(a_{+},s,\hat{\rho}_{n})\] \[X_{6}^{n} =\sum_{i=1}^{n}\beta(x_{i},s,\rho_{i})\Gamma_{\rho}(x_{i},x_{i+1},s,\rho_{i})\] \[X_{7}^{n} =\sum_{i=1}^{n}v(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{(}\lambda(x_{i},s,\rho_{i})-\lambda(x_{i},s,\hat{\rho}_{i-1})\big{)}\] \[X_{8}^{n} =\sum_{i=1}^{n}\frac{\big{[}K(x_{i},s,\rho_{i},\hat{\rho}_{i-1})f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{]}_{\rho_{i}}}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}\] \[X_{9}^{n} =\sum_{i=1}^{n}b(x_{i},s,\hat{\rho}_{i-1})v_{\rho_{-}}(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\] \[X_{10}^{n} =\sum_{i=1}^{n}v(x_{i},s,\hat{\rho}_{i-1},\rho_{i})b(x_{i},s,\hat{\rho}_{i-1})\frac{f_{\rho_{-}}(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}\] \[X_{11}^{n} =-\sum_{i=1}^{n}\beta(x_{i-1},s,\rho_{i-1})\frac{\big{[}f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{]}_{\rho_{i-1}}}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})},\] where \(\hat{\rho}_{i-1}=\phi_{x_{i-1}}^{x_{i}}\big{(}\rho_{i-1};s\big{)}\). To verify (4.5), observe that by direct differentiation (see Definition 2.2**(ii)** for the definition of \(\mu^{n}\)) \[X^{n}=-\Gamma_{s}(\mathbf{q},s)+\frac{\ell_{s}(s,\rho_{0})}{\ell(s,\rho_{0})}+\sum_{i=1}^{n}\frac{\big{[}f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{]}_{s}}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}.
\tag{4.6}\] By (2.10), (1.18), and (2.8) (in this order) \[-\Gamma_{s}(\mathbf{q},s) =-\sum_{i=0}^{n}\left\{\big{(}A(x_{i+1},s,\hat{\rho}_{i})-A(x_{i},s,\rho_{i})\big{)}-\beta(x_{i},s,\rho_{i})\Gamma_{\rho}(x_{i},x_{i+1},s,\rho_{i})\right\}\] \[=-\sum_{i=0}^{n}\big{(}A(x_{i+1},s,\hat{\rho}_{i})-A(x_{i},s,\rho_{i})\big{)}+\beta(x_{0},s,\rho_{0})\Gamma_{\rho}(x_{0},x_{1},s,\rho_{0})+X_{6}^{n}\] \[=\sum_{i=1}^{n}\big{(}A(x_{i},s,\rho_{i})-A(x_{i},s,\hat{\rho}_{i-1})\big{)}+A(x_{0},s,\rho_{0})-A(x_{n+1},s,\hat{\rho}_{n})\] \[\qquad+\beta(x_{0},s,\rho_{0})\Gamma_{\rho}(x_{0},x_{1},s,\rho_{0})+X_{6}^{n},\] \[\frac{\ell_{s}(s,\rho_{0})}{\ell(s,\rho_{0})}=X_{1}+X_{2}-A(x_{0},s,\rho_{0}),\] \[\frac{\big{[}f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{]}_{s}}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}=\frac{f_{s}(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}+U^{n}(i),\] where \[U^{n}(i)= \left[\beta(x_{i},s,\hat{\rho}_{i-1})-\beta(x_{i-1},s,\rho_{i-1})\big{[}\phi_{x_{i-1}}^{x_{i}}(\rho_{i-1};s)\big{]}_{\rho_{i-1}}\right]\frac{f_{\rho_{-}}(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}\] \[= \beta(x_{i},s,\hat{\rho}_{i-1})\frac{f_{\rho_{-}}(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}-\beta(x_{i-1},s,\rho_{i-1})\frac{\big{[}f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{]}_{\rho_{i-1}}}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}.\] From this, (4.6), and the kinetic equation we deduce \[X^{n}= X_{1}+X_{2}+X_{3}^{n}+X_{4}^{n}+X_{5}^{n}+X_{6}^{n}+X_{7}^{n}+U^{n}+W^{n},\] where \(U^{n}=\sum_{i=1}^{n}U^{n}(i)\), and \[W^{n}=\sum_{i=1}^{n}\frac{(Cf)(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}=X_{8}^{n}+X_{9}^{n}+\sum_{i=1}^{n}K(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\frac{f_{\rho_{-}}(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})}.\] We are done because \(U^{n}+W^{n}=X_{8}^{n}+X_{9}^{n}+X_{10}^{n}+X_{11}^{n}\). _(Step 3)_ We now turn our attention to the right-hand side of (4.4).
We certainly have \[\int(\mathcal{L}_{b}^{s}\hat{G})^{n}(\mathbf{q},s)\mu^{n}(\mathbf{q},s)\ d\mathbf{q}=Y_{b,+}^{n}-Y_{b,-}^{n}, \tag{4.7}\] where \[Y_{b,+}^{n}=\int\int_{\hat{\rho}_{n}}^{\infty}f^{2}(a_{+},s,\hat{\rho}_{n},\rho_{+})\hat{G}^{n+1}\big{(}\varepsilon_{\rho_{+}}\mathbf{q},s\big{)}\mu^{n}(\mathbf{q},s)\ d\rho_{+}d\mathbf{q},\] \[Y_{b,-}^{n}=\int A(a_{+},s,\hat{\rho}_{n})\hat{G}^{n}(\mathbf{q},s)\mu^{n}(\mathbf{q},s)\ d\mathbf{q}.\] As for the second term on the right-hand side of (4.4), we write \(\mathcal{L}_{0n}^{*}\mu^{n}=Z^{n}\mu^{n}\), with \[Z^{n}=\sum_{j=1}^{3}Z_{1j}+\sum_{i=2}^{3}\sum_{j=1}^{3}Z_{ij}^{n}, \tag{4.8}\] where \[Z_{11} =\beta(x_{0},s,\rho_{0})\Gamma_{\rho}(x_{0},x_{1},s,\rho_{0})\] \[Z_{12} =-\beta(x_{0},s,\rho_{0})\frac{\big{[}f\big{(}x_{1},s,\hat{\rho}_{0},\rho_{1}\big{)}\big{]}_{\rho_{0}}}{f\big{(}x_{1},s,\hat{\rho}_{0},\rho_{1}\big{)}},\hskip 28.452756ptZ_{13}=-\frac{\big{(}\beta(x_{0},s,\rho_{0})\ell(s,\rho_{0})\big{)}_{\rho_{0}}}{\ell(s,\rho_{0})},\] \[Z_{21}^{n} =-\sum_{i=1}^{n}K(x_{i},s,\rho_{i},\hat{\rho}_{i-1})\Gamma_{\rho}(x_{i},x_{i+1},s,\rho_{i}),\] \[Z_{22}^{n} =\sum_{i=1}^{n-1}\frac{\big{[}K(x_{i},s,\rho_{i},\hat{\rho}_{i-1})f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})f(x_{i+1},s,\hat{\rho}_{i},\rho_{i+1})\big{]}_{\rho_{i}}}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})f(x_{i+1},s,\hat{\rho}_{i},\rho_{i+1})}\] \[Z_{23}^{n} =\frac{\big{[}K(x_{n},s,\rho_{n},\hat{\rho}_{n-1})f(x_{n},s,\hat{\rho}_{n-1},\rho_{n})\big{]}_{\rho_{n}}}{f(x_{n},s,\hat{\rho}_{n-1},\rho_{n})},\] \[Z_{31}^{n} =\sum_{i=1}^{n-1}\frac{\big{[}f^{2}\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}f\big{(}x_{i+1},s,\hat{\rho}_{i},\rho_{i+1}\big{)}\big{]}_{x_{i}}}{f\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}f\big{(}x_{i+1},s,\hat{\rho}_{i},\rho_{i+1}\big{)}},\hskip 14.226378ptZ_{32}^{n}=\frac{\big{[}f^{2}\big{(}x_{n},s,\hat{\rho}_{n-1},\rho_{n}\big{)}\big{]}_{x_{n}}}{f\big{(}x_{n},s,\hat{\rho}_{n-1},\rho_{n}\big{)}},\] \[Z_{33}^{n} =-\sum_{i=1}^{n}v\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}\big{[}\Gamma(x_{i-1},x_{i},s,\rho_{i-1})+\Gamma(x_{i},x_{i+1},s,\rho_{i})\big{]}_{x_{i}}.\] Recall that \(\mathcal{L}_{0n}^{*}\) is the adjoint of \(\mathcal{L}_{0n}\), and is obtained by an integration by parts. More specifically, * The sum \(Z_{11}+Z_{12}+Z_{13}\) comes from an integration by parts with respect to the variable \(\rho_{0}\); the \(i\)-th terms in \(Z_{21}^{n}\) and \(Z_{22}^{n}\) come from an integration by parts with respect to the variable \(\rho_{i}\) for \(i\in\{1,\ldots,n-1\}\); and \(Z_{23}^{n}\) comes from an integration by parts with respect to the variable \(\rho_{n}\). The dynamics of \(\rho_{i}\) as in rule **(2)** of Definition 2.1**(iii)** is responsible for these contributions. * The \(i\)-th terms in \(Z_{31}^{n}\), \(Z_{32}^{n}\) and \(Z_{33}^{n}\) come from an integration by parts with respect to the variable \(x_{i}\). The dynamics of \(x_{i}\) as in rule **(1)** of Definition 2.1**(iii)** is responsible for this contribution. _(Step 4)_ We next focus on the third term on the right-hand side of (4.4). This term can be expressed as \[Y_{0}^{n}=\sum_{i=0}^{n}Y_{0i}^{n}+\hat{Y}_{0}^{n}, \tag{4.9}\] where \(Y_{0i}^{n}\) is the boundary contribution coming from the condition \(x_{i}=x_{i+1}\), and \(\hat{Y}_{0}^{n}\) is the boundary contribution coming from the condition \(x_{n+1}=x_{n+2}=a_{+}\).
For \(i=0,\ldots,n\) \[Y_{0i}^{n}=\int_{\Delta_{n}}\hat{G}^{n}(\mathbf{q},s)W_{i}(\mathbf{q},s)\mu^{n}(\mathbf{q},s)\ d\mathbf{q}, \tag{4.10}\] where \[W_{0}=\frac{\int f^{2}(x_{0},s,\rho_{*},\rho_{0})\ \ell(s,d\rho_{*})}{\ell(s,\rho_{0})},\qquad W_{i}=\frac{Q^{+}(f)\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}}{f\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}},\] for \(i=1,\ldots,n\). Here, * The term \(W_{0}\) comes from the boundary term \(x_{1}=x_{0}=a_{-}\) in the integration by parts with respect to the variable \(x_{1}\). This boundary condition represents the event that \(x_{1}\) has reached \(x_{0}\), after which \(\rho_{0}\) becomes \(\rho_{1}\), and \((x_{i},\rho_{i})\) is relabeled as \((x_{i-1},\rho_{i-1})\) for \(i\geq 2\). * The term \(W_{i}\) comes from the boundary term \(x_{i}=x_{i+1}\). The relative distance \(x_{i+1}-x_{i}\) travels with speed \[-\big{[}v(x_{i+1},s,\hat{\rho}_{i},\rho_{i+1})-v(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{]}.\] As \(x_{i+1}\) catches up with \(x_{i}\), the particle \(x_{i}\) disappears and its density \(\rho_{i}=\hat{\rho}_{i}\) is renamed \(\rho_{*}\), and is integrated out. (The resulting integral is \(Q^{+}(f)\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}\).) We then relabel \((x_{j},\rho_{j})\), \(j>i\), as \((x_{j-1},\rho_{j-1})\). As for \(\hat{Y}_{0}^{n}\), we simply have \[\hat{Y}_{0}^{n}=-Y_{b,+}^{n}, \tag{4.11}\] where \(Y_{b,+}^{n}\) was defined in (4.7). _(Step 5)_ Recall that we wish to establish (4.4). The identities (4.5) and (4.7)-(4.11) allow us to rewrite (4.4) as \[X_{1}+X_{2}+\sum_{i=3}^{11}X_{i}^{n}=\sum_{j=1}^{3}Z_{1j}+\sum_{i=2}^{3}\sum_{j=1}^{3}Z_{ij}^{n}-A(a_{+},s,\hat{\rho}_{n})+W_{0}+W^{n},\] where \(W^{n}=\sum_{i=1}^{n}W_{i}\). For this we only need to verify \[X_{4}^{n}+\sum_{i=6}^{11}X_{i}^{n}=Z_{12}+\sum_{i=2}^{3}\sum_{j=1}^{3}Z_{ij}^{n}, \tag{4.12}\] because \[X_{1}=W_{0},\ \ \ \ X_{2}=Z_{13},\ \ \ \ X_{3}^{n}=W^{n},\ \ \ \ X_{5}^{n}=Z_{11}-A(a_{+},s,\hat{\rho}_{n}).\] We use the definition of \(K\) to write \(Z_{21}^{n}=Z_{211}^{n}+Z_{212}^{n}\), where \(Z_{212}^{n}=X_{6}^{n}\), and \[Z_{211}^{n}=-\sum_{i=1}^{n}b(x_{i},s,\rho_{i})v(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\Gamma_{\rho}(x_{i},x_{i+1},s,\rho_{i}).\] Hence (4.12) is equivalent to \[X_{4}^{n}+\sum_{i=7}^{11}X_{i}^{n}=Z_{12}+Z_{211}^{n}+Z_{22}^{n}+Z_{23}^{n}+\sum_{j=1}^{3}Z_{3j}^{n}. \tag{4.13}\] Also observe that the expression \[\big{[}\Gamma(x_{i-1},x_{i},s,\rho_{i-1})+\Gamma(x_{i},x_{i+1},s,\rho_{i})\big{]}_{x_{i}},\] equals \[\begin{split}\left[\int_{x_{i-1}}^{x_{i}}\lambda\big{(}z,s,\phi_{x_{i-1}}^{z}(\rho_{i-1};s)\big{)}\ dz+\int_{x_{i}}^{x_{i+1}}\lambda\big{(}z,s,\phi_{x_{i}}^{z}(\rho_{i};s)\big{)}\ dz\right]_{x_{i}}\\ =\lambda(x_{i},s,\hat{\rho}_{i-1})-\lambda(x_{i},s,\rho_{i})+\int_{x_{i}}^{x_{i+1}}\big{[}\lambda\big{(}z,s,\phi_{x_{i}}^{z}(\rho_{i};s)\big{)}\big{]}_{x_{i}}\ dz\end{split}\] From this and (2.6) we learn \[Z_{211}^{n}+Z_{33}^{n}=-\sum_{i=1}^{n}v\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}\big{(}\lambda(x_{i},s,\hat{\rho}_{i-1})-\lambda(x_{i},s,\rho_{i})\big{)}=X_{7}^{n}.\] This reduces (4.13) to \[X_{4}^{n}+\sum_{i=8}^{11}X_{i}^{n}=Z_{12}+Z_{22}^{n}+Z_{23}^{n}+Z_{31}^{n}+Z_{32}^{n}.
\tag{4.14}\] Observe that \(Z^{n}_{22}+Z^{n}_{23}=\widehat{Z}^{n}_{22}+\widehat{Z}^{n}_{23}\), and \(Z^{n}_{31}+Z^{n}_{32}=Z^{n}_{311}+Z^{n}_{312}+Z^{n}_{313}\), where \[\widehat{Z}^{n}_{22} =\sum_{i=1}^{n}\frac{\big{[}K(x_{i},s,\rho_{i},\hat{\rho}_{i-1})f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})\big{]}_{\rho_{i}}}{f(x_{i},s,\hat{\rho}_{i-1},\rho_{i})},\] \[\widehat{Z}^{n}_{23} =\sum_{i=1}^{n-1}K(x_{i},s,\rho_{i},\hat{\rho}_{i-1})\frac{\big{[}f(x_{i+1},s,\hat{\rho}_{i},\rho_{i+1})\big{]}_{\rho_{i}}}{f(x_{i+1},s,\hat{\rho}_{i},\rho_{i+1})},\] \[Z^{n}_{311} =\sum_{i=1}^{n}\frac{f^{2}_{x}\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}}{f\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}},\] \[Z^{n}_{312} =\sum_{i=1}^{n}b\big{(}x_{i},s,\hat{\rho}_{i-1}\big{)}\frac{f^{2}_{\rho_{-}}\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}}{f\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}},\] \[Z^{n}_{313} =-\sum_{i=1}^{n-1}v\big{(}x_{i},s,\hat{\rho}_{i-1},\rho_{i}\big{)}b\big{(}x_{i},s,\rho_{i}\big{)}\frac{\big{[}f\big{(}x_{i+1},s,\hat{\rho}_{i},\rho_{i+1}\big{)}\big{]}_{\rho_{i}}}{f\big{(}x_{i+1},s,\hat{\rho}_{i},\rho_{i+1}\big{)}},\] where we used (2.5) for the last equation. Observe that by the definition of \(K\), \[\hat{Z}^{n}_{23}+Z^{n}_{313}=-\sum_{i=1}^{n-1}\beta(x_{i},s,\rho_{i})\frac{\big{[}f(x_{i+1},s,\hat{\rho}_{i},\rho_{i+1})\big{]}_{\rho_{i}}}{f(x_{i+1},s,\hat{\rho}_{i},\rho_{i+1})}.\] The equation (4.14) follows because \[X^{n}_{8}=\hat{Z}^{n}_{22},\ \ \ \ X^{n}_{9}+X^{n}_{10}=Z^{n}_{312},\ \ \ \ X^{n}_{4}=Z^{n}_{311},\ \ \ \ X^{n}_{11}=Z_{12}+\hat{Z}^{n}_{23}+Z^{n}_{313}.\] ## 5 Proof of Theorem 1.2 According to Theorem 1.2, if \(\rho(\cdot,t_{0})=\rho(\cdot,t_{0};s,\mathbf{y}^{0})\), with \(\mathbf{y}^{0}\) a Markov jump process with jump rate \(g^{0}(x,y_{-},y_{+})\), then for \(t>t_{0}\), we can express \(\rho(\cdot,t)=\rho(\cdot,t;s,\mathbf{y}_{t})\), where \(\mathbf{y}_{t}\) is also a jump process, with jump rate \(g(x,t;y_{-},y_{+})\) for a solution \(g\) of the kinetic equation (1.25). There is a one-to-one correspondence between the realization \[\mathbf{y}(x)=\sum_{i=0}^{\infty}y_{i}1\!\!1\big{(}x\in[x_{i},x_{i+1})\big{)},\] and the particle configuration \[\mathbf{q}=\big{(}(x_{0},y_{0}),(x_{1},y_{1}),\dots\big{)},\] with \[x_{0}=a_{-}<x_{1}<\cdots<x_{n}<\ldots,\ \ \ \ \ y_{0}<y_{1}<\cdots<y_{n}<\ldots.\] We may translate this into a statement about the law of our particle system \({\bf q}(t)\). As before, it suffices to establish a variant of Theorem 1.2 for a finite interval \([a_{-},a_{+}]\). The condition \(H_{\rho}(a_{-},t,\rho)>0\) means that particles can cross \(a_{-}\) only from left to right. Because of this, we can treat \(a_{-}\) as a free boundary point. As it turns out, the point \(a_{+}\) will be a free boundary point (no particle can cross \(a_{+}\) from right to left) if \(a_{+}\) is sufficiently large. Indeed as we will see in Proposition 5.2**(iii)**, there are positive constants \(C_{0}\) and \(C_{1}\) such that \(M(x,t;y,s)\leq-C_{1}x\) for \(x\geq C_{0}\). On the other hand, by Hypothesis 1.2**(i)**, we know \[|\rho|\leq c_{1}\big{(}1+|H_{\rho}(x,t,\rho)|\big{)},\] which in turn implies that \(H_{\rho}(x,t,\rho)\to-\infty\) as \(\rho\to-\infty\). As a result, there exists a positive constant \(C_{2}\) such that \(H_{\rho}(x,t,\rho)\leq 0\), whenever \(\rho\leq-C_{2}\). From this we deduce that \(\hat{v}(a_{+},t,y_{-},y_{+})>0\) if \(a_{+}\geq\max\{C_{2}C_{1}^{-1},C_{0}\}=:C_{3}\).
From all this we learn that Theorem 1.2 would follow if we can establish the following result. **Theorem 5.1**: _Assume Hypothesis 1.2. For any fixed \(a_{+}\) such that \(a_{+}>\max\{a_{-},C_{3}\}\), consider the scalar conservation law (1.2) in \([a_{-},a_{+}]\times[t_{0},T)\) with initial condition \(\rho(x,t_{0})=M(x,t_{0};{\bf y}^{0}(x),s)\) (restricted to \([a_{-},a_{+}]\)), and open boundary at \(x=a_{\pm}\). Then for all \(t>t_{0}\), we have \(\rho(x,t)=M(x,t;{\bf y}_{t}(x),s)\), where the law of \(\big{(}{\bf y}_{t}(x):\ x\in[a_{-},a_{+}]\big{)}\) is as follows:_ **(i)**: _The \(x=a_{-}\) marginal is \(\ell(t,dy_{0})\), given by \(\dot{\ell}={\cal B}_{a_{-},t}^{2*}\ell\)._ **(ii)**: _The rest of the path is a PDMP with generator \({\cal B}_{x,t}^{1}\)._ To prove our main result Theorem 1.2, we send \(a_{+}\to\infty\). We continue with a preparatory definition. **Definition 5.1(i)** The configuration space for our particle system \({\bf q}\) is the set \(\Delta=\cup_{n=0}^{\infty}\bar{\Delta}_{n}\), where \(\bar{\Delta}_{n}\) is the topological closure of \(\Delta_{n}\), with \(\Delta_{n}\) denoting the set \[\big{\{}{\bf q}=\big{(}(x_{i},y_{i}):i=0,1,\ldots,n\big{)}:\ x_{0}=a_{-}<x_{1}<\cdots<x_{n}<x_{n+1}=a_{+},\ \ \ y_{0}<\cdots<y_{n}\big{\}}.\] We write \({\bf n}({\bf q})\) for the number of particles, i.e., \({\bf n}({\bf q})=n\) means that \({\bf q}\in\Delta_{n}\). **(ii)** Given a realization \({\bf q}=\big{(}x_{0},y_{0},x_{1},y_{1},\ldots,x_{n},y_{n}\big{)}\in\bar{\Delta}_{n}\), we define \[\rho\big{(}x,t;{\bf q}\big{)}=R_{t}({\bf q})(x)=\sum_{i=0}^{n}M(x,t;y_{i},s)1\!\!1\big{(}x_{i}\leq x<x_{i+1}\big{)}.\] **(iii)** The process \({\bf q}(t)\) evolves according to the following rules: **(1)**: So long as \(x_{i}\) remains in \((x_{i-1},x_{i+1})\), it satisfies \(\dot{x}_{i}=-\hat{v}(x_{i},t,y_{i-1},y_{i})\). **(2)**: When \(x_{1}\) reaches \(a_{-}\), we relabel particles \((x_{i},y_{i}),\ i\geq 1\), as \((x_{i-1},y_{i-1})\). **(3)**: When \(x_{n}\) reaches \(a_{+}\), a particle is lost and \(\mathbf{q}\) enters \(\Delta_{n-1}\). **(4)**: When \(x_{i+1}=x_{i}\), then \(\mathbf{q}(t)\) becomes \(\mathbf{q}^{i}(t)\), which is obtained from \(\mathbf{q}(t)\) by omitting \((x_{i},y_{i})\) and relabeling the particles to the right of the \(i\)-th particle. \(\square\) Some care is needed for rule **(1)** because \(\hat{v}\) given by (1.24) is not a continuous function of \(x\). Recall that \(x_{i}(t)\) represents the location of a shock discontinuity that separates two fundamental solutions. However, the fundamental solution \(\big{(}M(x,t;y_{i},s):\ x\in(x_{i-1}(t),x_{i}(t))\big{)}\) may also include some shock discontinuities. When \(x_{i}(t)\) catches up with a shock discontinuity of \(M(\cdot,t;y_{i},s)\), or \(M(\cdot,t;y_{i+1},s)\), \(\dot{x}_{i}(\cdot)\) fails to exist. Nonetheless, off such a discrete set of times, the ODE in **(1)** is well defined, and this is good enough to determine the evolution of \(x_{i}\) fully. We write \(\widehat{\mathcal{L}}=\widehat{\mathcal{L}}^{t}\) for the generator of the (inhomogeneous Markov) process \(\mathbf{q}(t)\). This generator can be expressed as \(\widehat{\mathcal{L}}=\widehat{\mathcal{L}}_{0}+\widehat{\mathcal{L}}_{b}\), where \(\widehat{\mathcal{L}}_{0}\) is the generator of the deterministic part of the dynamics, and \(\widehat{\mathcal{L}}_{b}\) represents the Markovian boundary dynamics.
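For orientation, the bookkeeping in rules **(1)**-**(4)** can be summarized by the following schematic sketch; here `vhat` is a hypothetical stand-in for the shock speed \(\hat{v}\) of (1.24), and the explicit Euler time stepping deliberately ignores the care needed at the discontinuities of \(\hat{v}\) discussed above, so this illustrates the relabeling rules only and is not a faithful solver.

```python
# A schematic step of the particle dynamics of Definition 5.1(iii).
# xs = [x_1, ..., x_n] (sorted), ys = [y_0, ..., y_n]; `vhat` is a stand-in.
def step_configuration(xs, ys, t, dt, vhat, a_minus, a_plus):
    ys = list(ys)  # avoid mutating the caller's configuration
    # Rule (1): x_i moves with speed -vhat(x_i, t, y_{i-1}, y_i).
    xs = [x - dt * vhat(x, t, ys[i], ys[i + 1]) for i, x in enumerate(xs)]
    # Rule (2): x_1 reached a_-: y_1 becomes the new boundary value y_0.
    while xs and xs[0] <= a_minus:
        xs.pop(0)
        ys.pop(0)
    # Rule (3): x_n reached a_+: the particle (x_n, y_n) is lost.
    while xs and xs[-1] >= a_plus:
        xs.pop()
        ys.pop()
    # Rule (4): collision x_{i+1} = x_i: omit (x_i, y_i) and relabel.
    i = 0
    while i + 1 < len(xs):
        if xs[i + 1] <= xs[i]:
            xs.pop(i)       # drop the left particle of the colliding pair
            ys.pop(i + 1)   # drop its y-value
        else:
            i += 1
    return xs, ys
```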
The deterministic and stochastic dynamics restricted to \(\Delta_{n}\) have generators that are denoted by \(\widehat{\mathcal{L}}_{0n}\) and \(\widehat{\mathcal{L}}_{bn}\) respectively. While \(\mathbf{q}(t)\) remains in \(\Delta_{n}\), its evolution is governed by an ODE of the form \[\frac{d\mathbf{q}}{dt}(t)=\hat{\mathbf{b}}\big{(}\mathbf{q}(t),t\big{)},\] where \(\hat{\mathbf{b}}\) can be easily described with the aid of rule **(1)** of Definition 5.1**(iii)**. We establish Theorem 5.1 by verifying the forward equation \[\dot{\mu}^{n}=\big{(}\widehat{\mathcal{L}}^{*}\mu\big{)}^{n}, \tag{5.1}\] for all \(n\geq 0\), where \(\widehat{\mathcal{L}}^{*}\) is the adjoint of the operator \(\widehat{\mathcal{L}}\). We follow our strategy as in Section 3 and use a test function \(G(\mathbf{q},t)\) which is the analog of what we had in (3.4). Again, our Theorem 5.1 would follow if we can show the analog of (3.5). We follow our notation as in (3.2), and the analog of Theorem 3.1 is also valid when \(\mathcal{L}\) is replaced with \(\widehat{\mathcal{L}}\). The following variant of Proposition 2.1 ensures that our particle system produces the unique entropy solution of (1.1) in the interval \([a_{-},a_{+}]\). **Proposition 5.1**: _The function \(\rho(x,t)=\rho(x,t;\mathbf{q}(t))\), with \(\mathbf{q}(t)\) evolving as in Definition 5.1_**(iii)**_, is the unique entropy solution of \(\rho_{t}=H(x,t,\rho)_{x}\) in \((a_{-},a_{+})\times(0,\infty)\)._ **Proof** As in Section 2, we can readily check that \(\rho(x,t;{\bf q}(t))\) is a weak solution of (1.2) because the Rankine-Hugoniot condition is satisfied. To satisfy the entropy condition, we need to make sure that \(\rho(x{-},t;{\bf q}(t))<\rho(x{+},t;{\bf q}(t))\) at each discontinuity point. This is an immediate consequence of the monotonicity of the fundamental solution that is stated in Proposition 5.2**(ii)** below. The uniqueness of the entropy solution follows from the fact that the end points \(a_{\pm}\) are both free. To see this, assume that \(\rho\) and \(\rho^{\prime}\) are two solutions that are both concatenations of fundamental solutions. We use Kruzhkov's inequality [K] (as in the proof of Proposition 2.1**(iii)**) to assert that weakly, \[|\rho(x,t)-\rho^{\prime}(x,t)|_{t}\leq \big{(}Q(x,t,\rho(x,t),\rho^{\prime}(x,t))\big{)}_{x}\] \[-\ sgn\big{(}\rho(x,t)-\rho^{\prime}(x,t)\big{)}\big{(}H_{x}(x,t,\rho(x,t))-H_{x}(x,t,\rho^{\prime}(x,t))\big{)}\] \[\leq \big{(}Q(x,t,\rho(x,t),\rho^{\prime}(x,t))\big{)}_{x}+c_{1}|\rho(x,t)-\rho^{\prime}(x,t)|,\] where \(Q(x,t,\rho,\rho^{\prime})=sgn(\rho-\rho^{\prime})\big{(}H(x,t,\rho)-H(x,t,\rho^{\prime})\big{)}.\) Here we have used Hypothesis 1.2**(i)** for the second inequality. As a consequence \[\big{[}e^{-c_{1}t}|\rho(x,t)-\rho^{\prime}(x,t)|\big{]}_{t}\leq e^{-c_{1}t}\big{(}Q(x,t,\rho(x,t),\rho^{\prime}(x,t))\big{)}_{x}.\] As in the proof of Proposition 2.1**(iii)**, we can integrate over \([a_{-},a_{+}]\) to assert \[\bigg{[}e^{-c_{1}t}\int_{a_{-}}^{a_{+}}|\rho(x,t)-\rho^{\prime}(x,t)|\ dx\bigg{]}_{t}\leq e^{-c_{1}t}Q(a_{+},t,\rho(a_{+},t),\rho^{\prime}(a_{+},t))\] \[-e^{-c_{1}t}Q(a_{-},t,\rho(a_{-},t),\rho^{\prime}(a_{-},t)).\] We claim that our free boundary conditions at \(a_{\pm}\) imply that the right-hand side is nonpositive.
Indeed, \(\mp H_{\rho}(a_{\pm},t,M(a_{\pm},t;y_{-},y_{+}))\geq 0\) implies \[Q(a_{\pm},t,\rho(a_{\pm},t),\rho^{\prime}(a_{\pm},t))=\mp\big{|}H(a_{\pm},t,\rho(a_{\pm},t))-H(a_{\pm},t,\rho^{\prime}(a_{\pm},t))\big{|}.\] This allows us to assert \[\bigg{[}e^{-c_{1}t}\int_{a_{-}}^{a_{+}}|\rho(x,t)-\rho^{\prime}(x,t)|\ dx\bigg{]}_{t}\leq 0.\] As an immediate consequence we learn that if \(\rho(x,t_{0})=\rho^{\prime}(x,t_{0})\) for all \(x\in[a_{-},a_{+}]\), then \(\rho(x,t)=\rho^{\prime}(x,t)\) for all \((x,t)\in[a_{-},a_{+}]\times[t_{0},T]\). \(\Box\) **Proposition 5.2**: **(i)** _If \(x_{1}<x_{2}\), and \(\xi(\theta;x_{i},t;y,s)\) is a maximizing path in (1.21) for \(x=x_{i}\), then \(\xi(\theta;x_{1},t;y,s)<\xi(\theta;x_{2},t;y,s)\) for \(\theta\in(s,t]\)._ **(ii)** _The fundamental solution \(M(x,t;y,s)\) is increasing in \(y\)._ **(iii)** _Given \(s<T\), and \(\delta\in(0,1)\), there exist positive constants \(C_{0}=C_{0}(s,\delta,T)\), and \(C_{1}=C_{1}(s,\delta,T)\) such that if \(|x|\geq C_{0}\), and \(|y|\leq(1-\delta)|x|\), then \(M(x,t;y,s)\) and \(-x\) have the same sign, and_ \[C_{1}|x|\leq|M(x,t;y,s)|. \tag{5.2}\] **Proof(i)** It is well known that under Hypothesis 2.1**(i)** the following statements are true (see for example [Go]): 1. In (1.21) we may take the supremum over those \(\xi:[s,t]\to\mathbb{R}\) such that \(\xi(s)=y,\ \xi(t)=x\), and \(\xi\) is weakly differentiable with \(\dot{\xi}\in L^{2}\big{(}[s,t];\mathbb{R}\big{)}\). 2. If the supremum is attained at \(\xi\), then necessarily \(\xi\in C^{2}\). 3. The maximizing path \(\xi\) satisfies the Euler-Lagrange equation (5.3) \[\big{(}L_{v}(\xi(\theta),\theta,\dot{\xi}(\theta))\big{)}_{\theta}=L_{x}(\xi(\theta),\theta,\dot{\xi}(\theta)).\] Equivalently, if \(p(\theta)=L_{v}(\xi(\theta),\theta,\dot{\xi}(\theta))\), then the pair \((\xi,p)\) satisfies the Hamiltonian ODE (5.4) \[\dot{\xi}(\theta)=-H_{\rho}(\xi(\theta),\theta,p(\theta)),\ \ \ \ \dot{p}(\theta)=H_{x}(\xi(\theta),\theta,p(\theta)).\] We now take \(x_{1}<x_{2}\), and write \(\xi^{1},\xi^{2}:[s,t]\to\mathbb{R}\) for the maximizing paths in (1.21) for \(x=x_{1}\) and \(x=x_{2}\) respectively. We wish to show that \(\xi^{1}(\theta)\neq\xi^{2}(\theta)\) for every \(\theta\in(s,t)\). We argue by contradiction. Suppose to the contrary \(\xi^{1}(\theta_{0})=\xi^{2}(\theta_{0})\) for some \(\theta_{0}\in(s,t)\). We define \[\eta^{2}(\theta)=\begin{cases}\xi^{1}(\theta),&\theta\in[s,\theta_{0}],\\ \xi^{2}(\theta),&\theta\in[\theta_{0},t],\end{cases}\qquad\eta^{1}(\theta)=\begin{cases}\xi^{2}(\theta),&\theta\in[s,\theta_{0}],\\ \xi^{1}(\theta),&\theta\in[\theta_{0},t].\end{cases} \tag{5.5}\] Since \(\xi^{i}\) maximizes the action, and \(\eta^{i}\) is weakly differentiable with square integrable derivative for \(i=1,2\), we learn \[\int_{s}^{t}L\big{(}\eta^{1}(\theta),\theta,\dot{\eta}^{1}(\theta)\big{)}\ d\theta\leq\int_{s}^{t}L\big{(}\xi^{1}(\theta),\theta,\dot{\xi}^{1}(\theta)\big{)}\ d\theta,\] \[\int_{s}^{t}L\big{(}\eta^{2}(\theta),\theta,\dot{\eta}^{2}(\theta)\big{)}\ d\theta\leq\int_{s}^{t}L\big{(}\xi^{2}(\theta),\theta,\dot{\xi}^{2}(\theta)\big{)}\ d\theta. \tag{5.6}\] Expressing the integrals on the left in terms of \(\xi^{i}\) would lead to \[\int_{s}^{\theta_{0}}L\big{(}\xi^{1}(\theta),\theta,\dot{\xi}^{1}(\theta)\big{)}\ d\theta=\int_{s}^{\theta_{0}}L\big{(}\xi^{2}(\theta),\theta,\dot{\xi}^{2}(\theta)\big{)}\ d\theta.\] This in turn implies that we have equality in (5.6). As a result, \(\eta^{i}\) is also a maximizing path with \(\eta^{i}(t)=x_{i}\).
Hence by (2) above, \(\eta^{i}\) must be \(C^{1}\). This means that we must have \(\dot{\xi}^{1}(\theta_{0})=\dot{\xi}^{2}(\theta_{0})\). Since we also have \(\xi^{1}(\theta_{0})=\xi^{2}(\theta_{0})\), we may use the uniqueness of the solutions to the Euler-Lagrange equation (5.3), to deduce that \(\xi^{1}=\xi^{2}\) on \([s,t]\). This contradicts \(x_{1}=\xi^{1}(t)<x_{2}=\xi^{2}(t)\). As a result, \(\xi^{1}\) and \(\xi^{2}\) cannot intersect in \((s,t]\). Since \(x_{1}<x_{2}\), we must have \(\xi^{1}(\theta)<\xi^{2}(\theta)\) for \(\theta>s\). **(ii)** We take \(y_{1}<y_{2}\), and write \(\xi^{1},\xi^{2}:[s,t]\to\mathbb{R}\) for the maximizing paths in (1.21) for \(z=(y_{1},s)\) and \(z=(y_{2},s)\) respectively. We wish to show that \(\xi^{1}(\theta)\neq\xi^{2}(\theta)\) for every \(\theta\in(s,t)\). We again argue by contradiction. Suppose to the contrary \(\xi^{1}(\theta_{0})=\xi^{2}(\theta_{0})\) for some \(\theta_{0}\in(s,t)\). We define \(\eta^{1}\) and \(\eta^{2}\) as in (5.5). Again, since \(\xi^{1}\) (respectively \(\xi^{2}\)) is a maximizer in (1.21), and that \(\eta^{2}\) (respectively \(\eta^{1}\)) is weakly differentiable with square integrable derivative, we learn \[\int_{s}^{t}L\big{(}\eta^{2}(\theta),\theta,\dot{\eta}^{2}(\theta)\big{)}\ d\theta \leq\int_{s}^{t}L\big{(}\xi^{1}(\theta),\theta,\dot{\xi}^{1}(\theta)\big{)}\ d\theta, \tag{5.7}\] \[\int_{s}^{t}L\big{(}\eta^{1}(\theta),\theta,\dot{\eta}^{1}(\theta)\big{)}\ d\theta \leq\int_{s}^{t}L\big{(}\xi^{2}(\theta),\theta,\dot{\xi}^{2}(\theta)\big{)}\ d\theta.\] Expressing the integrals on the left in terms of \(\xi^{i}\) would lead to \[\int_{\theta_{0}}^{t}L\big{(}\xi^{1}(\theta),\theta,\dot{\xi}^{1}(\theta)\big{)}\ d\theta=\int_{\theta_{0}}^{t}L\big{(}\xi^{2}(\theta),\theta,\dot{\xi}^{2}(\theta)\big{)}\ d\theta.\] This in turn implies that we have equality in (5.7), and that \(\eta^{1}\) (respectively \(\eta^{2}\)) is also a maximizing path with \(\eta^{1}(s)=y_{2}\) (respectively \(\eta^{2}(s)=y_{1}\)). Hence by (2) above, \(\eta^{i}\) must be \(C^{1}\) for \(i=1,2\). This means that we must have \(\dot{\xi}^{1}(\theta_{0})=\dot{\xi}^{2}(\theta_{0})\). By the uniqueness of the solutions to the corresponding Euler-Lagrange equation, we must have \(\xi^{1}=\xi^{2}\) on \([s,t]\). This contradicts \(y_{1}=\xi^{1}(s)<y_{2}=\xi^{2}(s)\). As a result, \(\xi^{1}\) and \(\xi^{2}\) cannot intersect in \((s,t)\). Since \(y_{1}<y_{2}\), we must have \(\xi^{1}(\theta)<\xi^{2}(\theta)\) for \(\theta<t\). This in particular implies that \(\dot{\xi}^{1}(t)\geq\dot{\xi}^{2}(t)\). Moreover \(\dot{\xi}^{1}(t)=\dot{\xi}^{2}(t)\) would imply \(\xi^{1}=\xi^{2}\) by the uniqueness of the solutions to the corresponding equation (5.3). As a result, we must have \(\dot{\xi}^{1}(t)>\dot{\xi}^{2}(t)\). This, (1.22), and the strict concavity of \(L\) in \(v\) imply the desired inequality \(M(x,t;y_{1},s)<M(x,t;y_{2},s)\). **(iii)** Recall that the pair \((\xi,p)\) satisfies (5.4), and the boundary conditions \[\xi(s)=y,\ \ \ \ \xi(t)=x,\ \ \ \ p(t)=M(x,t;y,s). \tag{5.8}\] From (5.3) and Hypothesis 1.2**(i)**, we learn \[|\dot{p}(\theta)|=\big{|}\big{(}L_{v}(\xi(\theta),\theta,\dot{\xi}(\theta))\big{)}_{\theta}\big{|}=\big{|}L_{x}(\xi(\theta),\theta,\dot{\xi}(\theta))\big{|}\leq c_{1},\] which in turn implies \[|p(\theta)-p(t)|\leq c_{1}(t-s), \tag{5.9}\] for \(\theta\in[s,t]\). This, and (5.4) imply \[|\dot{\xi}(\theta)|=\big{|}H_{\rho}(\xi(\theta),\theta,p(\theta))\big{|}\leq c_{0}c_{2}^{-1}+c_{2}^{-1}|p(\theta)|\leq c(1+|p(t)|),\] for a constant \(c=c(s,T)\).
Here we used \[-c_{0}+c_{2}\big{|}H_{\rho}(x,\theta,\rho)\big{|}\leq|\rho|\leq c_{0}+c_{1}\big{|}H_{\rho}(x,\theta,\rho)\big{|}, \tag{5.10}\] which follows from Hypothesis 1.2**(i)**. As a result, \[|x-y|=|\xi(t)-\xi(s)|\leq c^{\prime}(1+|p(t)|),\] for a positive constant \(c^{\prime}=c^{\prime}(s,T)\). On the other hand, if \(|y|\leq(1-\delta)|x|\), then we deduce \[|x|\leq c^{\prime}\delta^{-1}(1+|p(t)|). \tag{5.11}\] We next claim that there exists a constant \(C_{0}\) such that \[|x|\geq C_{0}\quad\implies\quad xp(t)<0. \tag{5.12}\] To see this, observe that by (5.10) and the monotonicity of \(\rho\mapsto H_{\rho}(x,\theta,\rho)\), we can find a constant \(c^{\prime\prime}\) such that \[|\rho|\geq c^{\prime\prime}\quad\Longrightarrow\quad H_{\rho}(x,\theta,\rho)\rho\geq 0, \tag{5.13}\] for all \((x,\theta)\). Let us assume that \(x\geq C_{0}\), for a positive constant \(C_{0}\) (to be determined later). Suppose, contrary to (5.12), that \(p(t)\geq 0\). From (5.11) we deduce \[p(t)\geq(c^{\prime})^{-1}\delta x-1.\] This and (5.9) imply \[p(\theta)\geq(c^{\prime})^{-1}\delta x-1-c_{1}(t-s)\geq(c^{\prime})^{-1}\delta C_{0}-1-c_{1}(T-s), \tag{5.14}\] for all \(\theta\in(s,t)\). Choose \(C_{0}\) large enough so that the right-hand side of (5.14) is at least \(c^{\prime\prime}\). This would guarantee \[H_{\rho}(\xi(\theta),\theta,p(\theta))\geq 0,\quad\mbox{ for }\ \theta\in[s,t],\] by (5.13). From this and (5.4) we deduce that \(\dot{\xi}(\theta)\leq 0\) for \(\theta\in[s,t]\). As a result, \(x-y=\xi(t)-\xi(s)\leq 0\). But this is impossible if \(y\leq(1-\delta)x\). Hence the condition \(x\geq C_{0}\) implies that \(p(t)<0\). In the same fashion, we can show that the condition \(x\leq-C_{0}\) implies that \(p(t)>0\). This completes the proof of (5.12). From this, (5.8), and (5.11), we can readily deduce (5.2). \(\Box\) We next give a recipe for the law of the process \({\bf y}_{t}\). **Definition 5.2(i)** We set \[\Gamma(a,b,t,y)=\int_{a}^{b}\hat{A}(g)(z,t,y)\ dz,\ \ \ \ \Gamma({\bf q},t)=\sum_{i=0}^{n}\Gamma(x_{i},x_{i+1},t,y_{i}).\] **(ii)** We define a measure \(\mu(d{\bf q},t)\) on the set \(\Delta\) that is our candidate for the law of \({\bf q}(t)\). The restriction of \(\mu\) to \(\Delta_{n}\) is given by \[\mu^{n}(d{\bf q},t):=\ell(t,dy_{0})\exp\left\{-\Gamma({\bf q},t)\right\}\prod_{i=1}^{n}\ g\bigl{(}x_{i},t,y_{i-1},y_{i})\ dx_{i}dy_{i},\] where \(g\) solves (1.25) and \(\ell\) solves (1.26). To simplify our presentation, we assume that \(\ell(t,dy_{0})=\ell(t,y_{0})\ dy_{0}\) is absolutely continuous with respect to the Lebesgue measure. Such an assumption would allow us to express \(\mu^{n}(d{\bf q},t):=\mu^{n}({\bf q},t)\ d{\bf q}\), where \[d{\bf q}=dy_{0}\ \prod_{i=1}^{n}dx_{i}dy_{i},\ \ \ \ \mu^{n}({\bf q},t)=\ell(t,y_{0})\exp\left\{-\Gamma({\bf q},t)\right\}\prod_{i=1}^{n}\ g\bigl{(}x_{i},t,y_{i-1},y_{i}).\] \(\Box\) **Proposition 5.3**: _Let \(g\) be a solution of (1.25). Then \(\hat{A}(g^{1})_{t}=\hat{A}(g^{2})_{x}\)._ **Proof** Integrating both sides of (1.25) with respect to \(y_{+}\), we learn \[\hat{A}(g^{1})_{t}-\hat{A}(g^{2})_{x}=\hat{A}\bigl{(}\hat{Q}^{+}(g)\bigr{)}-\hat{A}\bigl{(}g\hat{J}(g)\bigr{)}. \tag{5.15}\]
On the other hand, \[\hat{A}\bigl{(}\hat{Q}^{+}(g)\bigr{)}(y_{-})= \int g^{1}(y_{-},y_{*})\hat{A}(g^{2})(y_{*})\ dy_{*}-\int g^{2}(y_{-},y_{*})\hat{A}(g^{1})(y_{*})\ dy_{*},\] \[\hat{A}\bigl{(}g\hat{J}(g)\bigr{)}(y_{-})= \int g^{1}(y_{-},y_{*})\hat{A}(g^{2})(y_{*})\ dy_{*}-\hat{A}(g^{2})(y_{-})\hat{A}(g^{1})(y_{-})\] \[-\int g^{2}(y_{-},y_{*})\hat{A}(g^{1})(y_{*})\ dy_{*}+\hat{A}(g^{1})(y_{-})\hat{A}(g^{2})(y_{-}).\] This implies that the right-hand side of (5.15) is \(0\). \(\Box\) We are now ready to present the proof of Theorem 5.1, which is similar to the proof of Theorem 2.1. **Proof of Theorem 5.1** We wish to establish the analog of (4.1) in our setting. Theorem 3.1, and a repetition of the first step of the proof of Theorem 2.1 allow us to reduce the proof of Theorem 5.1 to the verification of an analog of (4.4), namely \[\int_{\Delta_{n}}\hat{G}^{n}({\bf q},s)\ \mu_{s}^{n}(d{\bf q},s)= \int_{\Delta_{n}}\hat{G}^{n}({\bf q},s)\big{(}\widehat{\mathcal{L}}_{0n}^{s*}\mu^{n}\big{)}({\bf q},s)\ d{\bf q} \tag{5.16}\] \[+\int_{\hat{\partial}\Delta_{n+1}}\hat{G}^{n+1}({\bf q},s)\mu^{n+1}({\bf q},s)({\bf b}_{n+1}\cdot{\bf N}_{n+1})\ \sigma(d{\bf q}),\] with \(\hat{G}\) as in (3.5). For a more tractable expression for the left hand side of (5.16), we write \[\mu_{s}^{n}=X^{n}\mu^{n}, \tag{5.17}\] where \[X^{n}=-\Gamma_{s}({\bf q},s)+\frac{\ell_{s}(s,y_{0})}{\ell(s,y_{0})}+\sum_{i=1}^{n}\frac{g_{s}(x_{i},s,y_{i-1},y_{i})}{g(x_{i},s,y_{i-1},y_{i})}. \tag{5.18}\] On the other hand, by Proposition 5.3, \[\Gamma_{s}({\bf q},s)= \sum_{i=0}^{n}\int_{x_{i}}^{x_{i+1}}\hat{A}(g^{1})_{s}(z,s,y_{i})\ dz=\sum_{i=0}^{n}\int_{x_{i}}^{x_{i+1}}\hat{A}(g^{2})_{z}(z,s,y_{i})\ dz\] \[= \sum_{i=0}^{n}\big{(}\hat{A}(g^{2})(x_{i+1},s,y_{i})-\hat{A}(g^{2})(x_{i},s,y_{i})\big{)}\] \[= \hat{A}(g^{2})(x_{n+1},s,y_{n})-\hat{A}(g^{2})(x_{0},s,y_{0})-\sum_{i=1}^{n}\big{(}\hat{A}(g^{2})(x_{i},s,y_{i})-\hat{A}(g^{2})(x_{i},s,y_{i-1})\big{)}.\] From this, (5.18), (1.26), and (1.25) we deduce \[X^{n}= -\hat{A}(g^{2})(a_{+},s,y_{n})+\frac{(\ell*g^{2})(a_{-},s,y_{0})}{\ell(s,y_{0})}+\sum_{i=1}^{n}\frac{\hat{Q}^{+}(g)(x_{i},s,y_{i-1},y_{i})}{g(x_{i},s,y_{i-1},y_{i})} \tag{5.19}\] \[+\sum_{i=1}^{n}\hat{v}(x_{i},s,y_{i-1},y_{i})\big{(}\hat{A}(g)(x_{i},s,y_{i})-\hat{A}(g)(x_{i},s,y_{i-1})\big{)}.\] We can rewrite the right-hand side of (5.16) as \[\int_{\Delta_{n}}\hat{G}^{n}({\bf q},s)Y^{n}({\bf q})\mu^{n}({\bf q},s)\ d{\bf q},\] where \(Y^{n}=Y^{n}_{1}+Y^{n}_{2}\), with \(Y^{n}_{1}\) and \(Y^{n}_{2}\) corresponding to the two terms on the right-hand side of (5.16). Indeed, an integration by parts yields \[Y^{n}_{1}(\mathbf{q})= -\sum_{i=1}^{n}\hat{v}(x_{i},s,y_{i-1},y_{i})\Gamma(\mathbf{q},s)_{x_{i}} \tag{5.20}\] \[= \sum_{i=1}^{n}\hat{v}(x_{i},s,y_{i-1},y_{i})\big{(}\hat{A}(g)(x_{i},s,y_{i})-\hat{A}(g)(x_{i},s,y_{i-1})\big{)}.\] As for \(Y^{n}_{2}\), we write \(Y^{n}_{2}=Y^{n}_{2-}+Y^{n}_{2*}+Y^{n}_{2+}\), where the terms \(Y^{n}_{2-},\ Y^{n}_{2*}\), and \(Y^{n}_{2+}\) correspond to the boundary contributions associated with the conditions \(x_{1}=a_{-}\), \(x_{i}=x_{i+1}\), with \(i\in\{1,\ldots,n\}\), and \(x_{n+1}=a_{+}\), respectively. More precisely, \[Y^{n}_{2-}(\mathbf{q})= \frac{(\ell*g^{2})(a_{-},s,y_{0})}{\ell(s,y_{0})},\ \ \ \ \ Y^{n}_{2+}(\mathbf{q})=-\hat{A}(g^{2})(a_{+},s,y_{n}),\] \[Y^{n}_{2*}(\mathbf{q})= \sum_{i=1}^{n}\frac{\hat{Q}^{+}(g)(x_{i},s,y_{i-1},y_{i})}{g(x_{i},s,y_{i-1},y_{i})}.\] This, (5.17), (5.19), and (5.20) complete the proof of (5.16). \(\square\)
## 6 Proofs of Proposition 1.1 and Theorem 1.3 **Proof of Proposition 1.1** Let us write \[\mathcal{K}(g)=\nabla\cdot(\nu g)-\hat{Q}^{+}(g)+\hat{Q}^{-}(g),\] where \(\nu=(-\hat{v},1)\), and \(\nabla=(\partial_{x},\partial_{t})\). To ease the notation, we do not display the dependence of \(g,\ \hat{g},\eta\), and \(h\) on \((x,t)\). We certainly have \[\frac{\hat{Q}^{+}(\hat{g})}{\hat{g}}-\frac{\hat{Q}^{+}(g)}{g}=0,\] \[\left(\frac{\nabla\cdot(\nu\hat{g})}{\hat{g}}-\frac{\nabla\cdot(\nu g)}{g}\right)(y_{-},y_{+})=\left(\nu\cdot\frac{\nabla\eta}{\eta}\right)(y_{-},y_{+})=\nu\cdot\frac{\nabla h}{h}(y_{+})-\nu\cdot\frac{\nabla h}{h}(y_{-}),\] \[\frac{\hat{Q}^{-}(g)}{g}(y_{-},y_{+})=\hat{A}(\hat{v}g)(y_{+})-\hat{A}(\hat{v}g)(y_{-})-\hat{v}(y_{-},y_{+})\left(\hat{A}(g)(y_{+})-\hat{A}(g)(y_{-})\right),\] \[\frac{\hat{Q}^{-}(\hat{g})}{\hat{g}}(y_{-},y_{+})=\frac{\hat{A}(\hat{v}g\otimes h)}{h}(y_{+})-\frac{\hat{A}(\hat{v}g\otimes h)}{h}(y_{-}).\] Hence \[\left(\frac{\mathcal{K}(\hat{g})}{\hat{g}}-\frac{\mathcal{K}(g)}{g}\right)(y_{-},y_{+})= \nu(y_{-},y_{+})\cdot\left(\frac{\nabla h}{h}(y_{+})-\frac{\nabla h}{h}(y_{-})\right)+\left(\frac{\hat{Q}^{-}(\hat{g})}{\hat{g}}-\frac{\hat{Q}^{-}(g)}{g}\right)(y_{-},y_{+})\] \[= \frac{h_{t}+\mathcal{B}^{2}h}{h}(y_{+})-\frac{h_{t}+\mathcal{B}^{2}h}{h}(y_{-})\] \[-\hat{v}(y_{-},y_{+})\left[\frac{h_{x}+\mathcal{B}^{1}h}{h}(y_{+})-\frac{h_{x}+\mathcal{B}^{1}h}{h}(y_{-})\right].\] The right-hand side is \(0\), when \(h\) satisfies (1.31). This completes the proof because \(g\) (respectively \(\hat{g}\)) solves (1.25) if and only if \(\mathcal{K}(g)=0\) (respectively \(\mathcal{K}(\hat{g})=0\)). \(\square\) The proof of Theorem 1.3 uses Doob's \(h\)-transform, which we now recall. **Proposition 6.1**: _Let \(\mathbb{P}\) be the law of a Markov jump process \(\big{(}\mathbf{y}(x):\ x\in[a_{-},a_{+}]\big{)}\), with the jump kernel density \(g(x,y_{-},y_{+})\), and the generator \(\mathcal{L}_{x}\). Assume that \(g\) is \(C^{1}\) in \(x\). Let \(U\) be an interval, and let \(\widehat{\mathbb{P}}\) denote the law of \(\mathbb{P}\), conditioned on the event \(\mathbf{y}(x)\in U\) for all \(x\in[a_{-},a_{+}]\). Then \(\widehat{\mathbb{P}}\) is the law of a Markov jump process with a jump kernel density \(\hat{g}\), given by_ \[\hat{g}(x,y_{-},y_{+})=\frac{h(x,y_{+})}{h(x,y_{-})}\ g(x,y_{-},y_{+}),\qquad x\in[a_{-},a_{+}),\ \ y_{\pm}\in U, \tag{6.1}\] _where_ \[h(x,y)=\mathbb{P}\big{(}\mathbf{y}(a)\in U\ \ \ \mbox{for}\ \ a\in[x,a_{+}]\ |\ \mathbf{y}(x)=y\big{)}. \tag{6.2}\] _Moreover, \(h\) is \(C^{1}\) in \(x\), and satisfies_ \[h_{x}+\mathcal{L}_{x}h=0. \tag{6.3}\]
**Proof**_(Step 1)_ We can write \[h(x,y)=\sum_{n=0}^{\infty}h_{n}(x,y), \tag{6.4}\] where \[h_{n}(x,y)=\int_{X_{n}(x,y)}\mu^{n}(\mathbf{q}_{n},x,y)\ d\mathbf{q}_{n} \tag{6.5}\] where for \(n\geq 1\), \[\mathbf{q}_{n}=(x_{1},y_{1},\ldots,x_{n},y_{n}),\ \ \ \ \ d\mathbf{q}_{n}=\prod_{i=1}^{n}dx_{i}dy_{i},\] \[\mu^{n}(\mathbf{q}_{n},x,y)=\exp\left\{-\Gamma(\mathbf{q}_{n},x,y)\right\}\prod_{i=1}^{n}\ g\big{(}x_{i},y_{i-1},y_{i}),\ \ \ \text{with}\ \ \ y_{0}=y,\] \[\Gamma(\mathbf{q}_{n},x,y)=\sum_{i=0}^{n}\Gamma(x_{i},x_{i+1},y_{i}),\ \ \ \ \ \text{with}\ \ \ \ \ x_{0}=x,\ y_{0}=y,\] \[\Gamma(a,b,y)=\int_{a}^{b}(\hat{A}g)(z,y)\ dz,\] and the set \(X_{n}(x,y)\) consists of \(\mathbf{q}_{n}\) satisfying \[x<x_{1}<\cdots<x_{n}<a_{+},\ \ \ \ y_{1},\ldots,y_{n}\in U.\] When \(n=0\), we simply have \(h_{0}(x,y)=\exp\big{\{}-\Gamma(x,a_{+},y)\big{\}}.\) It is straightforward to verify the continuous differentiability of \(h\), and to deduce (6.3) from (6.4) and (6.5). _(Step 2)_ The law \(\hat{P}\) is simply given by \[\hat{P}=\sum_{n=1}^{\infty}\hat{\mu}_{n},\] where \(\hat{\mu}_{n}(d\mathbf{q}_{n})=\hat{\mu}_{n}(\mathbf{q}_{n})\ d\mathbf{q}_{n}\), with \[\hat{\mu}_{n}(\mathbf{q}_{n})=h(a_{-},y)^{-1}\ \mu^{n}(\mathbf{q}_{n})\ 1\!\!1\big{(}y_{1},\ldots,y_{n}\in U\big{)}. \tag{6.6}\] We wish to show that \(\hat{P}\) is the law of a jump process associated with the jump density \(\hat{g}\). To achieve this, we rewrite \(\hat{\mu}_{n}\) using the fact that \(h\) satisfies (6.3). Indeed, (6.3) implies \[e^{-\int_{a}^{b}\hat{A}(g)(z,y_{-})dz}\ \frac{h(b,y_{+})}{h(a,y_{-})}=e^{-\int_{a}^{b}\hat{A}(\hat{g})(z,y_{-})dz}\ \frac{h(b,y_{+})}{h(b,y_{-})}. \tag{6.7}\] This is equivalent to asserting \[\frac{h(b,y_{-})}{h(a,y_{-})}= \exp\left(-\int_{a}^{b}\big{(}\hat{A}(\hat{g})-\hat{A}(g)\big{)}(z,y_{-})\ dz\right)\] \[= \exp\left(-\int_{a}^{b}\left(\frac{\hat{A}(g\otimes h)-\hat{A}(g)h}{h}\right)(z,y_{-})\ dz\right)\] \[= \exp\left(\int_{a}^{b}\frac{-(\mathcal{L}_{z}h)(z,y_{-})}{h(z,y_{-})}\ dz\right)=\exp\left(\int_{a}^{b}\frac{h_{z}(z,y_{-})}{h(z,y_{-})}\ dz\right)\] \[= \exp\left(\int_{a}^{b}(\log h)_{z}(z,y_{-})\ dz\right), \tag{6.8}\] which is evidently true. We set \(x_{0}=a_{-},\ y_{0}=y\) as before. Observe that \(\hat{\mu}_{n}({\bf q}_{n})\) of (6.6) can be written as \[\frac{1}{h(a_{+},y_{n})}\ e^{-\int_{x_{n}}^{a_{+}}\hat{A}(g)(z,y_{n})dz}\ \frac{h(a_{+},y_{n})}{h(x_{n},y_{n})}\ \prod_{i=1}^{n}e^{-\int_{x_{i-1}}^{x_{i}}\hat{A}(g)(z,y_{i-1})dz}\ \frac{h(x_{i},y_{i})}{h(x_{i-1},y_{i-1})}g(x_{i},y_{i-1},y_{i})\] \[\qquad=\frac{1}{h(a_{+},y_{n})}\ e^{-\int_{x_{n}}^{a_{+}}\hat{A}(g)(z,y_{n})dz}\prod_{i=1}^{n}e^{-\int_{x_{i-1}}^{x_{i}}\hat{A}(\hat{g})(z,y_{i-1})dz}\ \frac{h(x_{i},y_{i})}{h(x_{i},y_{i-1})}g(x_{i},y_{i-1},y_{i})\] \[\qquad=e^{-\int_{x_{n}}^{a_{+}}\hat{A}(\hat{g})(z,y_{n})dz}\prod_{i=1}^{n}e^{-\int_{x_{i-1}}^{x_{i}}\hat{A}(\hat{g})(z,y_{i-1})dz}\ \hat{g}(x_{i},y_{i-1},y_{i}),\] where we used (6.8) and (6.7) for the first equality, and for the last equality we used the definition of \(\hat{g}\), and \(h(a_{+},y_{n})=1\), which follows from the definition of \(h\). The right-hand side is the law of a Markov jump process associated with the kernel density \(\hat{g}\), as desired. \(\Box\) **Proof of Theorem 1.3**_(Step 1)_ Recall that \(\rho(x,t)\) is the solution of (1.2) with the initial condition \(\rho(x,t_{0})=\rho(x,t_{0};{\bf y}_{t_{0}},s)\), where \({\bf y}_{t_{0}}\) is a jump process associated with the kernel \(g(x,t_{0},y_{-},y_{+})\).
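(As a numerical aside on Proposition 6.1: the \(h\)-transform (6.1)-(6.3) is easy to visualize in a finite-state caricature. The sketch below is only illustrative and is not a construction from this paper; the state space, the window \(U\), the random rates, and the use of an \(x\)-homogeneous generator are all assumptions made for the sake of the example. It computes \(h\) as a survival probability and checks the analog of (6.3) by finite differences.)

```python
# Finite-state sketch of the Doob h-transform of Proposition 6.1.
# The jump process lives on states {0,...,d-1}; we condition it to stay in U.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
d = 5                        # toy state space {0,...,4} (assumption)
U = [0, 1, 2]                # window the process must stay in (assumption)
a_minus, a_plus = 0.0, 1.0

# Random generator matrix Q: nonnegative off-diagonal jump rates, rows
# summing to zero (the analog of the kernel g and the intensity A-hat(g)).
Q = rng.uniform(0.0, 1.0, (d, d))
np.fill_diagonal(Q, 0.0)
Q -= np.diag(Q.sum(axis=1))

# Sub-generator on U: its diagonal keeps the full exit rate, so that
# h(x) = exp((a_plus - x) Q_U) @ 1 is the probability of staying in U
# on [x, a_plus] -- the finite-state analog of (6.2).
Q_U = Q[np.ix_(U, U)].copy()

def h(x):
    return expm((a_plus - x) * Q_U) @ np.ones(len(U))

# Analog of (6.3): h_x + Q_U h = 0; checked by central finite differences.
x, eps = 0.3, 1e-6
h_x = (h(x + eps) - h(x - eps)) / (2 * eps)
assert np.allclose(h_x, -Q_U @ h(x), atol=1e-6)

# The h-transform (6.1): conditioned jump rates within U.
hx = h(x)
g_hat = Q[np.ix_(U, U)] * np.outer(1.0 / hx, hx)
np.fill_diagonal(g_hat, 0.0)   # only off-diagonal entries are jump rates
print("conditioned jump rates at x =", x, "\n", g_hat)
```

Note how jumps toward states with larger survival probability are enhanced by the factor \(h(x,y_{+})/h(x,y_{-})\), exactly as in (6.1).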
We wish to show that \(\rho(x,t)=\rho(x,t;{\bf y}_{t},s)\) for \((x,t)\in[a_{-},\infty)\times[t_{0},T]\). It suffices to verify this for \((x,t)\in[a_{-},a_{+}]\times[t_{0},T]\), where \(a_{+}\) is any large number in \((a_{-},\infty)\). Pick any \(\delta\in(0,1)\). From Proposition 5.2**(iii)**, and (5.13), we learn that there exist constants \(C_{0}=C_{0}(\delta)\), \(C_{1}=C_{1}(\delta)\), and \(C_{2}\) such that \[|y_{+}|\leq(1-\delta)a_{+},\quad a_{+}\geq C_{0}\quad\Longrightarrow\quad M(a_{+},t;y_{+},s)<-C_{1}a_{+}, \tag{6.9}\] \[(x,\theta)\in\mathbb{R}\times[s,T],\quad|\rho|\geq C_{2}\quad\Longrightarrow\quad\rho H_{\rho}(x,\theta,\rho)>0, \tag{6.10}\] for every \(t\in[t_{0},T]\). Note that (6.9) and Proposition 5.2**(ii)** imply that if \(a_{+}\geq C_{0}\), then \[y_{-}<y_{+},\ \ |y_{+}|<(1-\delta)a_{+},\quad\Longrightarrow\quad M(a_{+},t;y_{-},s)<M(a_{+},t;y_{+},s)<-C_{1}a_{+}.\] From this and (6.2) we can readily deduce \[Y_{-}\leq y_{-}<y_{+}\leq(1-\delta)a_{+},\quad a_{+}\geq C_{3}\quad\Longrightarrow\quad\hat{v}(x,t,y_{-},y_{+})>0, \tag{6.11}\] for every \(t\in[t_{0},T]\), where \[C_{3}=\max\big{\{}C_{0},C_{1}^{-1}C_{2},|Y_{-}|\big{\}}.\] We pick any \(a_{+}\geq\max\{a_{-},C_{3}\}\). _(Step 2)_ We write \(\mathbb{W}\) for the law of the Markov process \(({\bf w}(t):\ t\in[t_{0},T])\), associated with the generator \({\cal B}^{2}_{a_{-},t}\), such that \({\bf w}(t_{0})=y^{0}\). We also define a family of probability measures \(\big{(}\mathbb{P}_{t}:t\in[t_{0},T]\big{)}\) with the following recipe: For each \(t\), \(\mathbb{P}_{t}\) is the law of the Markov process \(\mathbf{y}_{t}:[a_{-},a_{+}]\to[Y_{-},\infty)\), associated with the generator \(\mathcal{B}^{1}_{x,t}\), satisfying the initial condition \(\mathbf{y}_{t}(a_{-})=\mathbf{w}(t)\). We define \(\theta_{\delta}\) to be the smallest \(t>t_{0}\) such that \(\mathbf{w}(t)\notin U(\delta)\), where \[U(\delta):=[Y_{-},(1-\delta)a_{+})=:[Y_{-},Y^{\delta}).\] We also define \(\tau_{\delta}(t)\) to be the smallest \(x>a_{-}\) such that \(\mathbf{y}_{t}(x)\notin U(\delta)\). We set \[\mathbb{W}^{\delta}(A)=\mathbb{W}(A\ |\ \theta_{\delta}>T),\ \ \ \ \mathbb{P}^{\delta}_{t}(A)=\mathbb{P}_{t}(A\ |\ \tau_{\delta}(t)>a_{+}).\] We write \(\mathbf{w}^{\delta}(t),\ t\in[t_{0},T]\) for the jump process that is distributed according to \(\mathbb{W}^{\delta}\). For each \(t\in[t_{0},T]\), we write \(\mathbf{y}^{\delta}_{t}\) for the jump process that is distributed according to \(\mathbb{P}^{\delta}_{t}\). By Proposition 6.1, the process \(t\to\mathbf{w}^{\delta}(t)\), and the processes \(x\mapsto\mathbf{y}^{\delta}_{t}(x),\ t\in[t_{0},T]\) are again Markov jump processes. We set \[h^{\delta}(x,t,y) =\mathbb{P}_{t}\big{(}\tau_{\delta}(t)>a_{+}\ |\ \mathbf{y}_{t}(x)=y\big{)},\] \[\ell^{\delta}(t,y) =\mathbb{W}\big{(}\theta_{\delta}>T\ |\ \mathbf{w}(t)=y\big{)}.\] By (6.3), we have the following equations for \(h^{\delta}\) and \(\ell^{\delta}\): \[h^{\delta}_{x}(x,t,y)+\big{(}\mathcal{B}^{1}_{x,t}h^{\delta}\big{)}(x,t,y)=0,\qquad x\in(a_{-},a_{+}),\ y\in U(\delta),\ t\in[t_{0},T], \tag{6.12}\] \[\ell^{\delta}_{t}(t,y)+\big{(}\mathcal{B}^{2}_{a_{-},t}\ell^{\delta}\big{)}(t,y)=0,\qquad t\in(t_{0},T),\ y\in U(\delta). \tag{6.13}\]
Since \(h^{\delta}(a_{-},t,y)=\ell^{\delta}(t,y)\), the equations (6.12) and (6.13) allow us to apply Proposition 1.2 to assert that \(h^{\delta}\) satisfies \[h^{\delta}_{x}(x,t,y)+\big{(}\mathcal{B}^{1}_{x,t}h^{\delta}\big{)}(x,t,y)=0,\ \ \ \ h^{\delta}_{t}(x,t,y)+\big{(}\mathcal{B}^{2}_{x,t}h^{\delta}\big{)}(x,t,y)=0. \tag{6.14}\] _(Step 3)_ Since the jump process \(\mathbf{y}^{\delta}_{t_{0}}\) takes values in \([Y_{-},(1-\delta)a_{+})\), our Theorem 1.2 or Theorem 5.1 is applicable. More precisely, if the initial data of (1.2) is given by \(\rho(x,t_{0};\mathbf{y}^{\delta}_{t_{0}},s)\), for some \(s<t_{0}\), and with \(\mathbf{y}^{\delta}_{t_{0}}\) a Markov process distributed according to \(\mathbb{P}^{a_{+},\delta}_{t_{0}}\), then the solution \(\rho(x,t)\) at a later time \(t>t_{0}\) is given by \(\mathbb{P}^{\delta}_{t}\). As a consequence, Theorem 1.3 is true when the initial process satisfies \(\mathbf{y}_{t_{0}}(a_{+})\in U(\delta)\). This condition is true with probability \(h^{\delta}(a_{-},t_{0},y^{0})\). The condition (1.29) would imply that \(h^{\delta}(a_{-},t_{0},y^{0})\to 1\), and \(\mathbf{y}^{\delta}_{t}\to\mathbf{y}_{t}\) in the small \(\delta\) limit, when restricted to the interval \([a_{-},a_{+}]\). This completes the proof. \(\Box\)
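As a closing computational aside: for each fixed \(t\), the density \(\mu^{n}\) of Definition 5.2 is the path density of a Markov jump process in \(x\), with total jump intensity \(\hat{A}(g)\) (supplying the exponential survival factor \(\exp\{-\Gamma\}\)) and jump distribution of density \(g/\hat{A}(g)\). The toy sampler below makes this concrete for an \(x\)-homogeneous kernel chosen purely for illustration; it is not one of the kernels solving (1.25).

```python
# Sampling a jump process whose path density has the structure of mu^n
# in Definition 5.2.  Toy kernel (an assumption, not from the paper):
# g(y-, y+) = exp(-(y+ - y-)) for y+ > y-, so the total jump intensity
# A-hat(g)(y) = \int_y^\infty g(y, u) du = 1 for every y.
import numpy as np

rng = np.random.default_rng(1)
a_minus, a_plus = 0.0, 5.0

def sample_path(y0=0.0):
    """One path of the jump process on [a_minus, a_plus]."""
    xs, ys = [a_minus], [y0]
    x, y = a_minus, y0
    while True:
        x += rng.exponential(1.0)   # waiting "time": total intensity = 1
        if x >= a_plus:
            break
        y += rng.exponential(1.0)   # jump size drawn from g(y, .)/A-hat(g)(y)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Sanity check: for unit intensity, the mean number of jumps on the
# interval equals its length a_plus - a_minus.
counts = [len(sample_path()[0]) - 1 for _ in range(20000)]
print("mean number of jumps:", np.mean(counts))   # close to 5.0
```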
2309.04928
On polynomial symmetry algebras underlying superintegrable systems in Darboux spaces
We review three different approaches to polynomial symmetry algebras underlying superintegrable systems in Darboux spaces. The first method consists of using the deformed oscillator algebra to obtain finite-dimensional representations of quadratic algebras. This allows one to gain information on the spectrum of the superintegrable systems. The second method has similarities with the induced module construction approach in the context of Lie algebras and can be used to construct infinite dimensional representations of the symmetry algebras. Explicit construction of these representations is a non-trivial task due to the non-linearity of the polynomial algebras. This method allows the construction of states of the superintegrable systems beyond the reach of separation of variables. As a result, we are able to construct a large number of states in terms of Airy, Bessel and Whittaker functions which would be difficult to obtain in other ways. We also discuss the third approach which is based on the notion of commutants of subalgebras in the enveloping algebra of a Poisson algebra or a Lie algebra. This allows us to discover new superintegrable models in the Darboux spaces and to construct their integrals and symmetry algebras via polynomials in the enveloping algebras.
Ian Marquette, Junze Zhang, Yao-Zhong Zhang
2023-09-10T03:36:36Z
http://arxiv.org/abs/2309.04928v1
# On polynomial symmetry algebras underlying superintegrable systems in Darboux spaces ###### Abstract We review three different approaches to polynomial symmetry algebras underlying superintegrable systems in Darboux spaces. The first method consists of using the deformed oscillator algebra to obtain finite-dimensional representations of quadratic algebras. This allows one to gain information on the spectrum of the superintegrable systems. The second method has similarities with the induced module construction approach in the context of Lie algebras and can be used to construct infinite dimensional representations of the symmetry algebras. Explicit construction of these representations is a non-trivial task due to the non-linearity of the polynomial algebras. This method allows the construction of states of the superintegrable systems beyond the reach of separation of variables. As a result, we are able to construct a large number of states in terms of Airy, Bessel and Whittaker functions which would be difficult to obtain in other ways. We also discuss the third approach which is based on the notion of commutants of subalgebras in the enveloping algebra of a Poisson algebra or a Lie algebra. This allows us to discover new superintegrable models in the Darboux spaces and to construct their integrals and symmetry algebras via polynomials in the enveloping algebras. **1. Introduction** Finite- and infinite-dimensional representations of symmetry algebras play a significant role in determining the spectral properties of physical Hamiltonians. In [1, 2] we focused on the representations of polynomial symmetry algebras underlying superintegrable systems in 2D Darboux spaces. As a result, we are able to construct a large number of states in terms of the Airy, Bessel and Whittaker functions which would be difficult to obtain in other ways. Let \(({\cal M},g_{ij})\) be a smooth manifold with a metric tensor over \(\mathbb{C}\). Suppose that \({\cal M}\) admits local separable coordinates \((x_{1},\ldots,x_{n})\). Let \[\hat{\cal H}=\sum_{i,j=1}^{n}\frac{1}{\sqrt{\det(g_{ij})}}\frac{\partial}{\partial x_{i}}\left(\sqrt{\det(g_{ij})}\,g^{ij}\frac{\partial}{\partial x_{j}}\right)+V(x_{1},\ldots,x_{n})\in\Gamma\left(T^{*}{\cal M}\right) \tag{1}\] be the Hamiltonian of a superintegrable system in \(({\cal M},g_{ij})\), where \(g^{ij}\) denotes the inverse metric, and \(S_{m}=\{\hat{\cal H},\hat{X}_{1},\ldots,\hat{X}_{m}\}\) be a set of integrals of motion. Let \(\hat{\cal Q}(d)\) denote the polynomial algebra of order \(d\) over the polynomial ring \(\mathbb{C}[\hat{\mathcal{H}}],\) generated by the integrals from \(S_{m}\) with the following non-trivial commutators \[[\hat{X}_{s},\hat{X}_{t}]=\sum_{q}f_{q}(\hat{\mathcal{H}})\hat{X}_{q}+\sum_{p,q}f_{p,q}(\hat{\mathcal{H}})\hat{X}_{p}\hat{X}_{q}+\sum_{p,q,r}f_{p,q,r}(\hat{\mathcal{H}})\hat{X}_{p}\hat{X}_{q}\hat{X}_{r}+\ldots. \tag{2}\] Here the coefficients \(f_{q}(\hat{\mathcal{H}}),f_{p,q}(\hat{\mathcal{H}}),f_{p,q,r}(\hat{\mathcal{H}}),\ldots\) are polynomial functions of the Hamiltonian \(\hat{\mathcal{H}}.\) Let \(V\) be a vector space and \(\mathfrak{gl}(V)\) be the space of endomorphisms of \(V.\) Then representations of the associative polynomial algebra \(\hat{\mathcal{Q}}(d)\) are given by \(\rho:\hat{\mathcal{Q}}(d)\rightarrow\mathfrak{gl}(V).\) ## 2 Construction of representations of polynomial algebras Superintegrable systems in 2D Darboux spaces were classified [3, 4] and it was found that there exist 12 distinct classes of second order superintegrable systems in the Darboux spaces.
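Before reviewing these classes, a brief computational remark on the Hamiltonian (1): for a conformally flat 2D metric \(g_{ij}=f(x)\,\delta_{ij}\), the kinetic part of (1) reduces to \(f^{-1}(\partial_{x}^{2}+\partial_{y}^{2})\). The sympy sketch below verifies this symbolically; the conformal factor \(f(x)=(x^{2}+1)/x^{2}\) is an illustrative choice, made so that the result matches the Darboux II kinetic term \(\frac{x^{2}}{x^{2}+1}(\partial_{x}^{2}+\partial_{y}^{2})\) appearing in the example further below.

```python
# Symbolic check that the kinetic part of (1) reduces to
# f(x)^{-1} (d_x^2 + d_y^2) for a conformally flat metric g_ij = f(x) delta_ij.
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.symbols('y', real=True)
psi = sp.Function('psi')(x, y)

f = (x**2 + 1) / x**2            # conformal factor (illustrative assumption)
sqrt_detg = f                    # sqrt(det g) = f, since f > 0
ginv = [[1/f, 0], [0, 1/f]]      # inverse metric g^{ij}
coords = [x, y]

# Kinetic part of (1): (1/sqrt(det g)) d_i ( sqrt(det g) g^{ij} d_j psi )
lb = sum(sp.diff(sqrt_detg * ginv[i][j] * sp.diff(psi, coords[j]), coords[i])
         for i in range(2) for j in range(2)) / sqrt_detg

expected = x**2 / (x**2 + 1) * (sp.diff(psi, x, 2) + sp.diff(psi, y, 2))
print(sp.simplify(lb - expected))   # prints 0
```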
In [1] we presented exact solutions via purely algebraic means for the energies of all the 12 classes of superintegrable systems in four different 2D Darboux spaces. This was achieved by constructing the deformed oscillator realization and finite-dimensional irreducible representations of the underlying quadratic symmetry algebra generated by quadratic integrals respectively for each of the 12 superintegrable systems. ### Deformed oscillator algebra realizations One way to construct representations of quadratic symmetry algebras is through their realizations in terms of the deformed oscillator algebra. Consider the generic quadratic algebra \(\hat{\mathcal{Q}}(3)\) with three generators, proposed in [5, 6], \[\begin{split}[A,B]=C,\\ [A,C]=\alpha A^{2}+\gamma\{A,B\}+\delta A+\epsilon B+\zeta,\\ [B,C]=aA^{2}-\gamma B^{2}-\alpha\{A,B\}+dA-\delta B+z.\end{split} \tag{3}\] Its Casimir operator is given by \[\begin{split} K=& C^{2}-\alpha\{A^{2},B\}-\gamma\{A,B^{2}\}+(\alpha\gamma-\delta)\{A,B\}+(\gamma^{2}-\epsilon)B^{2}+(\gamma\delta-2\zeta)B\\ &+\frac{2a}{3}A^{3}+(d+\frac{a\gamma}{3}+\alpha^{2})A^{2}+(\frac{a\epsilon}{3}+\alpha\delta+2z)A.\end{split}\] Notice that the generic form (3) was proposed for the case of two quadratic integrals \(A,B\) and is quite formal because in general there is no guarantee that integrals of degree 2 would close to form a quadratic algebra. Finite-dimensional representations of (3) can be conveniently constructed via the deformed oscillator algebra with generators \(\{b^{\dagger},b,\mathcal{N}\},\) which we denote by \(\mathfrak{o}(3)\), satisfying the following commutation relations [5] \[[\mathcal{N},b^{\dagger}]=b^{\dagger},\ \ \ \ [\mathcal{N},b]=-b,\ \ \ \ bb^{\dagger}=\Phi(\mathcal{N}+1),\ \ \ \ b^{\dagger}b=\Phi(\mathcal{N}). \tag{4}\] Here \(\mathcal{N}\) is the number operator and \(\Phi\) is a well-defined real function satisfying \[\Phi(0)=0,\ \Phi(x)>0,\ \text{for all}\ x\in\mathbb{R}^{+}. \tag{5}\] Then representations of \(\mathfrak{o}(3)\) are given by \[\mathcal{N}\psi_{n}=n\psi_{n},\ \ \ \ b^{\dagger}\psi_{n}=\sqrt{\Phi(n+1)}\psi_{n+1},\ \ \ b\psi_{n}=\sqrt{\Phi(n)}\psi_{n-1}, \tag{6}\] where \(\psi_{n}\) are eigenvectors that form the Fock basis of the oscillator algebra. Imposing \(\Phi(p+1)=0\) for some \(p\in\mathbb{N},\) one obtains \((p+1)\)-dimensional unitary representations of \(\mathfrak{o}(3).\) Applying the deformed oscillator technique, in [1] we gave algebraic derivations of the spectra for the 12 superintegrable systems in the 2D Darboux spaces. As an example, we here review some of the results for the superintegrable system in the Darboux space II with the Hamiltonian \(\hat{\mathcal{H}}=\frac{x^{2}}{x^{2}+1}\left(\partial_{x}^{2}+\partial_{y}^{2}\right)+\frac{x^{2}}{x^{2}+1}\left(a_{1}\left(\frac{x^{2}}{4}+y^{2}\right)+a_{2}y+\frac{a_{3}}{x^{2}}\right)\).
The constants of motion are \(A=\partial_{y}^{2}+a_{1}y^{2}+a_{2}y\) and \[B=\frac{2y}{x^{2}+1}\left(\partial_{y}^{2}-x^{2}\partial_{x}^{2}\right)+2x\partial_{x}\partial_{y}+\partial_{y}+\frac{a_{1}}{2}y\left(x^{2}+\frac{x^{2}+4y^{2}}{x^{2}+1}\right)+\frac{a_{2}}{2}\left(x^{2}+\frac{4y^{2}}{x^{2}+1}\right)-\frac{2a_{3}y}{x^{2}+1}.\] These integrals satisfy the quadratic algebra \(\hat{\mathcal{Q}}(3)\) [3] \[[A,B]=C,\quad[A,C]=-4a_{1}B-4a_{2}A,\] \[[B,C]=-24A^{2}+4a_{2}B+32\hat{\mathcal{H}}A-8\hat{\mathcal{H}}^{2}-8a_{1}\hat{\mathcal{H}}+6a_{1}+8a_{1}a_{3},\] with the Casimir operator given by \[K=C^{2}-16A^{3}+4a_{1}B^{2}+4a_{2}\{A,B\}+\left(4a_{1}(4a_{3}-11)-(16a_{1}\hat{\mathcal{H}}+16\hat{\mathcal{H}}^{2})\right)A+32\hat{\mathcal{H}}A^{2}.\] In terms of the differential realization of \(A,B,\) the Casimir \(K\) takes the simple form \(K=(32a_{1}+4a_{2}^{2})\hat{\mathcal{H}}-a_{2}^{2}(3+4a_{3})\). It was shown in [1] that the transformation \(A=2\sqrt{-a_{1}}(\mathcal{N}+\eta),\quad B=\frac{2a_{2}}{\sqrt{-a_{1}}}(\mathcal{N}+\eta)+\frac{a_{2}\hat{\mathcal{H}}}{a_{1}}+b^{\dagger}+b,\) maps \(\hat{\mathcal{Q}}(3)\) to the deformed oscillator algebra (4) with a structure function cubic in \((\mathcal{N}+\eta)\). Here \(\eta\) is a constant determined from the constraints of the structure function. In general it is very difficult to obtain analytical solutions for \(\eta\) for arbitrary coefficients \(a_{i}\). We consider the case where \(-a_{1}=a_{2}=a_{3}=a,\) \(a\in\mathbb{R}\). For such model parameters, the energies \(E\) and their corresponding structure functions for distinct \(\eta\) are \[E_{\epsilon}=\frac{1}{4}\left(8\sqrt{a}(p+1)+3a+2\epsilon\sqrt{8a^{3/2}(p+1)-2a^{2}+a}\right), \tag{7}\] where \(\epsilon=\pm 1,\) with the associated structure function \(\Phi_{E_{\epsilon}}^{(II)}(z)=z(p+1-z)^{2}.\) The energy spectrum \(E_{\epsilon}\) is real for \(0<a\leq 1/2\). The corresponding energy spectrum of the system and structure function for the \((p+1)\)-dimensional unirreps of the deformed oscillator algebra are given respectively by \(E=p(p+2)+a+\frac{3}{4}\) and \[\Phi_{E}(z)=z(z-p-1)\left(z+\frac{1}{8a}\left(3a^{3/2}-4a(p+1)+\sqrt{a}(4p^{2}+8p+3)\right)\right). \tag{8}\] ### Verma module constructions on \(\hat{\mathcal{Q}}(d)\) In this subsection, we review our method in [2] of determining the representations of \(\hat{\mathcal{Q}}(d)\) without relying on the deformed oscillator algebra realizations. In what follows, we always assume that the Schrodinger equation \(\hat{\mathcal{H}}\Psi=E\Psi\) has solutions of the separable form \(\Psi(x_{1},\ldots,x_{n})=F_{1}(x_{1})\ldots F_{n}(x_{n})\), where \(\hat{\mathcal{H}}\) is the Hamiltonian defined in (1) and \(F_{j}\), \(j=1,2,\ldots,\) are functions of the coordinates \(x_{j}\). Notice that the solution space of the Schrodinger equation will form an infinite-dimensional vector space. As \(\hat{\mathcal{Q}}(d)\) is the symmetry algebra of the Hamiltonian \(\hat{\mathcal{H}}\), infinite dimensional representations of \(\hat{\mathcal{Q}}(d)\) can be obtained through actions of its generators on eigenstates of \(\hat{\mathcal{H}}\). Since solutions of the Schrodinger equation are separable, we assume that \(S_{p}=\{\hat{\mathcal{H}},\hat{X}_{1},\ldots,\hat{X}_{p}\}\) is a subset of \(S_{m}\) such that \(\hat{X}_{j}\Psi=\lambda_{j}\Psi,\) with \(\lambda_{j}\in\mathbb{R}\) for all \(1\leq j\leq p.\) It is clear that the existence of \(S_{p}\) depends on the explicit form of the integrals.
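(A brief numerical aside on the preceding subsection: the deformed oscillator relations (4)-(6), together with the truncation caused by \(\Phi(p+1)=0\), can be checked directly with explicit \((p+1)\times(p+1)\) matrices. The sketch below uses the Darboux II structure function \(\Phi(z)=z(p+1-z)^{2}\) from (7); the value \(p=4\) is an arbitrary sample choice.)

```python
# Matrix check of the deformed oscillator relations (4)-(6) for the
# Darboux II structure function Phi(z) = z (p+1-z)^2.
import numpy as np

p = 4                                      # sample truncation (assumption)
Phi = lambda z: z * (p + 1 - z) ** 2       # Phi(0) = 0 and Phi(p+1) = 0

dim = p + 1
N = np.diag(np.arange(dim, dtype=float))   # number operator on the Fock basis
b = np.zeros((dim, dim))
for n in range(1, dim):                    # b psi_n = sqrt(Phi(n)) psi_{n-1}
    b[n - 1, n] = np.sqrt(Phi(n))
bd = b.T.copy()                            # b^dagger

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(N, bd), bd)        # [N, b^dagger] =  b^dagger
assert np.allclose(comm(N, b), -b)         # [N, b]        = -b
assert np.allclose(bd @ b, np.diag([Phi(n) for n in range(dim)]))
# b b^dagger = Phi(N+1) closes on the truncation exactly because Phi(p+1) = 0:
assert np.allclose(b @ bd, np.diag([Phi(n + 1) for n in range(dim)]))
print("(p+1)-dimensional unirrep relations hold for p =", p)
```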
Moreover, superintegrable systems with scalar potentials and \(n\) degrees of freedom admit up to \(n\) integrals (including the Hamiltonian) forming a set of commuting operators (then \(p\leq n\)). Then for some \(n_{j}\in\mathbb{N}\) we define the following reiterated vector \[\prod_{p+1\leq j\leq m}\hat{X}_{j}^{n_{j}}\Psi=\Psi_{n_{p+1},\ldots,n_{m}}, \tag{9}\] in the eigenspace \(V_{E}\) of \(\hat{\mathcal{H}},\) i.e. \(\Psi_{n_{p+1},\ldots,n_{m}}\in V_{E}=\{\mathbf{v}:\hat{\mathcal{H}}\mathbf{v}=E\mathbf{v}\}.\) Due to the complexities of the commutation relations (2) and the form of \(\Psi,\) analytic computation of \(\prod_{p+1\leq j\leq m}\hat{X}_{j}^{n_{j}}\) on \(\Psi\) is not in general feasible. From [7, Theorem 1], actions by any elements in \(S_{p}\) on \(\Psi_{n_{p+1},\ldots,n_{m}}\) are still in the eigenspace. For integrals \(\hat{X}_{i}\in S_{m}/S_{p},\) we aim to construct the recurrence relations \[\hat{X}_{i}\Psi_{n_{p+1},\ldots,n_{m}}=\sum_{i_{1}\in W_{1},\,\ldots,\,i_{m}\in W_{m}}\Psi_{i_{1},\ldots,i_{m}}, \tag{10}\] where \(W_{j}\) are the sets of integer tuples and \(S_{m}/S_{p}\) defines a subset of \(S_{m}\) with all the elements in \(S_{p}\) excluded. This allows us to understand how the operators in \(S_{m}/S_{p}\) act on the eigenspace of \(S_{p}\). Let \[V_{n_{j}}:=\{\Psi,X_{p+1}\Psi,\ldots,\Psi_{n_{p+1}},\ldots,X_{p+2}\Psi_{n_{p+2}},\ldots,\Psi_{n_{p+1},\ldots,n_{m}}\}.\] In general it is not easy to show that the elements in \(V_{n_{j}}\) form a basis. However, if \(V_{n_{j}}\) is a finite-dimensional vector space of homogeneous polynomials, then the infinite-dimensional representations of \(\hat{\mathcal{Q}}(d)\) are given by \(\pi:\hat{\mathcal{Q}}(d)\rightarrow\mathfrak{gl}(V)\) with \(V=\bigoplus_{n_{j}\in\mathbb{N}}V_{n_{j}}.\) The polynomial commutation relations (2) can be used to simplify the vectors in (9) for explicit construction of the recurrence relations (10). As an illustration, we apply the above construction to the cubic algebra underlying the superintegrable system \(S_{3}=\{\hat{\mathcal{H}},\hat{X}_{1},\hat{X}_{2}\}\) in the 2D Darboux space with separable local coordinates \((x,y)\). Here \(\hat{\mathcal{H}}=\varphi(x)\left(\partial_{x}^{2}+\partial_{y}^{2}+c\right)\), where \(c\) is a constant, is the Hamiltonian and \(\hat{X}_{1},\hat{X}_{2}\) are integrals. The solution of the Schrodinger equation has the separable form \(\Psi(x,y)=X(x)Y(y),\) where \(X(x)\) and \(Y(y)\) satisfy the second order ODEs \(X^{\prime\prime}+\lambda\varphi(x)X=0\) and \(Y^{\prime\prime}-\lambda Y=0\) with separation constant \(\lambda.\) As shown in [1], the integrals form the cubic algebra \(\hat{\mathcal{Q}}(3)\) with the following commutation relations \[\begin{split}[\hat{X}_{1},\hat{X}_{2}]=&\hat{F},\\ [\hat{X}_{1},\hat{F}]=& u_{1}\hat{X}_{1}^{2}+u_{2}\hat{X}_{1}+u_{3}\hat{X}_{2}+u,\\ [\hat{X}_{2},\hat{F}]=& v_{1}\hat{X}_{1}^{3}+v_{2}\hat{X}_{1}^{2}+v_{3}\hat{X}_{1}-u_{2}\hat{X}_{2}-u_{1}\{\hat{X}_{1},\hat{X}_{2}\}+v,\end{split} \tag{11}\] where \(u_{j},v_{j},\ldots,u,v\) are polynomials of the Hamiltonian \(\hat{\mathcal{H}}\) and \(v_{1}\) in (11) is a non-zero coefficient.
The Casimir operator \(C_{(3)}\) for the cubic algebra is given by \[\begin{split} C_{(3)}=&\hat{F}^{2}-u_{1}\{\hat{X}_{1}^{2},\hat{X}_{2}\}-u_{2}\{\hat{X}_{1},\hat{X}_{2}\}+\frac{v_{1}}{2}\hat{X}_{1}^{4}+\frac{2}{3}v_{2}\hat{X}_{1}^{3}\\ &+\left(v_{3}+u_{1}^{2}\right)\hat{X}_{1}^{2}+\left(u_{1}u_{2}+2v\right)\hat{X}_{1}-2u\hat{X}_{2}-u_{3}\hat{X}_{2}^{2}.\end{split}\] For the current case, the subset \(S_{1}=\{\hat{\mathcal{H}},\hat{X}_{1}\}\subset S_{3}\) contains the simultaneously diagonalizable operators. We can show that the set \(V_{m,n}\) formed by the vectors \(\hat{X}_{2}^{n}\hat{F}^{m}\Psi\) is a vector space such that \(\rho:\hat{\mathcal{Q}}(3)\rightarrow\mathfrak{gl}(V)\) provides infinite dimensional representations, where \(V=\oplus_{m,n\in\mathbb{N}}V_{m,n}\) and \(V_{m,n}=\{\Psi,\ldots,\hat{F}^{m}\Psi,\ldots,\hat{X}_{2}^{n}\hat{F}^{m}\Psi\}.\) Extending results proved in [2, Proposition 3.1, Proposition 3.4], we state **Proposition 2.1**.: _Let \(\hat{\mathcal{Q}}(3)\) be the cubic algebra (11), and let \(W\) be the space cyclically generated by \(\hat{F}.\) Let \(\psi_{m}=\hat{F}^{m}\Psi\) for all \(m\in\mathbb{N}\). Suppose that \(u_{3}=0\). Then \(V\cong W\) and \(V\) forms a \(\hat{\mathcal{Q}}(3)\)-module. In particular, if \(u_{1}=u_{2}=u_{3}=0,\) then the representation \(\rho:\hat{\mathcal{Q}}(3)\rightarrow\mathfrak{gl}(W)\) has the form_ \[\hat{F}\psi_{m}= E\psi_{m+1},\] \[\hat{X}_{1}\psi_{m}= (m-2)u\psi_{m}+\lambda\psi_{m},\] \[\hat{X}_{2}\psi_{m}= f(\psi_{m-3},\psi_{m-2},\psi_{m-1},\psi_{m},\psi_{m+2})\] _where \(f\) has coefficients depending on certain polynomials in \(m\)._ For further details, we refer the reader to [2]. ## 3 Polynomial algebras with Poisson-Lie brackets under PBW basis Let's first define the commutants relative to subalgebras of the universal enveloping algebra and the symmetric algebra of a Lie algebra \(\mathfrak{g}\). **Definition 3.1**.: Let \(\mathfrak{g}\) be an \(n\)-dimensional Lie algebra with a basis \(\beta_{\mathfrak{g}}=\{X_{1},\ldots,X_{n}\}\), and let \(\mathfrak{g}^{*}\) be its dual with a basis \(\beta_{\mathfrak{g}^{*}}=\{x_{1},\ldots,x_{n}\}\). Let \((\mathcal{U}(\mathfrak{g}),[\cdot,\cdot])\) and \((\mathcal{S}(\mathfrak{g}^{*}),\{\cdot,\cdot\})\) be the universal enveloping algebra of \(\mathfrak{g}\) and the symmetric algebra of \(\mathfrak{g}^{*}\), respectively.
Then the _commutants_ relative to subalgebras \(\mathfrak{a}\subset\mathfrak{g}\) and \(\mathfrak{a}^{*}\subset\mathfrak{g}^{*}\) in \(\mathcal{U}(\mathfrak{g})\) and \(\mathcal{S}(\mathfrak{g}^{*})\), denoted respectively as \(\mathcal{C}_{\mathcal{U}(\mathfrak{g})}(\mathfrak{a})\) and \(\mathcal{C}_{\mathcal{S}(\mathfrak{g}^{*})}(\mathfrak{a}^{*})\), are defined as follows \[\mathcal{C}_{\mathcal{U}(\mathfrak{g})}(\mathfrak{a})=\left\{Y\in\mathcal{U}(\mathfrak{g}):[X,Y]=0,\quad\forall X\in\mathfrak{a}\right\},\] \[\mathcal{C}_{\mathcal{S}(\mathfrak{g}^{*})}(\mathfrak{a}^{*})=\left\{y\in\mathcal{S}(\mathfrak{g}^{*}):\{x,y\}=0,\quad\forall x\in\mathfrak{a}^{*}\right\}.\] The adjoint action of \(\mathfrak{g}\) and the co-adjoint action of \(\mathfrak{g}^{*}\) on the universal enveloping algebra \(\mathcal{U}(\mathfrak{g})\) and the symmetric algebra \(\mathcal{S}(\mathfrak{g}^{*})\) are given by \[P\left(X_{1},\ldots,X_{n}\right)\in\mathcal{U}(\mathfrak{g})\mapsto[X_{j},P]\in\mathcal{U}(\mathfrak{g}), \tag{12}\] \[p(x_{1},\ldots,x_{n})\in\mathcal{S}(\mathfrak{g}^{*})\mapsto\{x_{j},p\}=\tilde{X}_{j}(p(\mathbf{x}))=\sum_{k,l}C_{jk}^{l}x_{l}\frac{\partial p}{\partial x_{k}}\in\mathcal{S}(\mathfrak{g}^{*}), \tag{13}\] respectively, where \(\tilde{X}_{j}=\sum_{k,l}C_{jk}^{l}x_{l}\frac{\partial}{\partial x_{k}}\) are vector field realizations of the generators of the Lie algebra \(\mathfrak{g}\) and \(\{\cdot,\cdot\}:C^{\infty}(\mathfrak{g}^{*})\times C^{\infty}(\mathfrak{g}^{*})\to C^{\infty}(\mathfrak{g}^{*})\) is a Poisson-Lie structure induced by the co-adjoint action of \(\mathfrak{g}^{*}\). Let us provide a short review on the construction of symmetry algebras from the subalgebras \(\mathfrak{a}\) of \(\mathfrak{g}\) (see also [8, 9]). Using (13), the commutant \(\mathcal{C}_{\mathcal{S}(\mathfrak{g}^{*})}(\mathfrak{a}^{*})\) is generated by the linearly independent solutions of the system of PDEs \[\tilde{X}_{j}(p_{h})(\mathbf{x})=\{x_{j},p_{h}\}(\mathbf{x})=\sum_{k,l}C_{jk}^{l}x_{l}\frac{\partial p_{h}}{\partial x_{k}}=0,\quad\ 1\leq j\leq\dim\mathfrak{a}\equiv s, \tag{14}\] where \(p_{h}({\bf x})\) is a polynomial of degree \(h\geq 1\) with the generic form \[p_{h}({\bf x})=\sum_{i_{1}+\ldots+i_{n}\leq h}\Gamma_{i_{1},\ldots,i_{n}}\,x_{1}^{i_{1}}\ldots x_{n}^{i_{n}}\in{\cal S}(\mathfrak{g}^{*}). \tag{15}\] Notice that, depending on the structure of the Lie algebra \(\mathfrak{g},\) solutions of (14) may not be polynomials (see [10] and the references therein). By systematically analyzing the polynomial solutions of (14) up to certain degrees, we obtain polynomials that can be decomposed in terms of products of polynomials of lower degrees. Using the symmetrization map, commutants in the enveloping algebra are obtained as \({\cal C}_{{\cal U}(\mathfrak{g})}(\mathfrak{a})=\Lambda\left({\cal C}_{{\cal S}(\mathfrak{g}^{*})}(\mathfrak{a}^{*})\right)\). Then the linearly independent monomials form a finitely-generated Poisson algebra. From the construction above, one can define algebraic Hamiltonians as follows: **Definition 3.2**.: [11].
Let \(\mathfrak{a}\subset\mathfrak{g}\) be a Lie subalgebra and \({\cal C}_{{\cal U}(\mathfrak{g})}(\mathfrak{a})\) be its commutant, where \(\mathfrak{a}\) admits a basis \(\beta_{\mathfrak{a}}=\{X_{1},\ldots,X_{s}\}\) with \(\dim\mathfrak{a}=s.\) An algebraic Hamiltonian with respect to \(\mathfrak{a}\) is given by \[\hat{{\cal H}}=\sum_{1\leq i_{1},\ldots,i_{k}\leq h}^{s}\Gamma_{i_{1},\ldots,i_{k}}X_{i_{1}}\ldots X_{i_{k}}+\sum_{t}C_{t}K_{t},\] where the \(\Gamma_{i_{1},\ldots,i_{k}}\) are constant coefficients and \(K_{t}\) are the Casimir invariants of \(\mathfrak{g}.\) In the following we obtain polynomial algebras from the 2D _conformal algebra_ \(\mathfrak{c}(2)\) and their representations. In [12], the connection of \(\mathfrak{c}(2)\) with 2D Darboux spaces was established. As an example we here present a derivation of an algebraic Hamiltonian and the underlying symmetry algebra from the conformal algebra. We consider the subalgebra \(\mathfrak{a}_{1}\) and construct the commutant \({\cal C}_{{\cal S}(\mathfrak{c}^{*}(2))}(\mathfrak{a}_{1}^{*})\). By solving \(\{p(x_{1},\ldots,x_{6}),x_{1}\}=0,\) we find 6 linearly independent polynomials as follows: \(A_{1}=x_{1}\), \(A_{2}=x_{2}\), \(A_{3}=x_{1}x_{3}+x_{2}x_{4}\), \(A_{4}=x_{2}x_{6}+x_{3}^{2}\), \(A_{5}=x_{1}x_{6}-2x_{3}x_{4}-x_{2}x_{5}\), \(A_{6}=-x_{1}x_{5}+x_{4}^{2}.\) Then a possible algebraic Hamiltonian is given by \[{\cal H}=\alpha x_{1}+\gamma_{1}{\cal C}_{1}+\gamma_{2}{\cal C}_{2}.\] It is easy to check that \(\{{\cal H},A_{j}\}=0\) for \(1\leq j\leq 6.\) The linearly independent polynomial integrals form a quadratic Poisson algebra \({\cal Q}_{1}(2)={\cal Q}_{1}\oplus{\cal Q}_{2},\) where \({\cal Q}_{1}={\rm Span}\{A_{1},A_{2}\}\) and \({\cal Q}_{2}={\rm Span}\{A_{3},\ldots,A_{6}\},\) with the following non-zero brackets \[\{A_{2},A_{3}\}=A_{1}^{2}+A_{2}^{2},\ \ \ \ \{A_{2},A_{4}\}=-\{A_{2},A_{6}\}=2A_{3},\] \[\{A_{3},A_{4}\}=-A_{1}A_{5}-2A_{2}A_{6}=-\{A_{3},A_{6}\}. \tag{16}\] Notice that \({\cal Z}\left({\cal Q}_{1}(2)\right)={\rm Span}\{A_{1},A_{5}\}.\) The universal enveloping Poisson algebra has the form [13] \[K_{{\cal Q}_{1}(2)}^{h}=\sum_{\alpha_{1}+\ldots+\alpha_{6}\leq h}\Gamma_{\alpha_{1},\ldots,\alpha_{6}}A_{1}^{\alpha_{1}}\ldots A_{6}^{\alpha_{6}}\in{\cal S}_{h}\left({\cal Q}_{1}(2)\right),\] where \({\cal S}_{h}(\cdot)\) is the symmetry algebra of the finitely-generated quadratic Poisson algebra \({\cal Q}_{1}(2).\) The functionally independent Casimir operators are \[K_{{\cal Q}_{1}(2)}^{1,1}=A_{1},\ \ \ \ K_{{\cal Q}_{1}(2)}^{1,2}=A_{5},\ \ \ \ K_{{\cal Q}_{1}(2)}^{1,3}=A_{4}+A_{6},\] \[K_{{\cal Q}_{1}(2)}^{3,1}=(A_{1}^{2}+A_{2}^{2})A_{6}+A_{2}A_{5}A_{1}+A_{3}^{2}.\] The corresponding commutator algebra generated by the integrals has the following commutation relations \[[\hat{A}_{2},\hat{A}_{3}]=\hat{A}_{1}^{2}+\hat{A}_{2}^{2},\ \ \ \ [\hat{A}_{2},\hat{A}_{4}]=-[\hat{A}_{2},\hat{A}_{6}]=2\hat{A}_{3},\] \[[\hat{A}_{3},\hat{A}_{4}]=2\left(\hat{A}_{2}\hat{A}_{6}-\hat{A}_{3}\right)+2\hat{A}_{1}\hat{A}_{5}=[\hat{A}_{3},\hat{A}_{6}].\] It is clear that this is a quadratic algebra. ## 4 Conclusion In this short note, we have reviewed three different approaches to the construction of polynomial symmetry algebras of superintegrable systems and their representations. We have first described the application of the deformed oscillator algebra technique to the superintegrable system in the Darboux space.
Then we have described the method for constructing infinite-dimensional representations and commented on its relevance to solving superintegrable systems without the use of separation of variables. Finally, we have described the algebraic method based on commutants and, as an example, have applied the construction to obtain the algebraic Hamiltonian and underlying quadratic symmetry algebra from the 2D conformal algebra. **Acknowledgement** IM was supported by the Australian Research Council Future Fellowship FT180100099. YZZ was supported by the Australian Research Council Discovery Project DP190101529.
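A small computational appendix: the quadratic Poisson brackets (16) can be checked for consistency by extending them to polynomials via the Leibniz rule and testing the Jacobi identity on all generator triples. Brackets not listed in (16), including all brackets with the central elements \(A_{1}\) and \(A_{5}\), are assumed to vanish; this is a sanity check of the algebra, not a computation taken from the paper.

```python
# Jacobi-identity check for the quadratic Poisson brackets (16).
import itertools
import sympy as sp

A = sp.symbols('A1:7')                  # A[0] = A_1, ..., A[5] = A_6
A1, A2, A3, A4, A5, A6 = A

B = sp.zeros(6, 6)                      # B[i, j] = {A_{i+1}, A_{j+1}}
B[1, 2] = A1**2 + A2**2                 # {A2, A3}
B[1, 3] = 2*A3                          # {A2, A4}
B[1, 5] = -2*A3                         # {A2, A6}
B[2, 3] = -A1*A5 - 2*A2*A6              # {A3, A4}
B[2, 5] = A1*A5 + 2*A2*A6               # {A3, A6} = -{A3, A4}
B = B - B.T                             # antisymmetry; unlisted brackets = 0

def pb(f, g):
    """Poisson bracket extended to polynomials as a biderivation."""
    return sp.expand(sum(B[i, j] * sp.diff(f, A[i]) * sp.diff(g, A[j])
                         for i in range(6) for j in range(6)))

for f, g, k in itertools.combinations(A, 3):
    jac = pb(f, pb(g, k)) + pb(g, pb(k, f)) + pb(k, pb(f, g))
    assert sp.simplify(jac) == 0
print("Jacobi identity holds on all generator triples")
```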
2309.12192
Real versus complex plane curves
We prove that a smooth, complex plane curve $C$ of odd degree can be defined by a polynomial with real coefficients if and only if $C$ is isomorphic to its complex conjugate. Counterexamples are known for curves of even degree. More generally, we prove that a plane curve $C$ over an algebraically closed field $K$ of characteristic $0$ with field of moduli $k_{C}\subset K$ is defined by a polynomial with coefficients in $k'$, where $k'/k_{C}$ is an extension with $[k':k_{C}]\le 3$ and $[k':k_{C}]\mid \operatorname{deg} C$.
Giulio Bresciani
2023-09-21T15:58:53Z
http://arxiv.org/abs/2309.12192v1
# Real versus complex plane curves ###### Abstract. We prove that a smooth, complex plane curve \(C\) of odd degree can be defined by a polynomial with real coefficients if and only if \(C\) is isomorphic to its complex conjugate. Counterexamples are known for curves of even degree. More generally, we prove that a plane curve \(C\) over an algebraically closed field \(K\) of characteristic \(0\) with field of moduli \(k_{C}\subset K\) is defined by a polynomial with coefficients in \(k^{\prime}\), where \(k^{\prime}/k_{C}\) is an extension with \([k^{\prime}:k_{C}]\leq 3\) and \([k^{\prime}:k_{C}]\mid\deg C\). ## 1. Introduction The main purpose of this note is to prove the following. **Theorem 1**.: _Let \(C\) be a smooth complex plane curve of odd degree and \(\tilde{C}\) its complex conjugate. The following are equivalent._ 1. _There exists a homogeneous polynomial with real coefficients_ \(p\in\mathbb{R}[x,y,z]\) _defining_ \(C\)_._ 2. _There exists a homogeneous polynomial_ \(p\in\mathbb{C}[x,y,z]\) _for_ \(C\) _and a linear transformation_ \(g\in\mathrm{GL}_{3}(\mathbb{C})\) _such that_ \(p\circ g=\bar{p}\)_._ 3. _The curves_ \(C\) _and_ \(\tilde{C}\) _are isomorphic as abstract curves._ We find it surprising that such a fact has gone unnoticed until now. Still, we couldn't find it in the literature. Counterexamples are known in even degree [1, Proposition 4.3]. The implications \((i)\Rightarrow(ii)\Rightarrow(iii)\) are trivial, while \((iii)\Rightarrow(ii)\) is a direct consequence of the well-known fact that a smooth plane curve of degree \(\geq 4\) has only one embedding in \(\mathbb{P}^{2}\) up to projective linear transformations [1, Appendix A, §1, Exercise 18] (if \(\deg C=3\) then \((iii)\Rightarrow(i)\) since the \(j\)-invariant is real, while the case \(\deg C=1\) is trivial). The actual content of Theorem 1 is the implication \((ii)\Rightarrow(i)\), of which we don't know any elementary proof. Our proof relies on stack-theoretic methods and is fairly complex. Theorem 1 is a particular case of a more general result about _fields of moduli_ of plane curves. ### Fields of moduli Fix a base field \(k\) of characteristic \(0\) with algebraic closure \(K\), e.g. \(k=\mathbb{R}\). Let \(C\subset\mathbb{P}^{2}_{K}\) be a smooth plane curve. Let \(H\subset\mathrm{Gal}(K/k)\) be the subgroup of elements \(\sigma\in\mathrm{Gal}(K/k)\) such that \(\sigma^{*}C\simeq C\). The field of moduli \(k_{C}\) of \(C\) is the subfield of \(K\) fixed by \(H\). Let \(K/k^{\prime}/k\) be a subextension. If \(C\) can be defined by a homogeneous polynomial with coefficients in \(k^{\prime}\), then clearly \(k_{C}\subset k^{\prime}\). More generally, if there exists a model \(\mathfrak{C}\) of \(C\) over \(k^{\prime}\) then \(k_{C}\subset k^{\prime}\). It is then natural to ask the following questions. **Question.** Does there exist a polynomial for \(C\) with coefficients in \(k_{C}\)? More generally, does there exist a model of \(C\) over \(k_{C}\)? Most of the time, the field of moduli coincides with the intersection of the subfields \(K/k^{\prime}/k\) where \(C\) is defined [Brec, Corollary 17] (the case \(k=\mathbb{R}\) is the most notable exception). Until recently, fields of moduli have been mainly studied for curves and abelian varieties [Mat58][Shi59][Shi72][Koi72][Mur96][DD97][DE99][CQ05][Hug07][Kon09][Mar13][Brea].
However, in the case of plane curves it is more natural to study the problem for the _pair_ \((\mathbb{P}^{2}_{K},C)\) rather than for the abstract curve \(C\), since the embedding \(C\subset\mathbb{P}^{2}_{K}\) is unique up to projective linear transformations if \(\deg C\geq 4\) [ACGH85, Appendix A, §1, Exercise 18] (the cases \(\deg C\leq 3\) are either trivial or well-known). From this point of view, we are studying the field of moduli of a \(2\)-dimensional variety, i.e. \(\mathbb{P}^{2}\), with the additional datum of an embedded curve. In a recent joint work with A. Vistoli [BV], we introduced a stack-theoretic method for studying fields of moduli for varieties of arbitrary dimension (whereas the previous technology was mostly restricted to dimension \(1\)). As a first application in dimension \(2\), we studied fields of moduli of finite sets and curves in \(\mathbb{P}^{2}\) [Bred][Brec]. In particular, in [Brec] we proved that every smooth plane curve of degree prime with \(6\) is defined over the field of moduli, and this holds for most curves of degree prime with \(3\), too. These results suggest that the prime numbers \(2\) and \(3\) play a critical role in the problem of fields of moduli of plane curves. **New results.** We have already studied the case \(3\nmid\deg C\) in [Brec]; it is then natural to study the case \(2\nmid\deg C\). Write \(\zeta_{n}\) for a primitive \(n\)-th root of unity, and \(C_{m}\) for a cyclic group of order \(m\). **Theorem 2**.: _Let \(C\) be a smooth, plane curve over \(K\) of odd degree._ _If \(C\) has no models over its field of moduli, then the group \(\operatorname{Aut}(C)=\operatorname{Aut}(\mathbb{P}^{2},C)\subset\operatorname{PGL}_{3}\) is conjugate to the group \(C_{a}\times C_{an}\) generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{an},\zeta_{an}^{e},1)\) for some positive integers \(a,e,n\) with \(3\mid an\) and \(e^{2}-e+1\equiv 0\pmod{n}\). Such an \(e\) exists if and only if \(n\) is odd and \(-3\) is a square modulo \(n\)._ _Furthermore, one of the following holds (where \(d=\deg C\))._ * \(an\mid d\), or * \(a=1\), \(n\mid d^{2}-3d+3\) and \(n\neq d^{2}-3d+3\). Using Theorem 2, we prove the following strengthening of [Brec, Theorem 1]. **Theorem 3**.: _Let \(C\) be a smooth, plane curve over \(K\), write \(k_{C}\) for the field of moduli. There exists a finite extension \(k^{\prime}/k_{C}\) with \([k^{\prime}:k_{C}]\leq 3\) and \([k^{\prime}:k_{C}]\mid\deg C\) such that \(C\) is defined by a homogeneous polynomial with coefficients in \(k^{\prime}\)._ Theorem 1 is a direct consequence of Theorem 3. ## 2. The index of a gerbe In order to prove the main theorems, we need some abstract facts about the index of a finite etale gerbe. A reader not interested in such abstract facts might read the statement of Corollary 9 and skip this section. Let \(\mathscr{G}\) be a finite etale gerbe over a field \(k\) of arbitrary characteristic, write \(k^{s}/k\) for a separable closure. By [BV, Lemma 4.5], there exists a section \(b:\operatorname{Spec}k^{s}\to\mathscr{G}\). Write \(G=\pi_{1}(\mathscr{G}_{k^{s}},b)\); it is a finite group.
We have a short exact sequence \[1\to G\to\pi_{1}(\mathscr{G},b)\to\operatorname{Gal}(k^{s}/k)\to 1.\] **Lemma 4**.: _The gerbe \(\mathscr{G}\) is isomorphic to the quotient stack \([\operatorname{Spec}k^{s}/\pi_{1}(\mathscr{G},b)]\), where \(\pi_{1}(\mathscr{G},b)\) acts on \(\operatorname{Spec}k^{s}\) with the projection \(\pi_{1}(\mathscr{G},b)\to\operatorname{Gal}(k^{s}/k)\)._ Proof.: This is a direct consequence of the fact that the morphism \(b:\operatorname{Spec}k^{s}\to\mathscr{G}\) is a universal cover which is a Galois \(\pi_{1}(\mathscr{G},b)\)-cover. If we have a rational point \(p:\operatorname{Spec}k\to\mathscr{G}\), by functoriality we get a section \(\operatorname{Gal}(k^{s}/k)\to\pi_{1}(\mathscr{G},p)\). An etale path on \(\mathscr{G}_{k^{s}}\) from \(p\) to \(b\) defines an isomorphism \(\pi_{1}(\mathscr{G},p)\simeq\pi_{1}(\mathscr{G},b)\) which commutes with the projection to \(\operatorname{Gal}(k^{s}/k)\), hence we get a section \(\operatorname{Gal}(k^{s}/k)\to\pi_{1}(\mathscr{G},b)\) well defined up to conjugation by elements of \(G\) (which identifies with the group of etale paths from \(b\) to itself in \(\mathscr{G}_{k^{s}}\)). Denote by \(\mathscr{S}_{\mathscr{G}/k}\) the set of sections \(\operatorname{Gal}(k^{s}/k)\to\pi_{1}(\mathscr{G},b)\) modulo the action of \(G\) by conjugation; we have thus constructed a map \[\mathscr{G}(k)\to\mathscr{S}_{\mathscr{G}/k}.\] Notice that \(\mathscr{G}(k)\) is a groupoid, whereas \(\mathscr{S}_{\mathscr{G}/k}\) is a set. The following lemma is well-known, see e.g. [BV15, Proposition 9.3], though the proofs available in the literature are quite technical. Let us give an elementary proof. **Lemma 5**.: _The map \(\mathscr{G}(k)\to\mathscr{S}_{\mathscr{G}/k}\) identifies \(\mathscr{S}_{\mathscr{G}/k}\) with the set of isomorphism classes of \(\mathscr{G}(k)\)._ Proof.: Consider two sections \(p,q:\operatorname{Spec}k\to\mathscr{G}\) with equal image in \(\mathscr{S}_{\mathscr{G}/k}\) and write \(p^{s},q^{s}\) for the two compositions \(\operatorname{Spec}k^{s}\to\operatorname{Spec}k\to\mathscr{G}\); equivalently, there is an etale path \(\gamma\) from \(p^{s}\) to \(q^{s}\) in \(\mathscr{G}_{k^{s}}\) such that the natural map \(\operatorname{Gal}(k^{s}/k)\xrightarrow{q_{*}}\pi_{1}(\mathscr{G},q)\) is equal to the composition \(\operatorname{Gal}(k^{s}/k)\xrightarrow{p_{*}}\pi_{1}(\mathscr{G},p)\xrightarrow{\gamma^{*}}\pi_{1}(\mathscr{G},q)\). This means that the two natural actions of \(\operatorname{Gal}(k^{s}/k)\) on the two fiber functors associated with \(p^{s}\) and \(q^{s}\) are identified by \(\gamma\). Consider the etale cover \(p:\operatorname{Spec}k\to\mathscr{G}\); its fiber over \(p^{s}\) has a preferred object which \(\gamma\) maps to a point in the fiber over \(q^{s}\), which by construction is the set of isomorphisms \(p^{s}\simeq q^{s}\); we have thus constructed a preferred isomorphism \(p^{s}\simeq q^{s}\). Now consider the fibered product \(E=\operatorname{Spec}k\times_{\mathscr{G}}\operatorname{Spec}k\), where the two maps \(\operatorname{Spec}k\to\mathscr{G}\) are \(p\) and \(q\); it is a finite etale scheme over \(k\) which represents isomorphisms between \(p\) and \(q\); in particular, we have a preferred point \(\operatorname{Spec}k^{s}\to E\).
The fact that the two actions of \(\operatorname{Gal}(k^{s}/k)\) on the two fiber functors associated with \(p^{s}\) and \(q^{s}\) are identified by \(\gamma\) implies that the preferred point \(\operatorname{Spec}k^{s}\to E\) is Galois invariant, hence it is \(k\)-rational and we get an isomorphism \(p\simeq q\). Consider now a section \(s:\operatorname{Gal}(k^{s}/k)\to\pi_{1}(\mathscr{G},b)\). Notice that \(\operatorname{Spec}k^{s}\) is a \(\operatorname{Gal}(k^{s}/k)\)-torsor over \(k\); define \(T/k\) as the induced \(\pi_{1}(\mathscr{G},b)\)-torsor \[\operatorname{Spec}k^{s}\times^{\operatorname{Gal}(k^{s}/k)}\pi_{1}(\mathscr{G},b)=(\operatorname{Spec}k^{s}\times\pi_{1}(\mathscr{G},b))/\operatorname{Gal}(k^{s}/k)\] using \(s:\operatorname{Gal}(k^{s}/k)\to\pi_{1}(\mathscr{G},b)\). Since \(s\) is a section of \(\pi_{1}(\mathscr{G},b)\to\operatorname{Gal}(k^{s}/k)\), the induced torsor \(T\times^{\pi_{1}(\mathscr{G},b)}\operatorname{Gal}(k^{s}/k)\) is again \(\operatorname{Spec}k^{s}\), hence we get an equivariant map \(T\to\operatorname{Spec}k^{s}\). By definition of quotient stack, this gives a point \(p:\operatorname{Spec}k\to[\operatorname{Spec}k^{s}/\pi_{1}(\mathscr{G},b)]=\mathscr{G}\). It is straightforward to check that the Galois section associated with \(p\) is equivalent to \(s\). \(\square\) Because of Lemma 5, finding a rational point of \(\mathscr{G}\) is equivalent to splitting the associated short exact sequence of fundamental groups. Recall that the index of a gerbe \(\mathscr{G}\) is the greatest common divisor of the degrees of finite splitting fields of \(\mathscr{G}\). The degree of \(\mathscr{G}\) is the degree of the automorphism group of any geometric point. **Lemma 6**.: _Let \(\mathscr{G}\) be a finite etale gerbe over \(k\). The prime factors of the index of \(\mathscr{G}\) divide the degree of \(\mathscr{G}\)._ Proof.: We have that \(\mathscr{G}_{k^{s}}\simeq\mathscr{B}_{k^{s}}G\) with \(G=\pi_{1}(\mathscr{G}_{k^{s}})\). Since \(G\) is finite, there exists a normal, finite index subgroup of \(\pi_{1}(\mathscr{G})\) with trivial intersection with \(G\subset\pi_{1}(\mathscr{G})\). Because of this, there are finite groups \(H\), \(Q\) with a commutative diagram of short exact sequences where the vertical arrows are surjective. In particular, \(\pi_{1}(\mathscr{G})\) identifies with the fibered product \(H\times_{Q}\operatorname{Gal}(k^{s}/k)\). Let \(p\) be a prime which does not divide \(|G|\) and choose \(P\subset H\) a \(p\)-Sylow subgroup of \(H\); then \(P\hookrightarrow Q\) is injective and defines a \(p\)-Sylow of \(Q\). Let \(k^{\prime}\subset k^{s}\) be the fixed field of the inverse image of \(P\) in \(\operatorname{Gal}(k^{s}/k)\); we get an induced map \(\operatorname{Gal}(k^{s}/k^{\prime})\to\pi_{1}(\mathscr{G})\). By Lemma 5, \(\mathscr{G}(k^{\prime})\neq\emptyset\), hence \(p\) does not divide the index of \(\mathscr{G}\) since by construction \(p\nmid[k^{\prime}:k]=[Q:P]\). **Remark 7**.: If \(G\) is abelian, there is a natural action of \(\operatorname{Gal}(k^{s}/k)\) on \(G\) and \(\mathscr{G}\) corresponds to an element \(g\) of \(\operatorname{H}^{2}(\operatorname{Gal}(k^{s}/k),G)\), see for instance [11, Chapitre IV, §3.4], and Lemma 6 follows from the fact that the order of \(g\) divides the index of \(\mathscr{G}\) (this can be seen by taking restriction and corestriction along the splitting fields of \(\mathscr{G}\)). The case in which \(G\) is solvable then follows by an easy induction argument. The following result is due to L. A.
Semetkov and B. Sambale, see [10, Theorems 2, 3 and 9]. **Theorem 8** (Semetkov, Sambale).: _Consider a short exact sequence of finite groups_ \[1\to G\to H\to Q\to 1.\] _For every prime \(p\) dividing \(|G|\), assume that there exists a \(p\)-Sylow subgroup \(P\subset Q\) with a section \(P\to H\). Furthermore, assume either that_ * _the Sylow subgroups of_ \(G\) _are abelian, or_ * \(G\) _is metabelian and_ \([G,G]\) _intersects trivially the center of_ \(G\)_._ _Then there is a section \(Q\to H\)._ The case in which the Sylow subgroups of \(G\) are abelian is a direct consequence of Semetkov's theorem [10, Theorem 3]. The second case is essentially [10, Theorem 9] with a weakened hypothesis; it is straightforward to check that the proof still works in the weakened form. **Corollary 9**.: _Let \(\mathscr{G}\) be a finite etale gerbe over \(k\), write \(G=\pi_{1}(\mathscr{G}_{k^{s}})\). Assume either that_ * _the Sylow subgroups of_ \(G\) _are abelian, or_ * \(G\) _is metabelian and_ \([G,G]\) _intersects trivially the center of_ \(G\)_._ _The index of \(\mathscr{G}\) is \(1\) if and only if \(\mathscr{G}\) is neutral._ Proof.: The "if" part is obvious; assume that the index is \(1\). As in the proof of Lemma 6, there exists a commutative diagram of short exact sequences with \(H,Q\) finite, and we get an identification \(\pi_{1}(\mathscr{G})\simeq H\times_{Q}\operatorname{Gal}(k^{s}/k)\). Since \(\mathscr{G}\) has index \(1\), for every prime \(p\) dividing \(|G|\) there exists a finite extension \(k^{\prime}/k\) with \(p\nmid[k^{\prime}:k]\) and a section \(\operatorname{Gal}(k^{\prime s}/k^{\prime})\to\pi_{1}(\mathscr{G})\). While \(k^{\prime}/k\) might not be separable, the natural map \(\operatorname{Gal}(k^{\prime s}/k^{\prime})=\operatorname{Aut}(\bar{k}/k^{\prime})\to\operatorname{Gal}(k^{s}/k)=\operatorname{Aut}(\bar{k}/k)\) is injective. This implies that the image of \(\operatorname{Gal}(k^{\prime s}/k^{\prime})\to\pi_{1}(\mathscr{G})\) intersects \(G\) trivially. Hence, up to replacing \(Q\) with a larger quotient of \(\operatorname{Gal}(k^{s}/k)\), we can assume that the image of the composition \(\operatorname{Gal}(k^{\prime s}/k^{\prime})\to\pi_{1}(\mathscr{G})\to H\) maps injectively into \(Q\). Furthermore, we can assume that this holds for every prime \(p\) dividing \(|G|\). Since \(p\nmid[k^{\prime}:k]\), the image of \(\operatorname{Gal}(k^{\prime s}/k^{\prime})\to Q\) contains a \(p\)-Sylow of \(Q\), hence the hypothesis of Theorem 8 is satisfied. We thus get a section \(Q\to H\) which induces a section \(\operatorname{Gal}(k^{s}/k)\to\pi_{1}(\mathscr{G})\). We conclude by Lemma 5. **Remark 10**.: If \(G\) is abelian, Corollary 9 can be proved by an easy argument with cohomology similarly to Remark 7. The case in which \(G\) is an iterated semi-direct product of abelian groups with pairwise coprime orders follows by an easy induction argument, and this is sufficient for the present article. Still, for future applications it is better to have the more general result, and as far as we know this requires using group theory and the theorems of Semetkov and Sambale. ## 3. Generalities about fields of moduli of plane curves Before proving the main theorems, let us recall some concepts from previous works. Let \(k\) be a field of characteristic \(0\) with algebraic closure \(K\) and \(C\subset\mathbb{P}^{2}_{K}\) a smooth plane curve over \(K\). The case \(\deg C\leq 2\) is trivial. If \(\deg C=3\), i.e.
\(C\) is elliptic, it is well-known that \(C\) can be defined by a polynomial with coefficients in \(k(j_{C})\), where \(j_{C}\) is the \(j\)-invariant. Two elliptic curves over \(K\) are isomorphic if and only if they have the same \(j\)-invariant; this implies that \(k(j_{C})\) is the field of moduli of \(C\). Assume \(\deg C\geq 4\). The automorphism group \(\operatorname{Aut}(C)\) of \(C\) coincides with the automorphism group of the pair \((\mathbb{P}^{2},C)\) [1, Appendix A, §1, Exercise 18]; this implies that the fields of moduli of \(C\) and of the pair \((\mathbb{P}^{2},C)\) coincide [10, Lemma 13]. Up to base change, we can assume that \(k\) is the field of moduli. There is a finite etale gerbe \(\mathscr{G}\) over \(k\), called residual gerbe [BV, §3.1], which classifies twisted forms of \((\mathbb{P}^{2},C)\); this gerbe also classifies twisted forms of \(C\) as an abstract curve [10, Lemma 13]. In particular, \(C\) is defined over the field of moduli if and only if \(\mathscr{G}(k)\neq\emptyset\), and every model of \(C\) over \(k\) embeds in a unique Brauer-Severi surface over \(k\). There is a universal twisted form \((\mathscr{P},\mathscr{C})\to\mathscr{G}\) of \((\mathbb{P}^{2},C)\) over \(\mathscr{G}\) [BV, §5.1], and the base change \(\mathscr{P}_{K}\to\mathscr{G}_{K}\) identifies naturally with \([\mathbb{P}^{2}_{K}/\operatorname{Aut}(C)]\to\mathscr{B}_{K}\operatorname{Aut}(C)=[\operatorname{Spec}K/\operatorname{Aut}(C)]\). The compression \(\mathbf{P}\) [BV, Definition 5.3] is the coarse moduli space of \(\mathscr{P}\), and the natural map \(\mathscr{P}\to\mathbf{P}\) is birational since the action of \(\operatorname{Aut}(C)\) on \(\mathbb{P}^{2}\) is faithful. In particular, we get a rational map \(\mathbf{P}\dashrightarrow\mathscr{G}\) by composing the inverse \(\mathbf{P}\dashrightarrow\mathscr{P}\) with \(\mathscr{P}\to\mathscr{G}\). Similarly, the coarse moduli space \(\mathbf{C}\) of \(\mathscr{C}\) has a rational map \(\mathbf{C}\dashrightarrow\mathscr{G}\). If \(X\) is a variety over \(k\) with quotient singularities, a rational point of \(X\) is _liftable_ if it lifts to a rational point of a resolution of singularities [BV, Definition 6.6]. A quotient singularity \((X,x)\) is of type R [BV, Definition 6.11] if every singularity which, up to base change, is etale locally isomorphic to \((X,x)\), is liftable. A finite group \(G\) is of type R if, for every faithful representation \(V\), the quotient singularity \((V/G,[0])\) is of type R. Singularities and groups of type R in dimension \(2\) are completely classified [Brec]. The typical strategy we use is to find a rational point \(p\in\mathbf{P}(k)\) of the compression such that the corresponding singularity in \(\mathbb{P}^{2}_{K}/\operatorname{Aut}(C)=\mathbf{P}_{K}\) is of type R. This guarantees that \(p\) is liftable, which in turn implies that \(\mathscr{G}(k)\neq\emptyset\) thanks to the Lang-Nishimura theorem for stacks [BV23, Theorem 4.1] applied to \(\tilde{\mathbf{P}}\dashrightarrow\mathscr{G}\), where \(\tilde{\mathbf{P}}\to\mathbf{P}\) is a resolution of singularities. Notice that if \(c\in C\) is a point, its stabilizer acts faithfully on the tangent space of \(C\) in \(c\), hence it is cyclic. Furthermore, if \(g\in\operatorname{Aut}(C)\) is a non-trivial element fixing a line \(L\subset\mathbb{P}^{2}\), then \(L\) has transversal intersection with \(C\): in fact, a non-zero vector tangent to \(C\) in a point of \(C\cap L\) is an eigenvector for \(g\) with non-trivial eigenvalue.
We are going to use these observations repeatedly, without further justification. Recall that a closed subset \(Z\subset\mathbb{P}^{2}_{K}\) is _distinguished_ [Breb, Definition 17] if, for every \(k\)-linear automorphism \(\tau\) of \((\mathbb{P}^{2}_{K},C)\), we have \(\tau(Z)=Z\). We stress that this definition involves \(k\)-linear, and not only \(K\)-linear, automorphisms. If \(Z\) is distinguished, \(Z/\operatorname{Aut}(C)\subset\mathbb{P}^{2}/\operatorname{Aut}(C)\) descends to a closed subset of the compression \(\mathbf{P}\) [Breb, Lemma 18]. ## 4. Proof of Theorem 2 The genus of \(C\) is \((d-1)(d-2)/2\); write \(h\) for the genus of \(C/\operatorname{Aut}(C)\). By [Brec, Theorem 7], up to conjugation the automorphism group \(\operatorname{Aut}(C)\subset\operatorname{PGL}_{3}\) is one of the following. 1. The abelian group \(C_{a}\times C_{an}\) generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\) and \(\operatorname{diag}(\zeta_{an},\zeta^{e}_{an},1)\) for positive integers \(a,n,e\) satisfying \(e^{2}-e+1\equiv 0\pmod{n}\) and \(3\mid an\), 2. The abelian group \(C_{a}\times C_{a2^{b}n}\) generated by \(\operatorname{diag}(\zeta_{a},1,1)\), \(\operatorname{diag}(1,\zeta_{a},1)\), \(\operatorname{diag}(\zeta_{a2^{b}n},\zeta^{e}_{a2^{b}n},1)\) for positive integers \(a,b,n,e\) with \(e^{2}\equiv 1\pmod{n}\), \(e\equiv\pm 1\pmod{2^{b}}\), and \(n\) odd. 3. The Hessian group \(H_{2}\simeq C_{3}^{2}\rtimes C_{2}\) of order \(18\), see [Brec, §3.2]. 4. The Hessian group \(H_{3}\simeq C_{3}^{2}\rtimes C_{4}\) of order \(36\), see [Brec, §3.2]. Let us analyze each of these cases. **(1)**.: Since \(e^{2}-e+1\equiv 0\pmod{n}\), we get that \(n\) is odd: the equation has no solutions modulo \(2\). It follows that the singularities in the three points \((0:0:1)\), \((0:1:0)\) and \((1:0:0)\) of \(\mathbb{P}^{2}/\operatorname{Aut}(C)\) are of type R thanks to [Bre23, Theorem 4], since their local fundamental group has odd order \(n\). They form a distinguished subset, hence they define a \(0\)-cycle \(Z\) of degree \(3\) on the compression \(\mathbf{P}\). If \(Z\) contains a rational point, then \(\mathscr{G}(k)\neq\emptyset\) by the Lang-Nishimura theorem for stacks [1, Theorem 4.1] applied to \(\tilde{\mathbf{P}}\dashrightarrow\mathscr{G}\), where \(\tilde{\mathbf{P}}\to\mathbf{P}\) is a resolution of singularities. This is absurd, since we are assuming that \(C\) is not defined over \(k\). Hence, \(Z=\{z\}\) contains only one point \(z\) with \([k(z):k]=3\). There are two cases: either \(\mathbf{C}\subset\mathbf{P}\) contains \(z\), or not. Equivalently, either \(C\) contains the three points \((0:0:1)\), \((0:1:0)\), \((1:0:0)\), or none. **Case 1: the three points are not in \(C\).** Consider the line \(L\) of points \((x:0:z)\); the group \(\operatorname{Aut}(C)=C_{a}\times C_{an}\) maps \(L\) to itself. There are two fixed points on \(L\), and all the other orbits have order \(an\). By assumption, \(C\) does not contain the fixed points. This implies that \(an\) divides the degree of each orbit in \(C\cap L\) considered as a \(0\)-cycle with multiplicities; this in turn implies that \(an\) divides the degree of the \(0\)-cycle \(C\cap L\), which is \(d\). **Case 2: the three points are in \(C\).** Since the stabilizers of points of \(C\) are cyclic and \(\operatorname{Aut}(C)\) fixes \((0:0:1)\), we get \(a=1\). Since \(a=1\), the three points \((0:0:1)\), \((0:1:0)\) and \((1:0:0)\) are the only points of \(\mathbb{P}^{2}\) with non-trivial stabilizer.
It follows that the ramification divisor of \(C\to C/\operatorname{Aut}(C)\) has degree \(3(n-1)\). By Riemann-Hurwitz, we get \[d(d-3)=2n(h-1)+3(n-1),\] or equivalently \[d^{2}-3d+3=n(2h+1).\] It remains to show that \(n\neq d^{2}-3d+3\), or equivalently \(h\neq 0\). Assume by contradiction that \(h=0\). The three points \((0:0:1)\), \((0:1:0)\) and \((1:0:0)\) are fixed and form a distinguished subset, hence they define a divisor of degree \(3\) on the compression \(\mathbf{C}\) of \(C\). Since \(\mathbf{C}\) is a twisted form of \(C/\operatorname{Aut}(C)\), it has genus \(h=0\), hence \(\mathbf{C}\) is isomorphic to \(\mathbb{P}^{1}\) since it has a divisor of odd degree. We thus get a rational map \(\mathbb{P}^{1}\dashrightarrow\mathscr{G}\), and hence \(\mathscr{G}(k)\neq\emptyset\) since \(\mathbb{P}^{1}(k)\) is dense, thus giving a contradiction. To conclude, we have to show that a solution to \(e^{2}-e+1\equiv 0\pmod{n}\) exists if and only if \(n\) is odd and \(-3\) is a square modulo \(n\). The fact that \(n\) must be odd is obvious, since there are no solutions modulo \(2\). Assuming that \(n\) is odd, we can multiply by \(4\) and obtain the equation \[(2e-1)^{2}\equiv-3\pmod{n}\] which clearly has solutions if and only if \(-3\) is a square modulo \(n\). **(2).** We have already shown in [1, Theorem 2, §5.3] that if \(\operatorname{Aut}(C)\) is of type (2) then \(d\) is even (the reference's hypothesis that \(3\nmid d\) is not used in this part of the proof). **(3).** Let \(\mathscr{G}\) be the residual gerbe; thanks to Lemma 6 and Corollary 9, it is enough to show that the index of \(\mathscr{G}\) is \(1\). Let us start by showing that it divides \(4\). Each subgroup of \(H_{2}\) of order \(3\) has \(3\) fixed points (see the matrices \(M_{0}\), \(M_{1}\) in [1, §3.2]). There are \(4\) such subgroups, and the fixed loci of these subgroups are pairwise disjoint (this follows easily from the presentation given in [1, §3.2]); we call these the four special triangles. The elements of order \(2\) in \(H_{2}\) are all conjugate. Each of them fixes one point of each special triangle, and swaps the other two points (see the matrix \(M_{2}\) in [1, §3.2]). Hence, the stabilizer of each of the \(12\) points of the 4 special triangles is isomorphic to \(S_{3}\), and each special triangle is an orbit for the action of \(H_{2}\). These 12 points form a distinguished subset, hence they descend to a 0-cycle \(Z\) of degree 4 in the compression \(\mathbf{P}\) of \((\mathbb{P}^{2},C)\). Since \(S_{3}\) is of type \(\mathrm{R}_{2}\) [1], the singularities of the points of \(Z\) are liftable, hence we get a map \(Z\to\mathscr{G}\) thanks to the Lang-Nishimura theorem for stacks [1, Theorem 4.1]. It follows that the index of \(\mathscr{G}\) divides 4. To conclude, it is enough to show that the index is odd. Consider the 9 lines fixed by the 9 elements of order 2 of \(H_{2}\). Any intersection point of two of these lines has non-cyclic stabilizer, since it is fixed by two elements of order 2. In particular, these intersection points are not contained in \(C\). By what we have said above, we can easily see that the intersection points of the lines are precisely the 12 points of the special triangles. It follows that the intersection of \(C\) with these lines has exactly \(9d\) points divided into \(9d/9=d\) orbits. These form a distinguished subset, hence they descend to a divisor of degree \(d\) on the compression \(\mathbf{C}\) of \(C\).
Since we have a rational map \(\mathbf{C}\dashrightarrow\mathscr{G}\), by the Lang-Nishimura theorem for stacks [1, Theorem 4.1] we get that the index of \(\mathscr{G}\) divides \(d\), and hence it is odd. **(4).** We already know the fixed loci of the elements of \(H_{2}\subset H_{3}\). The 18 elements of \(H_{3}\smallsetminus H_{2}\) all have order 4, and each fixes 3 points. The 9 cyclic subgroups of order 4 are all conjugate. An element \(g\) of order 4 fixes 3 points, and none of them is also fixed by an element of order 3 (see the matrix \(M_{3}\) in [1]). For 2 of these 3 fixed points, \(g\) acts with eigenvalues \((\pm i,-1)\) on the tangent space, while on the third point the eigenvalues are \((\pm i,\mp i)\). Let us call 4-points the ones stabilized by an element of order 4 which acts with eigenvalues \((\pm i,-1)\) on the tangent space. Let \(L\) be the line fixed by \(g^{2}\): it contains two 4-points fixed by \(g\), while all the other \(\langle g\rangle\)-orbits in \(L\) have order 2. Since \(L\) has transversal intersection with \(C\) and \(|C\cap L|=d\) is odd, we get that \(C\) contains exactly one of the two 4-points contained in \(L\). Since there are 9 elements of order 2, we get that \(C\) contains 9 4-points which form a unique \(H_{3}\)-orbit, since the cyclic subgroups of order 4 are conjugate. These 9 points thus form a distinguished subset and they descend to a rational point in the compression \(\mathbf{C}\subset\mathbf{P}\) of \(C\). By the Lang-Nishimura theorem for stacks [1, Theorem 4.1] applied to \(\mathbf{C}\dashrightarrow\mathscr{G}\), we conclude that \(\mathscr{G}(k)\neq\emptyset\). ## 5. Proof of Theorem 3 Assume first that \(C\) has a model \(\mathfrak{C}\) over \(k\); then there exists a Brauer-Severi surface \(P_{\mathfrak{C}}\) over \(k\) with an embedding \(\mathfrak{C}\subset P_{\mathfrak{C}}\)[1, Theorem 5][1]. If \(P_{\mathfrak{C}}(k)\neq\emptyset\), then \(P_{\mathfrak{C}}\simeq\mathbb{P}_{k}^{2}\) and hence we get the desired embedding \(\mathfrak{C}\subset\mathbb{P}_{k}^{2}\) over \(k\). If \(\deg C\) is coprime to \(3\), the index of the Brauer-Severi surface \(P_{\mathfrak{C}}\) is 1, hence \(P_{\mathfrak{C}}(k)\neq\emptyset\). We may thus assume that \(3\mid\deg C\) and \(P_{\mathfrak{C}}(k)=\emptyset\). By [1, Proposition 4.5.4], there exists a finite extension \(k^{\prime}/k\) of degree 3 with \(P_{\mathfrak{C}}(k^{\prime})\neq\emptyset\), and we get the desired embedding \(\mathfrak{C}_{k^{\prime}}\subset\mathbb{P}_{k^{\prime}}^{2}\) over \(k^{\prime}\). Assume now that \(C\) has no models over \(k\). Again, \(\mathrm{Aut}(C)\) is one of the groups (1)-(4) listed in the proof of the previous theorem. Let us analyze each case. **(1).** Since \(C\) has no models over \(k\), we have \(3\mid\deg C\) by [1]. Using the same notation as in the proof of Theorem 2, there exists a point \(z\) in the compression \(\mathbf{P}\) such that \([k(z):k]=3\) and the \(3\) corresponding points in \(\mathbb{P}^{2}_{K}/\operatorname{Aut}(C)\) are singularities of type R. By the Lang-Nishimura theorem for stacks [1, Theorem 4.1] applied to \(\mathbf{P}\dashrightarrow\mathscr{P}\), we get \(\mathscr{P}(k(z))\neq\emptyset\). This implies that \(C\) descends to a curve \(\mathfrak{C}\) over \(k(z)\) with an embedding \(\mathfrak{C}\subset\mathbb{P}^{2}_{k(z)}\), see [1, §2]. **(2).** Since \(C\) has no models over \(k\), we have \(2\mid\deg C\) by Theorem 2. The line \((x:y:0)\) is distinguished, hence it descends to a Brauer-Severi curve \(B\) in the compression.
Hence, there exists a point \(b\in B\) which is smooth in \(\mathbf{P}\) and such that \([k(b):k]=2\). By the Lang-Nishimura theorem for stacks [1, Theorem 4.1] applied to \(\mathbf{P}\dashrightarrow\mathscr{P}\), we get \(\mathscr{P}(k(b))\neq\emptyset\). This implies that \(C\) descends to a curve \(\mathfrak{C}\) over \(k(b)\) with an embedding \(\mathfrak{C}\subset\mathbb{P}^{2}_{k(b)}\), see [1, §2]. **(3) and (4).** Since \(C\) has no models over \(k\), we have \(2\mid\deg C\) by Theorem 2. In both cases (3) and (4), there are exactly \(9\) elements of order \(2\) in \(\operatorname{Aut}(C)\) and they are all conjugate. Each of them fixes a different line in \(\mathbb{P}^{2}\); these \(9\) lines form a distinguished subset. Since the \(9\) elements of order \(2\) are conjugate, \(\operatorname{Aut}(C)\) acts transitively on the set of these \(9\) lines, hence they descend to a Brauer-Severi curve \(B\) in the compression. We conclude as in case (2).
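The congruence criterion used in case (1) of the proof of Theorem 2, namely that \(e^{2}-e+1\equiv 0\pmod{n}\) is solvable precisely when \(n\) is odd and \(-3\) is a square modulo \(n\), is also easy to confirm by exhaustive search; the following minimal Python sketch (an illustration added here, not part of the proofs) checks the equivalence for small odd \(n\).

```python
# Brute-force check, for small odd n, that e^2 - e + 1 = 0 (mod n) is
# solvable exactly when -3 is a square modulo n.  Illustration only.
def solvable(n):
    return any((e * e - e + 1) % n == 0 for e in range(n))

def minus_three_is_square(n):
    return any((x * x + 3) % n == 0 for x in range(n))

for n in range(1, 200, 2):          # odd n; modulo 2 there is no solution
    assert solvable(n) == minus_three_is_square(n), n
print("equivalence confirmed for all odd n < 200")
```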
2307.16880
Remarks on the linear wave equation
We make some remarks on the linear wave equation concerning the existence and uniqueness of weak solutions, satisfaction of the energy equation, growth properties of solutions, the passage from bounded to unbounded domains, and reconciliation of different representations of solutions.
John M. Ball
2023-07-31T17:47:12Z
http://arxiv.org/abs/2307.16880v2
# Remarks on the linear wave equation ###### Abstract. We make some remarks on the linear wave equation concerning the existence and uniqueness of weak solutions, satisfaction of the energy equation, growth properties of solutions, the passage from bounded to unbounded domains, and reconciliation of different representations of solutions. ## 1. Introduction Let \(\Omega\subset\mathbb{R}^{n}\) be open with boundary \(\partial\Omega\). In this paper we consider the linear wave equation \[u_{tt}=\Delta u, \tag{1.1}\] for \(u=u(x,t)\), \(x\in\Omega\), \(t\geqslant 0\), with boundary condition \[u|_{\partial\Omega}=0, \tag{1.2}\] (interpreted appropriately if \(\Omega\) is unbounded) and initial conditions \[u(x,0)=u_{0}(x),\,u_{t}(x,0)=v_{0}(x), \tag{1.3}\] where \(u_{0}\in H^{1}_{0}(\Omega)\), \(v_{0}\in L^{2}(\Omega)\). Our aim is to make some remarks which are hard to find in the literature, although largely implicit in it, concerning the existence and uniqueness of weak solutions to (1.1)-(1.3), satisfaction of the energy equation \[\frac{1}{2}\int_{\Omega}\left(|\nabla u(x,t)|^{2}+u_{t}(x,t)^{2}\right)\,dx= \frac{1}{2}\int_{\Omega}\left(|\nabla u_{0}(x)|^{2}+v_{0}(x)^{2}\right)\,dx, \tag{1.4}\] growth properties of solutions, the passage from bounded to unbounded domains, and the reconciliation of different representations of solutions. Although these remarks are perhaps original only in various details, it is hoped that readers may find their combination useful. If \(\Omega\) is bounded, perhaps the easiest method for proving existence and uniqueness is to represent \(u\) as a Fourier expansion in the eigenfunctions \(\omega_{j}\in H^{1}_{0}(\Omega)\) of \(-\Delta\), with corresponding real eigenvalues \(\lambda_{j}>0\), in terms of which \[u(x,t)=\sum_{j=1}^{\infty}\left(u_{0j}\cos(\sqrt{\lambda_{j}}t)+v_{0j}\frac{ \sin(\sqrt{\lambda_{j}}t)}{\sqrt{\lambda_{j}}}\right)\omega_{j}, \tag{1.5}\] where \(u_{0j}=(u_{0},\omega_{j}),v_{0j}=(v_{0},\omega_{j})\) and \((\cdot,\cdot)\) denotes the inner product in \(L^{2}(\Omega)\). In the case \(\Omega=\mathbb{R}^{n}\) the solution can be given explicitly in terms of the Fourier transforms \(\hat{u}_{0}\), \(\hat{v}_{0}\) of the initial data as \[u(x,t)=\frac{1}{(2\pi)^{n/2}}\int_{\mathbb{R}^{n}}\left(\hat{u}_{0}(\xi)\cos (|\xi|t)+\hat{v}_{0}(\xi)\frac{\sin(|\xi|t)}{|\xi|}\right)e^{ix\cdot\xi}\,d\xi. \tag{1.6}\] Alternatively one can use Poisson's method of spherical means (see, for example, [21]), which in the case \(n=3\) leads to Kirchhoff's solution \[u(x,t)=\frac{1}{4\pi t}\int_{S(x,t)}v_{0}(y)\,dS_{y}+\frac{\partial}{\partial t} \left(\frac{1}{4\pi t}\int_{S(x,t)}u_{0}(y)\,dS_{y}\right), \tag{1.7}\] where \(S(x,t)=\{y\in\mathbb{R}^{3}:|y-x|=t\}\) and \(S_{y}\) is the usual \((n-1)\)-dimensional surface measure. Yet another method is to represent the solution as a superposition of plane waves via the Radon transform [14, 23]. For \(\Omega\) a general (possibly unbounded) open set these methods do not apply, and a natural approach that we review in Section 2 is via the Hille-Yosida theorem. In Section 3 we discuss the growth in time of the \(L^{2}\) norm of solutions when \(\Omega\) is unbounded. In order to make the connection with weak solutions in the sense of distributions, and thus to establish uniqueness of weak solutions, it is convenient to calculate the adjoint of the wave operator, which we do in Section 4. We show that the adjoint is injective and that there are no nontrivial linear constants of motion. 
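The expansion (1.5) is completely explicit once the spectrum of \(-\Delta\) is known. On \(\Omega=(0,\pi)\subset\mathbb{R}\), for instance, \(\omega_{j}(x)=\sqrt{2/\pi}\sin(jx)\) and \(\lambda_{j}=j^{2}\), and the truncated series can be evaluated directly; the following minimal Python sketch (coefficients chosen arbitrarily, for illustration only) does so and checks the energy equation (1.4) numerically.

```python
import numpy as np

# Truncated eigenfunction expansion (1.5) on Omega = (0, pi), where
# omega_j(x) = sqrt(2/pi) * sin(j*x), lambda_j = j**2, sqrt(lambda_j) = j.
# The coefficients below are arbitrary, for illustration only.
J = 50
j = np.arange(1, J + 1)
rng = np.random.default_rng(0)
u0j = rng.normal(size=J) / j**2          # u_{0j} = (u_0, omega_j)
v0j = rng.normal(size=J) / j             # v_{0j} = (v_0, omega_j)

def coeffs(t):
    """Coefficients of u(., t) and u_t(., t) in the basis {omega_j}."""
    a = u0j * np.cos(j * t) + v0j * np.sin(j * t) / j
    b = -u0j * j * np.sin(j * t) + v0j * np.cos(j * t)
    return a, b

def u(x, t):
    """Evaluate the truncated series (1.5) at the points x."""
    a, _ = coeffs(t)
    return np.sqrt(2 / np.pi) * (a @ np.sin(np.outer(j, x)))

def energy(t):
    # (1.4) in coefficient form: E = (1/2) sum_j (lambda_j a_j^2 + b_j^2)
    a, b = coeffs(t)
    return 0.5 * np.sum(j**2 * a**2 + b**2)

print(u(np.array([0.5, 1.5]), 2.0))                     # sample values
print([round(energy(t), 12) for t in (0.0, 0.7, 3.1)])  # identical values
```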
The use of the phase space \(H=H^{1}_{0}(\Omega)\times L^{2}(\Omega)\) for unbounded domains \(\Omega\) is motivated by taking a limit of the semiflows corresponding to an increasing sequence of bounded open sets \(\Omega_{j}\) with union \(\Omega\) (Theorem 9). It is not completely obvious how to reconcile the different representations of solutions described above. We explore this for Kirchhoff's solution in Section 5, showing that the derivation of Kirchhoff's solution implies smoothing properties under the taking of averages over spheres and balls that are familiar in harmonic analysis. We illustrate the related harmonic analysis methods by showing (Theorem 10) that if \(B=B(0,1)\) denotes the unit ball in \(\mathbb{R}^{n}\) then for \(f\in L^{2}(\mathbb{R}^{n})\) and \(t>0\) the average \[\mathcal{N}_{t}(f)(x)=\int_{B}f(x+tz)\,dz \tag{1.8}\] belongs to \(H^{\frac{n+1}{2}}(\mathbb{R}^{n})\). ## 2. Existence and uniqueness via the Hille-Yosida theorem Let \(H=H^{1}_{0}(\Omega)\times L^{2}(\Omega)\). \(H\) is a Hilbert space with inner product \[\langle\left(\begin{array}{c}u\\ v\end{array}\right),\left(\begin{array}{c}p\\ q\end{array}\right)\rangle=\int_{\Omega}(up+\nabla u\cdot\nabla p+vq)\,dx.\] We briefly review how, for a general (possibly unbounded) domain \(\Omega\subset\mathbb{R}^{n}\), it can be proved using the Hille-Yosida theorem that (1.1)-(1.3) generates a semiflow on \(H\), and that the energy equation (1.4) holds. For equivalent treatments see [1, Chapter 7], [9, Chapter IV], both using cosine families, and for the case of bounded domains [8, p444]. We write (1.1) in the form \[\dot{w}=Aw, \tag{2.1}\] where \(v=u_{t}\), \(w=\left(\begin{array}{c}u\\ v\end{array}\right)\) and \[A=\left(\begin{array}{cc}0&\mathbf{1}\\ \Delta&0\end{array}\right). \tag{2.2}\] We regard \(A\) as an unbounded linear operator on \(H\) with domain \[D(A)=\{\left(\begin{array}{c}u\\ v\end{array}\right)\in H:\Delta u\in L^{2}(\Omega),v\in H^{1}_{0}(\Omega)\}. \tag{2.3}\] Then \(D(A)\) is dense in \(H\) and it is easily checked that \(A:D(A)\to H\) is closed. We apply the following special case of the Hille-Yosida theorem (see, for example, [25, Corollary 3.8], [8, p441]): **Theorem 1**.: _A closed densely defined linear operator \(A\) on a real Banach space \(X\) is the generator of a \(C^{0}\)-semigroup \(\{T(t)\}_{t\geqslant 0}\) satisfying for some \(\omega\in\mathbb{R}\)_ \[\|T(t)\|\leqslant e^{\omega t},\;t\geqslant 0,\] _if and only if \((\omega,\infty)\subset\rho(A)\) and \(\|R_{\lambda}\|\leqslant\frac{1}{\lambda-\omega}\) for \(\lambda>\omega\)._ Here a \(C^{0}\)_-semigroup_\(\{T(t)\}_{t\geqslant 0}\) is a family of bounded linear operators \(T(t):X\to X\) satisfying (i) \(T(0)=\)identity, (ii) \(T(s+t)=T(s)T(t)\) for all \(s,t\geqslant 0\) and (iii) \(t\mapsto T(t)p\) is continuous from \([0,\infty)\to X\) for all \(p\in X\). The _resolvent set_\(\rho(A)\) is the set of \(\lambda\) such that \(\lambda\mathbf{1}-A:D(A)\to X\) is one-to-one and onto, and for \(\lambda\in\rho(A)\) the _resolvent operator_\(R_{\lambda}:X\to X\) is defined by \(R_{\lambda}w=(\lambda\mathbf{1}-A)^{-1}w\). **Theorem 2**.: _Let \(A\) be given by (2.2),(2.3). Then \(A\) is the generator of a \(C^{0}\)-semigroup \(\{T(t)\}_{t\geqslant 0}\) on \(H\), and the energy equation_ \[E(T(t)p)=E(p),\;\;t\geqslant 0 \tag{2.4}\] _is satisfied for all \(p=\left(\begin{array}{c}u_{0}\\ v_{0}\end{array}\right)\in H\), where_ \[E(w):=\frac{1}{2}\int_{\Omega}(|\nabla u(x)|^{2}+v(x)^{2})\,dx. 
\tag{2.5}\] Proof.: We first show that \((0,\infty)\subset\rho(A)\). Thus we need to prove that for any \(\lambda>0\) and \(f\in H^{1}_{0}(\Omega)\), \(g\in L^{2}(\Omega)\) there exists a unique solution \(\left(\begin{array}{c}u\\ v\end{array}\right)\in H\) with \(\Delta u\in L^{2}(\Omega)\), \(v\in H^{1}_{0}(\Omega)\) to \[\lambda u-v =f, \tag{2.6}\] \[\lambda v-\Delta u =g. \tag{2.7}\] Since \(v=\lambda u-f\) we just need to show that there is a unique solution \(u\in H^{1}_{0}(\Omega)\) to \[-\Delta u+\lambda^{2}u-(\lambda f+g)=0. \tag{2.8}\] The existence follows by minimization of the functional \[I(u)=\int_{\Omega}\left(\frac{1}{2}|\nabla u|^{2}+\frac{\lambda^{2}}{2}u^{2}-(\lambda f+g)u\right)\,dx \tag{2.9}\] over \(H^{1}_{0}(\Omega)\) via the direct method of the calculus of variations and showing that the minimizer satisfies the Euler-Lagrange equation, and the uniqueness follows since the difference \(z\) of two solutions to (2.8) satisfies \(\int_{\Omega}(z^{2}+|\nabla z|^{2})\,dx=0\). To prove the resolvent estimate, note that \(R_{\lambda}\left(\begin{array}{c}f\\ g\end{array}\right)=\left(\begin{array}{c}u\\ v\end{array}\right)\), and thus taking the inner product in \(H\) of (2.6), (2.7) with \(\left(\begin{array}{c}u\\ v\end{array}\right)\) we obtain \[\lambda\int_{\Omega}(u^{2}+|\nabla u|^{2}+v^{2})\,dx=(v,u)+\int_{\Omega}(fu+\nabla f\cdot\nabla u+gv)\,dx, \tag{2.10}\] where \((\cdot,\cdot)\) denotes the inner product in \(L^{2}(\Omega)\). But \((v,u)\leqslant\dfrac{1}{2}\int_{\Omega}(u^{2}+|\nabla u|^{2}+v^{2})\,dx\), and hence \[(\lambda-\dfrac{1}{2})\int_{\Omega}(u^{2}+|\nabla u|^{2}+v^{2})\,dx\leqslant\int_{\Omega}(f,\nabla f,g)\cdot(u,\nabla u,v)\,dx, \tag{2.11}\] from which the estimate \[\|R_{\lambda}\|\leqslant\dfrac{1}{\lambda-\frac{1}{2}}\text{ for }\lambda>\dfrac{1}{2} \tag{2.12}\] follows. It remains to prove that the energy equation (2.4) holds, this not being immediately obvious since the formal derivation of it via multiplication of (1.1) by \(u_{t}\) and integration is not valid for \(u_{0}\in H^{1}_{0}(\Omega),v_{0}\in L^{2}(\Omega)\). To this end we note that \[E(w):=\dfrac{1}{2}\int_{\Omega}(|\nabla u(x)|^{2}+v(x)^{2})\,dx \tag{2.13}\] is a \(C^{1}\) function of \(w=\left(\begin{array}{c}u\\ v\end{array}\right)\in H\), and that if \(w\in D(A)\) then \[E^{\prime}(w)(Aw)=\int_{\Omega}(\nabla u\cdot\nabla v+v\cdot\Delta u)\,dx=0. \tag{2.14}\] But by a well-known result for linear semigroups (see, for example, [25, Theorem 2.4]), the map \(t\mapsto T(t)p\) is \(C^{1}\) for \(p\in D(A)\) with derivative \(AT(t)p\). Thus if \(p\in D(A)\) then \(t\mapsto E(T(t)p)\) is \(C^{1}\) with derivative \(E^{\prime}(T(t)p)(AT(t)p)=0\). Hence \(E(T(t)p)=E(p)\) for all \(t\geqslant 0\), \(p\in D(A)\), and thus, since \(D(A)\) is dense in \(H\) and \(E\) and \(T(t)\) are continuous, also for \(p\in H\). _Remark 1_.: An alternative approach to proving existence, involving much the same calculations, is to apply the Lumer-Phillips theorem (see e.g. [25, Theorem 4.3]) by showing that the operator \(-A+\lambda\mathbf{1}\) is maximal monotone for \(\lambda>\frac{1}{2}\).
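In one space dimension the resolvent construction in the proof is completely concrete: on \(\Omega=(0,1)\), equation (2.8) reads \(-u''+\lambda^{2}u=\lambda f+g\) with \(u(0)=u(1)=0\). The following minimal Python sketch (second-order centred finite differences; data and parameters chosen ad hoc for illustration) solves this system, recovers \(v\) from (2.6), and checks that the computed ratio is consistent with the bound (2.12).

```python
import numpy as np

# Finite-difference illustration of the resolvent equation (2.8) on
# Omega = (0,1):  -u'' + lam**2 * u = lam*f + g,  u(0) = u(1) = 0,
# followed by a numerical check of the bound (2.12).  Illustration only.
N = 400
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)[1:-1]       # interior grid points

lam = 2.0                                    # any lam > 1/2
f = np.sin(np.pi * x)                        # f in H^1_0(0,1)
g = x * (1.0 - x)                            # g in L^2(0,1)

# Tridiagonal matrix for -d^2/dx^2 + lam^2 with Dirichlet conditions
A = (np.diag(np.full(N - 1, 2.0 / h**2 + lam**2))
     - np.diag(np.full(N - 2, 1.0 / h**2), 1)
     - np.diag(np.full(N - 2, 1.0 / h**2), -1))
u = np.linalg.solve(A, lam * f + g)
v = lam * u - f                              # recover v from (2.6)

def H_norm(p, q):
    # discrete norm of (p, q) in H = H^1_0 x L^2; forward differences for p'
    dp = np.diff(np.concatenate(([0.0], p, [0.0]))) / h
    return np.sqrt(h * (np.sum(p**2) + np.sum(dp**2) + np.sum(q**2)))

print(H_norm(u, v) / H_norm(f, g), "<=", 1.0 / (lam - 0.5))  # cf. (2.12)
```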
_Remark 2_.: The fact, reflecting the time reversibility of (1.1), that \(A\) generates a _group_ \(\{T(t)\}_{t\in\mathbb{R}}\) of bounded linear operators can be proved either by checking that \(-A\) generates a \(C^{0}\)-semigroup or by verifying that \[T(-t):=\left(\begin{array}{cc}\mathbf{1}&0\\ 0&-\mathbf{1}\end{array}\right)T(t)\left(\begin{array}{cc}\mathbf{1}&0\\ 0&-\mathbf{1}\end{array}\right) \tag{2.15}\] satisfies \(T(-t)T(t)=\mathbf{1}\) by calculating \(\frac{d}{dt}(T(-t)T(t)p)=0\) for \(p\in D(A)\). _Remark 3_.: Using [25, Theorem 2.4] we also have the regularity result (see [1, Theorem 7.2]) that if \(u_{0}\in H^{1}_{0}(\Omega)\), \(\Delta u_{0}\in L^{2}(\Omega)\) and \(v_{0}\in H^{1}_{0}(\Omega)\) then \(u_{tt}=\Delta u\in C([0,\infty);L^{2}(\Omega))\), \(u_{t}\in C([0,\infty);H^{1}_{0}(\Omega))\). _Remark 4_.: Note that if \(\left(\begin{array}{c}u_{0}\\ v_{0}\end{array}\right)\in H\) then, since \(\left(\begin{array}{c}u_{0}\\ 0\end{array}\right)=A\left(\begin{array}{c}0\\ u_{0}\end{array}\right)\), \[T(t)\left(\begin{array}{c}u_{0}\\ v_{0}\end{array}\right)=T(t)\left(\begin{array}{c}0\\ v_{0}\end{array}\right)+AT(t)\left(\begin{array}{c}0\\ u_{0}\end{array}\right), \tag{2.16}\] so that the semigroup is determined by its action on initial data with first component zero (for the same observation for more general hyperbolic equations see [20, p15]). Said differently, the solution \(u\) with initial data \(u(x,0)=u_{0}(x),u_{t}(x,0)=0\) is given by \(u=h_{t}\), where \(h\) is the (strong) solution of the wave equation with initial data \(h(x,0)=0,h_{t}(x,0)=u_{0}(x)\). ## 3. Growth of \(L^{2}\) norm as \(t\to\infty\). Theorem 2 gives the extra information that \(\|u(\cdot,t)\|_{2}\leqslant e^{t/2}\|p\|_{H}\), where \(\|\cdot\|_{2}\) denotes the norm in \(L^{2}(\Omega)\). However we have the better estimate \[\|u(\cdot,t)\|_{2}^{2}\leqslant\|u_{0}\|_{2}^{2}+2(u_{0},v_{0})t+2E(u_{0},v_{0})t^{2}, \tag{3.1}\] where \(E(u_{0},v_{0}):=\frac{1}{2}\int_{\Omega}(|\nabla u_{0}|^{2}+|v_{0}|^{2})\,dx\). This follows from the energy equation (2.4) by integrating the identity \(\frac{d}{dt}(u_{t},u)=\|u_{t}\|_{2}^{2}-\|\nabla u\|_{2}^{2}\) to deduce that \[(u,u_{t})(t) =(u_{0},v_{0})+\int_{0}^{t}(\|u_{t}(\cdot,\tau)\|_{2}^{2}-\|\nabla u(\cdot,\tau)\|_{2}^{2})\,d\tau,\] \[=(u_{0},v_{0})+2E(u_{0},v_{0})t-2\int_{0}^{t}\|\nabla u(\cdot,\tau)\|_{2}^{2}\,d\tau, \tag{3.2}\] and hence \[\|u(\cdot,t)\|_{2}^{2}=\|u_{0}\|_{2}^{2}+2(u_{0},v_{0})t+2E(u_{0},v_{0})t^{2}-4\int_{0}^{t}\int_{0}^{s}\|\nabla u(\cdot,\tau)\|_{2}^{2}\,d\tau\,ds. \tag{3.3}\] The identity (3.2) follows straightforwardly for \(p\in D(A)\), and then for an arbitrary \(p\in H\) via approximation of \(p\) by a sequence \(p^{(j)}\in D(A)\). If \(\Omega\) is bounded then the energy equation (2.4) and the Poincare inequality imply that \(\|u(\cdot,t)\|_{2}\) is bounded, but for unbounded \(\Omega\) it is possible that \(\|u(\cdot,t)\|_{L^{2}(\Omega)}\to\infty\) as \(t\to\infty\). For example, in the case \(\Omega=\mathbb{R}^{n}\) with \(u_{0}=0\) we have that \[\hat{u}(\xi,t)=\hat{v}_{0}(\xi)\frac{\sin(|\xi|t)}{|\xi|}.\] For \(\varepsilon>0\) and \(r=|\xi|\) let \[\hat{v}_{0}(\xi)=\left\{\begin{array}{ll}r^{-\frac{n}{2}+\varepsilon},&r\in[0,1),\\ 0,&r\geqslant 1.\end{array}\right.
\tag{3.4}\] Then \[\int_{\mathbb{R}^{n}}|\hat{v}_{0}|^{2}d\xi=\omega_{n}\int_{0}^{1}r^{-1+2 \varepsilon}dr<\infty,\] where \(\omega_{n}=\mathcal{H}^{n-1}(S^{n-1})\), so that \(v_{0}\in L^{2}(\mathbb{R}^{n})\) and is real and radially symmetric (because the Fourier transform of a radially symmetric function is radially symmetric - see, for example, [30, Corollary 1.2]). But \[\|\hat{u}\|_{L^{2}(\mathbb{R}^{n})}^{2} =\omega_{n}\int_{0}^{1}r^{2\varepsilon-3}\sin^{2}(rt)\,dr\] \[=t^{2(1-\varepsilon)}\omega_{n}\int_{0}^{t}s^{2\varepsilon-1} \left(\frac{\sin s}{s}\right)^{2}\,ds,\] so that by Plancherel's theorem \(\lim_{t\to\infty}\|u(\cdot,t)\|_{2}t^{\varepsilon-1}=C_{\varepsilon}>0\). (It does not seem simple to construct such an example for any dimension \(n\) with \(v_{0}\) having compact support.) Note, however, that for \(\Omega=\mathbb{R}^{n}\) we always have \[\lim_{t\to\infty}t^{-1}\|u(\cdot,t)\|_{2}=0. \tag{3.5}\] This follows from Plancherel's theorem since \[t^{-2}\|\hat{u}(\cdot,t)\|_{2}^{2}\leqslant 2\int_{\mathbb{R}^{n}}\left(|\hat{u }_{0}|^{2}\frac{\cos^{2}(|\xi|t)}{t^{2}}+|\hat{v}_{0}|^{2}\frac{\sin^{2}(|\xi |t)}{|\xi|^{2}t^{2}}\right)\,d\xi, \tag{3.6}\] which tends to zero as \(t\to\infty\) by dominated convergence as \(\frac{\sin\tau}{\tau}\) is bounded. For general \(\Omega\), we see from (3.2) that (3.5) implies Cesaro equipartition of energy (see [11, p116ff] for related results), i.e. \[\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\|u_{t}(\cdot,\tau)\|_{2}^{2}\,d\tau= \lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\|\nabla u(\cdot,\tau)\|_{2}^{2}\,d \tau=E(u_{0},v_{0}). \tag{3.7}\] In fact, for \(\Omega=\mathbb{R}^{n},n=1,2\), estimates in [19] show that blow-up of the \(L^{2}\) norm as \(t\to\infty\) occurs under the additional hypotheses \(\int_{\mathbb{R}^{n}}(1+|x|)|v_{0}(x)|\,dx<\infty\) and \(\int_{\mathbb{R}^{n}}v_{0}\,dx\neq 0\). In the case \(n=1\) the blow-up for \(v_{0}\in L^{1}(\mathbb{R})\) with \(\int_{\mathbb{R}}v_{0}\,dx\neq 0\) follows easily from d'Alembert's formula \[u(x,t)=\frac{1}{2}\left(u_{0}(x+t)+u_{0}(x-t)+\int_{x-t}^{x+t}v_{0}(z)\,dz \right), \tag{3.8}\] since if \(v_{0}\in L^{1}(\mathbb{R})\) with \(\int_{\mathbb{R}}v_{0}^{+}\,dx\neq\int_{\mathbb{R}}v_{0}^{-}\,dx\) then by Fatou's Lemma \[\infty =\int_{\mathbb{R}}\lim_{t\to\infty}\left(\int_{x-t}^{x+t}v_{0}(z) \,dz\right)^{2}dx\] \[\leqslant\liminf_{t\to\infty}\|2u(\cdot,t)-u_{0}(\cdot+t)-u_{0}( \cdot-t)\|_{2}^{2}\] \[\leqslant 8\left(\|u_{0}\|_{2}^{2}+\lim_{t\to\infty}\|u(\cdot,t)\|_{2 }^{2}\right).\] If, on the other hand, \(v_{0}\in L^{1}(\mathbb{R})\) and \(\int_{\mathbb{R}}v_{0}\,dx=0\) then it can happen either that \(\|u(\cdot,t)\|_{2}\) remains bounded as \(t\to\infty\), or that \(\lim_{t\to\infty}\|u(\cdot,t)\|_{2}=\infty\). For the first case one can take \(v_{0}=0\), and for the second take \(v_{0}\) odd with \[v_{0}(x)=\left\{\begin{array}{cc}0,&x\in[0,1],\\ x^{-\alpha},&x\in(1,\infty),\end{array}\right.\] with \(1<\alpha<\frac{3}{2}\). 
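The estimate below shows that \(\|u(\cdot,t)\|_{2}\) grows at least like \(t^{(3-2\alpha)/2}\) for this choice of \(v_{0}\); the rate is also easy to observe numerically from d'Alembert's formula (3.8). A rough Python sketch (quadrature window and step chosen ad hoc, for illustration only):

```python
import numpy as np

# Numerical check of the growth rate ||u(.,t)||_2 ~ t^{(3-2*alpha)/2}
# for d'Alembert's formula (3.8) with u_0 = 0 and the odd v_0 above.
alpha = 1.25

def V(x):
    # primitive of v_0 vanishing on [-1, 1]; even because v_0 is odd
    y = np.maximum(np.abs(x), 1.0)
    return (y**(1 - alpha) - 1.0) / (1 - alpha)

def u_norm(t, X=5.0e4, N=2_000_000):
    # ||u(., t)||_2 with u(x, t) = (V(x + t) - V(x - t)) / 2
    x = np.linspace(-X, X, N)
    return np.sqrt(np.sum(0.25 * (V(x + t) - V(x - t))**2) * (x[1] - x[0]))

ts = np.array([10.0, 40.0, 160.0])
norms = np.array([u_norm(t) for t in ts])
print(np.diff(np.log(norms)) / np.diff(np.log(ts)))  # both entries near 0.25
```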
Then \(v_{0}\in L^{1}(\mathbb{R})\cap L^{2}(\mathbb{R})\) with \(\int_{\mathbb{R}}v_{0}\,dx=0\), and (without loss of generality taking \(u_{0}=0\)) \[4\|u(\cdot,t)\|_{2}^{2} =\int_{\mathbb{R}}\left(\int_{x-t}^{x+t}v_{0}(s)\,ds\right)^{2}dx\] \[\geqslant\int_{t+1}^{\infty}\left(\int_{x-t}^{x+t}s^{-\alpha}ds\right)^{2}dx=\int_{t+1}^{\infty}\left(\frac{(x+t)^{1-\alpha}-(x-t)^{1-\alpha}}{1-\alpha}\right)^{2}dx\] \[=t^{3-2\alpha}\int_{1+\frac{1}{t}}^{\infty}\left(\frac{(1+y)^{1-\alpha}-(y-1)^{1-\alpha}}{1-\alpha}\right)^{2}dy\geqslant Ct^{3-2\alpha}\] for \(t\geqslant 1\) and some constant \(C>0\). For any \(n\) the set of \(v_{0}\in L^{2}(\mathbb{R}^{n})\) such that \(\|u(\cdot,t)\|_{2}\) is bounded as \(t\to\infty\) is dense in \(L^{2}(\mathbb{R}^{n})\). Indeed if \(\hat{v}_{0}=0\) in some neighbourhood \(B(0,\varepsilon)\) of \(0\), then \[\|\hat{u}(\cdot,t)\|_{2}^{2}\leqslant 2\left(\|\hat{u}_{0}\|_{2}^{2}+\varepsilon^{-2}\int_{|\xi|>\varepsilon}|\hat{v}_{0}|^{2}d\xi\right)\leqslant C<\infty \tag{3.9}\] for all \(t\geqslant 0\). The linear subspace of such functions is dense in \(L^{2}(\mathbb{R}^{n})\), since otherwise there would be some nonzero \(\theta\in L^{2}(\mathbb{R}^{n})\) with \((\theta,v_{0})=0\) whenever \(\hat{v}_{0}\in C_{0}^{\infty}(\mathbb{R}^{n}\setminus\overline{B(0,\varepsilon)})\) for some \(\varepsilon>0\). But by Plancherel's theorem this implies that \((\hat{\theta},\hat{v}_{0})=0\) for all such \(\hat{v}_{0}\), so that \(\hat{\theta}(\xi)=0\) for \(|\xi|>\varepsilon\) and any \(\varepsilon>0\), thus \(\hat{\theta}=0\) and hence \(\theta=0\), a contradiction. As shown by Brodsky [7] (see also [27]), in the case when \(\frac{\hat{v}_{0}}{|\xi|}\in L^{2}(\mathbb{R}^{n})\) (equivalently \((-\Delta)^{-\frac{1}{2}}v_{0}\in L^{2}(\mathbb{R}^{n})\)) it follows from (1.6) and the Riemann-Lebesgue lemma that \[\lim_{t\to\infty}\|u(\cdot,t)\|_{2}^{2}=\frac{1}{2}\left(\|u_{0}\|_{2}^{2}+\int_{\mathbb{R}^{n}}\frac{|\hat{v}_{0}|^{2}}{|\xi|^{2}}d\xi\right). \tag{3.10}\] For general \(u_{0},v_{0}\) it is not clear to the author whether there can be a solution with \(\|u(\cdot,t)\|_{2}\) unbounded but not tending to infinity as \(t\to\infty\). This seems to depend delicately on the behaviour of \(\hat{v}_{0}(\xi)\) as \(|\xi|\to 0\). ## 4. The adjoint of the wave operator and weak solutions In order to show that \(T(t)p\) in Theorem 2 is the unique weak solution, appropriately defined, of (1.1) we first recall the definition and properties (see e.g. [10]) of the _adjoint_ \(A^{*}\) of a closed densely defined linear operator \(A\) on a real Banach space \(X\) with dual space \(X^{*}\). Let \(D(A^{*})\) be the set of those \(v\in X^{*}\) for which there exists \(v^{*}\in X^{*}\) such that \[\langle w,v^{*}\rangle=\langle Aw,v\rangle\text{ for all }w\in D(A).\] Since \(D(A)\) is dense, \(v^{*}\) is unique. Then \(A^{*}:D(A^{*})\to X^{*}\) is the linear operator defined on \(X^{*}\) by \(A^{*}v=v^{*}\), so that \[\langle w,A^{*}v\rangle=\langle Aw,v\rangle\text{ for all }w\in D(A).\] \(A^{*}\) is closed, and if \(X\) is reflexive then \(D(A^{*})\) is dense in \(X^{*}\). **Definition 4.1**.:
A function \(w:[0,\infty)\to X\) is a _weak solution of_ \(\dot{w}=Aw\) _on_ \([0,\infty)\) if (i) \(w:[0,\infty)\to X\) is weakly continuous, (ii) for every \(z\in D(A^{*})\) the function \(t\mapsto\langle w(t),z\rangle\) is continuously differentiable on \([0,\infty)\) and \[\frac{d}{dt}\langle w(t),z\rangle=\langle w(t),A^{*}z\rangle,\;t\geqslant 0.\] _Remark 5_.: This is weaker than the definition in [3] of a weak solution in that we do not assume that \(w:[0,\infty)\to X\) is (strongly) continuous. We will use the following uniqueness result. **Theorem 3**.: _Let \(A\) generate the \(C^{0}\)-semigroup \(\{T(t)\}_{t\geqslant 0}\) of bounded linear operators on \(X\). Then for any \(p\in X\), \(w(t):=T(t)p\) is the unique weak solution of \(\dot{w}=Aw\) on \([0,\infty)\) satisfying \(w(0)=p\)._ Proof.: We first show that \(w(t)=T(t)p\) is a weak solution. Let \(p_{j}\in D(A)\) with \(p_{j}\to p\) in \(X\). Then for \(t\geqslant 0\) and \(z\in D(A^{*})\) we have that \[\langle T(t)p_{j},z\rangle =\langle p_{j},z\rangle+\int_{0}^{t}\langle AT(s)p_{j},z\rangle ds\] \[=\langle p_{j},z\rangle+\int_{0}^{t}\langle T(s)p_{j},A^{*}z\rangle ds, \tag{4.1}\] and passing to the limit \(j\to\infty\) using the continuity of \(T(t)\) and the boundedness of \(\|T(s)p_{j}\|\) on \([0,t]\) we get \[\langle T(t)p,z\rangle=\langle p,z\rangle+\int_{0}^{t}\langle T(s)p,A^{*}z\rangle ds, \tag{4.2}\] from which (ii) follows. To prove the uniqueness, suppose that there are two weak solutions \(w,\tilde{w}\) with initial data \(p\), and let \(W=w-\tilde{w}\). Then \(W:[0,\infty)\to X\) is weakly continuous, hence strongly measurable and bounded in norm on compact subsets of \([0,\infty)\) [16, pp 59, 75, 84]. In particular \(W\) is (Bochner) integrable on \([0,t]\) for any \(t>0\). Hence, for any \(z\in D(A^{*})\), \[\langle W(t),z\rangle=\int_{0}^{t}\langle W(s),A^{*}z\rangle ds=\langle\int_{0}^{t}W(s)\,ds,A^{*}z\rangle. \tag{4.3}\] Integrating (4.3) with respect to \(t\) we have that \[\langle\int_{0}^{t}W(s)\,ds,z\rangle=\langle\int_{0}^{t}\int_{0}^{s}W(\tau)\,d\tau\,ds,A^{*}z\rangle \tag{4.4}\] for all \(z\in D(A^{*})\), so that by a lemma in [3] (see also [10, p127]) \(\int_{0}^{t}\int_{0}^{s}W(\tau)\,d\tau\,ds\in D(A)\) and \[\int_{0}^{t}W(s)\,ds=A\int_{0}^{t}\int_{0}^{s}W(\tau)\,d\tau\,ds,\;t\geqslant 0. \tag{4.5}\] Hence \(r(t):=\int_{0}^{t}\int_{0}^{s}W(\tau)\,d\tau\,ds\) is differentiable in \(t\) and solves \(\dot{r}(t)=Ar(t)\) with \(r(0)=0\), so that by a well-known result (see e.g. [22, p483], [25, Chapter 4], [26, Theorem 35.2]) \(r(t)=0\). Hence also \(\int_{0}^{t}W(s)\,ds=0\), thus \(\int_{0}^{t}\langle W(s),z\rangle\,ds=0\) for any \(z\in X^{*}\). Differentiating with respect to \(t\) and using the continuity of \(\langle W(s),z\rangle\) we have that \(\langle W(t),z\rangle=0\) for all \(t\geqslant 0\) and \(z\). Hence \(W=0\) and \(w=\tilde{w}\).
This would be a generalization of the result in [3] to the case when weak solutions are only required to be weakly continuous in \(t\). A crucial step would be to show that each linear map \(S(t)\) is continuous, which was proved in [3] using the closed graph theorem. However, to generalize this step would seem to require a closed graph theorem for a linear map from \(X\) to the space \(C([0,T];X_{w})\), the space of weakly continuous maps from \([0,T]\) to \(X\) with the compact open topology. Although there are many generalizations of the closed graph theorem to maps between topological vector spaces (see e.g. [18]), the author was unable to find one which applies to this case. Thus we need to calculate the adjoint of the wave operator (2.2) on the Hilbert space \(H=H_{0}^{1}(\Omega)\times L^{2}(\Omega)\). **Lemma 4**.: _The Laplace operator \(\Delta\) with \(D(\Delta)=\{u\in H_{0}^{1}(\Omega):\Delta u\in L^{2}(\Omega)\}\) is self-adjoint on \(L^{2}(\Omega)\)._ Proof.: This is proved in [1, Example 7.2.1]. Alternatively, one can first note that if \(v\in H_{0}^{1}(\Omega)\) there exists a sequence \(\varphi^{(j)}\in C_{0}^{\infty}(\Omega)\) with \(\varphi^{(j)}\to v\) in \(H^{1}(\Omega)\), so that if \(u\in D(\Delta)\) we have that \[(-\Delta u,v)=\lim_{j\to\infty}(-\Delta u,\varphi^{(j)})=\lim_{j\to\infty}( \nabla u,\nabla\varphi^{(j)})=(\nabla u,\nabla v),\] and hence \((-\Delta u,v)=(u,-\Delta v)\) for \(u,v\in D(\Delta)\). Hence \(-\Delta\) is symmetric. As in the proof of Theorem 2, for any \(z\in L^{2}(\Omega)\) there exists a unique \(u\in D(\Delta)\) satisfying \(-\Delta u+u=z\). Hence \(-\Delta\) is also maximal monotone, so that the self-adjointness follows from [6, Proposition 7.6]. For \(\theta\in L^{2}(\Omega)\) denote by \((\mathbf{1}-\Delta)^{-1}\theta\) the unique solution \(u\in D(\Delta)\) of \(-\Delta u+u=\theta\). **Theorem 5**.: _The adjoint of the wave operator \(A\) is given by_ \[A^{*}=\left(\begin{array}{cc}0&(\mathbf{1}-\Delta)^{-1}-\mathbf{1}\\ \mathbf{1}-\Delta&0\end{array}\right),\] _with_ \[D(A^{*})=\{\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\in H:\Delta\chi\in L^{2}(\Omega),\psi\in H_{0}^{1}( \Omega)\}.\] Proof.: By the definition of \(A^{*}\) we have that \(\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\in D(A^{*})\) and \(A^{*}\left(\begin{array}{c}\chi\\ \psi\end{array}\right)=\left(\begin{array}{c}p\\ q\end{array}\right)\) if and only if \(\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\in H\) and \[\langle\left(\begin{array}{c}u\\ v\end{array}\right),\left(\begin{array}{c}p\\ q\end{array}\right)\rangle=\langle\left(\begin{array}{c}v\\ \Delta u\end{array}\right),\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\rangle\mbox{ for all }\left(\begin{array}{c}u\\ v\end{array}\right)\in D(A), \tag{4.6}\] that is \[\int_{\Omega}(pu+\nabla p\cdot\nabla u+qv)\,dx=\int_{\Omega}(\chi v+\nabla\chi \cdot\nabla v+\psi\Delta u)\,dx \tag{4.7}\] for all \(u,v\in H^{1}_{0}(\Omega)\) with \(\Delta u\in L^{2}(\Omega)\). (4.7) is equivalent to the two equations \[\int_{\Omega}(\chi v+\nabla\chi\cdot\nabla v)\,dx =\int_{\Omega}qv\,dx\ \ \mbox{for all}\ v\in H^{1}_{0}(\Omega), \tag{4.8}\] \[\int_{\Omega}(pu+\nabla p\cdot\nabla u)\,dx =\int_{\Omega}\psi\Delta u\,dx\ \ \mbox{for all}\ u\in D(\Delta). 
\tag{4.9}\] But (4.8) says that \(q=(\mathbf{1}-\Delta)\chi\), while (4.9) can be written as \[\int_{\Omega}pu\,dx=\int_{\Omega}(p+\psi)\Delta u\,dx\ \ \mbox{for all}\ u\in D(\Delta), \tag{4.10}\] so that, by Lemma 4, \(p+\psi\in D(\Delta)\) and \(\Delta(p+\psi)=p\), from which it follows that \(p=[(\mathbf{1}-\Delta)^{-1}-\mathbf{1}]\psi\). _Remark 8_.: For a bounded domain \(\Omega\) one can use the equivalent inner product \(((u,v))=\int_{\Omega}\nabla u\cdot\nabla v\,dx\) on \(H^{1}_{0}(\Omega)\), in which case the adjoint takes the simpler form (see [5, Lemma 3.1]) \(A^{*}=-\left(\begin{array}{cc}0&\mathbf{1}\\ \Delta&0\end{array}\right)\). By Definition 4.1, \(w=\left(\begin{array}{c}u\\ v\end{array}\right)\) is a weak solution of the wave equation \(\dot{w}=Aw\) on \([0,\infty)\) if and only if \(w:[0,\infty)\to H\) is weakly continuous, and, for any \(z=\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\in D(A^{*})\), the function \(t\mapsto\langle w(t),z\rangle\) is continuously differentiable with derivative \[\frac{d}{dt}\langle w(t),z\rangle=\langle w(t),A^{*}z\rangle,\ t\geqslant 0. \tag{4.11}\] Equivalently, for \(t\geqslant 0\), \[\frac{d}{dt}\int_{\Omega}(u\chi+\nabla u\cdot\nabla\chi+v\psi)\,dx=\int_{\Omega}(up+\nabla u\cdot\nabla p+vq)\,dx, \tag{4.12}\] where \(\Delta(p+\psi)=p\) and \(q=\chi-\Delta\chi\), or \[\frac{d}{dt}\int_{\Omega}uq\,dx =\int_{\Omega}vq\,dx\ \ \mbox{for all}\ q\in L^{2}(\Omega), \tag{4.13}\] \[\frac{d}{dt}\int_{\Omega}v\psi\,dx =-\int_{\Omega}\nabla u\cdot\nabla\psi\,dx\ \ \mbox{for all}\ \psi\in H^{1}_{0}(\Omega), \tag{4.14}\] since for any \(q\in L^{2}(\Omega)\) there exists a unique solution \(\chi\in H^{1}_{0}(\Omega)\) to \(\chi-\Delta\chi=q\). But it is easily checked that (4.13) holds if and only if \(u\) is weakly differentiable with respect to \(t\) with \(u_{t}(\cdot,t)=v(\cdot,t)\) for all \(t\geqslant 0\), that is \[\int_{0}^{\infty}\varphi^{\prime}(t)u(\cdot,t)\,dt=-\int_{0}^{\infty}\varphi(t)v(\cdot,t)\,dt\ \mbox{for all}\ \varphi\in C^{\infty}_{0}(0,\infty). \tag{4.15}\] Then (4.14) becomes \[\frac{d}{dt}\int_{\Omega}u_{t}\psi\,dx=-\int_{\Omega}\nabla u\cdot\nabla\psi\,dx\ \mbox{for all}\ \psi\in H^{1}_{0}(\Omega),t\geqslant 0. \tag{4.16}\] Hence \(w\) is a weak solution if and only if \(u:[0,\infty)\to H^{1}_{0}(\Omega)\), \(v:[0,\infty)\to L^{2}(\Omega)\) are weakly continuous, \(v=u_{t}\), and (4.16) holds. In particular _weak solutions in this sense are unique_. _Remark 9_.: For the case \(\Omega=\mathbb{R}^{n}\) it is possible to prove uniqueness of weak solutions for \(u_{0},v_{0}\) that are merely distributions using properties of the fundamental solution of the wave equation (see [31, Theorem 13.1]). Next we give some further properties of \(A,A^{*}\) from which we deduce the absence of nontrivial linear conserved quantities. **Lemma 6**.: \((i)\)_\(\Delta D(\Delta)\) is dense in \(L^{2}(\Omega)\). \((ii)\)\(R(A)=\{Az:z\in D(A)\}\) is dense in \(H\). \((iii)\)\(A^{*}:D(A^{*})\to H=H^{*}\) is one-to-one._ Proof.: (i). Suppose that \(\int_{\Omega}z\Delta u\,dx=0\) for some \(z\in L^{2}(\Omega)\) and all \(u\in D(\Delta)\). By Lemma 4, \(z\in D(\Delta)\) and \(\Delta z=0\). Choosing \(u=z\) we thus have \(\int_{\Omega}|\nabla z|^{2}dx=0\) and so \(\nabla z=0\) in \(\Omega\). Hence \(z=0\) (for example because the extension \(\tilde{z}\) of \(z\) by zero belongs to \(H^{1}(\mathbb{R}^{n})\) and \(\nabla\tilde{z}=0\), so that \(\tilde{z}\) is constant and thus zero).
(ii) If not there would exist a nonzero \(\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\in H\) with \(\langle\left(\begin{array}{c}\chi\\ \psi\end{array}\right),A\left(\begin{array}{c}u\\ v\end{array}\right)\rangle=0\) for all \(\left(\begin{array}{c}u\\ v\end{array}\right)\in D(A)\), that is \[\int_{\Omega}(\chi v+\nabla\chi\cdot\nabla v+\psi\Delta u)\,dx=0 \tag{4.17}\] for all \(u,v\in H^{1}_{0}(\Omega)\) with \(\Delta u\in L^{2}(\Omega)\). Taking first \(u=0\) and \(v=\chi\) we get that \(\chi=0\). Then \(\psi=0\) by (i). (iii) Now suppose \(A^{*}\left(\begin{array}{c}\chi\\ \psi\end{array}\right)=0\), for some \(\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\in D(A^{*})\). Then \[\langle\left(\begin{array}{c}u\\ v\end{array}\right),A^{*}\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\rangle=0\] for all \(\left(\begin{array}{c}u\\ v\end{array}\right)\in D(A)\), so that by (ii) \(\chi=\psi=0\). Hence \(A^{*}\) is one-to-one. **Theorem 7**.: _There is no nontrivial linear constant of motion for (2.1), that is there is no nonzero \(z\in H\) such that \(\langle T(t)p,z\rangle=\langle p,z\rangle\) for all \(t\geqslant 0\) and \(p\in D(A)\)._ Proof.: If \(\langle T(t)p,z\rangle\) were constant in \(t\) for all \(p\in D(A)\) then \[0=\left.\frac{d}{dt}\langle T(t)p,z\rangle\right|_{t=0}=\langle Ap,z\rangle\] for all \(p\in D(A)\), so that \(z=0\) by Lemma 6 (ii). Finally in this section we motivate the use of the phase space \(H=H^{1}_{0}(\Omega)\times L^{2}(\Omega)\) for an unbounded domain \(\Omega\subset\mathbb{R}^{n}\). To this end we assume that \(\Omega=\bigcup_{j=1}^{\infty}\Omega_{j}\) is the union of an increasing (\(\Omega_{j}\subset\Omega_{j+1}\)) sequence of bounded open sets \(\Omega_{j}\subset\mathbb{R}^{n}\). In Theorem 9 below we show that the semiflow \(\{T(t)\}_{t\geqslant 0}\) on \(H\) for the wave equation is the limit of the corresponding semiflows \(\{T_{j}(t)\}_{t\geqslant 0}\) for the wave equation on \(\Omega_{j}\) with phase space \(H_{j}=H^{1}_{0}(\Omega_{j})\times L^{2}(\Omega_{j})\). We use the following lemma, which is a slight generalization of [4, Lemma 5.12]. **Lemma 8**.: _Let \(X\) be a reflexive Banach space, \(T>0\), and \(w^{(j)}:[0,T]\to X\) satisfy_ (i)__\(\|w^{(j)}(t)\|_{X}\leqslant M<\infty\) _for all_ \(j=1,2,\ldots\) _and_ \(t\in[0,T]\)_,_ (ii) _there is a dense subset_ \(E\) _of_ \(X^{*}\) _such that for any_ \(v\in E\) _the functions_ \(t\mapsto\langle w^{(j)},v\rangle\) _are equicontinuous on_ \([0,T]\) _for_ \(j\) _sufficiently large, that is given_ \(\varepsilon>0\)_, there exists_ \(\delta(v)>0\) _and_ \(J(v)\) _such that_ \[|\langle w^{(j)}(t),v\rangle-\langle w^{(j)}(s),v\rangle|<\varepsilon\text{ for }|s-t|\leqslant\delta(v),\;j\geqslant J(v). \tag{4.18}\] _Then there exists a weakly continuous_ \(w:[0,T]\to X\) _and a subsequence_ \(w^{(j_{k})}\) _of_ \(w^{(j)}\) _converging uniformly to_ \(w\) _in the weak topology, i.e. for any sequence_ \(s_{k}\to s\) _in_ \([0,T]\) _we have_ \(w^{(j_{k})}(s_{k})\rightharpoonup w(s)\) _in_ \(X\)_._ Proof.: By (i) and a diagonal argument we can extract a subsequence \(w^{(j_{k})}\) such that \(w^{(j_{k})}(\tau)\) converges weakly to a limit for any rational \(\tau\in[0,T]\). We claim that \(w^{(j_{k})}(t)\) converges weakly to a limit \(w(t)\) for any \(t\in[0,T]\). 
This follows provided \(\langle w^{(j_{k})}(t),v\rangle\) is a Cauchy sequence for any \(v\in X^{*}\), since then by (i) the limit is a bounded linear functional of \(v\), so that since \(X\) is reflexive it defines an element of \(X^{**}=X\). To prove this, let \(\varepsilon>0\) and choose \(\tilde{v}\in E\) with \(\|\tilde{v}-v\|_{X^{*}}<\frac{\varepsilon}{2M}\), and then a rational \(\tau\in[0,T]\) with \(|\tau-t|\leqslant\delta(\tilde{v})\). Then for \(k,l\) sufficiently large we have by (i) and (4.18) that \[|\langle w^{(j_{k})}(t)-w^{(j_{l})}(t),v\rangle| \leqslant|\langle w^{(j_{k})}(t)-w^{(j_{l})}(t),\tilde{v}\rangle|+\varepsilon\] \[\leqslant|\langle w^{(j_{k})}(\tau)-w^{(j_{l})}(\tau),\tilde{v}\rangle|+3\varepsilon\leqslant 4\varepsilon,\] as required. Similar arguments then show that \(w\) is weakly continuous and that \(w^{(j_{k})}(s_{k})\rightharpoonup w(s)\) if \(s_{k}\to s\). For a function \(f\in H_{j}\) set \[\bar{f}(x)=\left\{\begin{array}{rl}f(x),&x\in\Omega_{j}\\ 0,&x\in\Omega\setminus\Omega_{j}.\end{array}\right. \tag{4.19}\] Note that if \(f\in H_{j}\) then \(\bar{f}\in H\). **Theorem 9**.: _Let \(p_{j}\in H_{j}\) and \(\bar{p}_{j}\to p\) in \(H\). Then \(\overline{T_{j}(t)p_{j}}\to T(t)p\) in \(H\) uniformly on compact subsets of \([0,\infty)\)._ Proof.: We denote by \(A_{j}\) the infinitesimal generator of \(T_{j}(t)\), that is \[A_{j}=\left(\begin{array}{cc}0&\mathbf{1}\\ \Delta_{j}&0\end{array}\right), \tag{4.20}\] with \[D(A_{j})=\{\left(\begin{array}{c}u\\ v\end{array}\right)\in H_{j}:\Delta_{j}u\in L^{2}(\Omega_{j}),v\in H_{0}^{1}(\Omega_{j})\},\] where \(\Delta_{j}=\Delta\) with domain \(D(\Delta_{j})=\{u\in H_{0}^{1}(\Omega_{j}):\Delta u\in L^{2}(\Omega_{j})\}\). Then by Theorem 5 we have that \[A_{j}^{*}=\left(\begin{array}{cc}0&(\mathbf{1}-\Delta_{j})^{-1}-\mathbf{1}\\ \mathbf{1}-\Delta_{j}&0\end{array}\right),\] with \[D(A_{j}^{*})=\{\left(\begin{array}{c}\chi\\ \psi\end{array}\right)\in H_{j}:\Delta\chi\in L^{2}(\Omega_{j}),\psi\in H_{0}^{1}(\Omega_{j})\}.\] By Theorem 3 we have that \[\langle T_{j}(t)p_{j},v\rangle=\langle p_{j},v\rangle+\int_{0}^{t}\langle T_{j}(s)p_{j},A_{j}^{*}v\rangle ds \tag{4.21}\] for all \(t\geqslant 0\) and \(v\in D(A_{j}^{*})\). Furthermore, if \(T>0\) there exists \(M>0\) such that \[\|\overline{T_{j}(t)p_{j}}\|_{H}=\|T_{j}(t)p_{j}\|_{H_{j}}\leqslant M\text{ for all }t\in[0,T], \tag{4.22}\] this following from the energy equation (2.4) and the estimate (3.1). Let \(E=\{\left(\begin{array}{c}\chi\\ \psi\end{array}\right):\chi,\psi\in C_{0}^{\infty}(\Omega)\}\), which is a dense subset of \(H^{*}=H\). Given any \(v\in E\) we have that \(v\in D(A_{j}^{*})\) for large enough \(j\). Thus from (4.21), (4.22) we have that for \(v\in E\) and large enough \(j\) \[|\langle T_{j}(t)p_{j},v\rangle-\langle T_{j}(s)p_{j},v\rangle|\leqslant C|t-s|,\text{ for }s,t\in[0,T], \tag{4.23}\] where \(C=C(v)\) is a constant, and \(\langle\overline{T_{j}(t)p_{j}},v\rangle=\langle T_{j}(t)p_{j},v\rangle\) for each \(t\). Hence \(w^{(j)}(t):=\overline{T_{j}(t)p_{j}}\) satisfies the hypotheses of Lemma 8 for any \(T>0\) and so there is a subsequence \(w^{(j_{k})}\) and a weakly continuous \(w:[0,\infty)\to H\) such that \(w^{(j_{k})}\) converges uniformly to \(w\) on compact subsets of \([0,\infty)\) in the weak topology.
Writing \(p_{j}=\left(\begin{array}{c}u_{0j}\\ v_{0j}\end{array}\right)\), \(T_{j}(t)p_{j}=\left(\begin{array}{c}u_{j}(\cdot,t)\\ v_{j}(\cdot,t)\end{array}\right)\), we have from (4.13), (4.14) that for any \(q,\psi\in C_{0}^{\infty}(\Omega)\) and \(t\geqslant 0\), and for \(k\) sufficiently large to ensure \(\operatorname{supp}q\subset\Omega_{j_{k}}\), \(\operatorname{supp}\psi\subset\Omega_{j_{k}}\), \[\int_{\Omega}u_{j_{k}}(x,t)q(x)\,dx =\int_{\Omega}u_{0j_{k}}(x)q(x)\,dx+\int_{0}^{t}\int_{\Omega}v_{j_{k}}(x,s)q(x)\,dx\,ds, \tag{4.24}\] \[\int_{\Omega}v_{j_{k}}(x,t)\psi(x)\,dx =\int_{\Omega}v_{0j_{k}}(x)\psi(x)\,dx-\int_{0}^{t}\int_{\Omega}\nabla u_{j_{k}}(x,s)\cdot\nabla\psi(x)\,dx\,ds. \tag{4.25}\] Passing to the limit \(k\to\infty\) and setting \(w(t)=\left(\begin{array}{c}u(\cdot,t)\\ v(\cdot,t)\end{array}\right)\), \(p=\left(\begin{array}{c}u_{0}\\ v_{0}\end{array}\right)\), we get that \[\int_{\Omega}u(x,t)q(x)\,dx =\int_{\Omega}u_{0}(x)q(x)\,dx+\int_{0}^{t}\int_{\Omega}v(x,s)q(x)\,dx\,ds, \tag{4.26}\] \[\int_{\Omega}v(x,t)\psi(x)\,dx =\int_{\Omega}v_{0}(x)\psi(x)\,dx-\int_{0}^{t}\int_{\Omega}\nabla u(x,s)\cdot\nabla\psi(x)\,dx\,ds. \tag{4.27}\] By approximation (4.26), (4.27) hold for all \(q\in L^{2}(\Omega),\psi\in H^{1}_{0}(\Omega)\), so that \(w\) is a weak solution on \([0,\infty)\) with \(w(0)=p\). Hence, by Theorem 3, \(w(t)=T(t)p\). The uniqueness also implies by a standard argument that the whole sequence \(w^{(j)}\) converges, so that if \(s_{j}\to s\) in \([0,\infty)\) then \(\overline{T_{j}(s_{j})p_{j}}\rightharpoonup T(s)p\) in \(H\). Since by the energy equation, for all \(t\geqslant 0\) \[\int_{\Omega}\left(|\nabla u_{j}(x,t)|^{2}+|v_{j}(x,t)|^{2}\right)\,dx =\int_{\Omega}\left(|\nabla u_{0j}|^{2}+|v_{0j}|^{2}\right)\,dx\] \[\to\int_{\Omega}\left(|\nabla u_{0}|^{2}+|v_{0}|^{2}\right)\,dx =\int_{\Omega}\left(|\nabla u(x,t)|^{2}+|v(x,t)|^{2}\right)\,dx, \tag{4.28}\] we have that \(\nabla u_{j}(\cdot,s_{j})\to\nabla u(\cdot,s)\) strongly in \(L^{2}(\Omega)^{n}\) and \(v_{j}(\cdot,s_{j})\to v(\cdot,s)\) strongly in \(L^{2}(\Omega)\). From (3.3) we deduce that also \(u_{j}(\cdot,s_{j})\to u(\cdot,s)\) strongly in \(L^{2}(\Omega)\) and hence \(\overline{T_{j}(s_{j})p_{j}}\to T(s)p\) strongly in \(H\) as claimed. _Remark 10_.: See [12] for a discussion on solving the wave equation in \(\mathbb{R}^{n}\) in a different energy space. ## 5. Kirchhoff's formula and smoothing via averaging We return to Kirchhoff's formula for a \(C^{2}\) solution of the wave equation for \(n=3\) and initial data \(u(\cdot,0)=u_{0}\), \(u_{t}(\cdot,0)=v_{0}\), namely \[u(x,t)=\frac{1}{4\pi t}\int_{S(x,t)}v_{0}(y)\,dS_{y}+\frac{\partial}{\partial t}\left(\frac{1}{4\pi t}\int_{S(x,t)}u_{0}(y)\,dS_{y}\right). \tag{5.1}\] For simplicity, and bearing in mind Remark 4, we suppose that \(u_{0}=0\), so that \[u(x,t) =\frac{1}{4\pi t}\int_{S(x,t)}v_{0}(y)\,dS_{y}\] \[=\frac{t}{4\pi}\int_{S^{2}}v_{0}(x+tz)\,dS_{z} \tag{5.2}\] Hence, formally we also have \[u_{t}(x,t) =\frac{1}{t}u(x,t)+\frac{t}{4\pi}\int_{S^{2}}\nabla v_{0}(x+tz)\cdot z\,dS_{z}\] \[=\frac{1}{t}u(x,t)+\frac{t^{2}}{4\pi}\Delta_{x}\int_{B}v_{0}(x+tz)\,dz, \tag{5.3}\] where \(B=B(0,1)\). The representations (5.2), (5.3) are at first sight puzzling, because in view of Theorem 2 we expect them to be meaningful when we just have \(v_{0}\in L^{2}(\mathbb{R}^{3})\). However they can be understood because of the _smoothing properties of averaging over spheres and balls_ that are well known in harmonic analysis.
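On the Fourier side the smoothing is transparent: since \(\int_{S^{2}}e^{iw\cdot z}\,dS_{z}=4\pi\sin(|w|)/|w|\), the spherical mean \(\mathcal{M}_{t}\) defined in (5.4) below acts as the multiplier \(4\pi\sin(t|\xi|)/(t|\xi|)\), and the estimates (5.5), (5.6) below reduce to elementary bounds on this multiplier. The following Python sketch (Monte Carlo quadrature on the sphere, for illustration only) verifies the identity and the two bounds numerically.

```python
import numpy as np

# The spherical mean M_t of (5.4) acts in Fourier variables as the
# multiplier 4*pi*sin(t|xi|)/(t|xi|), because
#     int_{S^2} exp(i w.z) dS_z = 4*pi*sin(|w|)/|w|.
# Below: a Monte Carlo check of this identity and of the elementary
# multiplier bounds behind (5.5) and (5.6).  Illustration only.
rng = np.random.default_rng(1)

def sphere_integral(w, M=400_000):
    z = rng.normal(size=(M, 3))
    z /= np.linalg.norm(z, axis=1, keepdims=True)     # uniform on S^2
    return 4 * np.pi * np.mean(np.exp(1j * (z @ w)))  # ~ 4*pi*sin(|w|)/|w|

for w in ([0.5, 0.0, 0.0], [1.0, 2.0, -2.0]):
    w = np.asarray(w)
    r = np.linalg.norm(w)
    print(sphere_integral(w), "vs", 4 * np.pi * np.sin(r) / r)

# |4*pi*sin(t r)/(t r)| <= 4*pi gives (5.5), while
# |xi_i| * |sin(t r)|/(t r) <= |sin(t r)|/t <= 1/t gives (5.6).
t = 3.7
r = np.linspace(1e-6, 100.0, 100_001)
assert np.all(np.abs(np.sin(t * r) / (t * r)) <= 1.0 + 1e-12)
assert np.all(r * np.abs(np.sin(t * r)) / (t * r) <= 1.0 / t + 1e-12)
```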
Indeed Stein [28] proved in particular that if we define the spherical mean \[\mathcal{M}_{t}(f)(x)=\int_{S^{2}}f(x+tz)\,dS_{z}, \tag{5.4}\] then there is a constant \(C>0\), independent of \(t\), such that \[\|\mathcal{M}_{t}(f)\|_{2}\leqslant C\|f\|_{2} \tag{5.5}\] for all \(f\in C_{0}^{\infty}(\mathbb{R}^{3})\) and \(t>0\). Approximating \(v_{0}\) in \(L^{2}(\mathbb{R}^{3})\) by functions \(f^{(j)}\in C_{0}^{\infty}(\mathbb{R}^{3})\) and applying (5.5) with \(f=f^{(j)}-f^{(k)}\) we find that \(\mathcal{M}_{t}(f^{(j)})\) is a Cauchy sequence in \(L^{2}(\mathbb{R}^{3})\), so that the integral in (5.2) can be defined for each \(t\) as an element of \(L^{2}(\mathbb{R}^{3})\). Furthermore we can estimate derivatives of \(\mathcal{M}_{t}(f)\), as is done in [32]. Thus we find that \[\|\frac{\partial}{\partial x_{i}}\mathcal{M}_{t}(f)\|_{2}\leqslant C_{1}t^{-1}\|f\|_{2} \tag{5.6}\] for some constant \(C_{1}\) and \(i=1,2,3\). In [32] it is also shown that for \(B=\{x\in\mathbb{R}^{3}:|x|<1\}\) the ball average \[\mathcal{N}_{t}(f)(x):=\int_{B}f(x+tz)\,dz, \tag{5.7}\] satisfies the estimates \[\|\mathcal{N}_{t}(f)\|_{2}\leqslant C_{2}\|f\|_{2},\ \|\frac{\partial}{\partial x_{i}}\mathcal{N}_{t}(f)\|_{2}\leqslant C_{2}t^{-1}\|f\|_{2},\ \|\Delta\mathcal{N}_{t}(f)\|_{2}\leqslant C_{2}t^{-2}\|f\|_{2}, \tag{5.8}\] for some constant \(C_{2}\). In fact the derivation of Kirchhoff's solution combined with Theorem 2 gives an alternative proof of the estimates (5.5), (5.6), (5.8). It suffices to take \(\varphi^{(j)}\in C_{0}^{\infty}(\mathbb{R}^{3})\) with \(\varphi^{(j)}\to v_{0}\) in \(L^{2}(\mathbb{R}^{3})\). Then the solution \(u^{(j)}\) with \(u^{(j)}(\cdot,0)=0\), \(u^{(j)}_{t}(\cdot,0)=\varphi^{(j)}\) is smooth and given by \[u^{(j)}(x,t)=\frac{1}{4\pi t}\int_{S(x,t)}\varphi^{(j)}(y)\,dS_{y}, \tag{5.9}\] and satisfies \[u^{(j)}_{t}(x,t)=\frac{1}{t}u^{(j)}(x,t)+\frac{t^{2}}{4\pi}\Delta_{x}\int_{B}\varphi^{(j)}(x+tz)\,dz. \tag{5.10}\] By Theorem 2 we have that \(u^{(j)}\to u\) in \(C([0,\tau];H^{1}_{0}(\mathbb{R}^{3}))\) and \(u^{(j)}_{t}\to u_{t}\) in \(C([0,\tau];L^{2}(\mathbb{R}^{3}))\) for any \(\tau>0\), where \(u\) is the unique weak solution with initial data \(u(\cdot,0)=0,u_{t}(\cdot,0)=v_{0}\), and thus is given by (5.2). Hence, setting \(f=v_{0}\), we obtain (5.5) from (3.1), and (5.6) from (2.4). The first estimate in (5.8) is immediate since by the Cauchy-Schwarz inequality \[\|\mathcal{N}_{t}(f)\|_{2}^{2}\leqslant\int_{\mathbb{R}^{3}}\left(\int_{B}1^{2}\,dz\int_{B}f(x+tz)^{2}\,dz\right)dx.\] By (5.10), (3.1), (2.4) we have that \(t^{2}\|\Delta\mathcal{N}_{t}(\varphi^{(j)})\|_{2}\leqslant M<\infty\), from which we get the third estimate. The middle estimate then follows using the relation \((g,\Delta g)=-\|\nabla g\|_{2}^{2}\). As an illustration of the harmonic analysis methods used to derive estimates such as (5.5), (5.6), (5.8) we prove the following result. **Theorem 10**.: _Let \(f\in L^{2}(\mathbb{R}^{n})\). 
Then for \(B=\{x\in\mathbb{R}^{n}:|x|<1\}\) the average \(\mathcal{N}_{t}(f)\) defined for \(t>0\) by_ \[\mathcal{N}_{t}(f)(x)=\int_{B}f(x+tz)\,dz \tag{5.11}\] _belongs to \(H^{\frac{n+1}{2}}:=H^{\frac{n+1}{2}}(\mathbb{R}^{n})\) with_ \[\|\mathcal{N}_{t}(f)\|_{H^{\frac{n+1}{2}}}\leqslant C\max(1,t^{-\frac{n+1}{2}})\|f\|_{2}, \tag{5.12}\] _for some constant \(C>0\) independent of \(t\)._ Proof.: We use the fact (see [13, p175], [24, pp 605-606], [29, p 338]) that the Fourier transform \(\hat{\chi}_{B}\) of the characteristic function \(\chi_{B}\) of the unit ball \(B\) satisfies \[|\hat{\chi}_{B}(\xi)|\leqslant C_{n}(1+|\xi|)^{-\frac{n+1}{2}} \tag{5.13}\] for some positive constant \(C_{n}\). We note that \(h:=\mathcal{N}_{1}(f)\) satisfies \(h(x)=(\chi_{B}*f)(x)\), so that \(\hat{h}(\xi)=(2\pi)^{\frac{n}{2}}\hat{\chi}_{B}(\xi)\hat{f}(\xi)\), and thus by (5.13) \[(1+|\xi|^{\frac{n+1}{2}})|\hat{h}(\xi)|\leqslant C|\hat{f}(\xi)|. \tag{5.14}\] Hence \(\|h\|_{H^{\frac{n+1}{2}}}=\|(1+|\xi|^{\frac{n+1}{2}})\hat{h}\|_{2}\leqslant C\|\hat{f}\|_{2}=C\|f\|_{2}\), and thus (5.12) holds for \(t=1\). Set \(f_{t}(x)=f(tx)\). Since \(\mathcal{N}_{t}(f)(tx)=\mathcal{N}_{1}(f_{t})(x)\) we have \(\widehat{\mathcal{N}_{t}(f)}(\xi)=t^{n}\widehat{\mathcal{N}_{1}(f_{t})}(t\xi)\), so that by (5.14) \[(1+|\xi|^{\frac{n+1}{2}})|\widehat{\mathcal{N}_{t}(f)}|(\xi) \leqslant C\left(\frac{1+|\xi|^{\frac{n+1}{2}}}{1+|t\xi|^{\frac{n+1}{2}}}\right)t^{n}|\widehat{f}_{t}(t\xi)|\] \[\leqslant C\max(1,t^{-\frac{n+1}{2}})|\hat{f}(\xi)|, \tag{5.15}\] giving (5.12) for any \(t\). _Remark 11_.: Because of results of Hlawka [17], Herz [15] the same result holds if \(B\) is replaced by a bounded convex set \(C\subset\mathbb{R}^{n}\) with sufficiently smooth boundary having everywhere positive Gaussian curvature. However if \(B\) is replaced by the cube \(Q=(-1,1)^{n}\) then \(\mathcal{N}_{1}^{Q}(f)(x):=\int_{Q}f(x+z)\,dz\) has less regularity than \(\mathcal{N}_{1}(f)\). In fact \[\hat{\chi}_{Q}(\xi) =\frac{1}{(2\pi)^{\frac{n}{2}}}\int_{Q}e^{-i\xi\cdot x}\,dx\] \[=\frac{2^{n}}{(2\pi)^{\frac{n}{2}}}\prod_{j=1}^{n}\frac{\sin\xi_{j}}{\xi_{j}}.\] Hence if \(\alpha\geqslant 0\) then \((1+|\xi|^{\alpha})\hat{\chi}_{Q}(\xi)\in L^{\infty}(\mathbb{R}^{n})\) iff \(\alpha\leqslant 1\). Hence \(\mathcal{N}_{1}^{Q}(f)\in H^{1}(\mathbb{R}^{n})\) but in general \(\mathcal{N}_{1}^{Q}(f)\not\in H^{\alpha}(\mathbb{R}^{n})\) for \(\alpha>1\). **Acknowledgement.** This paper was completed while visiting the Hong Kong Institute for Advanced Study as a Senior Fellow. I am grateful to the referee whose comments led to improvements to Section 3.
2302.14691
Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following
In this paper, we present our finding that prepending a Task-Agnostic Prefix Prompt (TAPP) to the input improves the instruction-following ability of various Large Language Models (LLMs) during inference. TAPP is different from canonical prompts for LLMs in that it is a fixed prompt prepended to the beginning of every input regardless of the target task for zero-shot generalization. We observe that both base LLMs (i.e. not fine-tuned to follow instructions) and instruction-tuned models benefit from TAPP, resulting in 34.58% and 12.26% improvement on average, respectively. This implies that the instruction-following ability of LLMs can be improved during inference time with a fixed prompt constructed with simple heuristics. We hypothesize that TAPP assists language models to better estimate the output distribution by focusing more on the instruction of the target task during inference. In other words, such ability does not seem to be sufficiently activated in not only base LLMs but also many instruction-fine-tuned LLMs. All experiments are reproducible from https://github.com/seonghyeonye/TAPP.
Seonghyeon Ye, Hyeonbin Hwang, Sohee Yang, Hyeongu Yun, Yireun Kim, Minjoon Seo
2023-02-28T16:06:35Z
http://arxiv.org/abs/2302.14691v2
# In-Context Instruction Learning ###### Abstract Instruction learning of Large Language Models (LLMs) has enabled zero-shot task generalization. However, instruction learning has been predominantly approached as a fine-tuning problem, including instruction tuning and reinforcement learning from human feedback, where LLMs are multi-task fine-tuned on various tasks with instructions. In this paper, we present a surprising finding that applying in-context learning to instruction learning, referred to as In-Context Instruction Learning (ICIL), significantly improves the zero-shot task generalization performance for both pretrained and instruction-fine-tuned models. One of the core advantages of ICIL is that it uses a _single fixed_ prompt to evaluate all tasks, which is a concatenation of cross-task demonstrations. In particular, we demonstrate that the most powerful instruction-fine-tuned baseline (text-davinci-003) also benefits from ICIL by 9.3%, indicating that the effect of ICIL is complementary to instruction-based fine-tuning2. Footnote 2: All experiments are reproducible from github.com/seonghyeonye/ICIL. ## 1 Introduction Large Language Models (LLMs) have demonstrated the ability to adapt to target tasks during inference through few-shot demonstrations, also referred to as in-context learning. This ability has become increasingly evident as model sizes scale up, with LLMs exhibiting emergent capabilities (Wei et al., 2022; Kojima et al., 2022; Brown et al., 2020; Chowdhery et al., 2022). One of the emergent abilities is the capability to generalize to unseen tasks by following instructions. Instruction learning methods have been proposed to improve this ability, including instruction tuning or RLHF (reinforcement learning from human feedback) (Sanh et al., 2021; Wei et al., 2021; Wang et al., 2022; Ouyang et al., 2022; Min et al., 2022; Chung et al., 2022; Ye et al., 2022; Bai et al., 2022; OpenAI, 2022). However, previous work mainly focused on fine-tuning-based instruction-learning methods where the model is multi-task fine-tuned on various tasks with instructions, requiring multiple backpropagation processes. In this work, we demonstrate that In-Context Instruction Learning (ICIL), which involves learning to follow instructions during inference through in-context learning, is beneficial for both off-the-shelf pretrained models and models fine-tuned to follow instructions, as shown in Figure 1. ICIL uses a prompt that consists of multiple cross-task demonstrations, where each demonstration is a concatenation of an instruction, input, and output instance of a task. ICIL is a zero-shot learning method as 1) we ensure that the tasks used for demonstrations are strictly held-out from the evaluation set, and 2) we use the same set of demonstrations for all evaluation tasks, treating them as a _single fixed_ prompt as shown in Figure 2. We use a simple heuristic-based sampling approach to construct a fixed demonstration set that is effective for various types of downstream tasks and model scales. By prepending the same fixed demonstration set to all tasks, we can easily test and reproduce baseline zero-shot performance for new target tasks or models without relying on external tools. We first observe that ICIL significantly enhances the zero-shot task generalization performance of various pretrained LLMs that are not fine-tuned to follow instructions, as shown in Figure 1. 
Notably, even smaller LLMs with ICIL outperform much larger language models without ICIL, such as the 6B-sized ICIL GPT-J outperforming the 30 times larger 175B-sized Standard Zero-shot GPT-3 Davinci. Second, we show that applying ICIL on top of instruction-fine-tuned LLMs improves the zero-shot instruction-following ability of LLMs, especially for models over 100B parameters. This indicates that the effect of ICIL is complementary to the effect of instruction fine-tuning. Our analysis shows that the effectiveness of ICIL comes from selecting classification tasks that include an explicit answer choice in the instruction (e.g., expression of _"agent" or "customer"_ in Figure 3). This holds true even for generation target tasks, which contrasts with previous studies showing that it is crucial to retrieve demonstrations that are similar to the target task for few-shot in-context learning (Rubin et al., 2021; Liu et al., 2022). Even more counterintuitively, we observe that corrupting the input instance distribution of each demonstration by replacing it with random sentences does not significantly harm the performance. Based on this analysis, we hypothesize that LLMs learn the correspondence between the answer choice included in the instruction and output of each demonstration during inference, rather than relying on the complex correspondence between instruction, input, and output. Through this hypothesis, we suggest that the role of ICIL is to help LLMs _focus_ on the target instruction to find the cues for the answer distribution of the target task.

Figure 1: Average performance of 119 evaluation tasks on SuperNI benchmark. ICIL is effective for both pretrained and instruction-fine-tuned LLMs. We report the mean score of three random seeds for different demonstration sets for ICIL and the error bars of standard deviation. We provide the full demonstration sets in Appendix F.

Figure 2: Overview of In-Context Instruction Learning (ICIL). We construct a fixed set of demonstrations consisting of instruction, input, and output instances to evaluate pretrained and instruction-fine-tuned LLMs for all tasks. We ensure that the tasks included in the demonstrations and the tasks being evaluated are strictly held-out, ensuring a zero-shot generalization setting.

## 2 In-Context Instruction Learning The prompt for In-Context Instruction Learning (ICIL) consists of cross-task demonstrations where each is a concatenation of instruction, input, and output instance, as shown in Figure 3. In this section, we explain how we construct a fixed demonstration set to evaluate various tasks in a zero-shot manner for ICIL. Also, we mention the advantages of applying ICIL during inference of LLMs. ### Demonstration Set Construction From a benchmark that consists of \(N\) tasks in total, where each instance of a task consists of an instruction, input, and output instance, we sample \(K\) tasks to be constructed as demonstrations for ICIL3. We apply some simple heuristics to first filter the task set, randomly sample a single instance per task in the filtered set, and lastly, sample \(K\) instances all corresponding to different tasks. Footnote 3: Unless specified, we set \(K=8\) as default. The heuristics are as follows: 1. Task Types: We only sample from classification tasks that include an answer choice in the instruction (e.g., _"agent" or "customer"_ in Figure 3). We hypothesize that including the answer choice in the instruction might assist LLMs to follow instructions during inference. 
2. Answer Choice Overlap: We ensure that the answer choices do not overlap between demonstrations. We expect that the overlap of answer choices leads to LLMs copying labels of the demonstrations, similar to few-shot in-context learning, which is an undesired behavior for zero-shot in-context learning because the answer distribution changes depending on the target task. 3. Demonstration Length: We restrict the length of the demonstration (concatenation of instruction, input, and output instance) to a maximum of 256 tokens to ensure that the input instance does not exceed the maximum sequence length4. We only sample from instances that satisfy this criterion. Footnote 4: Because we mainly experiment on 175B-sized GPT-3, we set the default maximum input sequence as 2048. 4. Demonstration Ordering: We order the demonstrations by the number of answer choices for each task in ascending order. For demonstrations that have the same number of answer choices, we sort by demonstration length in ascending order. We provide a detailed analysis and ablation of these heuristics in Section 4. ### In-Context Instruction Learning During Inference After demonstration set sampling, we construct the fixed set of demonstrations and append the concatenation of instruction and input instance of the target task to the fixed prompt consisting of demonstrations.

Figure 3: The format of demonstrations of In-Context Instruction Learning (ICIL). Unlike the standard zero-shot setting, ICIL prepends a cross-task demonstration set where each consists of an instruction (task definition), input, and output instance.

ICIL has the following advantages to make LLMs better follow instructions and boost the zero-shot ability. 1. ICIL utilizes a single fixed prompt to adapt various models to various target tasks. Therefore, without external tools for searching and retrieving the optimal demonstration set for each task, ICIL is easy to replicate and measure as a zero-shot baseline for new models or datasets. 2. We show that ICIL significantly improves the zero-shot task generalization performance for various off-the-shelf pretrained LLMs (Figure 1). This indicates that we can make LLMs better instruction followers without backpropagation. 3. ICIL also improves the performance of instruction-fine-tuned models (instruction tuning or RLHF), especially for LLMs that have more than 100B parameters (Figure 1). This shows that ICIL can assist LLMs with zero-shot generalization even after instruction tuning or RLHF, implying the wide applicability of ICIL. 4. We demonstrate that the model-generated demonstration set is also effective for ICIL (Section 4.2). This indicates that ICIL is effective even without sampling the demonstration set from a benchmark if the heuristics are applied. ## 3 Experiments ### Experimental Setup We construct the demonstrations for ICIL from English training tasks of the SuperNaturalInstructions (SuperNI) benchmark (Wang et al., 2022), which includes 756 tasks in total. To evaluate the effectiveness of ICIL, we use the held-out set from SuperNI for testing, which consists of 119 tasks across 12 different categories, including free-form generation, word relation reasoning, and classification tasks. We select SuperNI as our evaluation benchmark because it offers a diverse set of tasks with varying levels of complexity. Each task has 100 instances, and we exclude instances that exceed the maximum sequence length, resulting in a total of 11,802 instances. 
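For concreteness, the minimal sketch below (our illustration, not the authors' released code; the `Demo` record, field names, and the token counter are assumptions) shows how a fixed demonstration set satisfying the heuristics of Section 2.1 could be assembled and prepended to a target query as in Section 2.2.

```python
# A minimal sketch of ICIL demonstration-set construction and prompt assembly.
from dataclasses import dataclass

@dataclass
class Demo:
    instruction: str        # task definition, including answer choices
    input: str
    output: str
    answer_choices: tuple   # e.g. ("agent", "customer"); empty if generation

def count_tokens(text):
    # Stand-in tokenizer; a real implementation would use the LLM's tokenizer.
    return len(text.split())

def render(demo):
    return f"{demo.instruction}\nInput: {demo.input}\nOutput: {demo.output}"

def build_fixed_prompt(candidates, k=8, max_tokens=256):
    """Filter, deduplicate, and order k cross-task demonstrations."""
    picked, seen_choices = [], set()
    for d in candidates:                              # heuristics 1-3
        if not d.answer_choices:                      # classification only
            continue
        if seen_choices & set(d.answer_choices):      # no answer-choice overlap
            continue
        if count_tokens(render(d)) > max_tokens:      # length restriction
            continue
        picked.append(d)
        seen_choices |= set(d.answer_choices)
        if len(picked) == k:
            break
    # heuristic 4: order by number of answer choices, then by length
    picked.sort(key=lambda d: (len(d.answer_choices), count_tokens(render(d))))
    return "\n\n".join(render(d) for d in picked)

def icil_query(fixed_prompt, instruction, input_text):
    """Append the target task's instruction and input to the fixed prompt."""
    return f"{fixed_prompt}\n\n{instruction}\nInput: {input_text}\nOutput:"
```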
We use different evaluation metrics for each task, such as Exact Match for classification or single-word prediction tasks and ROUGE-L for free-form generation tasks, following the metric used in Wang et al. (2022). We provide the list of 12 evaluation task categories in Appendix A and more detailed evaluation settings in Appendix C. **Model Types.** We evaluate 4 LLMs with various model sizes: 1) GPT-3 (Brown et al., 2020), 2) OPT (Zhang et al., 2022), 3) GPT-NeoX (Black et al., 2022), and 4) GPT-J (Wang and Komatsuzaki, 2021)5. For GPT-3, we evaluate not only the pretrained LLM, but also LLMs that are fine-tuned to follow instructions and aligned to human preferences through reinforcement learning (Ouyang et al., 2022). We evaluate the performance of GPT-3 models with sizes of 6.7B and 175B. For OPT, we evaluate models with 6.7B, 13B, and 30B parameters, while for GPT-NeoX and GPT-J, we evaluate models with 20B and 6B parameters, respectively. Footnote 5: From preliminary experiments, we observe that applying ICIL harms the performance for OPT-IML (Iyer et al., 2022) and FLAN-T5 (Chung et al., 2022) due to the characteristics of each model. We provide more discussion in Appendix B. ### Results **Various pretrained LLMs benefit from ICIL.** As shown in the left part of Figure 1, In-Context Instruction Learning (ICIL) consistently improves the performance of pretrained LLMs across all model scales, resulting in an over 50% performance increase for OPT-13B. This simple zero-shot in-context learning is capable of outperforming LLMs with far more parameters. Specifically, the 6B-sized GPT-J model with ICIL exceeds the 30 times larger 175B-sized GPT-3 model. This shows that ICIL improves the ability of pretrained LLMs to follow instructions without fine-tuning or backpropagation. Moreover, we observe that the gain from ICIL during inference is comparable to instruction tuning by comparing the performance of ICIL applied to GPT-3 models without instruction tuning with the standard zero-shot setting of instruction-tuned GPT-3 models (text-davinci-001, 002). **The gain from ICIL is complementary to fine-tuning-based instruction learning.** As shown in the right part of Figure 1, we observe that ICIL improves the performance of LLMs fine-tuned through instruction tuning or RLHF, especially for models over 100B parameters. This implies that fine-tuning-based instruction learning might sometimes be insufficient for larger models and In-Context Instruction Learning can improve the instruction following ability orthogonally. In particular, we observe a significant performance improvement for text-davinci-002 (175B), outperforming an RLHF-tuned model text-davinci-003 with standard zero-shot learning. Also, we demonstrate that the most powerful model (text-davinci-003) also benefits from ICIL by 9.3%, achieving the best performance. We leave detailed analysis on more diverse instruction-fine-tuned models as future work. **Irrelevant In-Context Instruction Learning does not harm the performance much.** We observe that corrupting the distribution of input instances for each demonstration for ICIL does not harm the performance much, similar to the observation in Min et al. (2022b) for few-shot in-context learning. Instead of perturbing the input-output correspondence, as done in Min et al. (2022b), we perturb the input distribution _itself_, which is a setting where there are more corruptions as shown at the top of Figure 4. Following Min et al. 
(2022b), we use CC-News (Hamborg et al., 2017) as an external corpus to replace the ground truth input instance with random sentences that have a similar length to the original input instance. As shown in the bottom of Figure 4, corrupting the input instance distribution of each demonstration does not harm the performance much across most model scales. This is in line with the observations made in previous works that LLMs do not make full use of all the information provided to them (Min et al., 2022b; Webson and Pavlick, 2021; Madaan and Yazdanbakhsh, 2022; Wang et al., 2022a). Interestingly, unlike few-shot in-context learning where corrupting the input distribution itself leads to significant performance degradation, we demonstrate that not only does the input-output correspondence not matter, but also the input instance distribution matters little for ICIL. Figure 4: (Top) Example of Irrelevant ICIL, where we corrupt the input instance distribution of the demonstrations. (Bottom) Comparison with Standard Zero-shot, In-Context Instruction Learning (ICIL), and Irrelevant ICIL. For most of the models, input distribution corruption does not harm the performance much. We report the mean score of three random seeds for different demonstration sets for ICIL. We report a result of a single seed for 175B-sized models due to inference costs. We provide the full demonstration sets in Appendix G. ## 4 Analysis In this section, we analyze the factors that make ICIL effective and provide additional experiments. We evaluate only on the pretrained GPT-3 175B checkpoint (davinci) and evaluate on a single task per task category, resulting in a total of 12 tasks due to inference cost issues6. Footnote 6: We select a single task per task category with a significant discrepancy between the lower bound and upper bound performance across davinci, text-davinci-001, 002, 003 models to see the tendency more clearly. ### Ablation Studies **Instruction and output distribution of the demonstrations matters.** We further analyze the effectiveness of each component of the demonstrations for ICIL by corrupting the distribution of each component: instruction, input, and output instance. For instruction corruption, we replace the ground truth sequences with random sequences from an external corpus, which is similar to how we corrupt the input distribution discussed in Section 3.2. For output corruption, we replace ground truth labels with random English words, following Min et al. (2022b). The results are shown in Table 1. Unlike the input distribution corruption results of Figure 4, corrupting the distribution of the instruction or the output instance of each demonstration significantly harms the performance. In particular, corrupting the instruction distribution shows little improvement compared to standard zero-shot learning (31.18 vs 29.67). This suggests that unlike input instances, the distribution of instruction and output instances significantly affects the performance of ICIL. **Constructing the demonstration set with classification tasks is important.** We analyze the heuristic of constructing the demonstration set from only classification tasks in ICIL by varying the ratio of classification tasks constituting the demonstration set. As shown in Figure 4(a), the average zero-shot task generalization performance increases as the ratio of classification tasks increases. Interestingly, we observe that constructing the demonstration set with classification tasks also benefits generation (non-classification) target tasks. 
This finding contrasts with the few-shot in-context learning setting, where retrieving demonstrations similar to the target query enhances the few-shot performance (Rubin et al., 2021; Liu et al., 2021)7. Footnote 7: Note that the classification ratio of 0% in Figure 4(a) corresponds to constructing the demonstration set solely from generation (non-classification) tasks. \begin{table} \begin{tabular}{l|c c c|c} & Inst. & Input & Output & AVG \\ \hline ICIL & ✓ & ✓ & ✓ & **44.24** \\ Random Inst. & ✗ & ✓ & ✓ & 31.18 \\ Random Input & ✓ & ✗ & ✓ & **44.27** \\ Random Output & ✓ & ✓ & ✗ & 38.30 \\ \end{tabular} \end{table} Table 1: Corrupting the distribution of each component (instruction, input, output) of the demonstration of ICIL by replacing it with random words or sentences. Figure 5: (a) shows that the average performance increases as the ratio of classification tasks that are used as demonstrations for ICIL increases, even for generation target tasks. (b) shows that the performance increases as the number of demonstrations increases for ICIL. (c) shows that ordering the demonstration set by the number of answer choices reduces the variance on 10 demonstration sets. **Increasing the number of demonstrations improves the performance.** We study the impact of the number of demonstrations for ICIL. Results are shown in Figure 4(b). The mean performance improves as the number of demonstrations increases, similar to few-shot in-context learning. Notably, the zero-shot instruction-following ability of ICIL significantly improves even with 2 examples, implying that using only a small set of zero-shot demonstrations can improve the performance of LLMs. **Ordering the demonstrations by the number of answer choices reduces the variance.** To examine the impact of different orderings of the demonstration set, we compare the ordering of ICIL based on the number of answer choices with a random ordering. Figure 4(c) shows the result of 10 different demonstration sets by sampling them with 10 different random seeds. Although the mean performance does not show a significant difference between the two settings, we observe that applying ordering heuristics based on the number of answer choices reduces the variance and improves the worst-case accuracy. **Answer choice overlap between demonstrations harms the performance.** We analyze the effect of answer choice overlap between demonstrations, which is one of the heuristics used to construct the demonstration set. We compare the demonstration set used for ICIL with a demonstration set that has the same answer choice for all demonstrations. The result is shown in Table 2. We observe that the demonstration set with answer choice overlap underperforms the demonstration set without overlap on average, especially for generation tasks. We find that the demonstration set with answer choice overlap tends to make the model generate short sequences for long text generation or predict the output by copying one of the labels of the demonstration set, leading to poor generalization. ### Additional Experiments **ICIL shows effectiveness for machine-generated demonstration sets as well.** We explore whether ICIL shows effectiveness for machine-generated demonstrations instead of sampling from training tasks of the SuperNI benchmark. We use ChatGPT (OpenAI, 2022) for demonstration generation by specifying the heuristics used to construct the demonstration set for ICIL. 
As shown in Figure 5(a), ICIL is also effective for machine-generated demonstrations, showing comparable performance to ICIL with demonstrations from SuperNI and significantly outperforming the standard zero-shot setting. This finding suggests that ICIL is effective even without a sampling process from benchmarks that consist of diverse instructions, indicating that the performance enhancement is not from demonstration construction through sampling, but from the heuristics and the format of ICIL. We provide an example of a demonstration set generated by ChatGPT in Appendix E. \begin{table} \begin{tabular}{l|c c c} & Classification & Generation & Total \\ \hline \hline Overlap & **35.14** & 52.32 & 42.30 \\ No Overlap & 33.86 & **58.77** & **44.24** \\ \end{tabular} \end{table} Table 2: Effect of answer choice overlap between demonstrations. The demonstration set that has an overlap underperforms the set without overlap on average, especially for generation tasks. Figure 6: (a) shows the result of ICIL using demonstrations generated by ChatGPT (OpenAI, 2022). Machine-generated demonstrations show comparable performance to demonstrations sampled from SuperNI benchmark. (b) shows the comparison of ICIL with adaptive similarity-based in-context learning methods where the demonstration set is adaptively retrieved based on the target task (Task Adap.) or target instance (Inst. Adap.). The performance of ICIL is comparable to adaptive in-context learning methods but there is still room for improvement compared to few-shot in-context learning (dotted upper bound). **The performance of ICIL is comparable to adaptive in-context learning methods.** We compare ICIL, which samples a fixed demonstration set for all evaluation tasks, with adaptive zero-shot in-context learning (Lyu et al., 2022), where the retrieved demonstrations vary based on the similarity of the target task or instance. Similar to ICIL, for adaptive zero-shot in-context learning, we retrieve demonstrations that consist of instruction, input, and output instances from the training tasks of the SuperNI benchmark8. We use SimCSE (Gao et al., 2021) to compute sequence embeddings and cosine similarity to retrieve the top-\(K\) similar instances for each target task or instance. We divide the adaptive in-context learning setting into task-wise and instance-wise, where the former retrieves based on the similarity of instructions only, and the latter retrieves based on the similarity of the concatenations of instruction and input instance. As shown in Figure 5(b), the performance of both task adaptive and instance adaptive is comparable to ICIL, which uses a fixed set of demonstrations for all tasks. This indicates that, while being comparable to adaptive in-context learning methods, the fixed demonstration set of ICIL is more reproducible and is free from external embedding models that are used for similarity search. Footnote 8: Note that the original setting of Lyu et al. (2022) does not utilize instructions during inference, constructing demonstrations that are a concatenation of input and label by retrieving a sentence from a raw corpus. **There is still room for improvement for ICIL.** We compare the performance of ICIL with few-shot in-context learning, which is an upper bound for task adaptation. We compare with 8-shot in-context learning to control the factor of the number of demonstrations. 
Although ICIL significantly outperforms the zero-shot task generalization performance of the standard zero-shot setting, we observe that there is still a large gap between ICIL and few-shot in-context learning, shown in Figure 5(b) (44.24 vs 61.71). ## 5 Discussion From previous sections, we have observed that ICIL significantly boosts the performance of both pretrained and instruction-fine-tuned LLMs. Also, we have demonstrated that corrupting the input distribution does not harm the performance much and analyzed that constructing the demonstration set from classification tasks is crucial for performance improvement. In this section, we suggest the role of ICIL based on the findings from the previous sections. **Why is constructing the demonstration set from classification tasks important?** Figure 4(a) shows that constructing the demonstration set with classification tasks is important for ICIL. Then, what is the difference between classification and generation (non-classification) tasks? Because one of our heuristics for demonstration construction is to only consider classification tasks that include an answer choice in the instruction (e.g. _"agent" or "customer"_ in Figure 3), these demonstrations have more _explicit_ cues about the answer distribution. We hypothesize that during inference, LLMs learn the correspondence between the answer choice in the instruction (e.g. Determine the speaker of the dialogue, "agent" or "customer".) and the label (e.g. agent) from demonstrations. In particular, because the label word appears in the instruction for classification tasks, it is easy for LLMs to exploit this relationship. We observe that deleting only the sentence that includes answer choices in the instruction leads to a degradation in the performance of ICIL (\(44.27\to 42.89\)), supporting the hypothesis. **What does the result of irrelevant ICIL imply?** From Figure 4 and Table 1, we observe that the input distribution of demonstrations for ICIL does not matter much, while instruction and output distribution matter significantly. This observation bolsters the above hypothesis that LLMs learn the correspondence of answer choice in the instruction and the label of the demonstrations during ICIL. Instead of relying on complex correspondence such as the relationship between instruction, input, and output altogether, LLMs tend to focus on simple correspondence such as string matching between the instruction including answer choices and the label. Previous work also demonstrates similar findings that LLMs _take less effort_ to adapt to a task, similar to shortcut learning (Webson and Pavlick, 2021; Min et al., 2022b). **What is the role of ICIL?** If LLMs learn the correspondence of the answer choice in the instruction and the label of the demonstrations during ICIL, then how does this assist the zero-shot task generalization? During ICIL, we hypothesize that the demonstrations give a signal that makes LLMs _focus_ on the instruction to find the cues of the answer distribution, making LLMs better follow instructions. We suggest that this hypothesis explains why constructing the demonstration set from classification tasks also improves the performance of generation target tasks. Although instruction fine-tuning also assists the signal of focusing on the instructions, we hypothesize that ICIL reinforces the correspondence between the instruction and the label of the demonstrations during inference directly. 
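As a toy illustration of this hypothesized string-matching correspondence (our sketch, not an experiment from the paper), answer choices quoted in an instruction can be extracted and matched directly against the label:

```python
# A toy sketch of the simple correspondence hypothesized above: answer
# choices that appear verbatim in the instruction can be matched against
# the label by string matching alone, with no use of the input instance.
import re

def quoted_choices(instruction):
    """Extract candidate answer choices quoted in an instruction."""
    return set(re.findall(r'"([^"]+)"', instruction))

instruction = 'Determine the speaker of the dialogue, "agent" or "customer".'
label = "agent"

choices = quoted_choices(instruction)
print(choices)           # {'agent', 'customer'}
print(label in choices)  # True: the label is recoverable from the instruction
```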
## 6 Related Works **Instruction-Following LLMs.** Recent works have shown that fine-tuning-based instruction learning, including instruction tuning or RLHF, can boost the capability of LLMs to follow instructions or align to human preferences (Sanh et al., 2021; Wei et al., 2021; Wang et al., 2022; Chung et al., 2022; Min et al., 2022; Ye et al., 2022; Ouyang et al., 2022; Bai et al., 2022; OpenAI, 2022). However, whether the instruction following ability of LLMs is newly obtained through instruction tuning or is already obtained during pretraining is under-explored. Wang et al. (2022); Honovich et al. (2022) show that downstream tasks generated by LLMs themselves, which contain noisy instances, can actually be good training instances for instruction tuning, implying that LLMs are already somewhat aware of instructions. We extend this hypothesis that LLMs already have the capability to follow instructions by applying in-context learning to instruction learning, which does not require any backpropagation, using the pretrained model checkpoint without any gradient update. **In-Context Learning.** Large language models pretrained to predict the next token autoregressively possess the ability to adapt to the target tasks when conditioned on only a few examples without gradient update, referred to as in-context learning (Brown et al., 2020; Chowdhery et al., 2022). However, the inner workings of in-context learning are under-explored. Akyurek et al. (2022); von Oswald et al. (2022); Garg et al. (2022); Dai et al. (2022) show that language models perform implicit meta-fine-tuning during in-context learning. However, Min et al. (2022) show that assigning random labels for demonstrations does not hurt the in-context learning performance much. Motivated by this finding, Lyu et al. (2022) propose a zero-shot in-context learning method, retrieving relevant sentences from an external corpus and assigning random labels to construct demonstrations for classification target tasks. Different from Lyu et al. (2022), ICIL utilizes instructions to facilitate task adaptation, uses a fixed set of demonstrations to evaluate all tasks, and is applicable to generation target tasks as well. ## 7 Limitations Although In-Context Instruction Learning leads to impressive zero-shot task generalization performance, it suffers from increased computation during inference due to the increased length of input sequences. Also, there is still a large performance gap between ICIL and few-shot in-context learning, as shown in Figure 5(b). Note that this is a work-in-progress version and we plan to extensively analyze and evaluate ICIL in various settings and benchmarks in the future. ## 8 Conclusion In this paper, we observe that learning to follow instructions through a _fixed_ set of demonstrations during inference, referred to as In-Context Instruction Learning (ICIL), significantly improves the zero-shot task generalization performance of both pretrained and instruction-fine-tuned LLMs. Through detailed analysis, we hypothesize that the effect of ICIL comes from learning the correspondence between answer choices in the instruction and the label of the demonstration, leading LLMs to better focus on the instruction. To this end, we recommend ICIL to be seriously considered for maximizing zero-shot task generalization performance, especially if one is willing to trade inference speed for higher accuracy. ## 9 Acknowledgement We thank Sunkyung Kim, Hyunjik Jo, and Joel Jang for helpful discussions. 
We thank Sejune Joo, Seungone Kim, Yongrae Jo, Doyoung Kim, Dongkeun Yoon, Seongyun Lee, and Chaeeun Kim for helpful feedback on our paper.
2308.12293
Six Bits
The spinors of the group Spin($N$) of rotations in $N$ spacetime dimensions are indexed by a bitcode with [$N$/2] bits. A well-known promising grand unified group that contains the standard-model group is Spin(10). Fermions in the standard model are described by five bits $yzrgb$, consisting of two weak bits $y$ and $z$, and three color bits $r$, $g$, $b$. If a sixth bit $t$ is added, necessary to accommodate a time dimension, then the enlarged Spin(11,1) geometric algebra contains the standard model and Dirac algebras as commuting subalgebras, unifying the four forces of Nature. There is a unique minimal symmetry-breaking chain and associated multiplet of Higgs fields that breaks Spin(11,1) to the standard model. Unification to the Pati-Salam group Spin(4)$_w {\times}$ Spin(6)$_c$ is predicted at $10^{12}\,$GeV, and grand unification at $10^{15}\,$GeV. The grand Higgs field breaks $t$-symmetry, can drive cosmological inflation, and generates a large Majorana mass for the right-handed neutrino by flipping its $t$-bit. The electroweak Higgs field breaks $y$-symmetry, and generates masses for fermions by flipping their $y$-bit.
Andrew J. S. Hamilton
2023-07-31T14:16:37Z
http://arxiv.org/abs/2308.12293v1
# Six Bits ###### Abstract The spinors of the group \(\mathrm{Spin}(N)\) of rotations in \(N\) spacetime dimensions are indexed by a bitcode with \([N/2]\) bits. A well-known promising grand unified group that contains the standard-model group is \(\mathrm{Spin}(10)\). Fermions in the standard model are described by five bits \(yzrgb\), consisting of two weak bits \(y\) and \(z\), and three color bits \(r\), \(g\), \(b\). If a sixth bit \(t\) is added, necessary to accommodate a time dimension, then the enlarged \(\mathrm{Spin}(11,1)\) geometric algebra contains the standard model and Dirac algebras as commuting subalgebras, unifying the four forces of Nature. There is a unique minimal symmetry-breaking chain and associated multiplet of Higgs fields that breaks \(\mathrm{Spin}(11,1)\) to the standard model. Unification to the Pati-Salam group \(\mathrm{Spin}(4)_{w}\times\mathrm{Spin}(6)_{c}\) is predicted at \(10^{12}\,\mathrm{GeV}\), and grand unification at \(10^{15}\,\mathrm{GeV}\). The grand Higgs field breaks \(t\)-symmetry, can drive cosmological inflation, and generates a large Majorana mass for the right-handed neutrino by flipping its \(t\)-bit. The electroweak Higgs field breaks \(y\)-symmetry, and generates masses for fermions by flipping their \(y\)-bit. _Essay written for the Gravity Research Foundation 2023 Awards for Essays on Gravitation._ ### Spinors in the standard model of physics If there is a theory of everything, there is a good chance that spinors are at the heart of it. Look around. All known matter (fermions) is made of spinors. All known forces arise from symmetries of spinors. Introduced by Cartan [1, 2] more than a hundred years ago, spinors, objects of spin \(\frac{1}{2}\), constitute the fundamental representation of the group \(\mathrm{Spin}(N)\) of rotations in \(N\) spacetime dimensions. Spinors have the intriguing property that their index is a bitcode, with \([N/2]\) bits in \(N\) spacetime dimensions. The halving of dimensions is associated with the fact that spinors have a natural complex structure. Associated with each bit is a pair of orthonormal basis vectors \(\boldsymbol{\gamma}_{k}^{+}\) and \(\boldsymbol{\gamma}_{k}^{-}\) in an \(N\)-dimensional geometric algebra (Clifford algebra) [3, 4, 5]. Spinors with \(k\) spin up (\(\uparrow\)) and down (\(\downarrow\)) transform with opposite phases \(e^{\mp\mathrm{i}\theta/2}\) (or opposite boosts \(e^{\pm\theta/2}\)) under rotations in the 2-dimensional \(\boldsymbol{\gamma}_{k}^{+}\boldsymbol{\gamma}_{k}^{-}\) plane. The orthonormal vectors \(\boldsymbol{\gamma}_{k}^{+}\) and \(\boldsymbol{\gamma}_{k}^{-}\) can be interpreted as the real and imaginary parts of a complex vector. In the 3+1 dimensions of familiar spacetime, the number of bits is two, a Dirac spinor, with \(2^{2}=4\) complex components. A Dirac spinor has a spin bit (\(\uparrow\) or \(\downarrow\)) and a boost bit (\(\Uparrow\) or \(\Downarrow\)). The Dirac spinor is called right-handed if the spin and boost bits are aligned, left-handed if anti-aligned. The geometric algebra in 3+1 dimensions is the Dirac algebra of Dirac \(\gamma\)-matrices. The group Spin(10) of rotations in 10 dimensions, proposed in the 1970s by [6, 7], has remained a compelling candidate for a grand unified group that contains the standard-model group, the product U(1)\({}_{Y}\times\)SU(2)\({}_{L}\times\)SU(3)\({}_{c}\) of hypercharge, weak, and color groups. 
The standard model has 5 conserved charges consisting of hypercharge \(Y\), weak isospin \(I_{L}\), and three colors \(R\), \(G\), and \(B\). As first pointed out by [8], and reviewed by [9], Spin(10) describes a generation of fermions of the standard model with a bitcode with \([10/2]=5\) bits \(y,z,r,g,b\) consisting of two weak bits \(y\) and \(z\), and three color bits \(r,g,b\) (the naming of bits follows [10]). Each bit can be either up or down, signifying a charge of \(+\frac{1}{2}\) or \(-\frac{1}{2}\). The relation between standard-model charges and Spin(10) charges is \[Y=y+z-\tfrac{2}{3}(r+g+b)\,\quad I_{L}=\tfrac{1}{2}(z-y)\,\quad C=c+\tfrac{1}{2}\ \ (C=R,G,B,\ c=r,g,b). \tag{1}\] The electromagnetic charge \(Q\) is \[Q=\tfrac{1}{2}Y+I_{L}=z-\tfrac{1}{3}(r+g+b). \tag{2}\] Electroweak symmetry breaking is a loss of \(y\)-symmetry, a loss of conservation of \(y\)-charge. The following Spin(10) chart shows the electron generation of fermions of the standard model arrayed in columns according to the number of up-bits (compare Table 4 of [9]). The left element of each entry (before the colon) signifies which bits are up, from - (no bits up, or \(\downarrow\downarrow\downarrow\downarrow\downarrow\)) in the 0 column, to \(yzrgb\) (all bits up, or \(\uparrow\uparrow\uparrow\uparrow\uparrow\)) in the 5 column; the right element of each entry is the corresponding fermion, which comprise (electron) neutrinos \(\nu\), electrons \(e\), and up and down quarks \(u\) and \(d\), each in right- and left-handed Dirac chiralities \(R\) and \(L\), and each in (unbarred) particle and (barred) anti-particle species, a total of \(2^{5}=32\) fermions: \[\begin{array}{lccccc}\hline 0&1&2&3&4&5\\ \hline\mbox{--}:\ \bar{\nu}_{L}&y:\ \bar{\nu}_{R}&\bar{c}:\ \bar{u}_{L}^{\,\bar{c}}&y\bar{c}:\ \bar{u}_{R}^{\,\bar{c}}&zrgb:\ \nu_{L}&yzrgb:\ \nu_{R}\\ &z:\ \bar{e}_{R}&yz:\ \bar{e}_{L}&rgb:\ e_{R}&yrgb:\ e_{L}&\\ &c:\ d_{R}^{\,c}&yc:\ d_{L}^{\,c}&z\bar{c}:\ \bar{d}_{R}^{\,\bar{c}}&yz\bar{c}:\ \bar{d}_{L}^{\,\bar{c}}&\\ &&zc:\ u_{L}^{\,c}&yzc:\ u_{R}^{\,c}&&\\ \hline\end{array} \tag{3}\]

**Unification of standard-model and spacetime symmetries in the Spin(11,1) geometric algebra**

The chart (3) is a riddle of striking features. The most striking feature is that Spin(10) chirality coincides with Dirac chirality. Chirality counts whether the number of up-bits is even or odd, with right-handed defined as all bits up. The odd and even columns of the Spin(10) chart (3) have respectively right-handed (\(R\)) and left-handed (\(L\)) Dirac chirality. Modulo a phase, chirality is (the eigenvalue of) the pseudoscalar of the algebra, the product of all the \(N\) vectors in the \(N\)-dimensional geometric algebra. The remarkable coincidence of Dirac and Spin(10) chiralities suggests that the vectors of spacetime are related to the vectors of Spin(10), in contrast to the usual assumption that the generators of grand unified symmetries are unrelated to (commute with) those of spacetime. A second striking feature of the Spin(10) chart (3) is that each of the \(2^{5}=32\) spinors is itself a 2-component Weyl spinor (a Dirac spinor of definite chirality, right- or left-handed), so there are actually \(2^{6}=64\) spinors in a generation. If one asks, what is the smallest geometric algebra that contains 64 chiral spinors and includes one time dimension, the answer is the geometric algebra associated with the group Spin\((11,1)\) of rotations in 11+1 spacetime dimensions. The reason it is possible to accommodate the 10 dimensions of Spin(10) and the 4 dimensions of Spin\((3,1)\) in a spacetime of just 12 dimensions is precisely that Spin\((10)\) and Spin\((3,1)\) redundantly contain the degrees of freedom associated with flipping chirality. Adding two extra dimensions to the 10 dimensions of Spin(10) adds one extra bit, the \(t\)-bit, or time bit, to the 5 \(yzrgb\) bits of Spin(10). The two extra dimensions comprise an eleventh spatial dimension \(\boldsymbol{\gamma}_{t}^{+}\) and a time dimension \(\boldsymbol{\gamma}_{0}=i\boldsymbol{\gamma}_{t}^{-}\). The conventional assumption that Spin(10) and the Lorentz group Spin\((3,1)\) combine as a direct product is motivated by the Coleman-Mandula theorem [12, 13], which requires that the algebra of any symmetry that has bosonic generators and yields non-trivial scattering amplitudes must be a direct product of internal and spacetime algebras. 
Spin\((11,1)\) satisfies the higher-dimensional Coleman-Mandula theorem [14] trivially, because the internal and spacetime symmetries of Spin\((11,1)\) are one and the same. After grand symmetry breaking, the Coleman-Mandula theorem requires only that _unbroken_ internal symmetries commute with those of spacetime. If indeed Spin(10) and spacetime symmetries unify in the Spin\((11,1)\) geometric algebra, then the chart (3) cannot be correct as it stands. The problem is that each of the \(2^{5}=32\) spinors in the Spin\((10)\) chart (3) is a Weyl spinor, which requires two bits for its description, whereas only one extra bit, the \(t\)-bit, is available. The key to the riddle of translating the Spin\((10)\) chart (3) into Spin\((11,1)\) is to notice that the spinors in (3) are fermions (unbarred) or antifermions (barred) as the color chirality \(\varkappa_{rgb}\) is positive or negative, that is, as the number of color up-bits is odd or even. The five Spin\((10)\) charges of a spinor are eigenvalues of the five diagonal bivectors \(\boldsymbol{\gamma}_{k}^{+}\boldsymbol{\gamma}_{k}^{-}\), \(k=y,z,r,g,b\), of the geometric algebra. If these diagonal bivectors are modified by multiplying them by \(\varkappa_{rgb}\), then their eigenvalues will measure the charge of the fermion, not the antifermion, in all entries of the Spin\((10)\) chart. A key point that allows this adjustment to be made consistently is that \(\varkappa_{rgb}\) commutes with all standard-model bivectors. Notably, \(\varkappa_{rgb}\) does not commute with SU(5) bivectors that transform between leptons and quarks; but that is fine, because SU(5) is not an unbroken symmetry of the standard model. A consistent way to implement this modification, that leaves the bivector algebra of the standard model (but not of SU(5)) unchanged, is to multiply all imaginary bivectors \(\mathbf{\gamma}_{k}^{+}\mathbf{\gamma}_{l}^{-}\) by \(\varkappa_{rgb}\), while leaving all real bivectors \(\mathbf{\gamma}_{k}^{+}\mathbf{\gamma}_{l}^{+}\) and \(\mathbf{\gamma}_{k}^{-}\mathbf{\gamma}_{l}^{-}\) unchanged, \[\mathbf{\gamma}_{k}^{+}\mathbf{\gamma}_{l}^{-}\to\mathbf{\gamma}_{k}^{+}\mathbf{\gamma}_{l}^{-}\varkappa_{rgb}\,\quad k,l=t,y,z,r,g,b. \tag{4}\] The modification (4) serves to replace each antifermion in the chart with the corresponding fermion. For example, the positron entries \(\bar{e}_{R}\) and \(\bar{e}_{L}\) are replaced by electrons \(e_{L}\) and \(e_{R}\). What about antifermions? Where have they gone? The answer is that antifermions are obtained from fermions in the usual way [15], by taking their complex conjugates and multiplying by the conjugation operator, \(\bar{\psi}\equiv C\psi^{*}\). In any geometric algebra with one time dimension, the conjugation operator \(C\) flips all bits except the time bit \(t\). Thus antifermions appear in a second copy of the Spin(10) chart (3), a conjugated version in which all fermions are replaced by antifermions. The fermionic and antifermionic (conjugated) charts are distinguished by a flip of the time-bit \(t\), a pretty conclusion. 
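The bitcode bookkeeping above can be checked mechanically. The short Python sketch below (ours, not from the essay) enumerates the \(2^{5}=32\) \(yzrgb\) states and assigns to each the charges of equations (1)-(2), the Dirac chirality (odd number of up-bits is right-handed), and the particle/antiparticle label (odd number of color up-bits is a fermion), from which the assignments in chart (3) can be read off.

```python
# Enumerate the 2^5 = 32 yzrgb bit states and assign charges, chirality,
# and particle/antiparticle labels per the rules stated in the text.
from itertools import product

BITS = "yzrgb"

def half(bit):  # up-bit -> +1/2, down-bit -> -1/2
    return 0.5 if bit else -0.5

for state in product([0, 1], repeat=5):
    y, z, r, g, b = state
    colors = half(r) + half(g) + half(b)
    Y = half(y) + half(z) - (2.0 / 3.0) * colors          # equation (1)
    IL = (half(z) - half(y)) / 2                          # equation (1)
    Q = half(z) - colors / 3.0                            # equation (2)
    chirality = "R" if sum(state) % 2 == 1 else "L"       # odd up-bits -> R
    species = "fermion" if (r + g + b) % 2 == 1 else "antifermion"
    ups = "".join(c for c, s in zip(BITS, state) if s) or "-"
    print(f"{ups:6s} Y={Y:+.2f} I_L={IL:+.2f} Q={Q:+.2f} {chirality} {species}")
```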
It requires some work [10] to establish the correct assignment of Dirac boost (\(\Uparrow\) or \(\Downarrow\)) and spin (\(\uparrow\) or \(\downarrow\)) bits, but the end result is the following Spin(\(11,1\)) chart of spinors, arranged in columns by the number of Spin(10) up-bits as in the earlier chart (3): \[\begin{array}{cccccc}\hline 0&1&2&3&4&5\\ \hline\mbox{--}:\ \frac{\bar{\nu}_{\Uparrow\downarrow}}{\nu_{\Downarrow\downarrow}}&y:\ \frac{\bar{\nu}_{\Downarrow\downarrow}}{\nu_{\Uparrow\downarrow}}&\bar{c}:\ \frac{\bar{u}_{\Uparrow\downarrow}^{\,\bar{c}}}{u_{\Downarrow\downarrow}^{\,c}}&y\bar{c}:\ \frac{\bar{u}_{\Downarrow\downarrow}^{\,\bar{c}}}{u_{\Uparrow\downarrow}^{\,c}}&zrgb:\ \frac{\nu_{\Downarrow\uparrow}}{\bar{\nu}_{\Uparrow\uparrow}}&yzrgb:\ \frac{\nu_{\Uparrow\uparrow}}{\bar{\nu}_{\Downarrow\uparrow}}\\ &z:\ \frac{\bar{e}_{\Downarrow\downarrow}}{e_{\Uparrow\downarrow}}&yz:\ \frac{\bar{e}_{\Uparrow\downarrow}}{e_{\Downarrow\downarrow}}&rgb:\ \frac{e_{\Uparrow\uparrow}}{\bar{e}_{\Downarrow\uparrow}}&yrgb:\ \frac{e_{\Downarrow\uparrow}}{\bar{e}_{\Uparrow\uparrow}}&\\ &c:\ \frac{d_{\Uparrow\uparrow}^{\,c}}{\bar{d}_{\Downarrow\uparrow}^{\,\bar{c}}}&yc:\ \frac{d_{\Downarrow\uparrow}^{\,c}}{\bar{d}_{\Uparrow\uparrow}^{\,\bar{c}}}&z\bar{c}:\ \frac{\bar{d}_{\Downarrow\downarrow}^{\,\bar{c}}}{d_{\Uparrow\downarrow}^{\,c}}&yz\bar{c}:\ \frac{\bar{d}_{\Uparrow\downarrow}^{\,\bar{c}}}{d_{\Downarrow\downarrow}^{\,c}}&\\ &&zc:\ \frac{u_{\Downarrow\uparrow}^{\,c}}{\bar{u}_{\Uparrow\uparrow}^{\,\bar{c}}}&yzc:\ \frac{u_{\Uparrow\uparrow}^{\,c}}{\bar{u}_{\Downarrow\uparrow}^{\,\bar{c}}}&&\\ \hline\end{array} \tag{5}\] Whereas in the original Spin(10) chart (3) each entry was a two-component Weyl spinor, in the Spin(\(11,1\)) chart (5) the two components of each Weyl spinor appear in bit-flipped entries. For example, the right-handed electron \(e_{R}\) of the original chart is replaced by \(e_{\Uparrow\uparrow}\), and its spatially rotated partner \(e_{\Downarrow\downarrow}\) of the same chirality appears in the all-bit-flipped entry. Each entry still has two components, but in the Spin(\(11,1\)) chart those two components differ by their \(t\)-bit; the upper component has \(t\)-bit up, the lower \(t\)-bit down. The net number of degrees of freedom remains the same, \(2^{6}=64\). Figure 1 illustrates one generation (the electron generation) of fermions of the standard model arranged according to their Spin(\(11,1\)) \(tyzrgb\) charges. The definitive test of the viability of the Spin(\(11,1\)) model is to write down the relations between the Spin(\(11,1\)) and Dirac geometric algebras, and to check that the relations satisfy all constraints. The coincidence of Spin(10) and Dirac chiralities implies that the Dirac spacetime vectors must be higher dimensional elements of the Spin(\(11,1\)) algebra. The four Dirac spacetime vectors \(\boldsymbol{\gamma}_{m}\), \(m=0,1,2,3\) in terms of the algebra of the twelve Spin\((11,1)\) vectors \(\boldsymbol{\gamma}_{k}^{\pm}\), \(k=t,y,z,r,g,b\), are [10] \[\boldsymbol{\gamma}_{0}=i\boldsymbol{\gamma}_{t}^{-}\,\quad\boldsymbol{\gamma}_{1}=\boldsymbol{\gamma}_{y}^{-}\boldsymbol{\gamma}_{z}^{-}\boldsymbol{\gamma}_{r}^{+}\boldsymbol{\gamma}_{g}^{+}\boldsymbol{\gamma}_{b}^{+}\,\quad\boldsymbol{\gamma}_{2}=\boldsymbol{\gamma}_{y}^{-}\boldsymbol{\gamma}_{z}^{-}\boldsymbol{\gamma}_{r}^{-}\boldsymbol{\gamma}_{g}^{-}\boldsymbol{\gamma}_{b}^{-}\,\quad\boldsymbol{\gamma}_{3}=\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{y}^{+}\boldsymbol{\gamma}_{y}^{-}\boldsymbol{\gamma}_{z}^{+}\boldsymbol{\gamma}_{z}^{-}. 
\tag{6}\]

The Dirac vectors (6) all have grade 1 mod 4. The multiplication rules for the vectors \(\boldsymbol{\gamma}_{m}\) given by equations (6) agree with the usual multiplication rules for Dirac \(\gamma\)-matrices: the vectors \(\boldsymbol{\gamma}_{m}\) anticommute, and their scalar products form the Minkowski metric. All the spacetime vectors \(\boldsymbol{\gamma}_{m}\) commute with all standard-model bivectors modified per (4). The Dirac pseudoscalar \(I\) coincides with the Spin\((11,1)\) pseudoscalar \(J\), \[I\equiv\boldsymbol{\gamma}_{0}\boldsymbol{\gamma}_{1}\boldsymbol{\gamma}_{2}\boldsymbol{\gamma}_{3}=J\equiv-i\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{t}^{-}\boldsymbol{\gamma}_{y}^{+}\boldsymbol{\gamma}_{y}^{-}\boldsymbol{\gamma}_{z}^{+}\boldsymbol{\gamma}_{z}^{-}\boldsymbol{\gamma}_{r}^{+}\boldsymbol{\gamma}_{r}^{-}\boldsymbol{\gamma}_{g}^{+}\boldsymbol{\gamma}_{g}^{-}\boldsymbol{\gamma}_{b}^{+}\boldsymbol{\gamma}_{b}^{-}. \tag{7}\] Thus the Dirac and standard-model algebras are subalgebras of the Spin\((11,1)\) geometric algebra, such that all Dirac generators commute with all standard-model bivectors modified per (4), as required by the Coleman-Mandula theorem.

Figure 1: A generation (the electron generation) of 64 fermions arranged according to their Spin\((11,1)\) \(tyzrgb\) charges. The eight boxes are distinguished by their color \(rgb\) bits. The fermions in each of the eight boxes are distinguished by their time and weak \(tyz\) bits. Flipping the \(t\)-bit of a fermion flips the fermion to its antifermionic partner of opposite boost and the same spin. Flipping all 6 \(tyzrgb\) bits of a fermion flips its Dirac boost and spin bits, thereby transforming the fermion to its Weyl companion.

The time dimension \(\boldsymbol{\gamma}_{0}\) in equations (6) is just a simple vector in the \(\mathrm{Spin}(11,1)\) algebra, but the 3 spatial dimensions \(\boldsymbol{\gamma}_{k}\), \(k=1,2,3\) are all 5-dimensional. The spatial dimensions share a common 2-dimensional factor \(\boldsymbol{\gamma}_{y}^{-}\boldsymbol{\gamma}_{z}^{-}\). Aside from that common factor, each of the 3 spatial dimensions is itself 3-dimensional: \(\boldsymbol{\gamma}_{r}^{+}\boldsymbol{\gamma}_{g}^{+}\boldsymbol{\gamma}_{b}^{+}\), \(\boldsymbol{\gamma}_{r}^{-}\boldsymbol{\gamma}_{g}^{-}\boldsymbol{\gamma}_{b}^{-}\), and \(\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{y}^{+}\boldsymbol{\gamma}_{z}^{+}\).

#### The road to grand unification

In the \(\mathrm{Spin}(11,1)\) model, grand unification unifies all four forces, not just the three forces of the standard model, and it involves a transition from the 3+1 dimensions of today's spacetime to higher dimensions. As long as spacetime is 4-dimensional, as it is today, any internal symmetry must commute with the four Dirac vectors (6), in accordance with the Coleman-Mandula theorem. This is a tight constraint.
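Before turning to the symmetry-breaking chain, here is a minimal machine check of one of the multiplication rules quoted above. The sketch below (a toy Python blade calculator, not part of [10]) represents basis blades of the Spin\((11,1)\) algebra as ordered tuples of generators and verifies that the four composite Dirac vectors (6) mutually anticommute. Only anticommutation is tested: for two distinct blades, any shared generators cancel with identical square factors in both orderings, so the relative sign is independent of the metric signature convention, which the code therefore does not need to fix.

```python
from itertools import combinations

GENERATORS = ["t+", "t-", "y+", "y-", "z+", "z-",
              "r+", "r-", "g+", "g-", "b+", "b-"]

def blade_mul(a, b):
    """Multiply two basis blades (tuples of generator labels).
    Returns (sign, blade). Generator squares are taken as +1; for the
    anticommutation check below only the *relative* sign of AB vs BA matters,
    and that is independent of the metric signature."""
    gens = list(a) + list(b)
    order = {g: i for i, g in enumerate(GENERATORS)}
    sign, changed = 1, True
    while changed:  # bubble sort, counting transpositions of generators
        changed = False
        for i in range(len(gens) - 1):
            if order[gens[i]] > order[gens[i + 1]]:
                gens[i], gens[i + 1] = gens[i + 1], gens[i]
                sign, changed = -sign, True
    out = []
    for g in gens:  # cancel repeated generators (square taken as +1)
        if out and out[-1] == g:
            out.pop()
        else:
            out.append(g)
    return sign, tuple(out)

# the four Dirac vectors of equation (6), up to scalar factors
# (the factor i in gamma_0 does not affect commutation relations)
gamma = {
    0: ("t-",),
    1: ("y-", "z-", "r+", "g+", "b+"),
    2: ("y-", "z-", "r-", "g-", "b-"),
    3: ("t+", "y+", "y-", "z+", "z-"),
}

for m, n in combinations(gamma, 2):
    s_ab, blade_ab = blade_mul(gamma[m], gamma[n])
    s_ba, blade_ba = blade_mul(gamma[n], gamma[m])
    assert blade_ab == blade_ba and s_ab == -s_ba, (m, n)
print("gamma_0..gamma_3 mutually anticommute: OK")
```

Incidentally, the check fails if the factor \(\boldsymbol{\gamma}_{b}^{+}\) is dropped from \(\boldsymbol{\gamma}_{1}\) (a grade-4 blade commutes with the disjoint vector \(\boldsymbol{\gamma}_{t}^{-}\)), consistent with the statement that all Dirac vectors have grade 1 mod 4.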
There appears to be a unique minimal symmetry-breaking chain from \(\mathrm{Spin}(11,1)\) to the standard model [10], proceeding by the Pati-Salam group \(\mathrm{Spin}(4)_{w}\times\mathrm{Spin}(6)_{c}\)[16], as first proposed by [17], and advocated by [18, 19], \[\mathrm{Spin}(11,1)\xrightarrow[?]{}\mathrm{Spin}(10,1) \xrightarrow[10^{15}\,\mathrm{GeV}]{}\mathrm{Spin}(4)_{w}\times\mathrm{Spin}(6)_{c} \times\mathrm{Spin}(3,1)\xrightarrow[10^{12}\,\mathrm{GeV}]{}\] \[\mathrm{U}(1)_{Y}\times\mathrm{SU}(2)_{L}\times\mathrm{SU}(3)_{c}\times\mathrm{Spin}(3,1)\xrightarrow[160\,\mathrm{GeV}]{}\mathrm{U}(1)_{Q}\times\mathrm{SU}(3)_{c}\times\mathrm{Spin}(3,1). \tag{8}\] The top line of the chain (8) is the prediction, while the bottom line is the standard model. Note that the grand unified group is \(\mathrm{Spin}(10,1)\) rather than \(\mathrm{Spin}(11,1)\) itself. The predicted energy scales of unification are deduced from the running of the three coupling parameters of the standard model.

The minimal Higgs sector that mediates symmetry breaking is likewise unique, consisting of the dimension 66 bivector (adjoint) representation of \(\mathrm{Spin}(11,1)\). In effect, the Lorentz-scalar (spin 0) Higgs sector matches the Lorentz-vector (spin 1) gauge sector. The general principles underlying symmetry breaking by the Higgs mechanism [20, 21] are: (1) the Higgs field before symmetry breaking is a scalar (spin 0) multiplet of the unbroken symmetry; (2) one component of the Higgs multiplet acquires a nonzero vacuum expectation value; (3) components of the Higgs multiplet whose symmetry is broken are absorbed into longitudinal components of the broken gauge (spin 1) fields by the Goldstone mechanism [22], giving those gauge fields mass; and (4) unbroken components of the Higgs multiplet persist as scalar fields, potentially available to mediate the next level of symmetry breaking.

The 66-component Higgs multiplet contains a four-component multiplet with generators \(\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{k}^{\pm}\), \(k=y,z\) (adjusted per (4)), whose properties match those of the electroweak Higgs multiplet required by the standard Weinberg [23] model of electroweak symmetry breaking. The Higgs multiplet breaks electroweak symmetry when it acquires a vacuum expectation value \(\langle\boldsymbol{H}\rangle\) proportional to \(\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{y}^{-}\), \[\langle\boldsymbol{H}\rangle=\langle H\rangle\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{y}^{-}\varkappa_{rgb}. \tag{9}\] The factor of the color chiral operator \(\varkappa_{rgb}\) is from the adjustment (4). The electroweak Higgs field \(\langle\boldsymbol{H}\rangle\) breaks \(y\)-symmetry, carries one unit of \(y\) charge, and gives masses to fermions by flipping their \(y\)-bit.

The grand Higgs field \(\langle\boldsymbol{T}\rangle\) that breaks \({\rm Spin}(10,1)\) to the Pati-Salam group is proportional to the time bivector \(\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{t}^{-}\), \[\langle\boldsymbol{T}\rangle=-i\langle T\rangle\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{t}^{-}\varkappa_{rgb}. \tag{10}\] Again, the factor of the color chiral operator \(\varkappa_{rgb}\) is from the adjustment (4). The grand Higgs field \(\langle\boldsymbol{T}\rangle\) generates a Majorana mass term for the right-handed neutrino by flipping its \(t\)-bit.
Only the right-handed neutrino can acquire a Majorana mass, because only the right-handed neutrino has zero standard-model charge; for other fermions, flipping the \(t\)-bit is prohibited by conservation of standard-model charge. As is well known, a large Majorana mass, coupled with a smaller Dirac mass, can explain the small mass of left-handed neutrinos, by the see-saw mechanism [24].

A leading idea of the standard model of cosmology is that inflation in the early universe was driven by the energy associated with grand unification, e.g. [25, 26]. The grand Higgs field \(\langle\boldsymbol{T}\rangle\) is available to drive cosmological inflation.

The bivectors \(\boldsymbol{\gamma}_{t}^{+}\boldsymbol{\gamma}_{k}^{\pm}\), \(k=y,z\) generate the electroweak Higgs field, but they cannot generate a gauge symmetry, because they fail to commute with the Higgs field \(\langle\boldsymbol{T}\rangle\) that breaks grand symmetry, equation (10). The only apparent solution to this problem is to postulate that the dimension \(\boldsymbol{\gamma}_{t}^{+}\) is a scalar dimension that does not generate any symmetry, a possibility discussed in §4.4 of [15]. The dimension \(\boldsymbol{\gamma}_{t}^{+}\) stands out as the only spacelike vector of \({\rm Spin}(11,1)\) that is not a factor in any unbroken gauge symmetry of the standard model. The compactification of some dimensions is a well-known feature of string theories [27, 28], but how \({\rm Spin}(11,1)\) might break to \({\rm Spin}(10,1)\) is a matter for future research.

**String theory?** It can scarcely escape notice that \({\rm Spin}(10,1)\) has the same number 11 of spacetime dimensions as maximal supergravity [28, 29, 30, 31], the low-energy limit of M theory [32, 27], which is the conjectured extension of string theory to include higher-dimensional objects, branes. Extensions of string theory to 12 spacetime dimensions, F-theory, have also been conjectured [33, 34, 35]. String-theory-inspired models usually assume that the parent spacetime is, at least locally, a product space, consisting of 4 large dimensions multiplied by a space of compactified or hidden dimensions. By contrast, in the \({\rm Spin}(11,1)\) geometric algebra, although the time dimension is a vector, each of the 3 spatial Dirac dimensions is a pentavector, a 5-dimensional multivector, equations (6). The spatial dimensions share a common 2-dimensional factor, and beyond that are each 3-dimensional. Is this arrangement viable in string theory?

**Conclusion** If the ideas in this essay are correct, then the DNA of the Universe is written in a language whose letters are the six bits of spinors in 11+1 spacetime dimensions. The \({\rm Spin}(11,1)\) geometric algebra describes only the tangent space of the spacetime, not the geometry of the 11+1 dimensional spacetime itself. One may speculate, as string theorists have done, that the complexity of the laws of physics is associated with the complicated geometry of the hidden extra dimensions and the fields that wrap them. Like the DNA of life, the letters may be simple, but the stories written with those letters could be fabulously complex.

This essay is based on original research presented by [10].
2301.13486
Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond
In this work we study the robustness to adversarial attacks of early-stopping strategies on gradient-descent (GD) methods for linear regression. More precisely, we show that early-stopped GD is optimally robust (up to an absolute constant) against Euclidean-norm adversarial attacks. However, we show that this strategy can be arbitrarily sub-optimal in the case of general Mahalanobis attacks. This observation is compatible with recent findings in the case of classification~\cite{Vardi2022GradientMP} that show that GD provably converges to non-robust models. To alleviate this issue, we propose to apply instead a GD scheme on a transformation of the data adapted to the attack. This data transformation amounts to applying feature-dependent learning rates, and we show that this modified GD is able to handle any Mahalanobis attack, as well as more general attacks under some conditions. Unfortunately, choosing such adapted transformations can be hard for general attacks. To the rescue, we design a simple and tractable estimator whose adversarial risk is optimal up to within a multiplicative constant of 1.1124 in the population regime, and works for any norm.
Meyer Scetbon, Elvis Dohmatob
2023-01-31T09:11:59Z
http://arxiv.org/abs/2301.13486v1
# Robust Linear Regression: Gradient-descent, Early-stopping, and Beyond

###### Abstract

In this work we study the robustness to adversarial attacks of early-stopping strategies on gradient-descent (GD) methods for linear regression. More precisely, we show that early-stopped GD is optimally robust (up to an absolute constant) against Euclidean-norm adversarial attacks. However, we show that this strategy can be arbitrarily sub-optimal in the case of general Mahalanobis attacks. This observation is compatible with recent findings in the case of classification Vardi et al. (2022) that show that GD provably converges to non-robust models. To alleviate this issue, we propose to apply instead a GD scheme on a transformation of the data adapted to the attack. This data transformation amounts to applying feature-dependent learning rates, and we show that this modified GD is able to handle any Mahalanobis attack, as well as more general attacks under some conditions. Unfortunately, choosing such adapted transformations can be hard for general attacks. To the rescue, we design a simple and tractable estimator whose adversarial risk is optimal up to within a multiplicative constant of 1.1124 in the population regime, and works for any norm.

## 1 Introduction

Machine learning models are highly sensitive to small perturbations known as _adversarial examples_ (Szegedy et al., 2013), which are often imperceptible to humans. While various strategies such as adversarial training (Madry et al., 2018) can mitigate this vulnerability empirically, the situation remains highly problematic for many safety-critical applications like autonomous vehicles or healthcare, and motivates a better theoretical understanding of what mechanisms may be causing this. From a theoretical perspective, the case of classification is rather well-understood. Indeed, the hardness of classification under test-time adversarial attacks has been crisply characterized (Bhagoji et al., 2019; Bubeck et al., 2018). In the special case of linear classification, explicit lower-bounds have been obtained (Schmidt et al., 2018; Bhattacharjee et al., 2021). However, the case of regression is relatively understudied. Recently, Xing et al. (2021) have initiated a theoretical study of linear regression under Euclidean attacks, where an adversary is allowed to attack the input data point at test time. The authors proposed a two-stage estimator and proved its consistency. The optimal estimator obtained in (Xing et al., 2021) corresponds to a ridge shrinkage (i.e. \(\ell_{2}\) penalization). In this paper, we consider linear regression under adversarial test-time attacks w.r.t. arbitrary norms (not just Euclidean \(\ell_{2}\)-norms as in (Xing et al., 2021)), and analyze the robustness of gradient-descent (GD) along the entire optimization path. By doing so we observe that GD might fail to capture a robust predictor along its path, especially in the case of non-Euclidean attacks. We propose a variant of the GD scheme where an adapted transformation of the data is performed before applying GD. This allows us to understand the effect on robustness of early-stopping strategies (not training till the end) for general attacks. Finally, we design a generic algorithm able to produce a robust predictor against any norm-attack.

### Summary of main contributions

Our main contributions are summarized as follows.
**- Case of gradient-descent (GD).** In Proposition 4.2, we show that early-stopped GD achieves near-optimal adversarial risk in the case of Euclidean attacks. Early-stopping is crucial because the predictor obtained by running GD till the end can be arbitrarily sub-optimal in terms of adversarial risk (Proposition 4.1). Contrasting with Proposition 4.2, we show in Proposition 4.4 that early-stopped GD can be arbitrarily sub-optimal in the non-Euclidean case, e.g. when the attacker's norm is a Mahalanobis norm. Thus, GD, along its entire optimization path, can fail to find robust models in general.

**- An adapted GD scheme (GD+).** We propose a modified version of GD, termed GD+, in which the dynamics are forced to be non-uniform across different features, and we exhibit different regimes where early-stopped GD+ achieves near-optimal adversarial risk. More precisely, (i) we show that it achieves near-optimal adversarial risk in the case of Mahalanobis norm attacks (Proposition 5.1), (ii) we also prove that it is near-optimally robust under \(\ell_{p}\)-norm attacks as soon as the features are uncorrelated (Theorem 5.2), and (iii) we study the robustness along the entire optimization path of GD+ in the case of general norm-attacks. In particular, we provide a sufficient condition on the model such that early-stopped GD+ achieves near-optimal adversarial risk under general norm-attacks in Theorem 5.1, and we show that when this condition is not satisfied, GD+ can be arbitrarily sub-optimal (Proposition 5.3).

**- A two-stage estimator for the general case.** Finally, we propose a simple two-stage algorithm (Algorithm 1) which works for arbitrary norm-attacks, and achieves optimal adversarial risk up to within a multiplicative constant factor in the population regime. Consistency and statistical guarantees for the proposed estimator are also provided.

### Related Work

The theoretical understanding of adversarial examples is now an active area of research. Below is a list of works which are most relevant to our current letter. Tsipras et al. (2019) considers a specific data distribution where good accuracy implies poor robustness. (Shafahi et al., 2018; Mahloujifar et al., 2018; Gilmer et al., 2018; Dohmatob, 2019) show that for high-dimensional data distributions which have a concentration property (e.g., multivariate Gaussians, distributions satisfying log-Sobolev inequalities, etc.), an imperfect classifier will admit adversarial examples. Dobriban et al. (2020) studies tradeoffs in Gaussian mixture classification problems, highlighting the impact of class imbalance. On the other hand, Yang et al. (2020) observed empirically that natural images are well-separated, and so locally-Lipschitz classifiers shouldn't suffer any kind of test error vs robustness tradeoff. In the context of linear classification (Schmidt et al., 2018; Bubeck et al., 2018; Khim and Loh, 2018; Yin et al., 2019; Bhattacharjee et al., 2021; Min et al., 2021), established results show a clear gap between learning in ordinary and adversarial settings. Li et al. (2020) studies the dynamics of linear classification on separable data, with exponential-tail losses. The authors show that GD converges to a separator of the dataset which is minimal w.r.t. a norm that is an interpolation between the \(\ell_{2}\)-norm (reminiscent of normal learning) and the \(\ell_{q}\)-norm, where \(q\) is the harmonic conjugate of the attacker's norm. Vardi et al.
(2022) showed that on two-layer neural networks, gradient-descent with an exponential-tail loss function converges to weights which are vulnerable to adversarial examples. Javanmard et al. (2020) study tradeoffs between ordinary and adversarial risk in linear regression, and computed exact Pareto optimal curves. Javanmard and Mehrabi (2021) also revisit this tradeoff for latent models and show that it is mitigated when the data enjoys a low-dimensional structure. Dohmatob (2021); Hassani and Javanmard (2022) study the tradeoffs between interpolation, normal risk, and adversarial risk, for finite-width over-parameterized networks with linear target functions. Javanmard and Soltanolkotabi (2022) investigate the effect of adversarial training on the standard and adversarial risks and derive a precise characterization of them for a class of minimax adversarially trained models. The work most related to ours is (Xing et al., 2021), which studied minimax estimation of linear models under adversarial attacks in Euclidean norm. They showed that the optimal robust linear model is a ridge estimator whose regularization parameter is a function of the population covariance matrix \(\Sigma\) and the generative linear model \(w_{0}\). Since neither \(w_{0}\) nor \(\Sigma\) is known in practice, the authors proposed a two-stage estimator in which the first stage computes consistent estimators of \(w_{0}\) and \(\Sigma\), and the second stage solves the ridge problem. In (Xing et al., 2021), the authors cover only the case of Euclidean attacks, while here we extend the study of the adversarial risk in linear regression to general norm-attacks. More precisely, here we are interested in understanding the robustness of GD for general attacks, along the entire optimization path. As a separate contribution, we also propose a new consistent two-stage estimator based on a "dualization" of the adversarial problem that can be applied for general attacks.

### Outline of Manuscript

In Section 2, we present the problem setup, main definitions, and some preliminary computations. The adversarial risk of (early-stopped) GD in the infinite-sample regime is analyzed in Section 4; cases of optimality and sub-optimality of this scheme are characterized. In Section 5, GD+ (an improved version of GD) is proposed, and its adversarial risk in the population regime is studied. In Section 6, we consider the finite-sample regime and we propose a simple two-stage estimator, which works for all attacker norms. Its adversarial risk is shown to be optimal up to a multiplicative factor and an additive statistical estimation error due to finite samples.

## 2 Preliminaries

**Notations.** Let us introduce some basic notations. Additional technical notations are provided in the appendix. \([d]\) denotes the set of integers from \(1\) to \(d\) inclusive. The maximum (resp. minimum) of two real numbers \(a\) and \(b\) will be denoted \(a\lor b\) (resp. \(a\wedge b\)). The operator norm of a matrix \(A\) is denoted \(\|A\|_{op}\) and corresponds to the positive square-root of the largest eigenvalue of \(AA^{\top}\). Given a positive-definite (p.d.) matrix \(S\in\mathbb{R}^{d\times d}\) and a vector \(w\in\mathbb{R}^{d}\), the Mahalanobis norm of \(w\) induced by \(S\) is defined by \(\|w\|_{S}:=\sqrt{w^{\top}Sw}\). We denote respectively \(M_{d}(\mathbb{R})\) and \(\mathcal{S}_{d}^{++}(\mathbb{R})\) the set of \(d\times d\) matrices and positive-definite matrices.
Given \(p\in[1,\infty]\), its harmonic conjugate, denoted \(q(p)\), is the unique \(q\in[1,\infty]\) such that \(1/p+1/q=1\). For a given norm \(\|\cdot\|\), we denote \(\|\cdot\|_{\star}\) its dual norm, defined by \[\|w\|_{\star}:=\sup\{x^{\top}w\mid x\in\mathbb{R}^{d},\,\|x\|\leq 1\}. \tag{1}\] Note that if the norm \(\|\cdot\|\) is an \(\ell_{p}\)-norm for some \(p\in[1,\infty]\), then the dual norm is the \(\ell_{q}\)-norm, where \(q\) is the harmonic conjugate of \(p\). For example, the dual of the Euclidean norm (corresponding to \(p=2\)) is the Euclidean norm itself, while the dual of the usual \(\ell_{\infty}\)-norm is the \(\ell_{1}\)-norm. The unit-ball for a norm \(\|\cdot\|\) is denoted \(B_{\|\cdot\|}^{d}\), and defined by \(B_{\|\cdot\|}^{d}:=\{x\in\mathbb{R}^{d}\mid\|x\|\leq 1\}\). Given any \(s\geq 0\), the set of \(s\)-sparse vectors in \(\mathbb{R}^{d}\) is denoted \(B_{0}^{d}(s)\) and defined by \[B_{0}^{d}(s):=\{w\in\mathbb{R}^{d}\mid\|w\|_{0}\leq s\}. \tag{2}\] Finally, define absolute constants \(c_{0}:=\sqrt{2/\pi}\), \(\alpha:=2/(1+c_{0})\approx 1.1124\) and \(\beta=1.6862\).

### Problem Setup

Fix a vector \(w_{0}\in\mathbb{R}^{d}\), a positive definite matrix \(\Sigma\) of size \(d\), and consider an i.i.d. dataset \(\mathcal{D}_{n}=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\) of size \(n\), given by \[y_{i}=x_{i}^{\top}w_{0}+\epsilon_{i},\text{ for all }i\in[n] \tag{3}\] where \(x_{1},\ldots,x_{n}\) are i.i.d. \(\sim N(0,\Sigma)\) and \(\epsilon_{1},\ldots,\epsilon_{n}\) are i.i.d. \(\sim N(0,\sigma_{\epsilon}^{2})\), independent of the \(x_{i}\)'s. Thus, the distribution of the features is a centered multivariate Gaussian distribution with covariance matrix \(\Sigma\), while \(w_{0}\) is the generative model. \(\sigma_{\epsilon}\geq 0\) measures the size of the noise. These assumptions on the model are in force throughout the formal claims of the paper. We also refer to \(n\) as the _sample size_, and to \(d\) as the _input-dimension_. In most of our analysis, unless otherwise explicitly stated, we will consider the case of the infinite-data regime \(n=\infty\) (or more generally \(n\gg d\)), which allows us to focus on the effects inherent to the data distribution (controlled by the feature covariance matrix \(\Sigma\)) and the inductive bias of the norm w.r.t. which the attack is measured, while side-stepping issues due to finite samples and label noise. Also note that in this infinite-data setting, label noise provably has no influence on the learned model.

### Adversarial Robustness Risk

Given a linear model \(w\in\mathbb{R}^{d}\), an attacker is allowed to swap a clean test point \(x\sim N(0,\Sigma)\) with a corrupted version \(x^{\prime}=x+\delta\) thereof. The perturbation \(\delta=\delta(x)\in\mathbb{R}^{d}\) is constrained to be small: this is enforced by demanding that \(\|\delta\|\leq r\), where \(\|\cdot\|\) is a specified norm and \(r\geq 0\) is the attack budget. One way to measure the performance of a linear model \(w\in\mathbb{R}^{d}\) under such attacks of size \(r\) is via its so-called adversarial risk (Madry et al., 2018; Xing et al., 2021).
**Definition 2.1**.: _For any \(w\in\mathbb{R}^{d}\) and \(r\geq 0\), define the adversarial risk of \(w\) at level \(r\geq 0\) as follows_ \[E^{\|\cdot\|}(w,w_{0},r):=\mathbb{E}_{x}\left[\sup_{\|\delta\|\leq r}((x+\delta)^{\top}w-x^{\top}w_{0})^{2}\right], \tag{4}\] _where \(x\sim N(0,\Sigma)\) is a random test point._

It is clear that \(r\mapsto E^{\|\cdot\|}(w,w_{0},r)\) is a non-decreasing function, and \(E^{\|\cdot\|}(w,w_{0},0)\) corresponds to the ordinary risk of \(w\), namely \[E(w,w_{0}):=\mathbb{E}_{x}[(x^{\top}w-x^{\top}w_{0})^{2}]=\|w-w_{0}\|_{\Sigma}^{2}. \tag{5}\] In the classical regression setting, the aim is to find \(w\) which minimizes \(E(w,w_{0})\). In the adversarial setting studied here, the aim is to minimize \(E^{\|\cdot\|}(w,w_{0},r)\) for any \(r\geq 0\). We will henceforth denote by \(E^{\|\cdot\|}_{opt}(w_{0},r)\) the smallest possible adversarial risk of a linear model for \(\|\cdot\|\)-attacks of magnitude \(r\), that is \[E^{\|\cdot\|}_{opt}(w_{0},r):=\inf_{w\in\mathbb{R}^{d}}E^{\|\cdot\|}(w,w_{0},r). \tag{6}\] We start with the following well-known elementary but useful lemma, which is proved in the supplemental. Also see Xing et al. (2021); Javanmard and Soltanolkotabi (2022) for the special case of Euclidean-norm attacks.

**Lemma 2.1**.: _Recall that \(c_{0}=\sqrt{2/\pi}\), then for any \(w\in\mathbb{R}^{d}\) and \(r\geq 0\), it holds that_ \[\begin{split} E^{\|\cdot\|}(w,w_{0},r)=\|w-w_{0}\|_{\Sigma}^{2}+r^{2}\|w\|_{\star}^{2}\\ \qquad\qquad\qquad\qquad\qquad\qquad+2c_{0}r\|w-w_{0}\|_{\Sigma}\|w\|_{\star}.\end{split} \tag{7}\]

The mysterious constant \(c_{0}=\sqrt{2/\pi}\) in Lemma 2.1 corresponds to the expected absolute value of a standard Gaussian random variable. In order to obtain a predictor robust to adversarial attacks, one aims at minimizing the adversarial risk introduced in (4). However, the objective function of the problem, even in the linear setting (7), is rather complicated to optimize due to its non-convexity.

## 3 A Proxy for Adversarial Risk

The following lemma will be one of the main workhorses in subsequent results, as it allows us to replace the adversarial risk functional \(E\) with a more tractable proxy \(\widetilde{E}\).

**Lemma 3.1**.: _For any \(w\in\mathbb{R}^{d}\) and \(r\geq 0\), it holds that_ \[E^{\|\cdot\|}(w,w_{0},r)\leq\widetilde{E}^{\|\cdot\|}(w,w_{0},r)\leq\alpha\cdot E^{\|\cdot\|}(w,w_{0},r), \tag{8}\] _where \(\alpha:=2/(1+\sqrt{2/\pi})\approx 1.1124\) and_ \[\widetilde{E}^{\|\cdot\|}(w,w_{0},r):=(\|w-w_{0}\|_{\Sigma}+r\|w\|_{\star})^{2}. \tag{9}\]

The result is proved in the appendix. Since \(\alpha\approx 1.1124\), the above approximation would allow us to get roughly \(90\%\)-optimality in the adversarial risk by minimizing the (much simpler) proxy function \(w\mapsto\widetilde{E}^{\|\cdot\|}(w,w_{0},r)\) instead. This will precisely be the focus of the next sections. We also denote by \(\widetilde{E}^{\|\cdot\|}_{opt}(w_{0},r)\) the smallest possible value of the adversarial risk proxy \(\widetilde{E}^{\|\cdot\|}(\cdot,w_{0},r)\).

## 4 Gradient-Descent for Linear Regression

While the optimization of adversarial risk can be complex, the minimization of ordinary risk can be obtained by a simple gradient-descent.
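Before turning to gradient-descent, here is a quick numerical sanity check of Lemma 2.1 and the sandwich bound (8). The sketch below uses an arbitrary problem instance (dimension, covariance, and attack radius are illustrative choices) with Euclidean attacks, for which the dual norm is the Euclidean norm itself, and exploits the fact that the inner supremum has the closed form \(\sup_{\|\delta\|\leq r}(a+\delta^{\top}w)^{2}=(|a|+r\|w\|_{\star})^{2}\):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 5, 0.3
Sigma = np.diag(rng.exponential(size=d))   # feature covariance
w0 = rng.normal(size=d)                    # generative model
w = rng.normal(size=d)                     # candidate model

def sigma_norm(v):
    return np.sqrt(v @ Sigma @ v)

dual = np.linalg.norm(w)                   # ||w||_* for Euclidean attacks
c0 = np.sqrt(2 / np.pi)

# closed form of Lemma 2.1
E_closed = (sigma_norm(w - w0) ** 2 + r ** 2 * dual ** 2
            + 2 * c0 * r * sigma_norm(w - w0) * dual)

# Monte Carlo estimate using the exact inner supremum
x = rng.multivariate_normal(np.zeros(d), Sigma, size=200_000)
E_mc = np.mean((np.abs(x @ (w - w0)) + r * dual) ** 2)

# proxy of Lemma 3.1 and the sandwich bound (8)
E_proxy = (sigma_norm(w - w0) + r * dual) ** 2
alpha = 2 / (1 + c0)
print(f"closed form {E_closed:.4f}  vs  Monte Carlo {E_mc:.4f}")
assert E_closed <= E_proxy <= alpha * E_closed + 1e-9
```

The Monte Carlo value should agree with the closed form to within sampling noise, and the assertion checks the deterministic inequalities \(E\leq\widetilde{E}\leq\alpha E\).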
When applying a vanilla gradient-descent (GD) scheme with a step-size \(\eta>0\), starting at \(w_{\eta}(0):=0_{d}\), to the ordinary risk defined in (5), one obtains at each iteration \(t\geq 1\) the following updates: \[\begin{split} w_{\eta}(t,w_{0})&:=w_{\eta}(t-1,w_{0})-\eta\nabla_{w}E(w_{\eta}(t-1,w_{0}),w_{0})\\ &=(I_{d}-(I_{d}-\eta\Sigma)^{t})w_{0}\end{split} \tag{10}\] This scheme can be seen as a discrete approximation of the gradient flow induced by the following ODE: \[\left\{\begin{array}{l}\dot{w}(t)=-\Sigma(w(t)-w_{0})\\ w(0)=0_{d}\end{array}\right.\] which has a closed-form solution given by \[w(t,w_{0}):=(I_{d}-\exp(-t\Sigma))w_{0}. \tag{11}\] Our goal is to evaluate the robustness of the predictors obtained along the gradient descent path. We consider the continuous-time GD, as the analysis of discrete-time GD is analogous due to the absence of noise: the former is an infinitely small step-size \(\eta\) limit of the latter (Ali et al., 2019, 2020). In the following, by early-stopped GD we mean any predictor (indexed by training time \(t\)) obtained along the path of the gradient flow. Observe that such predictors always satisfy \[E^{\|\cdot\|_{2}}(w(t,w_{0}),w_{0},r)\geq E^{\|\cdot\|_{2}}_{opt}(w_{0},r)\] for any \(t\geq 0\). In particular, we will focus on the one that minimizes the adversarial risk at test time, that is \(\inf_{t\geq 0}E^{\|\cdot\|_{2}}(w(t,w_{0}),w_{0},r)\).

### Euclidean Attacks: Almost-optimal Robustness

Here we consider Euclidean attacks, meaning that the attacker's norm is \(\|\cdot\|=\|\cdot\|_{\star}=\|\cdot\|_{2}\). In the following proposition, we first characterize the non-robustness of the generative model \(w_{0}\).

**Proposition 4.1**.: _If \(r\leq\sqrt{2/\pi}\frac{\|w_{0}\|_{2}}{\|w_{0}\|_{\Sigma^{-1}}}\), then we have that_ \[E^{\|\cdot\|_{2}}(w_{0},w_{0},r)=E^{\|\cdot\|_{2}}_{opt}(w_{0},r),\] _and as soon as \(r>\sqrt{2/\pi}\frac{\|w_{0}\|_{2}}{\|w_{0}\|_{\Sigma^{-1}}}\), we have_ \[E^{\|\cdot\|_{2}}(w_{0},w_{0},r)/E^{\|\cdot\|_{2}}_{opt}(w_{0},r)\geq\frac{r^{2}\|w_{0}\|_{2}^{2}}{\|w_{0}\|_{\Sigma}^{2}}.\]

It is important to notice that the generative model \(w_{0}\) can be optimal w.r.t. both the standard risk \(E(\cdot,w_{0})\) and the adversarial risk \(E^{\|\cdot\|_{2}}(\cdot,w_{0},r)\) for Euclidean attacks as soon as \(r\) is sufficiently small. However, as \(r\) increases, its adversarial risk becomes arbitrarily large. Therefore, applying a GD scheme until convergence may lead to predictors which are not robust to adversarial attacks, even in the Euclidean setting. In the next proposition we investigate the robustness of the predictors obtained along the path of the GD scheme \((w(t,w_{0}))_{t\geq 0}\) and we show that for any attack \(r\geq 0\), this path contains an optimally robust predictor (up to an absolute constant \(\beta:=1.6862\)).

**Proposition 4.2**.: _The following hold. If \(r\leq\sqrt{2/\pi}\frac{\|w_{0}\|_{2}}{\|w_{0}\|_{\Sigma^{-1}}}\) or \(r\geq\sqrt{\pi/2}\frac{\|w_{0}\|_{\Sigma^{2}}}{\|w_{0}\|_{\Sigma}}\), then_ \[\inf_{t\geq 0}E^{\|\cdot\|_{2}}(w(t,w_{0}),w_{0},r)=E^{\|\cdot\|_{2}}_{opt}(w_{0},r).
\tag{12}\] _If \(\sqrt{2/\pi}\frac{\|w_{0}\|_{2}}{\|w_{0}\|_{\Sigma^{-1}}}<r<\sqrt{\pi/2}\frac{\|w_{0}\|_{\Sigma^{2}}}{\|w_{0}\|_{\Sigma}}\), we have that_ \[\inf_{t\geq 0}E^{\|\cdot\|_{2}}(w(t,w_{0}),w_{0},r)\leq\beta E^{\|\cdot\|_{2}}_{opt}(w_{0},r). \tag{13}\]

Therefore the early-stopped vanilla GD scheme is able to capture an almost-optimally robust predictor for any Euclidean attack of radius \(r\) (see Figure 1 for an illustration). In our proof of Proposition 4.2, we show that GD early-stopped at time \(t\) has the same adversarial risk (up to a multiplicative constant) as a ridge estimator with regularization parameter \(\lambda\propto 1/t\). The result then follows from (Xing et al., 2021), where it was shown that the minimizer of the adversarial risk under Euclidean attacks is a ridge estimator. In the isotropic case, i.e. when \(\Sigma=I_{d}\), early-stopped GD even achieves the exact optimal adversarial risk.

**Proposition 4.3**.: _Assume that \(\Sigma=I_{d}\), then for all \(r\geq 0\), we have \(\inf_{t\geq 0}E^{\|\cdot\|_{2}}(w(t,w_{0}),w_{0},r)=E^{\|\cdot\|_{2}}_{opt}(w_{0},r)\)._

However, such results are only possible when the attacks are Euclidean. In the following section, we show that vanilla GD, along its whole optimization path, can be arbitrarily sub-optimal in terms of adversarial risk for Mahalanobis attacks.

### Mahalanobis Attacks: Sub-optimality of Gradient-Descent

Let us consider attacks w.r.t. the Mahalanobis norm induced by a symmetric positive definite matrix \(B\), i.e. we consider the case where \[\|\cdot\|=\|\cdot\|_{B}:=\|B^{1/2}\cdot\|_{2}.\] In the next proposition, we present a simple case where GD fails to be adversarially robust under such attacks.

**Proposition 4.4**.: _Let \(d=2\), \(\Sigma=I_{d}\) and for any integer \(m\geq 1\), let us consider the following positive-definite matrix_ \[B=B(m)=\begin{pmatrix}1/m&0\\ 0&m\end{pmatrix}. \tag{14}\] _Also, consider the following choice of generative model \(w_{0}=w_{0}(m)=(1/\sqrt{m},1)\). Then, for any fixed \(r>0\),_ \[\lim_{m\rightarrow+\infty}\frac{\inf_{t\geq 0}E^{\|\cdot\|_{B}}(w(t,w_{0}),w_{0},r)}{E^{\|\cdot\|_{B}}_{opt}(w_{0},r)}=+\infty.\]

Therefore under Mahalanobis attacks, any predictor obtained along the path of GD can be arbitrarily sub-optimal. To alleviate this issue, we propose in the next section a modified version of the vanilla GD scheme which can handle such attacks and even more general ones.

## 5 An Adapted Gradient-Descent: GD+

In fact, it is possible to obtain an almost optimally robust predictor for Mahalanobis attacks using a modified version of the GD scheme, termed GD+. Let \(M\in\mathcal{M}_{d}(\mathbb{R})\) be an arbitrary invertible matrix. In order to build an almost-optimally robust predictor for such attacks, we propose to apply a GD scheme on a transformed version of the data. More precisely, we propose to apply a GD scheme to the following objective function: \[E_{M}(w,w_{0}):=\mathbb{E}_{(x,y)\sim P_{xy}}(w^{\top}Mx-y)^{2}\] which leads to the following optimization dynamics: \[w^{M}(t,w_{0}):=(I_{d}-e^{-tM\Sigma M^{\top}})(M^{-1})^{\top}w_{0}. \tag{15}\] This transformation amounts to applying feature-dependent gradient steps, determined by \(M\), to the classical GD scheme. In the following proposition, we show that when \(M\) is adapted to the attack, early-stopped GD+ is optimally robust (up to an absolute constant).

Figure 1: We consider a \(d=2\) dimensional case where \(w_{0}\) and \(\Sigma\) are sampled according to Gaussian and exponential distributions, respectively. We plot the adversarial risk under Euclidean attacks of the optimal predictor along the GD path (**GD: adv. risk**) as well as its standard risk (**GD: standard risk**), and we compare them to the risks of the optimal predictor (**Opt**) solving the adversarial problem, when varying the attack strength \(r\).

Figure 2: Here we plot the example studied in Proposition 4.4 for a fixed radius \(r=1\) when varying \(m\). **GD+** represents the modified GD scheme with \(M=B^{1/2}\) where \(B\) is defined in Eq. (14), **GD** represents the vanilla GD, and **Opt** is the optimal predictor. Observe that the optimal adversarial risk goes to \(0\) as \(m\) goes to \(+\infty\), while the adversarial risk of the vanilla GD converges towards a constant.
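The following numpy sketch reproduces the flavor of the comparison in Figure 2 (illustrative only: it scans a finite time grid rather than optimizing over \(t\) exactly). On the example of Proposition 4.4, the best predictor along the vanilla GD path stays bounded away from optimal as \(m\) grows, while the GD+ path with \(M=B^{1/2}\), which here reduces to the componentwise shrinkage \((1-e^{-tb_{j}})(w_{0})_{j}\), does not:

```python
import numpy as np

c0 = np.sqrt(2 / np.pi)

def adv_risk(w, w0, Sigma, Binv, r):
    # closed form of Lemma 2.1; the dual of ||.||_B is ||.||_{B^{-1}}
    gap = np.sqrt((w - w0) @ Sigma @ (w - w0))
    dual = np.sqrt(w @ Binv @ w)
    return gap**2 + r**2 * dual**2 + 2 * c0 * r * gap * dual

r, ts = 1.0, np.linspace(0, 20, 2001)
for m in [1, 10, 100, 1000]:
    Sigma = np.eye(2)
    b = np.array([1 / m, m])              # B = diag(1/m, m) as in (14)
    w0 = np.array([1 / np.sqrt(m), 1.0])
    Binv = np.diag(1 / b)
    # vanilla GD path (Sigma = I_d): w(t) = (1 - e^{-t}) w0, uniform shrinkage
    gd = min(adv_risk((1 - np.exp(-t)) * w0, w0, Sigma, Binv, r) for t in ts)
    # GD+ path with M = B^{1/2}: componentwise (1 - e^{-t b_j}) (w0)_j
    gdp = min(adv_risk((1 - np.exp(-t * b)) * w0, w0, Sigma, Binv, r)
              for t in ts)
    print(f"m={m:5d}:  best GD risk {gd:.4f}   best GD+ risk {gdp:.4f}")
```

As \(m\) grows, the GD+ column decays towards zero while the GD column plateaus, matching the behavior described in the Figure 2 caption.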
**Proposition 5.1**.: _For any \(B\in\mathcal{S}^{++}_{d}(\mathbb{R})\) and \(r\geq 0\),_ \[\inf_{t\geq 0}E^{\|\cdot\|_{B}}(B^{1/2}w^{B^{1/2}}(t,w_{0}),w_{0},r)\leq\beta E^{\|\cdot\|_{B}}_{opt}(w_{0},r).\]

Therefore by choosing \(M=B^{1/2}\), GD+ is able to obtain near-optimality under \(\|\cdot\|_{B}\)-norm attacks. See Figure 2 for an illustration. Note that when \(B=B^{1/2}=I_{d}\), then \(w^{I_{d}}(t,w_{0})=w(t,w_{0})\), \(\|\cdot\|_{I_{d}}=\|\cdot\|_{2}\), and we recover as a special case our result obtained in Proposition 4.2. In the following section, we investigate the robustness of the GD+ scheme under more general attacks.

### Robustness Under General Attacks

Our goal here is to provide a simple control of the adversarial risk of GD+ under a general norm-attack \(\|\cdot\|\). For that purpose, define the _condition number_ \(\kappa^{\|\cdot\|}(M)\) of any matrix \(M\), w.r.t. the attacker's norm \(\|\cdot\|\), as \[\kappa^{\|\cdot\|}(M) :=\lambda_{max}^{\|\cdot\|}(M)/\lambda_{min}^{\|\cdot\|}(M),\,\text{where}\] \[\lambda_{max}^{\|\cdot\|}(M) :=\sup_{w\neq 0}\frac{\|Mw\|_{2}}{\|w\|_{\star}},\,\text{and}\] \[\lambda_{min}^{\|\cdot\|}(M) :=\inf_{w\neq 0}\frac{\|Mw\|_{2}}{\|w\|_{\star}}\] where the dual norm \(\|\cdot\|_{\star}\) is defined with respect to the attacker-norm \(\|\cdot\|\). We are now ready to state a general upper bound for our modified GD scheme.

**Proposition 5.2**.: _For any \(r\geq 0\), and invertible matrix \(M\in\mathcal{M}_{d}(\mathbb{R})\), it holds that_ \[\frac{\inf_{t\geq 0}E^{\|\cdot\|}(M^{\top}w^{M}(t,w_{0}),w_{0},r)}{E_{opt}^{\|\cdot\|}(w_{0},r)}\leq\beta\kappa^{\|\cdot\|}\left((M^{\top})^{-1}\right)^{2}.\]

Therefore along the path of GD+ induced by \(M\), one can find a \(\beta\kappa^{\|\cdot\|}((M^{\top})^{-1})^{2}\)-optimally robust predictor against \(\|\cdot\|\)-attacks. In particular, observe that when \(M=B^{1/2}\) and \(\|\cdot\|=\|\cdot\|_{B}\), we obtain that \(\kappa^{\|\cdot\|}((M^{\top})^{-1})=1\) and we recover as a special case the result of Proposition 5.1.

**Remark 5.1**.: _It is important to notice that \(M\) can be chosen arbitrarily, and therefore adapted to the norm of the attacks such that \(M\mapsto\kappa^{\|\cdot\|}((M^{\top})^{-1})\) is minimized. However, minimizing this quantity in general is hard due to the arbitrary choice of the norm \(\|\cdot\|\)._

In the next section, we study a specific case of GD+ and provide a sufficient condition on the model \(w_{0}\) such that this scheme is near-optimal under general norm-attacks.
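As a small numerical illustration (a sketch with arbitrary matrices; the supremum and infimum are crudely estimated by random sampling rather than computed exactly), one can check that the condition number entering Proposition 5.2 equals 1 in the matched Mahalanobis case \(M=B^{1/2}\), \(\|\cdot\|=\|\cdot\|_{B}\):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
B = A @ A.T + d * np.eye(d)            # a positive-definite attack matrix
Binv = np.linalg.inv(B)

def kappa(Mat, n_samples=20_000):
    # sampled estimate of (sup / inf) of ||Mat w||_2 / ||w||_*,
    # with ||.||_* = ||.||_{B^{-1}}, the dual of the attack norm ||.||_B
    ws = rng.normal(size=(n_samples, d))
    num = np.linalg.norm(ws @ Mat.T, axis=1)
    den = np.sqrt(np.einsum("ij,jk,ik->i", ws, Binv, ws))
    ratios = num / den
    return ratios.max() / ratios.min()

# the matrix entering Proposition 5.2 is (M^T)^{-1} = B^{-1/2} when M = B^{1/2}
evals, evecs = np.linalg.eigh(B)
B_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
print("kappa for matched M = B^{1/2}:", kappa(B_inv_sqrt))   # = 1 exactly
```

Here the ratio \(\|B^{-1/2}w\|_{2}/\|w\|_{B^{-1}}\) is identically 1, so the bound of Proposition 5.2 collapses to \(\beta\), recovering Proposition 5.1.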
### A Sufficient Condition for Optimality

We consider the general case of an arbitrary norm-attack \(\|\cdot\|\) with dual \(\|\cdot\|_{\star}\), and we focus on a very specific path induced by GD+, namely the one obtained with \(M=\Sigma^{-1/2}\). In that case, the data are normalized and the path drawn by \((Mw^{M}(t,w_{0}))_{t\geq 0}\) is in fact a uniform shrinkage of the generative model. More precisely, the predictors obtained along such a path are exactly the ones in the chord \([0,w_{0}]:=\{\gamma w_{0}\mid\gamma\in[0,1]\}\). In particular, the optimal adversarial risk achieved by this modified GD scheme is given by \[\begin{split}\inf_{t\geq 0}& E^{\|\cdot\|}(\Sigma^{-1/2}w^{\Sigma^{-1/2}}(t,w_{0}),w_{0},r)\\ &=\inf_{\gamma\in[0,1]}E^{\|\cdot\|}(\gamma w_{0},w_{0},r)\end{split} \tag{16}\] Let \(g(w_{0})\in\mathbb{R}^{d}\) be a subgradient of \(\|\cdot\|_{\star}\) at \(w_{0}\). For example, in the case of \(\ell_{\infty}\)-norm-attacks, one may take \(g(w_{0})=(\text{sign}(w_{0,1}),\ldots,\text{sign}(w_{0,d}))\), with \(\text{sign}(0):=0\). In the case of a Mahalanobis attack where \(\|\cdot\|=\|\cdot\|_{B}\) for some positive-definite matrix \(B\), one can take \(g(w_{0})=B^{-1}w_{0}/\|w_{0}\|_{B^{-1}}\) with \(g(0)=0\). We can now state our sufficient condition for near-optimality of GD+.

**Condition 5.1**.: _The subgradient \(g(w_{0})\in\mathbb{R}^{d}\) can be chosen such that_ \[\frac{\|g(w_{0})\|\|w_{0}\|_{\star}}{\|g(w_{0})\|_{\Sigma^{-1}}\|w_{0}\|_{\Sigma}}\geq c,\] _where \(c\) is a positive absolute constant._

The above condition is sufficient in order to obtain near-optimality of GD+, as we show in the next result (see Figure 3 for an illustration).

**Theorem 5.1**.: _Suppose Condition 5.1 holds. Then, for any positive \(r\), it holds for \(M=\Sigma^{-1/2}\) that_ \[\frac{\inf_{t\geq 0}E^{\|\cdot\|}(M^{\top}w^{M}(t,w_{0}),w_{0},r)}{E_{opt}^{\|\cdot\|}(w_{0},r)}\leq(1\lor 1/c^{2})\alpha.\]

In particular, for the case of \(\ell_{\infty}\)-norm-attacks, we have the following corollary.

**Corollary 5.1**.: _Consider the case of \(\ell_{\infty}\)-norm-attacks. If there exists an absolute constant \(c>0\) such that \(\|w_{0}\|_{1}\geq c\sqrt{d}\|w_{0}\|_{2}\), then with \(M=\Sigma^{-1/2}\) it holds that_ \[\frac{\inf_{t\geq 0}E^{\|\cdot\|_{\infty}}(M^{\top}w^{M}(t,w_{0}),w_{0},r)}{E_{opt}^{\|\cdot\|_{\infty}}(w_{0},r)}\leq\left(1\lor\frac{\kappa^{\|\cdot\|_{2}}(\Sigma)}{c^{2}}\right)\alpha.\]

For example, when \(w_{0}=(1,\ldots,1)\) (or, similarly, random \(w_{0}\sim N(0,I_{d})\)), one can take \(c=1\), and observe that \(\|w_{0}\|_{1}\gtrsim d\), \(\|w_{0}\|_{2}\asymp\sqrt{d}\), and so the bound in Corollary 5.1 holds. Condition 5.1, which ensures near-optimality of GD+ in Theorem 5.1, cannot be removed. Indeed, we exhibit a simple case where the uniform shrinkage strategy fails miserably to find robust models even when they exist.

**Proposition 5.3**.: _Let \(\Sigma=I_{d}\), then it is possible to construct \(w_{0}\in\mathbb{R}^{d}\) and \(r>0\) such that in the limit \(d\to\infty\), it holds that \(r\to 0\), \(r\sqrt{d}\to+\infty\), and_ \[\frac{\inf_{t\geq 0}E^{\|\cdot\|_{\infty}}(w(t,w_{0}),w_{0},r)}{E_{opt}^{\|\cdot\|_{\infty}}(w_{0},r)}\to+\infty.\]

Therefore the uniform shrinkage strategy induced by GD+ (which here reduces to vanilla GD since \(\Sigma=I_{d}\)) is not adapted for all scenarios; it may fail to find a robust model even when one exists.
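To see the shrinkage viewpoint (16) in action, the small sketch below (an arbitrary illustrative problem instance) scans \(\gamma\in[0,1]\) and evaluates the closed-form adversarial risk of \(\gamma w_{0}\) under \(\ell_{\infty}\)-attacks, whose dual norm is \(\ell_{1}\):

```python
import numpy as np

rng = np.random.default_rng(2)
d, r = 10, 0.2
Sigma = np.diag(rng.exponential(size=d))
w0 = rng.normal(size=d)
c0 = np.sqrt(2 / np.pi)

def adv_risk_linf(w):
    # Lemma 2.1 with ||.||_* = ||.||_1, the dual of the l_inf attack norm
    gap = np.sqrt((w - w0) @ Sigma @ (w - w0))
    dual = np.abs(w).sum()
    return gap**2 + r**2 * dual**2 + 2 * c0 * r * gap * dual

gammas = np.linspace(0, 1, 1001)
risks = [adv_risk_linf(g * w0) for g in gammas]
g_best = gammas[int(np.argmin(risks))]
print(f"best uniform shrinkage gamma = {g_best:.3f}, risk = {min(risks):.4f}")
print(f"risk of w0 itself (gamma = 1): {risks[-1]:.4f}")
```

The best \(\gamma\) is typically strictly below 1, which is the early-stopping effect; Proposition 5.3 says that for some instances even this best \(\gamma\) remains far from the unconstrained optimum.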
In the next section, we restrict ourselves to the case of \(\ell_{p}\)-norm-attacks for \(p\in[1,+\infty]\) and show that GD+ is able to reach optimal robustness as soon as the data has uncorrelated features.

### Optimality Under \(\ell_{p}\)-norm Attacks

Now, let \(p\in[1,+\infty]\) and consider attacks w.r.t. the \(\ell_{p}\)-norm, i.e. the attack strength is measured w.r.t. the norm \(\|\cdot\|=\|\cdot\|_{p}\), with dual norm \(\|\cdot\|_{\star}=\|\cdot\|_{q}\), where \(q\in[1,+\infty]\) is the harmonic conjugate of \(p\). Popular examples in the literature are \(p=2\) (corresponding to Euclidean attacks, considered in Section 4.1) and \(p=\infty\). In this section we assume that \(\Sigma\) is a diagonal positive-definite matrix. This assumption reflects the fact that both norms \(\|\cdot\|_{\Sigma}\) and \(\|\cdot\|_{q}\) act on the same coordinate system. When these two norms are aligned, we show in the next theorem that the minimizer of the proxy introduced in Eq. (9) is in fact a non-uniform shrinkage of \(w_{0}\) which can be recovered by GD+. An illustration of the result is provided in Figure 4.

**Theorem 5.2**.: _Let \(\Sigma\) be any positive-definite diagonal matrix and \(p\in[1,+\infty]\), then we have_ \[\inf_{M\in\mathcal{M}_{d}(\mathbb{R}),t\geq 0}\frac{E^{\|\cdot\|_{p}}(M^{\top}w^{M}(t,w_{0}),w_{0},r)}{E^{\|\cdot\|_{p}}_{opt}(w_{0},r)}\leq\alpha.\]

Therefore GD+ is able to reach near-optimality in terms of adversarial risk for any \(\ell_{p}\)-attack with \(p\in[1,+\infty]\), as soon as \(\|\cdot\|_{p}\) and \(\|\cdot\|_{\Sigma}\) are aligned. Applying a GD+ scheme in practice might be difficult for general attacks, as the choice of the transformation \(M\) must be adapted accordingly. To alleviate this issue, we propose in the next section a simple and tractable two-stage estimator able to reach near-optimality for general attacks.

## 6 Efficient Algorithms for Attacks in General Norms

We propose a simple tractable estimator whose adversarial risk is optimal up to within a multiplicative constant of 1.1124. Here, we drop the assumption of infinite training data \(n=\infty\). Thus, the estimators are functions of the finite training dataset \(D_{n}:=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\), generated according to (3).

### A Two-Stage Estimator and its Statistical Analysis

Consider any vector \(\widehat{w}\) which minimizes the adversarial risk proxy \(w\mapsto\widetilde{E}^{\|\cdot\|}(w,w_{0},r)\) defined in (9). Note that, apart from its explicit dependence on the generative model \(w_{0}\), \(\widetilde{E}^{\|\cdot\|}\) also depends on the feature covariance matrix \(\Sigma\). However, we assume that neither \(w_{0}\) nor \(\Sigma\) is known beforehand; both have to be estimated from the finite training dataset \(\mathcal{D}_{n}\). Thus, we propose a two-stage estimator described below in Algorithm 1.

```
1:Stage 1: Compute consistent estimators \(\widehat{w}_{0}\) and \(\widehat{\Sigma}\) of \(w_{0}\) and \(\Sigma\), respectively, from the data \(\mathcal{D}_{n}\).
2:Stage 2: Compute \(\widehat{w}\) which minimizes the plug-in adversarial risk proxy \(w\mapsto(\|w-\widehat{w}_{0}\|_{\widehat{\Sigma}}+r\|w\|_{\star})^{2}\), i.e. (9) with \((w_{0},\Sigma)\) replaced by \((\widehat{w}_{0},\widehat{\Sigma})\). See Algorithms 2 and 3 for implementations of this step.
```
**Algorithm 1** Proposed Two-Stage Estimator.

**Stage 1** of Algorithm 1 can be implemented using off-the-shelf estimators which adapt to the structural assumptions on \(w_{0}\) and \(\Sigma\) (sparsity, etc.).
Later, we will provide simple tractable algorithms for implementing **Stage 2**. Note that **Stage 2** implicitly requires the knowledge of the attacker-norm, as it aims at minimizing \(\widetilde{E}^{\|\cdot\|}\).

Figure 3: We consider the case exhibited in Corollary 5.1 when \(d=2\), \(\Sigma=I_{d}\) and \(w_{0}=(1,1)\) under \(\ell_{\infty}\)-attacks. We plot the adversarial as well as the standard risks for both the optimal shrinkage predictor and the optimal predictor of the adversarial risk, when varying \(r\).

Figure 4: We consider the case where \(d=2\), \(\Sigma\) is diagonal, and \(w_{0}\) and the diagonal coefficients of \(\Sigma\) are sampled according to Gaussian and exponential distributions, respectively. The choice of \(M\) is obtained by fine-tuning GD+. We compare the adversarial and standard risks under \(\ell_{\infty}\)-attacks of the optimal predictor in the GD+ path with those of the optimally robust predictor, when varying the radius \(r\).

### Consistency of Proposed Two-Stage Estimator

We now establish the consistency of our proposed two-stage estimator \(\widehat{w}\) computed by Algorithm 1. Let \(\widehat{\Sigma}\) be an operator-norm consistent data-driven minimax estimator for \(\Sigma\), and let \(\widehat{w}_{0}\) be a consistent estimator of the generative model \(w_{0}\). Define error terms \[e_{1}:=\|\widehat{w}_{0}-w_{0}\|_{2},\,e_{2}:=\|\widehat{\Sigma}-\Sigma\|_{op}. \tag{17}\] We are now ready to state the adversarial risk-consistency result for our proposed two-stage estimator (Algorithm 1).

**Theorem 6.1**.: _For all \(r\geq 0\), it holds that_ \[\widetilde{E}^{\|\cdot\|}_{opt}(w_{0},r) \leq\widetilde{E}^{\|\cdot\|}(\widehat{w},w_{0},r)\leq\widetilde{E}^{\|\cdot\|}_{opt}(w_{0},r)+\Delta,\] \[E^{\|\cdot\|}_{opt}(w_{0},r) \leq E^{\|\cdot\|}(\widehat{w},w_{0},r)\leq\alpha E^{\|\cdot\|}_{opt}(w_{0},r)+\Delta,\] _where \(\alpha:=2/(1+c_{0})\approx 1.1124\), \(\Delta=O(e_{1}^{2}+e_{2}^{2})\), where the hidden constant in the big-O is of order \(\max(\|\Sigma\|_{op}^{2},\|w_{0}\|_{\Sigma}^{2})\)._

Thus, our proposed two-stage estimator is robust-optimal up to within a multiplicative factor \(2/(1+c_{0})\approx 1.1124\), and an additive term \(O(e_{1}^{2}+e_{2}^{2})\) which is due to the estimation error of \(w_{0}\) and \(\Sigma\) from training data. Note that if we assume that the covariance matrix \(\Sigma\) of the features is known, or equivalently that we have access to an unlimited supply of unlabelled data from \(N(0,\Sigma)\), then we effectively have \(e_{2}=0\). In this case, the statistical error of our proposed estimator \(\widehat{w}\) is dominated by the error \(e_{1}^{2}\) associated with estimating the generative model \(w_{0}\). Under sparsity assumptions, this error term is further bounded by \(\frac{\sigma_{\epsilon}^{2}s\log(ed/s)}{n}\) (thanks to the following well-known result), which tends to zero if the input dimension \(d\) does not grow much faster than the sample size \(n\).

**Proposition 6.1** ((Bickel et al., 2009; Bellec et al., 2018)).: _If \(1\leq s\leq d/2\), then under some mild technical conditions, it holds w.h.p that_ \[\inf_{\widehat{w}_{0}}\sup_{w_{0}\in B_{0}^{d}(s)}\underbrace{\|\widehat{w}_{0}-w_{0}\|_{2}}_{e_{1}}\asymp\sigma_{\epsilon}\sqrt{\frac{s\log(ed/s)}{n}}. \tag{18}\] _where \(B_{0}^{d}(s)\) is defined in Eq. (2).
Moreover, the above minimax bound is attained by the square-root Lasso estimator with tuning parameter \(\lambda\) given by \(\lambda\asymp\sqrt{\frac{\log(2d/s)}{n}}\)._

**Remark 6.1**.: _Note that, in the special case of Euclidean-norm attacks, our Theorem 6.1 (which works for all attack norms) recovers the adversarial risk-consistency result established in (Xing et al., 2021)._

### Algorithm 1: Primal-Dual Algorithm

We now devise a simple primal-dual algorithm for computing the second stage of our proposed estimator (Algorithm 1). The algorithm works for any covariance matrix \(\Sigma\) and any norm-attack with a tractable proximal operator. Let \(\widehat{w}_{0}\) and \(\widehat{\Sigma}\) be the estimates computed in **Stage 1** of Algorithm 1. Define \(K:=\widehat{\Sigma}^{1/2}\), \(a:=K\widehat{w}_{0}\), \(f(z):=\|z-a\|_{2}\), and \(g(w):=r\|w\|_{\star}\). Recall that \(\sqrt{\widetilde{E}^{\|\cdot\|}(w,w_{0},r)}=\|w-w_{0}\|_{\Sigma}+r\|w\|_{\star}\) for any model \(w\). Then, working with the plug-in quantities and "dualizing", we get that \[\begin{split}&\inf_{w\in\mathbb{R}^{d}}\|w-\widehat{w}_{0}\|_{\widehat{\Sigma}}+r\|w\|_{\star}=\inf_{w\in\mathbb{R}^{d}}f(Kw)+g(w)\\ &=\inf_{w\in\mathbb{R}^{d}}g(w)+\sup_{z\in\mathbb{R}^{d}}z^{\top}Kw-f^{\star}(z)\\ &=\inf_{w\in\mathbb{R}^{d}}\sup_{z\in\mathbb{R}^{d}}\underbrace{z^{\top}Kw-f^{\star}(z)+g(w)}_{H(w,z)},\end{split} \tag{19}\] where \(f^{\star}\) is the Fenchel-Legendre transform of \(f\). Consider the following so-called Chambolle-Pock algorithm (Chambolle and Pock, 2010) for computing a saddle-point for the function \(H\).

```
1:\(\widehat{w}_{0},\widehat{\Sigma},\eta_{1},\eta_{2},z^{(0)}=w^{(0)}=u^{(0)}=\mathbf{0}_{d}\).
2:\(z^{(t+1)}\leftarrow\mathrm{proj}_{B_{\|\cdot\|_{2}}^{d}}(z^{(t)}+\eta_{2}\widehat{\Sigma}^{1/2}(u^{(t)}-\widehat{w}_{0}))\)
3:\(w^{(t+1)}\leftarrow\mathrm{prox}_{\eta_{1}r\|\cdot\|_{\star}}(w^{(t)}-\eta_{1}\widehat{\Sigma}^{1/2}z^{(t+1)})\),
4:\(u^{(t+1)}\gets 2w^{(t+1)}-w^{(t)}\)
```
**Algorithm 2** Primal-Dual algorithm which implements **Stage 2** of Algorithm 1. Only one iteration is shown here.

Here, the \(\eta_{k}\)'s are stepsizes chosen such that \(\eta_{1}\eta_{2}\|\widehat{\Sigma}\|_{op}<1\). The nice thing here is that the projection onto the \(\ell_{2}\)-ball (line 2) admits a simple analytic formula. In the case of \(\ell_{\infty}\)-norm-attacks, line 3 corresponds to the well-known soft-thresholding operator; etc. Refer to Figures 5 & 6 for empirical illustrations of the algorithm.

Figure 5: Experiments with our two-stage estimator Algorithm 1. Here, we focus on \(\ell_{\infty}\)-norm-attacks. "1+2" means **Stage 2** of our Algorithm 1 is computed via the primal-dual algorithm (Algorithm 2), while "1+3" means it is computed via Algorithm 3. The experimental setting here is: input-dimension \(d=200\); covariance matrix \(\Sigma\) of the features = heterogeneous diagonal, with entries from the \(\mathrm{Exp}(1)\) distribution; the generative \(w_{0}\) is an \(s\)-sparse vector (with \(s=10\)), normalized so that \(\|w_{0}\|_{\Sigma}^{2}=1\). Notice how the adversarial risk of the estimator improves with the number of samples \(n\), as expected. See supplemental for details.

The following convergence result follows directly from (Chambolle and Pock, 2010).
**Proposition 6.2**.: _Algorithm 2 converges to a saddle point \((w^{(\infty)},z^{(\infty)})\) of \(H\) at an ergodic rate \(O(1/t)\)._

### Algorithm 2: Simple Thresholding-Based Algorithm in the Case of \(\ell_{\infty}\)-Norm Attacks

Now, suppose the covariance matrix of the features \(\Sigma\) is diagonal, i.e. \(\Sigma=\operatorname{diag}(\lambda_{1},\dots,\lambda_{d})\). In the case of \(\ell_{\infty}\)-attacks, we provide a much simpler and faster algorithm for computing the second stage of our proposed estimator. Indeed, for \(\ell_{\infty}\)-attacks we obtain an explicit form of the optimal solution minimizing Eq. (9), as shown in the next proposition.

**Proposition 6.3**.: _Let \(c:=\max_{1\leq j\leq d}|(w_{0})_{j}|\lambda_{j}\). There exists \(t\in[0,c]\) such that \(w(t)\in\mathbb{R}^{d}\) is a minimizer of the convex function \(w\mapsto\|w-w_{0}\|_{\Sigma}+r\|w\|_{1}\), where_ \[w(t)_{j}=\text{ST}((w_{0})_{j};rt/\lambda_{j}),\text{ for all }j\in[d]. \tag{20}\] _and \(\text{ST}(\cdot;s)\) is the soft-thresholding operator at level \(s\)._

An inspection of (20) reveals a kind of feature selection. Indeed, if the \(j\)th component of the ground-truth model \(w_{0}\) is small in the sense that \(|(w_{0})_{j}|\leq rt/\lambda_{j}\), then \(w(t)_{j}=0\), i.e. the \(j\)th component of \(w_{0}\) should be suppressed. On the other hand, if \(|(w_{0})_{j}|\geq rt/\lambda_{j}\), then \((w_{0})_{j}\) should be replaced by the translated version \((w_{0})_{j}-\operatorname{sign}((w_{0})_{j})rt/\lambda_{j}\). That is, weak components of \(w_{0}\) are suppressed, while strong components are merely shrunk. The result is Algorithm 3, a simple method for computing **Stage 2** of our proposed two-stage estimator (Algorithm 1). Refer to Figure 5 for an empirical illustration.

```
1:\(\widehat{w}_{0},\widehat{\Sigma}\)
2:Compute \(\widehat{c}=\max_{1\leq j\leq d}|(\widehat{w}_{0})_{j}|\widehat{\lambda}_{j}\).
3:For each \(t\) in a finite grid of values between \(0\) and \(\widehat{c}\), use held-out data to select the value of \(t\) for which the adversarial risk of \(\widehat{w}(t)\) is minimal, where each component of \(\widehat{w}(t)\) is given by \(\widehat{w}(t)_{j}=\text{ST}((\widehat{w}_{0})_{j};rt/\widehat{\lambda}_{j})\).
4:Return \(\widehat{w}(t)\).
```
**Algorithm 3** Non-uniform soft-thresholding which implements **Stage 2** of Algorithm 1 for diagonal covariance matrix under \(\ell_{\infty}\)-attacks.

## 7 Concluding Remarks

In our work, we have undertaken a study of the robustness of gradient-descent (GD) under test-time attacks in the context of linear regression. Our work provides a clear characterization of when GD and a modified version (GD+), with feature-dependent learning rates, can succeed to achieve the optimal adversarial risk (up to an absolute constant). This characterization highlights the effect of the covariance structure of the features and of the norm used to measure the strength of the attack. Finally, our paper proposes a statistically consistent and simple two-stage estimator which achieves optimal adversarial risk in the population regime, up to within a constant factor. Our proposed estimator adapts to attacks w.r.t. general norms, and to the covariance structure of the features.

Figure 6: We compare our proposed two-stage estimator given in Alg. 1 with the one proposed in Xing et al. (2021), in the specific setting of Euclidean attacks, when varying the sample size \(n\) and attack strength \(r\).
Here, **Stage 2** of our two-stage estimator is computed via Alg. 2. Dashed and plain lines respectively represent the standard and adversarial risks. We see that, though more general, our algorithm recovers performance similar to that of Xing et al. (2021) in the special case of Euclidean attacks.
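For concreteness, here is a minimal numpy sketch of the primal-dual iteration of Algorithm 2, specialized to \(\ell_{\infty}\)-attacks so that the prox of \(g=r\|\cdot\|_{1}\) is soft-thresholding (also the operator used by Algorithm 3). The problem instance, step sizes, and iteration count are illustrative choices, not the paper's experimental settings:

```python
import numpy as np

rng = np.random.default_rng(3)
d, r, T = 20, 0.1, 5000
Sigma_hat = np.diag(rng.exponential(size=d))   # plug-in covariance estimate
w0_hat = rng.normal(size=d)                    # plug-in model estimate
K = np.sqrt(Sigma_hat)                         # Sigma_hat^{1/2} (diagonal case)

def soft_threshold(v, s):
    # prox of s * ||.||_1: the ST(.; s) operator of Proposition 6.3
    return np.sign(v) * np.maximum(np.abs(v) - s, 0.0)

def proj_l2_ball(v):
    n = np.linalg.norm(v)
    return v if n <= 1.0 else v / n

# step sizes chosen so that eta1 * eta2 * ||Sigma_hat||_op < 1
op = np.max(np.diag(Sigma_hat))
eta1 = eta2 = 0.9 / np.sqrt(op)

z = w = u = np.zeros(d)
for _ in range(T):
    z = proj_l2_ball(z + eta2 * (K @ (u - w0_hat)))          # line 2
    w_new = soft_threshold(w - eta1 * (K @ z), eta1 * r)      # line 3
    u = 2 * w_new - w                                         # line 4
    w = w_new

obj = lambda v: (np.sqrt((v - w0_hat) @ Sigma_hat @ (v - w0_hat))
                 + r * np.abs(v).sum())
print(f"proxy objective at w0_hat: {obj(w0_hat):.4f}, after CP: {obj(w):.4f}")
```

The z-update uses the closed-form prox of \(f^{\star}\), a shifted projection onto the \(\ell_{2}\)-ball; for an attack in another norm one would swap in the corresponding proximal operator on line 3, which is the flexibility that motivates the primal-dual formulation (19).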
2307.16670
Conditioning Generative Latent Optimization for Sparse-View CT Image Reconstruction
Computed Tomography (CT) is a prominent example of Imaging Inverse Problem highlighting the unrivaled performances of data-driven methods in degraded measurement setups like sparse X-ray projections. Although a significant proportion of deep learning approaches benefit from large supervised datasets, they cannot generalize to new experimental setups. In contrast, fully unsupervised techniques, most notably using score-based generative models, have recently demonstrated similar or better performances compared to supervised approaches while being flexible at test time. However, their use cases are limited as they need considerable amounts of training data to have good generalization properties. Another unsupervised approach taking advantage of the implicit natural bias of deep convolutional networks, Deep Image Prior, has recently been adapted to solve sparse CT by reparameterizing the reconstruction problem. Although this methodology does not require any training dataset, it enforces a weaker prior on the reconstructions when compared to data-driven methods. To fill the gap between these two strategies, we propose an unsupervised conditional approach to the Generative Latent Optimization framework (cGLO). Similarly to DIP, without any training dataset, cGLO benefits from the structural bias of a decoder network. However, the prior is further reinforced by the effect of a likelihood objective shared between multiple slices being reconstructed simultaneously through the same decoder network. In addition, the parameters of the decoder may be initialized on an unsupervised, and possibly very small, training dataset to enhance the reconstruction. The resulting approach is tested on full-dose sparse-view CT using multiple training dataset sizes and varying numbers of viewing angles.
Thomas Braure, Delphine Lazaro, David Hateau, Vincent Brandon, Kévin Ginsburger
2023-07-31T13:47:33Z
http://arxiv.org/abs/2307.16670v3
# Conditioning Generative Latent Optimization to Solve Imaging Inverse Problems

###### Abstract

Computed Tomography (CT) is a prominent example of Imaging Inverse Problem (IIP), highlighting the unrivalled performances of data-driven methods in degraded measurement setups like sparse X-ray projections. Although a significant proportion of deep learning approaches benefit from large supervised datasets to directly map experimental measurements to medical scans, they cannot generalize to unknown acquisition setups. In contrast, fully unsupervised techniques, most notably using score-based generative models, have recently demonstrated similar or better performances compared to supervised approaches to solve IIPs while being flexible at test time regarding the imaging setup. However, their use cases are limited by two factors: (a) they need considerable amounts of training data to have good generalization properties and (b) they require a backward operator, like Filtered-Back-Projection in the case of CT, to condition the learned prior distribution of medical scans on experimental measurements. To overcome these issues, we propose an unsupervised conditional approach to the Generative Latent Optimization framework (cGLO), in which the parameters of a decoder network are initialized on an unsupervised dataset. The decoder is then used for reconstruction purposes, by performing Generative Latent Optimization with a loss function directly comparing simulated measurements from proposed reconstructions to experimental measurements. The resulting approach, tested on sparse-view CT using multiple training dataset sizes, demonstrates better reconstruction quality compared to state-of-the-art score-based strategies in most data regimes and shows an increasing performance advantage for smaller training datasets and reduced numbers of projection angles. Furthermore, cGLO does not require any backward operator and could expand use cases even to non-linear IIPs.

## 1 Introduction

Imaging Inverse Problems (IIPs) are a large and swiftly growing area of research. IIPs cover a large variety of applications, ranging from generic image processing topics such as denoising, super-resolution and deconvolution to more specific domains usually involving the characterization of a system of interest from indirect observations. In this latter case, the problem of reconstructing scans from medical imaging devices such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) offers major challenges yet to be solved. Among those, CT reconstruction is a prime example of medical IIPs. The search for reduced ionizing radiation doses for patients has led to a sparsification of X-ray projection angles and/or a lowering of X-ray intensities, resulting in noisier projection images. Using such degraded measurement setups, reconstructions obtained by conventional numerical methods, e.g. Filtered Back-Projection (FBP) [1, 125-147], are severely degraded. Recent progress has been made using deep learning approaches for CT reconstruction. However, many of these novel data-driven methods employ a supervised training setup [2, 3, 4, 5, 6, 7]. In supervised training pairs, ground truths consist of 3D volumetric scans reconstructed by conventional techniques such as the well-known FBP algorithm, using high-dose and densely sampled X-ray measurements.
To each ground truth is associated a degraded set of measurements, obtained from sparsification and/or dose reduction of the original X-ray projections. Building such supervised datasets thus implies that a fixed physical model has been employed for all training pairs, which must be identical to the one used for inference. In other words, supervised reconstruction strategies require retraining whenever the acquisition process changes. This can be problematic when the viewing angles, X-ray spectrum or beam geometry vary. To circumvent this drawback, several unsupervised strategies have recently been introduced. Current approaches dealing with ill-posed IIPs in an unsupervised way are mostly based on the use of generative models. As such, the well-known Generative Adversarial Networks (GANs) [8] are widely employed for unsupervised reconstruction tasks [9]. As a serious contender to GANs, Score-based Generative Models (SGMs) [10, 11] have gained a lot of attention in the past few years, as recent improvements have led to the generation of high-quality image samples without requiring a complex adversarial optimization. The basic principle of SGMs is to add noise of increasing intensity to data during the training process and learn to reverse this process, in order to make a generative model from a sequence of trained denoisers. Using this general strategy, two types of approaches can be distinguished: Denoising Diffusion Probabilistic Models (DDPM) [11, 12, 13] and Score Matching methods [10, 14, 15]. In particular, Score Matching has been used to perform sparse CT reconstruction [16]. This method estimates the gradient of the prior log-likelihood, also called the score function, for each level of noise. In the continuous approach [15], the discrete sequence of noise levels is replaced by a continuum of distributions progressively diffusing data points into random noise following a Stochastic Differential Equation (SDE) describing a Markovian diffusion process. By estimating the score function with a time-dependent neural network, the reverse-time SDE is approximated. Using standard iterative sampling techniques then leads to generative capability. Both GANs and SGMs are very efficient at learning unconditional prior distributions on slices of 3D reconstructed volumes. However, difficulties arise when a conditional sampling is needed to reconstruct slices from experimental X-ray projections. Various strategies have been proposed to deal with this conditional sampling issue. GAN inversion approaches [9] start from a _trained and fixed_ decoder, and aim at estimating the latent code corresponding to a given observation. The inversion process can be learning-based [17, 18], e.g. learning an encoder to invert the decoder, optimization-based, where the optimization problem is solved by finding the latent code minimizing the given objective function [19, 20], or both [21, 22, 23]. Other approaches using a parameterization of the latent space have been proposed, where the decoder and the encoder are jointly learned using an adversarial process [24, 25, 26, 27, 28, 29, 30]. Similar to GANs, conditional inference using SGMs is also quite unnatural. It is generally performed using a biased sampling of the posterior distribution to generate approximate samples from a stochastic process conditioned on experimental measurements.
For instance, in the context of sparse CT reconstruction, conditioning an SGM on X-ray projections, as in [16], involves computing the FBP on experimental measurements filled with simulated measurements from the candidate reconstruction. To the best of our knowledge, this conditioned version of an SGM (cSGM) currently achieves state-of-the-art performance for unsupervised sparse CT reconstruction. However, cSGM is restricted to IIPs for which both a forward operator and an approximate pseudo-inverse operator are available. While such an inverse operator exists for single-material CT using for instance the FBP algorithm, this approach cannot be extended to other IIPs where pseudo-inverse operators are not defined, e.g. multi-material CT. In addition to the conditional sampling issue, unsupervised generative models require large amounts of training data, which can be difficult to collect in the context of medical imaging. Furthermore, while these approaches have good generalization given infinite training data, in practice it is not clear how much data is necessary. On the other side of the data-consumption spectrum, Deep Image Prior (DIP) [31] has demonstrated that the structure of a deep convolutional network inherently captures a significant amount of low-level image statistics without any learning. In other words, it induces an implicit structural bias, i.e. a prior. DIP uses a randomly initialized U-Net-like architecture [32] and a fixed input noise. The network weights can then be optimized to solve any ill-posed IIP with known forward model, such as inpainting or super-resolution. It has been shown that this IIP reparameterization, when optimized with gradient descent, hierarchically reconstructs from low to high frequencies, i.e. converging faster towards "natural images" than noise [33, 34]. The method has recently been adapted in [35] to solve low-dose and sparse CT reconstruction. While this strategy showed good results on use cases with missing or small datasets, it imposes a rather rough prior which cannot compete with the reconstruction performances of its data-driven counterparts in degraded measurement setups [16]. Considering, on one side, data-hungry generative models offering strong priors and limited to specific IIPs, and data-free methods providing weak priors on the other side, the question of a possible trade-off between these two extremes naturally arises. As a way to alleviate this dilemma, this paper proposes a conditional version of the approach described as Generative Latent Optimization (GLO) [36]. The core idea of the original GLO technique is to use a decoder network and a set of _learnable_ unit noise vectors, i.e. latent codes, where each code is associated with a single sample of the unsupervised training dataset. During training, the gradient is back-propagated both on the weights of the decoder and on the latent codes. In this GLO framework, the manifold defined by the latent codes is not parameterized. A direct consequence is that it is not trivial to sample from it. Decent generated samples can be produced when evaluating the decoder on linear combinations of latent codes or when sampling from a single full-covariance Gaussian fitted to the latent code distribution. While these generations do not suffer from mode collapse, they are outperformed by GANs except when trained on small datasets. The vast majority of articles building on GLO are focused on improving its generative characteristics [37, 38, 39].
This paper differs from these recent works and from standard GAN inversion techniques, which focus on parameterizing the manifold entailed by the latent codes. Similar to DIP, our conditioned version of GLO (cGLO) exploits the inherent structural prior induced by the convolutional network of the decoder. However, contrary to DIP, cGLO can benefit from an unsupervised training dataset of any size to initialize its decoder weights, resulting in greatly improved reconstruction results. The more training data is used for initialization, the stronger the induced prior. Moreover, cGLO provides what we call self-regularization, by reconstructing several slices at once, i.e. regularizing with the measurements themselves. Finally, cGLO does not need a backward operator like the FBP algorithm at any point to perform a CT reconstruction. The resulting approach is unsupervised and very flexible, depending on the type and quantity of available data. As illustrated on the special case of sparse CT reconstruction, cGLO can be used as a plug-and-play reconstruction method. This article demonstrates that cGLO is a parsimonious reconstruction method that outperforms current state-of-the-art unsupervised IIP solvers in most data regimes. ## 2 Methods ### Computed Tomography (CT) Given a narrow beam of X-ray photons of energy \(E\in[0,\varepsilon]\), with a normalized dose spectrum \(S(E)\), tracing through a material \(m\) with homogeneous density distribution \(\rho_{m}\), the beam intensity decreases in accord with the Beer-Lambert law: \[\frac{I}{I_{0}}=\int_{0}^{\varepsilon}S(E)\exp\Biggl{[}-\biggl{(}\frac{\mu_{m}}{\rho_{m}}\biggr{)}_{E}\,\rho_{m}L\Biggr{]}dE \tag{1}\] where \(I_{0}\) and \(I\) are respectively the input and observed intensities, \(L\) is the beam path length and \(\mu_{m}\) the linear attenuation coefficient of material \(m\). The ratio \(\left(\frac{\mu_{m}}{\rho_{m}}\right)_{E}\) corresponds to the mass attenuation of material \(m\). Depending on the energy level \(E\), this macroscopic attenuation is obtained by adding up the effect of several absorption and scattering mechanisms, e.g. Rayleigh and Compton scattering, photoelectric absorption or pair production. In medical tomodensitometry, the quantity of interest is the heterogeneous density distribution reflecting the patient's internals. However, further assumptions are made to recover this distribution from experimental measurements: bones and other tissues are reduced to one equivalent material with water mass attenuation, and the X-ray beam is considered monoenergetic. Now, by considering a horizontal plane section through the patient body, i.e. a slice \(x\), and by moving the source and detector as indicated in figure 1, it is possible to obtain a profile \(y\), on the detector axis \(p\), for the viewing angle \(\phi\): \[y(p,\phi)=-\log\biggl{(}\frac{I}{I_{0}}\biggr{)}=\int_{L}x\,ds \tag{2}\]

Figure 1: Parallel beam geometry profile acquisition for one viewing angle.

For continuous \(p\) and \(\phi\) the profiles \(y\) may be identified with the two-dimensional Radon transform [1, 55-65], also called the forward operator \(\mathcal{T}\), of the slice \(x\), such that \(y=\mathcal{T}(x)\). However, in practical use cases, projection profiles are acquired for a set of incremental values \(\Phi=\phi_{1},...,\phi_{N}\), which constitutes a sampling of the Radon transform. Moreover, shot noise following a Poisson process \(\tau\) usually appears in experimental profiles for lower doses.
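To make the acquisition model concrete, here is a minimal numerical sketch of the sampled Radon transform with an optional shot-noise term, using a simple rotate-and-sum scheme. The function name `radon_sampled`, the box phantom and the crude Poisson noise scaling are illustrative choices of ours, not part of the paper's implementation.

```python
import numpy as np
from scipy.ndimage import rotate

def radon_sampled(x, angles_deg, noise_scale=0.0, rng=None):
    """Toy simulation of y_Phi = T_Phi(x) + tau for a parallel-beam geometry."""
    profiles = []
    for phi in angles_deg:
        # Rotate the slice so the beam direction aligns with the row axis,
        # then integrate (sum) along it to obtain one projection profile.
        rotated = rotate(x, angle=phi, reshape=False, order=1)
        profiles.append(rotated.sum(axis=0))
    y = np.stack(profiles)                       # shape: (n_angles, n_detectors)
    if noise_scale > 0.0:                        # optional shot-noise term tau
        rng = rng or np.random.default_rng(0)
        y = rng.poisson(np.clip(y, 0, None) / noise_scale) * noise_scale
    return y

angles = np.linspace(0.0, 180.0, 100, endpoint=False)  # 100 viewing angles
phantom = np.zeros((128, 128)); phantom[40:90, 50:80] = 1.0
sinogram = radon_sampled(phantom, angles)
```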
Resulting experimental measurements can then be described by: \[y_{\Phi}=\mathcal{T}_{\Phi}(x)+\tau \tag{3}\] Given a number of viewing angles close to the Nyquist sampling criterion [40, 72-74], and sufficiently low noise, the inverse of the sampled Radon transform \(\mathcal{T}_{\Phi}^{\dagger}\) can be computed with standard methods, e.g. FBP, to retrieve the slice density map without producing artefacts. By stacking several slices, the two-dimensional information may be converted to three-dimensional information. In the case of insufficient viewing angles, e.g. sparse CT, or in the presence of too much noise, e.g. low-dose CT, the inverse problem becomes ill-posed. It is then classically solved with iterative optimization algorithms [41] based on a _Maximum A Posteriori_ (MAP) formulation of the problem: \[x^{*}=\operatorname*{arg\,min}_{x}\lVert\mathcal{T}_{\Phi}(x)-y_{\Phi}\rVert_{2}^{2}+\mathcal{R}(x) \tag{4}\] where \(\mathcal{R}\) is a real valued function used to regularize the optimization process by injecting prior information on the desired output \(x^{*}\). The most common operators encountered in the literature to model this information are \(\lVert x\rVert_{1}\), \(\lVert x\rVert_{2}\) and Total Variation (TV), \(\mathrm{TV}(x)=\lVert\nabla x\rVert_{1}\). ### Conditioning a Score-Based Generative Model (cSGM) The method developed by Song et al. [15] is generative. With a dataset of i.i.d. samples \(\mathbf{x}\in X\) from an unknown distribution \(p(\mathbf{x})\), it aims at generating new data samples. To this end, it first defines a continuous diffusion process that progressively perturbs samples from \(p(\mathbf{x})\) into samples from a tractable prior distribution. It is formulated as a linear SDE as follows: \[\mathrm{d}\mathbf{x}_{t}=f(t)\mathbf{x}_{t}\mathrm{d}t+g(t)\mathrm{d}\mathbf{w}_{t} \tag{5}\] where \(t\in[0,1]\), \(\mathrm{d}t\) is an infinitesimal time step, \(f\) and \(g\) are real valued functions, \(\mathbf{w}_{t}\) is a standard Wiener process, and \(\mathbf{x}_{t}\) represents the samples through the perturbation process. The marginal probability distribution of \(\mathbf{x}_{t}\), \(p_{t}(\mathbf{x}_{t})\) can be further derived from Eq. (5), as well as the transition distribution from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{t}\), \(p_{0t}(\mathbf{x}_{t}|\mathbf{x}_{0})\). \(f\) and \(g\) are carefully designed so that, for any \(p_{0}(\mathbf{x})\equiv p(\mathbf{x})\), Eq. (5) ensures convergence at \(t=1\) towards a distribution \(p_{1}(\mathbf{x})\) close to a Gaussian noise. As demonstrated by Anderson [42], the reverse of the diffusion process in Eq. (5) is also a diffusion process, which can be written as the following reverse-time SDE: \[\mathrm{d}\mathbf{x}_{t}=[f(t)\mathbf{x}_{t}-g(t)^{2}\nabla_{\mathbf{x}_{t}}\mathrm{log}p_{t}(\mathbf{x}_{t})]\mathrm{d}t+g(t)\mathrm{d}\overline{\mathbf{w}}_{t} \tag{6}\] where \(\overline{\mathbf{w}}_{t}\) is a standard Wiener process in the reverse-time direction, and \(\mathrm{d}t\) indicates an infinitesimal negative time step. Solving Eq. (6) from \(t=1\) to \(t=0\) corresponds to a continuous denoising, yielding data samples \(\mathbf{x}_{0}\sim p_{0}(\mathbf{x})\equiv p(\mathbf{x})\) from noise samples [15]. However, it requires the score function \(\nabla_{\mathbf{x}_{t}}\mathrm{log}p_{t}(\mathbf{x}_{t})\) of \(p_{t}(\mathbf{x}_{t})\). Using a time-dependent neural network \(s_{\theta}(\mathbf{x},t)\) called the score model, it is learned on \(X\) with denoising score matching.
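As an illustration of how Eq. (6) is used once a score model is available, the following sketch discretizes the reverse-time SDE with an Euler-Maruyama loop. The variance-exploding choice \(f(t)=0\) with geometric noise scale is an assumption made for concreteness (the paper does not fix \(f\) and \(g\) here), and `score` stands in for the trained network \(s_{\theta^{*}}\).

```python
import numpy as np

def sigma(t, smin=0.01, smax=50.0):
    return smin * (smax / smin) ** t          # geometric noise schedule

def g(t, smin=0.01, smax=50.0):
    # diffusion coefficient matching the variance-exploding schedule
    return sigma(t, smin, smax) * np.sqrt(2.0 * np.log(smax / smin))

def reverse_sde_sample(score, shape, n_steps=500, rng=None):
    rng = rng or np.random.default_rng(0)
    ts = np.linspace(1.0, 1e-3, n_steps)
    dt = ts[1] - ts[0]                        # negative time step
    x = rng.normal(0.0, sigma(1.0), size=shape)   # start from the prior
    for t in ts:
        drift = -g(t) ** 2 * score(x, t)      # f(t) = 0 for this choice
        x = x + drift * dt + g(t) * np.sqrt(-dt) * rng.normal(size=shape)
    return x
```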
As described in [43] and [15], the score model is trained by approximating the known score functions of the transition distributions from \(\mathbf{x}_{0}\) to \(\mathbf{x}_{t}\), \(p_{0t}(\mathbf{x}_{t}|\mathbf{x}_{0})\). Optimized parameters \(\theta^{*}\) ensure that \(s_{\theta^{*}}(\mathbf{x},t)\approx\nabla_{\mathbf{x}_{t}}\mathrm{log}p_{t}(\mathbf{x}_{t})\) according to denoising score matching theory. The score model can then be plugged into Eq. (6), yielding: \[\mathrm{d}\mathbf{x}_{t}=[f(t)\mathbf{x}_{t}-g(t)^{2}s_{\theta^{*}}(\mathbf{x},t)]\mathrm{d}t+g(t)\mathrm{d}\overline{\mathbf{w}}_{t} \tag{7}\] From here, one can sample from \(p_{0}(\mathbf{x})\) using a sequence of time steps \(0=t_{0}<t_{1}<...<t_{N}=1\) with standard iterative sampling techniques: \[\mathbf{x}_{t_{i-1}}=\boldsymbol{h}(\mathbf{x}_{t_{i}},\mathbf{z}_{i},s_{\theta^{*}}(\mathbf{x}_{t_{i}},t_{i})) \tag{8}\] where \(\mathbf{z}_{i}\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) and \(\mathbf{h}\) denotes the iterative function related to the chosen sampling algorithm, such as annealed Langevin dynamics [10] or Predictor-Corrector samplers [15]. Although it is natural to _unconditionally_ sample from Eq. (7), conditioning the sampling process on measurements, i.e. being able to sample from the posterior distribution \(p(\mathbf{x}|\mathbf{y})\), is not trivial. Building on the unconditional case, Song et al. [16] introduce perturbed measurements \(\mathbf{y}_{t}\) given the experimental observation \(\mathbf{y}\). They design an iterative sampling algorithm that promotes, at each time-step \(t_{i}\), consistency of _conditioned_ samples \(\mathbf{x}^{\prime}_{t_{i}}\) simultaneously with the perturbed measurements \(\mathbf{y}_{t_{i}}\) and the _unconditioned_ samples \(\mathbf{x}_{t_{i}}\) by solving a proximal optimization step: \[\mathbf{x}^{\prime}_{t_{i}}=\operatorname*{arg\,min}_{w}\{(1-\lambda)\|w-\mathbf{x}_{t_{i}}\|^{2}_{\mathcal{T}_{\phi_{e}}}+\lambda\min_{u}\|w-u\|^{2}_{\mathcal{T}_{\phi_{e}}}\}\quad s.t.\quad\mathcal{T}_{\phi_{e}}(u)=\mathbf{y}_{t_{i}} \tag{9}\] where \(\mathcal{T}_{\phi_{e}}\) is the Radon transform sampled on \(\phi_{e}\), the set of experimental viewing angles, and \(\|.\|^{2}_{\mathcal{T}}=\|\mathcal{T}(.)\|^{2}_{2}\). The hyper-parameter \(\lambda\in[0,1]\) balances consistency regarding experimental measurements (\(\lambda\to 1\)) and _unconditioned_ samples (\(\lambda\to 0\)). Song et al. [16] demonstrated that Eq. (9) has a tractable closed-form solution, such that in practice, _conditioned_ samples are computed through: \[\mathbf{x}^{\prime}_{t_{i}}=\mathcal{T}^{\dagger}_{\Phi}[P_{\phi_{e}}(\lambda\mathbf{y}_{t_{i}}+(1-\lambda)\mathcal{T}_{\phi_{e}}(\mathbf{x}_{t_{i}}))+P_{\phi_{s}}\circ\mathcal{T}_{\phi_{s}}(\mathbf{x}_{t_{i}})] \tag{10}\] where the set \(\phi_{e}\) of experimental viewing angles is completed with simulated viewing angles \(\phi_{s}\) to constitute an appropriate sampling, \(\Phi=\phi_{e}\cup\phi_{s}\), above the Nyquist criterion, so that the inverse Radon transform \(\mathcal{T}^{\dagger}_{\Phi}\) is well-defined. \(P\) denotes a profile padding operator ensuring dimensionality consistency.
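The consistency step of Eq. (10) can be summarized by the following sketch, where `radon` and `fbp` are assumed helpers for the sampled Radon transform and the filtered back-projection over the full angle set \(\Phi=\phi_{e}\cup\phi_{s}\); in this toy setting the padding operators \(P\) reduce to a simple concatenation of both profile sets.

```python
import numpy as np

def condition_sample(x_t, y_t, lam, phi_e, phi_s, radon, fbp):
    """Mix perturbed measurements with simulated ones, then back-project."""
    sino_e = lam * y_t + (1.0 - lam) * radon(x_t, phi_e)  # experimental angles
    sino_s = radon(x_t, phi_s)                            # simulated angles
    # Interleaving both profile sets plays the role of the padding operators P
    # over the full sampling Phi (ordering must match the fbp convention).
    full = np.concatenate([sino_e, sino_s], axis=0)
    return fbp(full, np.concatenate([phi_e, phi_s]))
```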
Given _conditioned_ samples \(\mathbf{x}^{\prime}_{t_{i}}\) at time-step \(t_{i}\), the same iterative sampling strategy as Eq. (8) is applied to draw _unconditioned_ samples \(\mathbf{x}_{t_{i-1}}\): \[\mathbf{x}_{t_{i-1}}=\mathbf{h}(\mathbf{x}^{\prime}_{t_{i}},\mathbf{z}_{i},s_{\theta^{*}}(\mathbf{x}_{t_{i}},t_{i})) \tag{11}\] where \(s_{\theta^{*}}(\mathbf{x},t)\) is the score model trained on \(X\) in an unsupervised manner, i.e. without assuming any measurement process. The reconstruction is ultimately computed by iterating sequentially on the steps described in Eq. (11) and Eq. (10). ### Deep Image Prior (DIP) Ulyanov et al. [31] introduced DIP to solve classic IIPs such as denoising and super-resolution. The core idea of DIP is to regularize IIPs by taking advantage of the structural bias of a U-Net [32] \(f_{\theta}\) parameterized with a set of weights \(\theta\). It operates through two mechanisms: reparameterization and early stopping (\(ES\)). For instance, in the case of denoising, the optimization problem is reparameterized as: \[x^{*}=\{f_{\theta^{*}}(z)\;\big{|}\;\theta^{*}=\operatorname*{arg\,min}_{\theta}\|f_{\theta}(z)-(x_{0}+\eta)\|^{2}_{2}\} \tag{12}\] where \(x_{0}\) is the initial image perturbed with unknown white noise \(\eta\) and \(z\) is a fixed white noise with the same dimensions as \(x_{0}\). Experiments in [31] showed that, given sufficient capacity and enough iterations of gradient descent, the randomly initialized and over-parameterized U-Net can fit the output signal almost perfectly, including the noise \(\eta\). However, Ulyanov et al. [31] demonstrated that the weight descent sequence \(\theta_{1},...,\theta_{N}\), with \(f_{\theta_{N}}\approx x_{0}+\eta\), contains an early stopping point \(\theta_{ES}\), such that \(f_{\theta_{ES}}\approx x_{0}\). This phenomenon has been proven to be a consequence of the structure of the generative network, more specifically the convolutional and upsampling layers [33, 34]. They induce a spectral bias through which the decoder learns to construct the image from low to high frequencies, meaning that with an appropriate choice of \(ES\), one can prevent the decoder from fitting the high frequency perturbations. This methodology has recently been tailored to reconstruct 3D CT volumes by Baguer et al. [35], using an adaptation of Eq. (12) further regularized with TV: \[x^{*}=\{f_{\theta^{*}}(z)\;\big{|}\;\theta^{*}=\operatorname*{arg\,min}_{\theta}\|\mathcal{T}_{\phi_{e}}\circ f_{\theta}(z)-y_{\phi_{e}}\|^{2}_{2}+\alpha\mathrm{TV}\circ f_{\theta}(z)\} \tag{13}\] where \(\mathcal{T}_{\phi_{e}}\) is the Radon transform sampled on experimental viewing angles \(\phi_{e}\), \(y_{\phi_{e}}\) are the experimental measurements and \(\alpha\) is a hyper-parameter balancing the regularization. In this formulation, the issue of finding an appropriate early stopping for denoising is replaced with the necessity to find an optimal value for \(\alpha\), depending on the ill-posedness of the IIP at hand. Further experiments in [35] showed improvements by initializing the decoder weights using the result \(x_{s}\) of a supervised method, e.g. Learned Primal-Dual reconstruction [3]: \[\theta_{0}=\operatorname*{arg\,min}_{\theta}\lVert f_{\theta}(z)-x_{s}\rVert_{2}^{2} \tag{14}\] However, replacing random weights of the decoder with such an initialization restricts the applicability of the approach to the supervised use case, i.e. fixed acquisition parameters.
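A compact PyTorch sketch of the DIP optimization of Eq. (13) is given below; `unet`, a differentiable Radon transform `radon_torch` and the measurements `y_e` are assumed to exist, and only the optimization pattern (data term plus TV, with the iteration budget playing the role of early stopping) is shown.

```python
import torch

def tv(x):
    # anisotropic total variation, TV(x) = ||grad x||_1
    return (x[..., 1:, :] - x[..., :-1, :]).abs().sum() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().sum()

def dip_reconstruct(unet, radon_torch, y_e, alpha=1e-4, n_iters=3000):
    z = torch.randn(1, 1, 320, 320)                    # fixed input noise
    opt = torch.optim.Adam(unet.parameters(), lr=1e-3)
    for _ in range(n_iters):                           # iteration budget acts
        opt.zero_grad()                                # as early stopping
        x = unet(z)
        loss = ((radon_torch(x) - y_e) ** 2).sum() + alpha * tv(x)
        loss.backward()
        opt.step()
    return unet(z).detach()
```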
### Generative Latent Optimization (GLO) Attempting to generate results competitive with samples from GANs without using an adversarial training protocol, Bojanowski et al. [36] proposed a novel approach consisting of mapping one freely _learned_ unit latent vector \(z_{i}\) to each image \(x_{i}\) of a given dataset \(X=\{x_{1},...,x_{N}\}\), through a decoder network \(f_{\theta}\) with parameters \(\theta\), e.g. a DCGAN [44]. The decoder network parameters and the latent codes \(Z=\{z_{1},...,z_{N}\}\), both randomly initialized, are jointly learned by gradient descent with the objective function: \[Z^{*},\theta^{*}=\operatorname*{arg\,min}_{Z,\theta}\frac{1}{N}\sum_{i=1}^{N}\lVert f_{\theta}(z_{i})-x_{i}\rVert_{2}^{2}\quad s.t.\quad\lVert z_{i}\rVert_{2}=1 \tag{15}\] It is mentioned in [36] that, as is conventionally done for training GANs, vectors sampled from an n-dimensional normal distribution are close to the surface of an n-sphere with radius \(\sqrt{n}\). Following this idea, during the optimization process, the latent vectors are constrained to the unit n-sphere by projection after each update. Experiments in [36] and [37] have shown that the GLO approach does not suffer from mode collapse and significantly outperforms GANs when trained on small datasets. However, like Variational Auto-Encoders (VAE) [45], the quality of generated samples deteriorates when the data variability overcomes the network capacity [46]. We also observe this behaviour in our own experiments.
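A minimal GLO training step for Eq. (15) may look as follows; `decoder`, the batch tensors and an optimizer covering both the decoder parameters and the latent code tensor are assumptions of this sketch.

```python
import torch

def glo_setup(decoder, n_images, latent_dim):
    # one learnable unit latent vector per training image
    codes = torch.nn.functional.normalize(torch.randn(n_images, latent_dim), dim=1)
    codes.requires_grad_(True)
    opt = torch.optim.Adam([{"params": decoder.parameters()},
                            {"params": [codes]}], lr=1e-3)
    return codes, opt

def glo_step(decoder, codes, batch_images, batch_idx, opt):
    opt.zero_grad()
    recon = decoder(codes[batch_idx])
    loss = ((recon - batch_images) ** 2).mean()    # l2 objective of Eq. (15)
    loss.backward()
    opt.step()
    with torch.no_grad():                          # re-project: ||z_i||_2 = 1
        codes[batch_idx] = torch.nn.functional.normalize(codes[batch_idx], dim=1)
    return loss.item()
```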
### Conditioning GLO (cGLO) The method proposed to solve IIPs in this article builds on the framework described by GLO [36]. Attempting to invert GLO, i.e. sampling or exploring the manifold entailed by latent vectors learned through Eq. (15), is not straightforward [38, 39]. The latent vectors are not uniformly distributed on the surface of the n-sphere. Furthermore, the decoder generation quality rapidly worsens when evaluated on vectors outside geodesic interpolation lines between pairs of latent vectors. In other words, the manifold on which the trained decoder can produce plausible images can be described as a complete graph, heterogeneously distributed on the surface of the unit n-sphere. Instead of inverting GLO, our method cGLO uses a reparameterization of the IIP, like DIP [31], to benefit from the structural bias of the convolutional decoder. However, the objective function described in Eq. (13) is cast into the framework of GLO [36], detailed in Eq. (15), such that the reconstruction is composed of two steps. Firstly, given the availability of previous full-dose CT scan reconstructions, i.e. a set of slices \(X=\{x_{1},...,x_{N}\}\), prior knowledge may be enforced by learning an appropriate initial set of parameters \(\theta_{0}\) for the decoder \(f_{\theta}\), in an unsupervised manner following Eq. (14) and Eq. (15): \[\theta_{0}=\operatorname*{arg\,min}_{Z,\theta}\frac{1}{N}\sum_{i=1}^{N}\lVert f_{\theta}(z_{i})-x_{i}\rVert_{2}^{2}\quad s.t.\quad\lVert z_{i}\rVert_{2}=1 \tag{16}\] Secondly, at examination time, the \(K\) slices \(\bar{X}=\{\bar{x}_{1},...,\bar{x}_{K}\}\) corresponding to experimental measurements are reconstructed together, such that the parameters \(\theta\) and the latent unit vectors \(\bar{Z}=\{\bar{z}_{1},...,\bar{z}_{K}\}\) are jointly learned through: \[\bar{Z}^{*},\theta^{*}=\operatorname*{arg\,min}_{\bar{Z},\theta}\frac{1}{K}\ \sum_{i=1}^{K}\lVert\mathcal{T}_{\phi_{e}}\circ f_{\theta}(\bar{z}_{i})-y_{i,\ \phi_{e}}\rVert_{2}^{2}\quad s.t.\quad\lVert\bar{z}_{i}\rVert_{2}=1 \tag{17}\] \[\bar{X}^{*}=f_{\theta^{*}}(\bar{Z}^{*}) \tag{18}\] where \(\mathcal{T}_{\phi_{e}}\) is the Radon transform sampled on experimental viewing angles \(\phi_{e}\) and \(y_{i,\;\phi_{e}}\) are the profiles obtained from slice \(\bar{x}_{i}\). Because the decoder is shared across the entire set of slice reconstructions \(\bar{X}^{*}\), and provided there are sufficiently many slices to reconstruct, this strategy avoids the overfitting issue of DIP. The set of parameters \(\theta_{0}\) is expected to be closer to \(\theta^{*}\) than random weights, thus easing the optimization of Eq. (17), especially when \(\phi_{e}\) is very sparse. The latent vectors from Eq. (16) are not reused, so that the set \(\bar{Z}\) of Eq. (17) is always randomly initialized. Consequently, cGLO is an unsupervised reconstruction methodology that is very flexible, as it can be adapted to various practical use cases regardless of the quantity of data at hand.
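The examination-time optimization of Eqs. (17)-(18) can be sketched as below, with a shared decoder (optionally warm-started with \(\theta_{0}\) from Eq. (16)), an assumed differentiable `radon_torch` and measurements `ys`; the learning rates (\(10^{-2}\) for codes, \(10^{-4}\) for weights) match those reported later in the implementation details.

```python
import torch

def cglo_reconstruct(decoder, radon_torch, ys, latent_dim, n_iters=2000):
    K = ys.shape[0]                               # K slices, jointly optimized
    z = torch.nn.functional.normalize(torch.randn(K, latent_dim), dim=1)
    z.requires_grad_(True)                        # latent codes start random
    opt = torch.optim.Adam([{"params": decoder.parameters(), "lr": 1e-4},
                            {"params": [z], "lr": 1e-2}])
    for _ in range(n_iters):
        opt.zero_grad()
        loss = ((radon_torch(decoder(z)) - ys) ** 2).mean()  # measurement space
        loss.backward()
        opt.step()
        with torch.no_grad():                     # unit-sphere constraint
            z.copy_(torch.nn.functional.normalize(z, dim=1))
    return decoder(z).detach()                    # X* = f_theta*(Z*)
```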
## 3 Experiments ### Datasets Experiments are conducted on two commonly used datasets which are publicly available on The Cancer Imaging Archive (TCIA) platform [47]. The Lung Image Database Consortium (LIDC) [48] image collection was collected using lung cancer screenings from 1018 patients. It consists of thoracic CT scans completed with radiologists' annotations for nodule segmentation. The Low-dose CT image and projection dataset (LDCT) [49] comprises 299 CT scans of patients' heads, chests and abdomens and their respective (full) clinical doses. In addition, the LDCT dataset also includes simulated reduced doses, obtained by Poisson noise insertion, and the location and diagnosis for positive findings. In the presented experiments, only CT scan slices from both datasets are used. Depending on the optimization process, slices are either directly compared using a pixel-wise metric, or indirectly via simulated measurements obtained from the application of the same forward operator, i.e. a full-dose parallel-beam geometry with viewing angles equally distributed across 180 degrees. Similarly to Song et al. [16], our approach treats every slice independently. Therefore, though 3D scan geometric parameters such as pixel spacing and slice thickness may vary across patients, slices are not resampled, such that 3D volumes can have different numbers of slices and spatial scales. The native resolution of slices is 512x512 pixels for both datasets, but the LIDC slices are downsized to a resolution of 320x320 pixels. For experiments to be representative of various realistic situations with sparse or abundant data, training sub-datasets consisting of portions of the LIDC and LDCT collections are prepared. Details of these sub-datasets are given in table 1.

Table 1: Sub-datasets used for training with their corresponding number of 3D scans and slices. 3D scans are always fully included.

The LIDC and the LDCT test sets consist of 5 scans, respectively totalling 791 and 1089 slices. Unless stated otherwise, Peak Signal to Noise Ratio (PSNR) and Structural SIMilarity (SSIM) metrics are estimated on the total test set where each 3D volume is reconstructed independently, meaning that 5 independent reconstructions are computed. ### Implementation Details In the presented experiments, the geometry of the decoder's latent space is set to the unit n-sphere, similarly to state-of-the-art representation learning approaches [36]. The latent space dimension is set to 320 for experiments on the LIDC dataset and 512 for the LDCT dataset. The Deep Convolutional Generative Adversarial Network (DCGAN) [44] architecture, a reference in the generative modeling literature, is used for the decoder. As illustrated in figure 2, the latent code first goes through a linear block and is mapped to a feature map of shape \(2\times 2\times\mathbf{C}\), with \(\mathbf{C}\) the desired number of channels for the first convolutional layer. From layer to layer, the input is upscaled by a factor of 2 (the last layer's upscale factor may vary to fit the desired output image resolution) and the number of channels is cut by half. Hence, the image resolution, the latent dimension and \(\mathbf{C}\) completely define the structure of the decoder. The number of input channels \(\mathbf{C}\) is set to 8192 for both experiments on the LIDC and the LDCT sub-datasets. Optimizations are conducted with the Adam algorithm [50], with parameters kept at default values. The learning rates for the latent codes and the decoder weights are both set to \(10^{-3}\) while training on CT reconstruction, i.e. the first step described in Eq. (16), and respectively set to \(10^{-2}\) and \(10^{-4}\) when reconstructing from experimental measurements, i.e. the second step as shown in Eq. (17). Although learning rates are kept constant, batch sizes are increased along specified schedules. This leads to faster training and is statistically equivalent to decreasing learning rates [51]. Finally, the optimization described in Eq. (17) is further stabilized by artificially augmenting the number of experimental measurements with a linear interpolation along the vertical axis, i.e. the axis orthogonal to the slice planes. It simulates a larger set of experimental measurements, which includes the true experimental measurements. Reconstructions are conducted using the entire augmented set, then the reconstructions corresponding to interpolated measurements are discarded. In practice, measurements are augmented by a factor of 8 for reconstructions conducted on both the LIDC and the LDCT test sets. Regarding cSGM, hyper-parameters, model architecture and other implementation details are set to the values provided by Song et al. [16] for sparse-view CT experiments.
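The decoder architecture described above can be sketched as follows. Layer details such as BatchNorm, ReLU and the final convolution are typical DCGAN choices and assumptions on our part; the loop also assumes the target resolution is reached by successive doublings from 2x2 (the paper notes the last upscale factor may vary, e.g. for 320x320 outputs).

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    def __init__(self, latent_dim=512, channels=8192, out_res=512):
        super().__init__()
        self.channels = channels
        # linear block mapping the latent code to a 2x2xC feature map
        self.fc = nn.Linear(latent_dim, channels * 2 * 2)
        layers, c, res = [], channels, 2
        while res < out_res:
            # each layer doubles the spatial size and halves the channels
            layers += [nn.ConvTranspose2d(c, c // 2, 4, stride=2, padding=1),
                       nn.BatchNorm2d(c // 2), nn.ReLU(inplace=True)]
            c, res = c // 2, res * 2
        layers += [nn.Conv2d(c, 1, 3, padding=1), nn.Tanh()]
        self.body = nn.Sequential(*layers)

    def forward(self, z):
        h = self.fc(z).view(-1, self.channels, 2, 2)
        return self.body(h)
```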
### Results In this section, our method cGLO is compared to cSGM [16], which currently achieves state-of-the-art performance for unsupervised sparse CT. The two approaches are compared on quantitative pixel-wise (PSNR) and structural (SSIM) metrics, which are given in figure 3 and detailed in Table 2 for reconstructions computed from 100 experimental viewing angles. Qualitative results consist of examples of reconstructions from cGLO and cSGM trained on the smallest sub-dataset, consisting of a 2% portion of the LIDC and the LDCT datasets. These reconstructions from the LIDC and the LDCT test sets are respectively presented in figure 4 and figure 5. Additional results and examples, for experimental setups corresponding to different combinations of training dataset sizes and numbers of viewing angles, are provided in appendix A and B. #### 3.3.1 Quantitative performance As shown in figure 3, cGLO outperforms cSGM in most data and viewing-angle regimes. cSGM achieves better median PSNR values only in scenarios involving more than 100 experimental viewing angles, when the portion of the employed sub-datasets is greater than 20%. The details of the performance of both models in this context are presented in table 2. In all other experimental setups, our method cGLO outperforms cSGM regarding median PSNR values. Furthermore, cGLO achieves higher median SSIM values in all experimental setups, even in abundant data regimes with large training sub-datasets and many experimental viewing angles. Also, the median PSNR and especially the median SSIM curves in figure 3 indicate that the performance gap, to the advantage of cGLO over cSGM, increases as the set of experimental viewing angles gets sparser and/or the training dataset gets smaller. In other words, our method is more parsimonious and more robust for solving increasingly sparse CT when compared to cSGM.

Figure 2: Generator DCGAN-like architecture; the latent dimension (512) and final resolution (512x512) correspond to the LDCT sub-dataset experiments.

#### 3.3.2 Effect of training data As illustrated in figure 3, contrary to cSGM, our model's performance plateaus as the number of experimental viewing angles increases, and even slightly worsens, regarding median PSNR values, as the training datasets become larger. Given a fixed representation capacity for our decoder architecture, i.e. a fixed number of input channels \(\mathbf{C}\), the model becomes under-parameterized after a given training dataset size is reached. Since the model parameters and the latent codes are jointly optimized in Eq. (16) with respect to a data-likelihood objective function, the model error distribution is widely spread over the entire training dataset [46]. Consequently, when the data variability overcomes the model capacity, each additional training example further deteriorates the average prediction quality. Ideally, the model capacity should be tuned to best fit each experimental setup. The experiments performed in this paper, however, demonstrate that even with a fixed decoder architecture, cGLO produces superior reconstructions for a wide range of experimental setups.

Figure 3: PSNR (left side) and SSIM (right side) median value curves corresponding to reconstructions of slices from the LIDC dataset (upper row) and the LDCT dataset (lower row) test sets, given 9, 23, 50 and 100 experimental viewing angles. Each curve is associated with one of the training sub-datasets presented in table 1.

#### 3.3.3 Reconstruction quality Figure 4 and figure 5 compare reconstructions computed with cGLO and cSGM, respectively from the LIDC and the LDCT test datasets. Reconstruction quality clearly degrades for both models as the number of experimental viewing angles diminishes. The type of degradation, however, differs between cGLO and cSGM. While cGLO reconstructions lack sharp details and get noisier for very sparse CT, cSGM tends to alter the structural integrity of the slices.
In addition, cSGM reproduces, as a texture on top of its reconstructions, the high-frequency artefacts resulting from the FBP operator when those exist in the training dataset, e.g. in the LDCT dataset. This effect is especially noticeable in the reconstructions shown in figure 9, which may be zoomed in for more details. Our model does not show this behaviour in any experiment.

\begin{table} \begin{tabular}{c|c|c c|c c} \multirow{2}{*}{Method} & \multirow{2}{*}{Data} & \multicolumn{2}{c|}{LIDC 320x320} & \multicolumn{2}{c}{LDCT 512x512} \\ \cline{3-6} & & PSNR \(\uparrow\) & SSIM \(\uparrow\) & PSNR \(\uparrow\) & SSIM \(\uparrow\) \\ \hline \hline cSGM & 2\% & 39.07 \(\pm_{0.89}\) & 0.946 \(\pm_{0.015}\) & 34.51 \(\pm_{0.53}\) & 0.906 \(\pm_{0.012}\) \\ cGLO & 2\% & **41.48 \(\pm_{1.69}\)** & **0.987 \(\pm_{0.006}\)** & **38.97 \(\pm_{1.12}\)** & **0.976 \(\pm_{0.007}\)** \\ \hline cSGM & 10\% & 41.82 \(\pm_{1.47}\) & 0.965 \(\pm_{0.015}\) & 39.00 \(\pm_{1.08}\) & 0.962 \(\pm_{0.008}\) \\ cGLO & 10\% & **41.88 \(\pm_{1.64}\)** & **0.987 \(\pm_{0.006}\)** & **40.38 \(\pm_{1.45}\)** & **0.981 \(\pm_{0.006}\)** \\ \hline cSGM & 20\% & **42.06 \(\pm_{1.61}\)** & 0.967 \(\pm_{0.014}\) & 39.99 \(\pm_{1.18}\) & 0.969 \(\pm_{0.008}\) \\ cGLO & 20\% & 41.48 \(\pm_{1.55}\) & **0.986 \(\pm_{0.006}\)** & **40.26 \(\pm_{1.61}\)** & **0.980 \(\pm_{0.007}\)** \\ \hline cSGM & 35\% & **42.07 \(\pm_{1.67}\)** & 0.967 \(\pm_{0.013}\) & **40.27 \(\pm_{1.18}\)** & 0.970 \(\pm_{0.007}\) \\ cGLO & 35\% & 41.12 \(\pm_{1.44}\) & **0.985 \(\pm_{0.006}\)** & 40.14 \(\pm_{1.47}\) & **0.980 \(\pm_{0.007}\)** \\ \hline cSGM & 50\% & **42.24 \(\pm_{1.64}\)** & 0.968 \(\pm_{0.013}\) & **39.81 \(\pm_{1.08}\)** & 0.968 \(\pm_{0.008}\) \\ cGLO & 50\% & 40.72 \(\pm_{1.39}\) & **0.984 \(\pm_{0.006}\)** & 39.55 \(\pm_{1.15}\) & **0.977 \(\pm_{0.006}\)** \\ \hline \end{tabular} \end{table} Table 2: PSNR and SSIM median \(\pm\) half Inter Quartile Range (IQR) values for reconstructions of slices from the LIDC and the LDCT test datasets given 100 experimental viewing angles.

Figure 4: Examples of reconstructions given 9, 23, 50 and 100 experimental viewing angles, obtained with cGLO (upper row) and cSGM (lower row). Both methods are trained on the sub-dataset consisting of a 2% portion of the LIDC dataset.

These two observations corroborate and explain the larger gap in performance between cGLO and cSGM when comparing median SSIM curves rather than median PSNR curves in figure 3. The fact that cSGM and cGLO yield different reconstruction results at decreasing viewing angles can be explained by the way each model employs experimental data. cSGM uses a proximal optimization step, described in Eq. (9), to mix experimental data with unconditioned samples. The cost function of Eq. (9) expresses a balance between data likelihood and prior, which only enforces a loose constraint on the conformity of the proposed reconstruction to experimental data. When the number of viewing angles diminishes, this is responsible for the structural deformations appearing in figure 4 and figure 5. On the contrary, the cost function of cGLO, described in Eq. (17), enforces a strict constraint on experimental data. With decreasing viewing angles, the energy landscape of this cost function becomes less smooth, leading to the presence of local minima and harder convergence of the optimization process. As seen in figure 8, this leads to degraded reconstructions with more noise and fewer details at 9 viewing angles for example.
However, contrary to cSGM, cGLO does not tend to create hallucinated structures arising from a too strong prior. ## 4 Conclusion In this work, a novel unsupervised method to solve IIPs, named cGLO, was presented, built upon the general framework of GLO [36]. Contrary to supervised strategies, our method does not require a fixed experimental setup. cGLO was tested on sparse-view CT, which is a widely studied reconstruction problem arising in medical imaging, and compared to the current state-of-the-art unsupervised reconstruction approach cSGM [16]. Experiments conducted in this paper cover a wide range of realistic setups with varying amounts of available training data and experimental viewing angles. Quantitative results demonstrate that cGLO is a parsimonious and robust reconstruction method, as the performance advantage over cSGM increases for smaller training datasets and sparser experimental viewing angles. Moreover, reconstruction examples illustrate that cGLO also exhibits potentially attractive properties, like a propensity to preserve the structural integrity of reconstructions, even for very ill-posed IIPs, thanks to its straightforward conditioning on experimental measurements. While the performance of cGLO has been tested on a tomographic reconstruction problem, the framework developed in this article can be readily applied to other ill-posed IIPs, such as MRI reconstruction from a sparsely sampled k-space, as done with cSGM. Furthermore, since the cGLO approach only requires the knowledge of the forward operator, it could in principle be extended to solve non-linear IIPs. One of the main interests of cGLO is that it does not require any backward operator, such as FBP for tomographic reconstruction. It is thus a method of choice for IIPs where such operators do not exist, such as multi-material CT reconstruction, used for example in security screening. Future work will explore the application of cGLO to such IIPs. Since cGLO efficiently learns correlations between the output channels of its decoder network, further developments will also focus on multi-task applications, such as jointly learning reconstruction and segmentation.

Figure 5: Examples of reconstructions given 9, 23, 50 and 100 experimental viewing angles, obtained with cGLO (upper row) and cSGM (lower row). Both methods are trained on the sub-dataset consisting of a 2% portion of the LDCT dataset.

## 5 Appendix A: Quantitative results tables ## 6 Appendix B: Reconstruction examples

Figure 7: Examples of reconstructions of slices from the LIDC test set given 9, 23, 50 and 100 experimental viewing angles. They are obtained with cSGM trained on the following portions of the LIDC dataset: 2%, 10%, 20%, 35% and 50%.

Figure 8: Examples of reconstructions of slices from the LDCT test set given 9, 23, 50 and 100 experimental viewing angles. They are obtained with cGLO trained on the following portions of the LDCT dataset: 2%, 10%, 20%, 35% and 50%.

Figure 9: Examples of reconstructions of slices from the LDCT test set given 9, 23, 50 and 100 experimental viewing angles. They are obtained with cSGM trained on the following portions of the LDCT dataset: 2%, 10%, 20%, 35% and 50%.
2308.16793
Hybrid Renormalization for Quasi Distribution Amplitudes of A Light Baryon
We develop a hybrid scheme to renormalize quasi distribution amplitudes of a light baryon on the lattice, which combines the self-renormalization and ratio schemes. By employing self-renormalization, the UV divergences and the linear divergence at large spatial separations in quasi distribution amplitudes are removed without introducing extra nonperturbative effects, while taking a ratio with respect to the zero-momentum matrix element properly removes the UV divergences at small spatial separations. As a specific application, distribution amplitudes of the $\Lambda$ baryon made of $uds$ are investigated, and the requisite equal-time correlators, which define quasi distribution amplitudes in coordinate space, are perturbatively calculated up to next-to-leading order in the strong coupling constant $\alpha_s$. These perturbative equal-time correlators are used to convert lattice QCD matrix elements to the continuum space during the renormalization process. Subsequently, quasi distribution amplitudes are matched onto lightcone distribution amplitudes by integrating out hard modes, and the corresponding hard kernels are derived up to next-to-leading order in $\alpha_s$, including the hybrid counterterms. These results are valuable in the lattice-based investigation of the lightcone distribution amplitudes of a light baryon from the first principles of QCD.
Chao Han, Yushan Su, Wei Wang, Jia-Lu Zhang
2023-08-31T15:11:11Z
http://arxiv.org/abs/2308.16793v2
# Hybrid Renormalization for Quasi Distribution Amplitudes of A Light Baryon ###### Abstract We develop a hybrid scheme to renormalize quasi distribution amplitudes of a light baryon on the lattice, which combines the self-renormalization and ratio schemes. By employing self-renormalization, the UV divergences and the linear divergence at large spatial separations in quasi distribution amplitudes are removed without introducing extra nonperturbative effects, while taking a ratio with respect to the zero-momentum matrix element properly removes the UV divergences at small spatial separations. As a specific application, distribution amplitudes of the \(\Lambda\) baryon made of \(uds\) are investigated, and the requisite equal-time correlators, which define quasi distribution amplitudes in coordinate space, are perturbatively calculated up to next-to-leading order in the strong coupling constant \(\alpha_{s}\). These perturbative equal-time correlators are used to convert lattice QCD matrix elements to the continuum space during the renormalization process. Subsequently, quasi distribution amplitudes are matched onto lightcone distribution amplitudes by integrating out hard modes, and the corresponding hard kernels are derived up to next-to-leading order in \(\alpha_{s}\), including the hybrid counterterms. These results are valuable in the lattice-based investigation of the lightcone distribution amplitudes of a light baryon from the first principles of QCD. ## 1 Introduction Lightcone distribution amplitudes (LCDAs) of light baryons are the fundamental non-perturbative inputs in QCD factorization for exclusive processes with a large momentum transfer [1]. An example of this type is weak decays of bottom baryons, which are valuable for extracting the CKM matrix element \(|V_{ub}|\) [2] and for probing new physics beyond the standard model through flavor-changing neutral current processes [3; 4]. In addition, the knowledge of LCDAs is also crucial for understanding the internal structure of baryons. The LCDAs characterize the distributions of the longitudinal momentum among the quarks and gluons within the dominant leading Fock state of a baryon, which are complementary to parton distribution functions that encode the probability density distribution of parton momenta in hadrons. Thus the LCDAs of light baryons have been widely investigated by theoretical techniques such as QCD sum rules [5; 6; 7; 8] and Lattice QCD [9; 10; 11; 12]. Despite their pivotal importance, LCDAs of light mesons and baryons have been less studied than parton distribution functions (PDFs). A major difficulty is that in an exclusive process it is very likely that more than one LCDA enters a physical observable in a rather complicated way through convolution integrals. This makes an experimental determination of LCDAs extremely difficult. In addition, the leading-twist baryon LCDA \(\Phi\left(y_{1},y_{2},\mu\right)\), with \(y_{1},y_{2}\) being the momentum fractions of two of the involved quarks and the momentum fraction of the third quark satisfying \(y_{3}=1-y_{1}-y_{2}\), describes the momentum distributions of the three quarks and is by definition a two-dimensional distribution function, which is even more complicated than the meson LCDAs. Our limited knowledge of baryon LCDAs mostly relies on non-perturbative methods, many of which are model-dependent and inevitably introduce uncontrollable uncertainties.
In recent years lattice QCD has been applied to determine the normalization constants and the first moments of the distribution amplitudes for the lowest-lying baryon octet [10; 11; 12]. In particular, a recent lattice QCD study has used a large number of \(n_{f}=2+1\) ensembles with physical pion (and kaon) masses and five different lattice spacings [12]. After making the extrapolation to the continuum and infinite volume limits, they obtained results for the first two moments of LCDAs. Despite this progress, a complete description of LCDAs cannot be constructed from these few moments, and the LCDA is far from being deciphered. In a previous publication [13], a direct method to extract the shape distribution of the LCDA of a light baryon was proposed through the simulation of equal-time correlation functions, named the quasi distribution amplitude (quasi-DA), under the framework of large momentum effective theory (LaMET) [14; 15] (please see Refs. [16; 17; 18] for reviews on the development and successful applications of LaMET). The quasi-DA \(\tilde{\Phi}(x_{1},x_{2},P^{z},\mu)\), with \(P^{z}\) being the hadron momentum along the \(z\) direction and \(x_{1},x_{2}\) being the momentum fractions, is a calculable quantity on the lattice. Since the quasi-DA and LCDA have the same infrared (IR) structure, the LCDA of a light baryon can be obtained by performing a boost of the hadron momentum to infinity, which is captured by the matching formula \[\tilde{\Phi}\left(x_{1},x_{2},P^{z},\mu\right)=\int dy_{1}dy_{2}\mathcal{C}\left(x_{1},x_{2},y_{1},y_{2},P^{z},\mu\right)\Phi\left(y_{1},y_{2},\mu\right)+\mathcal{O}\left(\frac{\Lambda_{\rm QCD}}{x_{1}P^{z}},\frac{\Lambda_{\rm QCD}}{x_{2}P^{z}},\frac{\Lambda_{\rm QCD}}{\left(1-x_{1}-x_{2}\right)P^{z}}\right), \tag{1}\] where \(\mathcal{C}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)\) is a hard kernel to compensate for the ultraviolet (UV) differences between these two distributions. The hard kernel has been calculated up to one-loop accuracy in the \(\overline{\rm MS}\) scheme [13]. The \(\mu\) in the quasi-DA comes from the renormalization of the logarithmic divergences, while the scale \(\mu\) in the LCDA is the factorization scale to split the collinear and hard modes. Thus the hard kernel \(\mathcal{C}\) contains both renormalization and factorization scales, which are chosen to be the same for convenience. To remove the remnant ultraviolet divergence in the quasi-DA, the regularization-invariant momentum subtraction (RI/MOM) scheme [19] was adopted to renormalize the quasi-DA, and the corresponding one-loop counterterm was obtained [13]. Despite the theoretical advantages of the RI/MOM scheme, discrepancies emerge when implementing this scheme to renormalize the quasi-PDFs and quasi-DAs on the lattice. A residual linear divergence persists [20], and additional uncontrollable infrared effects are unavoidably introduced [21]. Moreover, the discretization effects arising in the lattice calculations also need to be considered carefully. Given all these difficulties, it remains a challenge to perform a practical lattice calculation for the baryon quasi-DA.
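As a purely illustrative view of the matching formula in Eq. (1), the convolution of the hard kernel with the LCDA can be discretized on a grid in \((y_{1},y_{2})\) and evaluated by quadrature; the arrays and the function name below are placeholders of ours, not part of any published code.

```python
import numpy as np

def match_quasi_da(kernel, lcda, dy):
    """kernel[i, j, k, l] ~ C(x1_i, x2_j, y1_k, y2_l, Pz, mu);
    lcda[k, l] ~ Phi(y1_k, y2_l, mu); dy is the grid spacing in y1 and y2.
    Returns the discretized quasi-DA on the (x1, x2) grid (up to power
    corrections in Lambda_QCD / (x P^z))."""
    return np.einsum("ijkl,kl->ij", kernel, lcda) * dy * dy
```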
In this work, the hybrid scheme for the quasi-DA of a light baryon is developed utilizing the perturbative coordinate-space quasi-DA calculations, in which divergences appearing at both long-distance and short-distance spatial separations can be eliminated properly. By taking a ratio with respect to the zero-momentum matrix element at short distances, one can eliminate the UV divergences in lattice matrix elements, and part of the discretization effects are also expected to be canceled. Correspondingly, the short-distance UV logarithms in the \(\overline{\rm MS}\) scheme can also be eliminated properly. Through the self-renormalization at large distances, one can eliminate the UV divergences without introducing additional uncontrollable non-perturbative effects. Since the baryon DA is a two-dimensional distribution, there are multiple regions involving both short and large distances simultaneously, which will be treated separately in this scheme. As a result, this method ensures that lattice matrix elements approach the continuum limit in a more appropriate manner and allows for a realistic determination of the LCDA. The rest of this paper is organized as follows: In Sec. 2, we give a brief overview of the lightcone distribution amplitudes and quasi-DAs of a light baryon. The detailed calculations of the one-loop spatial correlator are presented in Appendix A. In Sec. 3, the hybrid renormalization scheme is developed and the matching kernel is presented. Some detailed results are collected in Appendix B and C. A summary is provided in the last section. ## 2 Lightcone distribution amplitudes and quasi distribution amplitudes for a light baryon In this section, we introduce the notations and conventions required for subsequent discussions. In particular, we will give the definitions of LCDAs and quasi-DAs and collect the results for the one-loop matching in the \(\overline{\rm MS}\) scheme. ### LCDAs We start with the LCDAs, which are defined as the hadron-to-vacuum matrix elements of non-local operators consisting of quarks and gluons that live on the light cone. In the case of a light baryon, the three-quark matrix element can be constructed as [7] \[\left\langle 0\left|\varepsilon^{ijk}u_{\alpha}^{i^{\prime}}\left(z_{1}\right)U_{i^{\prime}i}\left(z_{1},z_{0}\right)d_{\beta}^{j^{\prime}}\left(z_{2}\right)U_{j^{\prime}j}\left(z_{2},z_{0}\right)s_{\gamma}^{k^{\prime}}\left(z_{3}\right)U_{k^{\prime}k}\left(z_{3},z_{0}\right)\right|\Lambda(P,\lambda)\right\rangle, \tag{1}\] where \(|\Lambda(P,\lambda)\rangle\) stands for the \(\Lambda\) baryon state with the momentum \(P\), \(P^{2}=0\), and the helicity \(\lambda\). \(\alpha\), \(\beta\) and \(\gamma\) are Dirac indices. \(i^{(\prime)}\), \(j^{(\prime)}\) and \(k^{(\prime)}\) denote color charges. In this paper, two light-cone unit vectors are defined as \(n^{\mu}=(1,0,0,-1)/\sqrt{2}\) and \(\bar{n}^{\mu}=(1,0,0,1)/\sqrt{2}\). The momentum of the baryon is along the \(\bar{n}\) direction, \(P^{\mu}=P^{+}\bar{n}^{\mu}=(P^{z},0,0,P^{z})\). The coordinates are set in the \(n\) direction, \(z_{i}^{\mu}=z_{i}n^{\mu}\). The Wilson lines \(U(x,y)\) \[U(x,y)=\mathcal{P}\exp\left[ig\int_{0}^{1}\ \mathrm{d}t(x-y)_{\mu}A^{\mu}(tx+(1-t)y)\right] \tag{2}\] are inserted to preserve the gauge invariance. In the definition, \(z_{0}\) can be chosen freely due to the gauge invariance, and in the following we will use \(z_{0}=0\) for simplicity. Also, for brevity, Wilson lines, color indices, and helicity will not be written out explicitly below.
Based on Lorentz invariance and the spin and parity requirements, the matrix element can be decomposed in terms of three functions, \(V(z_{i}P\cdot n)\), \(A(z_{i}P\cdot n)\), and \(T(z_{i}P\cdot n)\), at the leading twist \[\left\langle 0\left|u_{\alpha}\left(z_{1}\right)d_{\beta}\left(z_{2}\right)s_{\gamma}\left(z_{3}\right)\right|\Lambda(P)\right\rangle \tag{3}\] \[=f_{N}\left\{\left(\not\!\!PC\right)_{\alpha\beta}\left(\gamma_{5}u_{\Lambda}\right)_{\gamma}V\left(z_{i}P\cdot n\right)+\left(\not\!\!P\gamma_{5}C\right)_{\alpha\beta}\left(u_{\Lambda}\right)_{\gamma}A\left(z_{i}P\cdot n\right)+\left(i\sigma_{\mu\nu}P^{\nu}C\right)_{\alpha\beta}\left(\gamma^{\mu}\gamma_{5}u_{\Lambda}\right)_{\gamma}T\left(z_{i}P\cdot n\right)\right\},\] where \(C\) signifies the charge conjugation matrix. \(u_{\Lambda}\) stands for the \(\Lambda\) baryon spinor. Equivalently, the three leading twist functions can be projected by inserting a specific gamma matrix \(\Gamma\) into the \(u\) and \(d\) quark fields. In the following discussion, we will take \(A(z_{i}P\cdot n)\) as an example, while the other matrix elements can be similarly analyzed. Then we have \[\begin{split}& M_{L}(z_{1},z_{2},z_{3},P^{+},\mu)=\left\langle 0\left|u^{T}\left(z_{1}\right)\Gamma d\left(z_{2}\right)s\left(z_{3}\right)\right|\Lambda(P)\right\rangle_{R},\\ &\Phi_{L}\left(x_{1},x_{2},\mu\right)f_{\Lambda}(\mu)P^{+}u_{\Lambda}(P)=\int_{-\infty}^{+\infty}\frac{d\,P^{+}z_{1}}{2\pi}\frac{d\,P^{+}z_{2}}{2\pi}e^{ix_{1}P^{+}z_{1}+ix_{2}P^{+}z_{2}}M_{L}(z_{1},z_{2},0,P^{+},\mu),\end{split} \tag{4}\] where \(T\) denotes transposition and \(\Gamma=C\gamma_{5}\not{n}\). \(R\) stands for renormalization. The \(x_{i}\) label the longitudinal momentum fractions carried by the three quarks, with \(0\leq x_{i}\leq 1\). The \(\mu\) denotes the renormalization scale, which will be converted to the factorization scale when the factorization of the quasi-DA is established. \(f_{\Lambda}(\mu)\) is the \(\Lambda\) baryon decay constant defined as follows \[f_{\Lambda}(\mu)P^{+}u_{\Lambda}(P)=M_{L}(0,0,0,P^{+},\mu). \tag{5}\] It should be noted that we have defined the LCDA \(\Phi_{L}\left(x_{1},x_{2},\mu\right)\) by separating out the baryon decay constant \(f_{\Lambda}(\mu)\), which differs from the convention of the recent LQCD calculation [12]. Note that \(f_{\Lambda}(\mu)\) depends on the renormalization scale \(\mu\) since the local operator here is not a conserved current. The LCDA \(\Phi_{L}\left(x_{1},x_{2},\mu\right)\) in Eq. (4) is dimensionless and normalized. ### Quasi-DAs We consider a spatial correlator \[M(z_{1},z_{2},z_{3},P^{z},\mu)=\left\langle 0\left|u^{T}\left(z_{1}\right)\widetilde{\Gamma}d\left(z_{2}\right)s\left(z_{3}\right)\right|\Lambda(P)\right\rangle_{R} \tag{6}\] to define the quasi-DA \[\tilde{\Phi}\left(x_{1},x_{2},P^{z},\mu\right)\tilde{f}_{\Lambda}(\mu)P^{z}u_{\Lambda}(P)=(P^{z})^{2}\int_{-\infty}^{+\infty}\frac{d\,z_{1}}{2\pi}\frac{d\,z_{2}}{2\pi}e^{ix_{1}P^{z}z_{1}+ix_{2}P^{z}z_{2}}M(z_{1},z_{2},0,P^{z},\mu), \tag{7}\] where \(\tilde{\Gamma}=C\gamma_{5}\not{n}_{z}\). For the quasi-DAs, the coordinates are set as \(z_{i}^{\mu}=z_{i}n_{z}^{\mu}\), where \(n_{z}^{\mu}=(0,0,0,1)\).

Figure 1: One loop corrections for the equal-time matrix element of the \(\Lambda\) baryon.

The baryon decay constant \(\tilde{f}_{\Lambda}(\mu)\) here can be defined similarly to Eq. (5): \[\tilde{f}_{\Lambda}(\mu)=\frac{M(0,0,0,P^{z},\mu)}{P^{z}u_{\Lambda}(P)}.
\tag{8}\] To obtain the quasi-DAs from lattice simulations, one needs to consider their short- and long-distance properties separately. From this viewpoint, it is advantageous to work in coordinate space. To explicitly demonstrate the ultraviolet and infrared structures, we perform a perturbative calculation of quasi-DAs by sandwiching the operators between the vacuum state \(\left\langle 0\right|\) and the lowest-order Fock state \(\left|uds\right\rangle\): \[M_{p}(z_{1},z_{2},z_{3}=0,P^{z},\mu)=\left\langle 0\left|u^{T}\left(z_{1}\right)\tilde{\Gamma}d\left(z_{2}\right)s\left(0\right)\right|u(x_{1}P)d(x_{2}P)s(x_{3}P)\right\rangle_{R}, \tag{9}\] where the lower index "\(p\)" in \(M_{p}\) denotes the perturbative calculation. The next-to-leading order Feynman diagrams are shown in Fig. 1, and the calculation details are collected in Appendix A. The final results for the spatial correlator up to one loop are given as \[M_{p}(z_{1},z_{2},0,P^{z},\mu)=\left\{1+\frac{\alpha_{s}C_{F}}{\pi}\left(\frac{1}{2}L_{1}^{\rm UV}+\frac{1}{2}L_{2}^{\rm UV}+\frac{1}{2}L_{12}^{\rm UV}+\frac{3}{2}\right)\right\}M_{0}\left(z_{1},z_{2},0,P^{z},\mu\right)\] \[-\frac{\alpha_{s}C_{F}}{8\pi}\int_{0}^{1}d\eta_{1}\int_{0}^{1-\eta_{1}}d\eta_{2}\] \[+2\left(L_{12}^{\rm IR}-3+\frac{1}{\epsilon_{\rm IR}}\right)M_{0}\left(\left(1-\eta_{1}\right)z_{1}+\eta_{1}z_{2},(1-\eta_{2})z_{2}+\eta_{2}z_{1},0,P^{z},\mu\right)\biggr{\}}\] \[-\frac{\alpha_{s}C_{F}}{4\pi}\int_{0}^{1}d\eta\times\left\{M_{0}\left(\left(1-\eta\right)z_{1}+\eta z_{2},z_{2},0,P^{z},\mu\right)\left\{\left(L_{12}^{\rm IR}+1+\frac{1}{\epsilon_{\rm IR}}\right)\left(\frac{1-\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\}\right.\] \[+M_{0}\left(z_{1},(1-\eta)z_{2}+\eta z_{1},0,P^{z},\mu\right)\left\{\left(L_{12}^{\rm IR}+1+\frac{1}{\epsilon_{\rm IR}}\right)\left(\frac{1-\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\}\] \[+M_{0}\left(\left(1-\eta\right)z_{1},z_{2},0,P^{z},\mu\right)\left\{\left(L_{1}^{\rm IR}+1+\frac{1}{\epsilon_{\rm IR}}\right)\left(\frac{1-\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\}\] \[-M_{0}\left(z_{1},z_{2},\eta z_{1},P^{z},\mu\right)\left\{\left(L_{1}^{\rm IR}+1+\frac{1}{\epsilon_{\rm IR}}\right)\left(\frac{1-\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\}\] \[-M_{0}\left(z_{1},z_{2},\eta z_{2},P^{z},\mu\right)\left\{\left(L_{2}^{\rm IR}+1+\frac{1}{\epsilon_{\rm IR}}\right)\left(\frac{1-\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\}\biggr{\}}\,, \tag{10}\] where \(M_{0}\) stands for the tree-level matrix element: \[M_{0}(z_{1},z_{2},0,P^{z},\mu)=\sqrt{2}P^{z}e^{ix_{1}P^{z}z_{1}+ix_{2}P^{z}z_{2}}u_{s}(x_{3}P), \tag{11}\] where \(x_{3}=1-x_{1}-x_{2}\), and \(\left[u_{u}\left(x_{1}P\right)\right]^{T}\tilde{\Gamma}u_{d}\left(x_{2}P\right)=\frac{1}{2}\operatorname{tr}\left[\not{P}C\gamma^{5}\tilde{\Gamma}\right]\) is employed. \(u_{u/d/s}(P)\) denotes the spinor of the u, d, or s quark with momentum \(P\). The plus function is defined as \[\int_{0}^{1}du\left[G(u)\right]_{+}F(u)=\int_{0}^{1}duG(u)[F(u)-F(0)], \tag{12}\] and some abbreviations are used in the above: \[L_{1}^{\rm IR,\ UV}=\ln\left(\frac{1}{4}\mu_{\rm IR,\ UV}^{2}z_{1}^{2}e^{2\gamma_{E}}\right),\ \ \ L_{2}^{\rm IR,UV}=\ln\left(\frac{1}{4}\mu_{\rm IR,UV}^{2}z_{2}^{2}e^{2\gamma_{E}}\right), \tag{13}\] \[L_{12}^{\rm IR,UV}=\ln\left(\frac{1}{4}\mu_{\rm IR,UV}^{2}(z_{1}-z_{2})^{2}e^{2\gamma_{E}}\right).
\tag{14}\] We have checked that these results are consistent with the calculation in the momentum space [13]. Moreover, one can see the UV and IR behaviors clearly in the coordinate space, which is convenient for the renormalization scheme to be established below. Furthermore, one can obtain the zero-momentum matrix element in the coordinate space by letting \(P^{z}=0\) and performing the normalization with the local matrix element \[\hat{M}_{p}\left(z_{1},z_{2},z_{3}=0,P^{z}=0,\mu\right)=\frac{M_{p}(z_{1},z_{ 2},0,0,\mu)}{M_{p}(0,0,0,0,\mu)}=1+\frac{\alpha_{s}C_{F}}{2\pi}\left[\frac{7}{ 8}L_{1}^{\rm UV}+\frac{7}{8}L_{2}^{\rm UV}+\frac{3}{4}L_{12}^{\rm UV}+4\right], \tag{15}\] where the local matrix element \(M_{p}(0,0,0,0,\mu)=\left(1-\frac{\alpha_{s}C_{F}}{4\pi}\frac{1}{\epsilon_{\rm IR }}\right)M_{0}\left(0,0,0,0,\mu\right)\), see Eq. (A.25). The perturbative zero momentum matrix element in Eq. (15) will be used in the hybrid renormalization method. ### Matching of quasi-DAs in the \(\overline{\rm MS}\) scheme In the large \(P^{z}\) limit, the quasi and lightcone distribution amplitudes can be related through the QCD factorization. After separating the hard and collinear contributions, one can factorize the quasi-DAs in terms of the LCDAs and a hard kernel which can be perturbatively calculated. In the \(\overline{\rm MS}\) scheme, the one-loop hard kernel has been obtained [13] \[\mathcal{C}_{\overline{\rm MS}}\left(x_{1},x_{2},y_{1},y_{2},P^{ z},\mu\right) =\delta\left(x_{1}-y_{1}\right)\delta\left(x_{2}-y_{2}\right)+ \frac{\alpha_{s}C_{F}}{8\pi}\times\left[C_{2}\left(x_{1},x_{2},y_{1},y_{2},P^ {z},\mu\right)\delta\left(x_{2}-y_{2}\right)\right.\] \[\left.+C_{3}\left(x_{1},x_{2},y_{1},y_{2},P^{z},\mu\right)\delta \left(x_{3}-y_{3}\right)+\left\{x_{1}\leftrightarrow x_{2},y_{1}\leftrightarrow y _{2}\right\}\right]_{\oplus}, \tag{16}\] where \(\oplus\) denotes a double plus function defined as \[\left[g\left(x_{1},x_{2},y_{1},y_{2}\right)\right]_{\oplus}= g\left(x_{1},x_{2},y_{1},y_{2}\right)-\delta\left(x_{1}-y_{1}\right) \delta\left(x_{2}-y_{2}\right)\int dx_{1}^{\prime}dx_{2}^{\prime}g\left(x_{1} ^{\prime},x_{2}^{\prime},y_{1},y_{2}\right). 
\tag{17}\] \[\begin{split}& C_{2}\left(x_{1},x_{2},y_{1},y_{2},P^{z},\mu\right)= \\ &\left\{\begin{array}{l}\frac{\left(x_{1}+y_{1}\right)\left(x_{3}+y_ {3}\right)\ln\frac{y_{1}-x_{1}}{-x_{1}}}{y_{1}\left(y_{1}-x_{1}\right)y_{3}}- \frac{x_{3}\left(x_{1}+y_{1}+2y_{3}\right)\ln\frac{x_{3}}{-x_{1}}}{\left(y_{1 }-x_{1}\right)y_{3}\left(y_{1}+y_{3}\right)},x_{1}<0\\ \frac{\left(x_{1}-3y_{1}-2y_{3}\right)x_{1}}{y_{1}\left(x_{3}-y_{3}\right) \left(y_{1}+y_{3}\right)}-\frac{\left[\left(x_{3}-y_{3}\right)^{2}-2x_{3}y_{1} \right]\ln\frac{x_{3}-y_{3}}{x_{3}}}{y_{1}\left(x_{3}-y_{3}\right)y_{3}}+ \frac{2x_{1}\ln\frac{4x_{1}\left(x_{3}-y_{3}\right)P_{x}^{2}}{\mu^{2}}}{y_{1} \left(x_{3}-y_{3}\right)}+\frac{x_{1}\ln\frac{4x_{1}x_{3}P_{x}^{2}}{\mu^{2}}} {y_{1}\left(y_{1}+y_{3}\right)},0<x_{1}<y_{1}\\ \frac{\left(x_{3}-2y_{1}-3y_{3}\right)x_{3}}{y_{3}\left(x_{1}-y_{1} \right)\left(y_{1}+y_{3}\right)}-\frac{\left[\left(x_{1}-y_{1}\right)^{2}-2x_{ 1}y_{3}\right]\ln\frac{x_{1}-y_{1}}{x_{1}}}{\left(x_{1}-y_{1}\right)y_{1}y_{3} }+\frac{2x_{3}\ln\frac{4x_{3}\left(x_{1}-y_{1}\right)P_{x}^{2}}{\mu^{2}}}{y_{ 3}\left(y_{1}+y_{3}\right)}+\frac{x_{3}\ln\frac{4x_{1}x_{3}P_{x}^{2}}{\mu^{2}} }{y_{3}\left(y_{1}+y_{3}\right)},y_{1}<x_{1}<y_{1}+y_{3}\\ \frac{\left(x_{1}+y_{1}\right)\left(x_{3}+y_{3}\right)\ln\frac{y_{3}-x_{3}}{- x_{3}}}{y_{1}y_{3}\left(y_{3}-x_{3}\right)}-\frac{x_{1}\left(x_{3}+2y_{1}+y_{3} \right)\ln\frac{x_{1}}{-x_{3}}}{y_{1}\left(y_{3}-x_{3}\right)\left(y_{1}+y_{3} \right)},x_{1}>y_{1}+y_{3}.\end{split}\right. \tag{19}\] \[C_{3}\left(x_{1},x_{2},y_{1},y_{2},P^{z},\mu\right)= \tag{20}\] \[\frac{1}{x_{1}-y_{1}}+\frac{2x_{1}+x_{2}}{y_{1}\left(y_{1}+y_{2} \right)}+\frac{\left[\left(x_{1}+y_{2}\right)y_{1}-x_{1}^{2}\right]\ln\frac{x _{2}-y_{2}}{x_{2}}}{y_{1}\left(x_{2}-y_{2}\right)y_{2}}+\frac{x_{1}\ln\frac{4 x_{1}\left(x_{2}-y_{2}\right)P_{x}^{2}}{\mu^{2}}}{y_{1}\left(x_{2}-y_{2}\right)y_{2}}+ \frac{x_{1}\ln\frac{4x_{1}x_{2}P_{x}^{2}}{\mu^{2}}}{y_{1}\left(x_{2}-y_{2} \right)}+\frac{x_{1}\ln\frac{4x_{1}x_{2}P_{x}^{2}}{\mu^{2}}}{y_{1}\left(y_{1}+ y_{2}\right)},0<x_{1}<y_{1}\] \[\frac{1}{x_{2}-y_{2}}+\frac{x_{1}+2x_{2}}{y_{2}\left(y_{1}+y_{2} \right)}+\frac{\left[\left(x_{2}+y_{1}\right)y_{2}-x_{2}^{2}\right]\ln\frac{x _{1}-y_{1}}{x_{1}}}{\left(x_{1}-y_{1}\right)y_{1}y_{2}}+\frac{x_{2}\ln\frac{4 x_{2}\left(x_{1}-y_{1}\right)P_{x}^{2}}{\mu^{2}}}{\left(x_{1}-y_{1}\right)y_{2}}+ \frac{x_{2}\ln\frac{4x_{1}x_{2}P_{x}^{2}}{\mu^{2}}}{y_{2}\left(y_{1}+y_{2} \right)},y_{1}<x_{1}<y_{1}+y_{2}\] \[\frac{\left(x_{1}x_{2}+y_{1}y_{2}\right)\ln\frac{x_{1}-y_{1}}{x_ {1}}}{y_{1}\left(x_{1}-y_{1}\right)y_{2}}-\frac{x_{2}\left(x_{1}+y_{2}\right) \ln\frac{x_{2}}{x_{1}}}{y_{2}\left(x_{1}-y_{1}\right)\left(y_{1}+y_{2}\right) },x_{1}>y_{1}+y_{2}.\] It should be noticed that the above matching of quasi-DAs in the \(\overline{\rm MS}\) scheme is problematic which can be understood as follows. As shown in Eq. (17), the double plus function contains an integral over the momentum fractions \(x_{1}^{\prime}\) and \(x_{2}^{\prime}\) which arises from the so-called virtual corrections to DAs [13]. In the \(x_{1}^{\prime},x_{2}^{\prime}\to\infty\) limit the integrals have the asymptotic form \(\int dx^{\prime}/x^{\prime}\) and are then divergent. This divergence corresponds to the logarithmic UV divergence and should be renormalized. 
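Before turning to the resolution, it may help to see the \(\oplus\) prescription of Eq. (17) at work in a well-behaved setting: whenever the integral in the subtraction term exists, the resulting distribution integrates to zero by construction. The following minimal numerical sketch is our own illustration (the function `double_plus` and the Gaussian test integrand are not from the paper); on a grid, the two delta functions become a subtraction concentrated in a single cell.

```python
import numpy as np

def double_plus(g, x1, x2, y1, y2, dx):
    """Sample [g]_{+} of Eq. (17) on the (x1, x2) grid: subtract
    delta(x1-y1) delta(x2-y2) * Integral[g], spread over one grid cell."""
    G = g(x1[:, None], x2[None, :], y1, y2)
    total = G.sum() * dx * dx           # integral of g over the sampled domain
    i = np.argmin(np.abs(x1 - y1))      # grid cell carrying delta(x1 - y1)
    j = np.argmin(np.abs(x2 - y2))      # grid cell carrying delta(x2 - y2)
    G[i, j] -= total / (dx * dx)        # subtraction term of the plus function
    return G

# toy integrand, decaying fast enough that the truncated domain suffices
g = lambda x1, x2, y1, y2: np.exp(-((x1 - y1) ** 2 + (x2 - y2) ** 2))

dx = 0.01
x = np.arange(-3.0, 3.0, dx)
Gp = double_plus(g, x, x, y1=0.3, y2=0.2, dx=dx)
print(Gp.sum() * dx * dx)  # ~ 0: the double plus function integrates to zero
```

In the actual \(\overline{\rm MS}\) matching kernel, however, the subtraction integral itself is UV divergent, which is precisely the problem discussed next.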
However, in deriving the above matching kernel, only the so-called real diagrams for the quasi-DAs were calculated; the virtual contributions were assumed to be given by the integral of the real diagrams, so that combining the two yields the double plus function. Keeping the remnant UV divergence then leads to an improper definition of the double plus function. In Ref. [39], a properly defined plus function is introduced for quasi parton distribution functions through a subtraction at infinite momentum, while in Ref. [13] the RI/MOM renormalization scheme is employed to subtract the asymptotic contributions of the quasi-DAs so that the plus function becomes well-defined. However, it has been shown that implementing the RI/MOM scheme in the analysis of PDFs unavoidably introduces additional nonperturbative effects [21], and the final results therefore contain uncontrollable systematic uncertainties. ## 3 Hybrid renormalization scheme for baryon quasi-DA In this section, we present the hybrid scheme to address the problems in the matching stated in the previous section. The aim of this scheme is to remove the UV divergences in the quasi-DAs without introducing nonperturbative effects, while remaining properly implementable on the lattice. Conceptually, the UV divergences correspond to short-distance behavior, whereas nonperturbative contributions stem from long-distance behavior; from the viewpoint of separating these contributions, it is therefore advantageous to work in coordinate space. In coordinate space, the results for the quasi-DAs and the hard kernel can be obtained by Fourier transforming the momentum-space results. We have also directly calculated the perturbative contributions to the quasi-DAs in Appendix A and checked the consistency. At short distances, e.g. \(z_{1}\to 0\), \(z_{2}\to 0\) or \(|z_{1}-z_{2}|\to 0\) in Eq. (10) with \(z_{3}=0\), the perturbative quasi-DAs develop logarithmic terms such as \(\ln(z_{1}^{2})\), \(\ln(z_{2}^{2})\) and \(\ln((z_{1}-z_{2})^{2})\). As stated in the previous section, these \(\ln z_{i}^{2}\) terms in the real diagrams of the perturbative matrix elements prevent one from taking the \(z\to 0\) limit. This is also reflected in our definition of the virtual contribution, in which the corresponding logarithmic UV divergence has been kept; consequently, the double plus function is not well-defined. All of these issues in the perturbative matrix elements can be resolved by dividing the perturbative matrix element by a suitable zero-momentum matrix element, which is called the ratio scheme [40; 41; 42]. By construction, in the ratio scheme the \(\ln z_{i}^{2}\) terms in the numerator and the denominator are equal and therefore cancel, the corresponding virtual contributions are no longer divergent, and the double plus function is guaranteed to be well-defined. Since the perturbative matrix element is now finite as \(z\to 0\), the corresponding lattice matrix elements can also be constructed and used for matching. It should be noted that the ratio scheme is only applicable in the perturbative region; if it is applied over the long-distance region, the IR structure may be altered. Consequently, only zero-momentum matrix elements in the short-distance region can be chosen as denominators, and the renormalization of the matrix elements in the long-distance region requires separate treatment. At large distances, e.g.
\(z_{1}\sim 1/\Lambda_{\rm QCD}\), \(z_{2}\sim 1/\Lambda_{\rm QCD}\) or \(|z_{1}-z_{2}|\sim 1/\Lambda_{\rm QCD}\) in Eq. (6), a proper renormalization scheme should not introduce extra non-perturbative effects. In this work we use the self-renormalization scheme advocated in Ref. [21]. In the self-renormalization scheme, one defines a renormalization factor \(Z_{R}\) that includes all the typical divergences and discretization errors. This renormalization factor can be used to convert the lattice matrix elements to continuum matrix elements without bringing in any non-perturbative effects. Besides the UV divergences of the kind present in dimensional regularization, the linear divergence arising from the lattice simulation is eliminated as well. By definition, the calculation of the renormalization factor requires the UV divergences and some parameters as input. Specifically, the UV divergences are fitted from zero-momentum lattice matrix elements at small lattice spacings, and the parameters can be obtained by matching the renormalized lattice matrix element to the continuum perturbative matrix element in the perturbative region. As a result, the long-distance regions, which involve only non-perturbative scales, can also be handled in the renormalization. A subtlety in applying the hybrid renormalization scheme to the quasi-DAs of a light baryon is that there are multiple regions involving both perturbative and non-perturbative scales, e.g. (\(z_{1}\to 0\) and \(z_{2}\sim 1/\Lambda_{\rm QCD}\)), (\(z_{2}\to 0\) and \(z_{1}\sim 1/\Lambda_{\rm QCD}\)), or (\(|z_{1}-z_{2}|\to 0\), \(z_{1}\sim 1/\Lambda_{\rm QCD}\) and \(z_{2}\sim 1/\Lambda_{\rm QCD}\)); see the blue bands in Fig. 4, except for the one around \(z_{1}\sim-z_{2}\). However, if the logarithmic divergences associated with the different scales (\(z_{1}\), \(z_{2}\) and \(|z_{1}-z_{2}|\)) factorize, the UV divergences associated with those scales can be multiplicatively renormalized separately. If so, the ratio scheme can be adopted for the perturbative scales and the self-renormalization can be performed for the non-perturbative scales. From the one-loop result in Eq. (15) one can see that the logarithmic divergences indeed factorize, which allows one to perform the hybrid renormalization. An all-order proof is left for future studies. In this section, we first present the normalization and the self-renormalization, which provide the building blocks for the hybrid renormalization; we then describe the hybrid renormalization method itself, and present the matching kernel in the hybrid scheme at the end. ### Normalization One starts from a quasi-DA matrix element on the lattice, \(M\left(z_{1},z_{2},z_{3}=0,P^{z},a\right)\), defined in Eq. (6). On the lattice, the matrix element is regularized by the lattice spacing \(a\). To ensure the normalization, one takes \[\hat{M}\left(z_{1},z_{2},0,P^{z},a\right)=\frac{M\left(z_{1},z_{2},0,P^{z},a\right)}{M\left(0,0,0,P^{z},a\right)}, \tag{11}\] where part of the discretization effects in the lattice matrix elements are eliminated through the normalization. ### Self renormalization In this subsection we discuss the extraction of the renormalization factor \(Z_{R}(z_{1},z_{2},a,\mu)\) from the zero-momentum matrix element \(\hat{M}\left(z_{1},z_{2},0,0,a\right)\) (Eq. (11) with \(P^{z}=0\)). This renormalization factor will be applied at large distances in the hybrid renormalization. According to Ref.
[21], the renormalization factor is an asymptotic expansion with respect to \(a\), with both power dependence and logarithmic dependence \[Z_{R}(z_{1},z_{2},a,\mu)=\exp\Big{[}\left(\frac{k}{a\ln[a\Lambda_{\rm QCD}]}-m _{0}\right)\tilde{z}+\frac{\gamma_{0}}{b_{0}}\ln\left[\frac{\ln[1/(a\Lambda_{ \rm QCD})]}{\ln[\mu/\Lambda_{\overline{\rm MS}}]}\right]+\ln\left[1+\frac{d}{ \ln(a\Lambda_{\rm QCD})}\right]+f(z_{1},z_{2})a\Big{]}\,, \tag{12}\] where \(\left(\frac{k}{a\ln[a\Lambda_{\rm QCD}]}-m_{0}\right)\tilde{z}\) is the linear divergence [43; 44; 45; 46; 22] and the mass renormalization parameter [47; 48; 49; 50; 37; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69; 70; 71; 72; 73; 74; 75; 76; 77; 78; 79; 80], \(\frac{\gamma_{0}}{b_{0}}\ln\left[\frac{\ln[1/(a\Lambda_{\rm QCD})]}{\ln[\mu/ \Lambda_{\overline{\rm MS}}]}\right]+\ln\left[1+\frac{d}{\ln(a\Lambda_{\rm QCD })}\right]\) contains the log divergence. \(f(z_{1},z_{2})a\) (or \(f(z_{1},z_{2})a^{2}\)) is the discretization effect. \(\tilde{z}\) is the effective length for the linear divergence, which is defined as follows \[\tilde{z}=\left\{\begin{array}{ll}|z_{1}-z_{2}|,&z_{1}z_{2}<0\\ \max\left(|z_{1}|,|z_{2}|\right),&z_{1}z_{2}\geq 0.\end{array}\right. \tag{13}\] For \(z_{1}z_{2}<0\), the effective length is the total length of the Wilson link, see Fig. 2. For \(z_{1}z_{2}>0\), there is an overlap region between the two Wilson links. However, one can simply show (e.g. \(z_{1}>z_{2}>0\)) \[U(z_{1},0)_{i^{\prime}i}U(z_{2},0)_{j^{\prime}j}\epsilon_{ijk}=U (z_{1},z_{2})_{i^{\prime}i^{\prime\prime}}U(z_{2},0)_{i^{\prime\prime}i}U(z_{ 2},0)_{j^{\prime}j}\epsilon_{ijk}\] \[=U(z_{1},z_{2})_{i^{\prime}i^{\prime\prime}}U(z_{2},0)_{i^{\prime \prime}i}U(z_{2},0)_{j^{\prime}j}\delta_{kk^{\prime\prime}}\epsilon_{ijk^{ \prime\prime}}\] \[=U(z_{1},z_{2})_{i^{\prime}i^{\prime\prime}}U(z_{2},0)_{i^{ \prime}i^{\prime}}U(z_{2},0)_{j^{\prime}j}U^{\dagger}(z_{2},0)_{kk^{\prime}}U( z_{2},0)_{k^{\prime}k^{\prime\prime}}\epsilon_{ijk^{\prime\prime}}\] \[=U(z_{1},z_{2})_{i^{\prime}i^{\prime\prime}}U^{\dagger}(z_{2},0) _{kk^{\prime}}{\rm det}[U(z_{2},0)]\epsilon_{i^{\prime\prime}j^{\prime}k^{ \prime}}=U(z_{1},z_{2})_{i^{\prime}i^{\prime\prime}}U^{\dagger}(z_{2},0)_{kk^{ \prime}}\epsilon_{i^{\prime\prime}j^{\prime}k^{\prime}}, \tag{14}\] where the unitary and special properties of the Wilson links as SU(3) group elements are used in the second line and last line respectively. A schematic diagram for the above relation is shown in Fig. 3. Figure 2: The schematic diagram of the Wilson links for \(z_{1}z_{2}<0\). \(z_{1}\) and \(z_{2}\) denote the locations of the heads of the Wilson links. The tails of the Wilson links are located at the origin, where the color indices are contracted with the Levi-Civita symbol \(\epsilon\) to guarantee the gauge invariance. So the effective length is \(z_{1}\) for \(z_{1}>z_{2}>0\). For a general case of \(z_{1}z_{2}\geq 0\), the effective length should be the maximum value between \(|z_{1}|\) and \(|z_{2}|\). The leading order QCD beta function \(b_{0}=\dfrac{11C_{A}-2n_{f}}{6\pi}\) which satisfies \(\dfrac{d\alpha_{s}}{d\ln[\mu]}=-b_{0}\alpha_{s}^{2}\). The perturbative zero momentum matrix element in \(\overline{\rm MS}\) scheme Eq. 
(15) satisfies the following renormalization group equation \[\dfrac{d\ln[\hat{M}_{p}\left(z_{1},z_{2},0,0,\mu\right)]}{d\ln[\mu]}=\gamma, \tag{16}\] where \(\gamma=\gamma_{0}\alpha_{s}+...\) The leading anomalous dimension \(\gamma_{0}=\dfrac{C_{F}}{2\pi}\left(5-\dfrac{7}{4}\delta_{z_{1},0}-\dfrac{7} {4}\delta_{z_{2},0}-\dfrac{3}{2}\delta_{z_{1}-z_{2},0}\right)\) is scheme independent which can be applied to the renormalization factor Eq. (14) of lattice matrix elements. It involves the quark-link interaction as well as the evolution effect of the local operator. There are subtraction terms since the UV fluctuation is frozen on lattice for a distance to be zero. \(\Lambda_{\overline{\rm MS}}\) is the RG invariant scale for the LO running coupling, which is 0.142 GeV for \(n_{f}=3\), 0.119 GeV for \(n_{f}=4\) and 0.087 GeV for \(n_{f}=5\), determined based on the method in [51]. The parameters \(k\), \(\Lambda_{\rm QCD}\), \(f(z_{1},z_{2})\), \(m_{0}\) and \(d\) are extracted through the fit. The fit procedure is [21]: 1) fit the \(a\) dependence in \(\hat{M}\left(z_{1},z_{2},0,0,a\right)\) to extract the global parameters \(k\) and \(\Lambda_{\rm QCD}\) as well as the discretization effect \(f(z_{1},z_{2})\): \[\hat{M}\left(z_{1},z_{2},0,0,a\right)\] \[=\exp\left[\dfrac{k}{a\ln[a\Lambda_{\rm QCD}]}\tilde{z}+g(z_{1},z _{2},d)+\dfrac{\gamma_{0}}{b_{0}}\ln\left[\dfrac{\ln[1/(a\Lambda_{\rm QCD})]}{ \ln[\mu/\Lambda_{\overline{\rm MS}}]}\right]+\ln\left[1+\dfrac{d}{\ln(a \Lambda_{\rm QCD})}\right]+f(z_{1},z_{2})a\right]\,, \tag{17}\] where \(g(z_{1},z_{2},d)\) contains the non-perturbative intrinsic \(z_{1},z_{2}\) dependences, which is also extracted through the fit. \(g(z_{1},z_{2},d)\) depends on the choice of the global parameter \(d\) during the fit; 2) extract \(m_{0}\) and \(d\) through requiring the renormalized matrix element to be equal to the perturbative matrix element at short distances (\(a<z_{1},z_{2}\ll 1/\Lambda_{\rm QCD}\)) \[\dfrac{\hat{M}\left(z_{1},z_{2},0,0,a\right)}{Z_{R}(z_{1},z_{2},a,\mu)}=\exp \left[g(z_{1},z_{2},d)+m_{0}\tilde{z}\right]=\hat{M}_{p}\left(z_{1},z_{2},0, 0,\mu\right). \tag{18}\] The perturbative zero momentum matrix element is obtained from Eqs. (10) and (15), \[\hat{M}_{p}\left(z_{1},z_{2},0,0,\mu\right)=1+\dfrac{\alpha_{s}C_{F}}{2\pi} \left[\dfrac{7}{8}\ln\left(\dfrac{z_{1}^{2}\mu^{2}e^{2\gamma_{E}}}{4}\right)+ \dfrac{7}{8}\ln\left(\dfrac{z_{2}^{2}\mu^{2}e^{2\gamma_{E}}}{4}\right)+\dfrac{ 3}{4}\ln\left(\dfrac{(z_{1}-z_{2})^{2}\mu^{2}e^{2\gamma_{E}}}{4}\right)+4 \right], \tag{19}\] where the IR poles are canceled through the normalization with the local matrix element. The details of the perturbative calculation are presented in Appendix A. Figure 3: The schematic diagram of the Wilson links for \(z_{1}z_{2}>0\). The notations are the same as Fig. 2. Considering the color contraction \(\epsilon\), the two overlapped Wilson links in the same direction are equivalent to a Wilson link in the opposite direction. Thus one can define the renormalized lattice matrix element in \(\overline{\rm MS}\) scheme for the whole range as the ratio of the normalized lattice matrix element Eq. (10) to the renormalization factor Eq. 
(10) \[\hat{M}_{\overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu\right)=\frac{\hat{M} \left(z_{1},z_{2},0,P^{z},a\right)}{Z_{R}(z_{1},z_{2},a,\mu)}, \tag{11}\] where the renormalization factor \(Z_{R}\), though extracted from the zero momentum matrix element, can be applied to the large momentum matrix element since the renormalization is independent of the external states. ### Hybrid renormalization A hybrid renormalization method is presented in this subsection, based on the building blocks provided in the previous subsections, such as the normalized lattice matrix element Eq. (10), the renormalization factor Eq. (10) and the renormalized lattice matrix element in \(\overline{\rm MS}\) scheme Eq. (11). In practical calculations, it is difficult to directly perform the matching in \(\overline{\rm MS}\) scheme since there are inconsistencies between the lattice matrix element and continuum scheme at short distances. The lattice matrix elements are finite as \(z_{1}\to 0\) or \(z_{2}\to 0\) while the perturbative matrix elements, with the logarithmic terms \(\ln(z_{1}^{2})\), \(\ln(z_{2}^{2})\) and \(\ln((z_{1}-z_{2})^{2})\), are divergent as \(z_{1}\to 0\) or \(z_{2}\to 0\). Moreover, those logarithmic terms correspond to slowly decaying terms \(\sim\frac{1}{|x_{1}^{\prime}|}\) and \(\frac{1}{|x_{2}^{\prime}|}\) in the matching kernel, Figure 4: A schematic diagram of renormalization. First, both large and zero momentum matrix elements are converted to \(\overline{\rm MS}\) scheme. Then, additional ratios are taken with the \(\overline{\rm MS}\) scheme matrix elements as follows. The large momentum matrix element in the purple (hard) region is divided by the zero momentum matrix element in the purple region correspondingly. The large momentum matrix in the French blue (hard-soft) region is divided by the zero momentum matrix element on the blue-purple boundary. The large momentum matrix in the white (soft) region is divided by the zero momentum matrix element at the white-purple intersection point. which increase the difficulties of numerical calculations in preserving the normalization, though not impossible. The hybrid scheme can be viewed as a modification of \(\overline{\rm MS}\) scheme at short distance. Through the ratio at short distance, part of the discretization effects are canceled in the lattice matrix elements and the singular log terms are canceled in the perturbative matrix elements. So the lattice matrix elements become more consistent with the continuum scheme under the hybrid scheme, where it is easier to preserve the normalization. Thus there are several principles in designing the hybrid scheme [21]: 1) Eliminate all the singular logarithmic terms in the perturbative matrix elements including \(\ln(z_{1}^{2})\), \(\ln(z_{2}^{2})\) and \(\ln((z_{1}-z_{2})^{2})\) through the ratio; 2) Avoid introducing extra effects that are not perturbatively controllable at large distances; 3) Keep the renormalized matrix element continuous and the method as simple as possible. A hybrid renormalization method that satisfies the above principles is presented in the following, and a schematic diagram is shown in Fig. 4. For the sake of convenience, besides \(|z_{1}|\), \(|z_{2}|\) and \(|z_{1}-z_{2}|\), we also treat \(|z_{1}+z_{2}|\) as an argument. The purple region denotes that all arguments are perturbative scales. The white regions denote that all are non-perturbative scales. The blue regions denote that only one of the four scales is perturbative. 
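The bullet points below spell out the treatment region by region. As a compact restatement, the following sketch classifies a point \((z_{1},z_{2})\) according to which scales are perturbative; the function name, the textual labels, and the numerical cutoff are our own illustrative choices, assuming \(a\ll 2z_{s}\ll 1/\Lambda_{\rm QCD}\). The first branch is an equivalent rewriting of the theta-function combination in the first bullet, and assignments on region boundaries (a measure-zero set) are a matter of convention.

```python
def hybrid_region(z1, z2, zs):
    """Label (z1, z2) by which of |z1|, |z2|, |z1 - z2|, |z1 + z2|
    lie below the hybrid cutoff zs (checked in the order of the text)."""
    if abs(z1) <= 2 * zs and abs(z2) <= 2 * zs and not (abs(z1) > zs and abs(z2) > zs):
        return "hard: ratio scheme in both arguments"        # purple region
    if abs(z1) < zs and abs(z2) > 2 * zs:
        return "hard-soft: ratio in z1, MS-bar in z2"        # vertical blue band
    if abs(z1) > 2 * zs and abs(z2) < zs:
        return "hard-soft: ratio in z2, MS-bar in z1"        # horizontal blue band
    if abs(z1 - z2) < zs:
        return "hard-soft: ratio in z1 - z2"                 # diagonal band z1 ~ z2
    if abs(z1 + z2) < zs:
        return "hard-soft: ratio in z1 + z2"                 # diagonal band z1 ~ -z2
    return "soft: self-renormalized MS-bar"                  # white region

zs = 0.3  # illustrative cutoff
for pt in [(0.1, 0.2), (0.1, 1.0), (1.0, 0.1), (0.9, 1.0), (0.9, -1.0), (1.5, -0.5)]:
    print(pt, "->", hybrid_region(*pt, zs))
```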
* If the scales \(z_{1}\), \(z_{2}\) and \(z_{1}-z_{2}\) are all perturbative, e.g. (\(|z_{1}|<z_{s}\) and \(|z_{2}|<z_{s}\)) or (\(|z_{1}|<z_{s}\) and \(z_{s}<|z_{2}|<2z_{s}\)) or (\(z_{s}<|z_{1}|<2z_{s}\) and \(|z_{2}|<z_{s}\)), which corresponds to the purple region in Fig. 4, one introduces the ratio scheme on the normalized lattice matrix elements (Eq. (10)) \[\frac{\hat{M}\left(z_{1},z_{2},0,P^{z},a\right)}{\hat{M}\left(z_ {1},z_{2},0,0,a\right)}\left(\theta(z_{s}-|z_{1}|)\theta(z_{s}-|z_{2}|)+\theta (z_{s}-|z_{2}|)\theta(|z_{1}|-z_{s})\theta(2z_{s}-|z_{1}|)\right.\] \[\left.+\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-z_{s})\theta(2z_{s}-| z_{2}|)\right)\] \[=\frac{\hat{M}_{\overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu \right)}{\hat{M}_{\overline{\rm MS}}\left(z_{1},z_{2},0,0,\mu\right)}\left( \theta(2z_{s}-|z_{1}|)\theta(z_{s}-|z_{2}|)+\theta(z_{s}-|z_{1}|)\theta(|z_{ 2}|-z_{s})\theta(2z_{s}-|z_{2}|)\right),\] (12) where \(z_{s}\) is the hybrid cutoff, which satisfies \(a\ll 2z_{s}\ll 1/\Lambda_{\rm QCD}\). So no extra non-perturbative effects are introduced. The ratio can be written with the renormalized lattice matrix element in \(\overline{\rm MS}\) scheme Eq. (11) since the renormalization factor \(Z_{R}\) is independent of momentum \(P^{z}\). There are several advantages to taking the ratio scheme at short distances. First, part of the discretization effects are canceled since \(a\ll z_{s}\). Second, the normalization can be guaranteed in both renormalized lattice matrix elements and perturbative matching. In the perturbative matrix element, we will see the cancellation of \(\ln(z_{1}^{2})\), \(\ln(z_{2}^{2})\) and \(\ln((z_{1}-z_{2})^{2})\) through the ratio, which leads to a current conserved matching kernel after Fourier transformation. * If \(z_{1}\) is perturbative while \(z_{2}\) and \(z_{1}-z_{2}\) are not, e.g. \(|z_{1}|<z_{s}\) and \(|z_{2}|>2z_{s}\), which is the blue vertical region in Fig. 4, one needs to introduce the ratio scheme for \(z_{1}\) and \(\overline{\rm MS}\) scheme (Eq. (11)) for \(z_{2}\), \[\frac{\hat{M}\left(z_{1},z_{2},0,P^{z},a\right)Z_{R}(z_{1},{\rm sign }(z_{2})2z_{s},a,\mu)}{\hat{M}\left(z_{1},{\rm sign}(z_{2})2z_{s},0,0,a\right) Z_{R}(z_{1},z_{2},a,\mu)}\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-2z_{s})\] (13) \[=\frac{\hat{M}_{\overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu \right)}{\hat{M}_{\overline{\rm MS}}\left(z_{1},{\rm sign}(z_{2})2z_{s},0,0, \mu\right)}\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-2z_{s}),\] where \(Z_{R}(z_{1},z_{2},a,\mu)\) is the renormalization factor to convert the lattice matrix element to \(\overline{\rm MS}\) scheme as we have defined in Eq. (11). The zero momentum matrix element in the denominator \(\hat{M}\left(z_{1},{\rm sign}(z_{2})2z_{s},0,0,a\right)\) will be crucial in canceling the \(\ln(z_{1}^{2})\) in the perturbative matrix element when we deduce the matching kernel. No extra non-perturbative effect is introduced in the denominator where the \(z_{2}\) dependence is truncated at \(\text{sign}(z_{2})2z_{s}\). * If \(z_{2}\) is perturbative while \(z_{1}\) and \(z_{1}-z_{2}\) are not, e.g. \(|z_{1}|>2z_{s}\) and \(|z_{2}|<z_{s}\), which is the blue horizontal region in Fig. 4, one follows the similar strategy, \[\frac{\hat{M}_{\overline{\text{MS}}}\left(z_{1},z_{2},0,P^{z},\mu\right)}{ \hat{M}_{\overline{\text{MS}}}\left(\text{sign}(z_{1})2z_{s},z_{2},0,0,\mu \right)}\theta(|z_{1}|-2z_{s})\theta(z_{s}-|z_{2}|).\] (3.12) * If both \(z_{1}\) and \(z_{2}\) are non-perturbative while \(z_{1}-z_{2}\) is perturbative, e.g. 
\(|z_{1}|>z_{s}\), \(|z_{2}|>z_{s}\) and \(|z_{1}-z_{2}|<z_{s}\), which is the blue diagonal region around \(z_{1}\sim z_{2}\) in Fig. 4, one takes \[\frac{\hat{M}_{\overline{\text{MS}}}\left(z_{1},z_{2},0,P^{z},\mu\right)}{ \hat{M}_{\overline{\text{MS}}}\left(z_{1}^{*},z_{2}^{*},0,0,\mu\right)} \theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s})\theta(z_{s}-|z_{1}-z_{2}|),\] (3.13) where \(z_{1}^{*}=z_{s}+(z_{1}-z_{2})\theta(z_{1}-z_{2})\) and \(z_{2}^{*}=z_{s}+(z_{2}-z_{1})\theta(z_{2}-z_{1})\). The coordinate choices \(z_{1}^{*}\) and \(z_{2}^{*}\) apply for \(z_{1}<0\) and \(z_{2}<0\) as well because \((z_{1}\rightarrow-z_{2},z_{2}\rightarrow-z_{1})\) is a symmetry for zero momentum matrix element. The zero momentum matrix element in the denominator will be crucial in canceling the \(\ln((z_{1}-z_{2})^{2})\) in the perturbative matrix element. * For \(|z_{1}|>z_{s}\), \(|z_{2}|>z_{s}\) and \(|z_{1}+z_{2}|<z_{s}\), which is the blue diagonal region around \(z_{1}\sim-z_{2}\) in Fig. 4, one can introduce the ratio as well \[\frac{\hat{M}_{\overline{\text{MS}}}\left(z_{1},z_{2},0,P^{z},\mu\right)}{ \hat{M}_{\overline{\text{MS}}}\left(z_{1}^{**},z_{2}^{**},0,0,\mu\right)} \theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s})\theta(z_{s}-|z_{1}+z_{2}|),\] (3.14) where \(z_{1}^{**}=z_{s}+(z_{1}+z_{2})\theta(z_{1}+z_{2})\) and \(z_{2}^{**}=-z_{s}+(z_{2}+z_{1})\theta(-z_{2}-z_{1})\). This step is for continuity and simplicity. * Finally, for \(|z_{1}|>z_{s}\), \(|z_{2}|>z_{s}\), \(|z_{1}-z_{2}|>z_{s}\) and \(|z_{1}+z_{2}|>z_{s}\), we apply the \(\overline{\text{MS}}\) scheme (Eq. (3.9)), \[\frac{\hat{M}\left(z_{1},z_{2},0,P^{z},a\right)}{Z_{R}(z_{1},z_{ 2},a,\mu)\hat{M}_{\overline{\text{MS}}}\left(\text{sign}(z_{1})z_{s},\text{ sign}(z_{2})2z_{s},0,0,\mu\right)}\theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s}) \theta(|z_{1}-z_{2}|-z_{s})\theta(|z_{1}+z_{2}|-z_{s})\] \[=\frac{\hat{M}_{\overline{\text{MS}}}\left(z_{1},z_{2},0,P^{z}, \mu\right)}{\hat{M}_{\overline{\text{MS}}}\left(\text{sign}(z_{1})z_{s},\text{ sign}(z_{2})2z_{s},0,0,\mu\right)}\theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s}) \theta(|z_{1}+z_{2}|-z_{s}).\] (3.15) The isospin and parity symmetries allow different choices for the denominator: \[\hat{M}_{\overline{\text{MS}}}\left(z_{s},2z_{s},0,0,\mu\right)=\hat{M}_{ \overline{\text{MS}}}\left(2z_{s},z_{s},0,0,\mu\right)=\hat{M}_{\overline{ \text{MS}}}\left(-z_{s},-2z_{s},0,0,\mu\right)=\hat{M}_{\overline{\text{MS}}} \left(-2z_{s},-z_{s},0,0,\mu\right),\] \[\hat{M}_{\overline{\text{MS}}}\left(-z_{s},2z_{s},0,0,\mu\right)=\hat{M}_{ \overline{\text{MS}}}\left(2z_{s},-z_{s},0,0,\mu\right)=\hat{M}_{\overline{ \text{MS}}}\left(z_{s},-2z_{s},0,0,\mu\right)=\hat{M}_{\overline{\text{MS}}} \left(-2z_{s},z_{s},0,0,\mu\right).\] To conclude, the hybrid renormalized matrix element is \[\begin{split}&\hat{M}_{H}(z_{1},z_{2},0,P^{z})=\frac{\hat{M}_{ \overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu\right)}{\hat{M}_{\overline{\rm MS }}\left(z_{1},z_{2},0,0,\mu\right)}\left(\theta(2z_{s}-|z_{1}|)\theta(z_{s}-|z _{2}|)+\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-z_{s})\theta(2z_{s}-|z_{2}|)\right) \\ &+\frac{\hat{M}_{\overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu \right)}{\hat{M}_{\overline{\rm MS}}\left(z_{1},\text{sign}(z_{2})2z_{s},0,0, \mu\right)}\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-2z_{s})+\frac{\hat{M}_{ \overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu\right)}{\hat{M}_{\overline{\rm MS }}\left(\text{sign}(z_{1})2z_{s},z_{2},0,0,\mu\right)}\theta(|z_{1}|-2z_{s}) \theta(z_{s}-|z_{2}|)\\ &+\frac{\hat{M}_{\overline{\rm 
MS}}\left(z_{1},z_{2},0,P^{z},\mu \right)}{\hat{M}_{\overline{\rm MS}}\left(z_{s}+(z_{1}-z_{2})\theta(z_{1}-z_{2 }),z_{s}+(z_{2}-z_{1})\theta(z_{2}-z_{1}),0,0,\mu\right)}\theta(|z_{1}|-z_{s} )\theta(|z_{2}|-z_{s})\theta(z_{s}-|z_{1}-z_{2}|)\\ &+\frac{\hat{M}_{\overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu \right)}{\hat{M}_{\overline{\rm MS}}\left(z_{s}+(z_{1}+z_{2})\theta(z_{1}+z_{2 }),-z_{s}+(z_{2}+z_{1})\theta(-z_{2}-z_{1}),0,0,\mu\right)}\theta(|z_{1}|-z_{s })\theta(|z_{2}|-z_{s})\theta(z_{s}-|z_{1}+z_{2}|)\\ &+\frac{\hat{M}_{\overline{\rm MS}}\left(z_{1},z_{2},0,P^{z},\mu \right)}{\hat{M}_{\overline{\rm MS}}\left(\text{sign}(z_{1})z_{s},\text{sign} (z_{2})2z_{s},0,0,\mu\right)}\theta(|z_{1}|-z_{s})\theta(|z_{1}-z_{2}|-z_{s}) \theta(|z_{1}+z_{2}|-z_{s}),\end{split} \tag{42}\] where \(\hat{M}_{\overline{\rm MS}}\) is the renormalized lattice matrix element in \(\overline{\rm MS}\) scheme defined in Eq. (29). ### Matching in hybrid scheme The hybrid scheme quasi baryon DA is obtained through the Fourier transformation \[\tilde{\Phi}_{H}(x_{1},x_{2},P^{z})=(P^{z})^{2}\int_{-\infty}^{+\infty}\frac{ dz_{1}}{2\pi}\frac{dz_{2}}{2\pi}e^{ix_{1}P^{z}z_{1}+ix_{2}P^{z}z_{2}}\hat{M}_{H} \left(z_{1},z_{2},0,P^{z}\right). \tag{43}\] The factorization formula is \[\tilde{\Phi}_{H}(x_{1},x_{2},P^{z})=\int dy_{1}dy_{2}\mathcal{C}_{H}(x_{1},x_{ 2},y_{1},y_{2},P^{z},\mu)\Phi_{L}(y_{1},y_{2},\mu), \tag{44}\] where \(\Phi_{L}(y_{1},y_{2},\mu)\) is the baryon LCDA in Eq. (4). \(\mathcal{C}_{H}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)\) is the matching kernel in hybrid scheme, \[\mathcal{C}_{H}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)=\mathcal{C}_{\overline{\rm MS }}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)-\delta\mathcal{C}_{H}(x_{1},x_{2},y_{1}, y_{2},P^{z},\mu), \tag{45}\] where \(\mathcal{C}_{\overline{\rm MS}}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)\) is the \(\overline{\rm MS}\) scheme matching kernel in Eq. (16). \(\delta\mathcal{C}_{H}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)\) is the hybrid counterterm and the NLO result can be obtained through the Fourier transformation [39] \[\delta\mathcal{C}_{H}^{(1)}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)=(P^{z})^{2} \int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2\pi}e^{i(x_{1}-y_{1})P^{z}z_{1}+i(x_{2} -y_{2})P^{z}z_{2}}\delta\hat{M}_{H}^{(1)}\left(z_{1},z_{2},\mu\right). 
\tag{46}\] \(\delta\hat{M}_{H}^{(1)}\left(z_{1},z_{2},\mu\right)\) is the perturbative correction in the hybrid scheme at NLO: \[\begin{split}&\delta\hat{M}_{H}^{(1)}\left(z_{1},z_{2},\mu\right)=\hat{M}_{p}^{(1)}\left(z_{1},z_{2},0,0,\mu\right)\left(\theta(2z_{s}-|z_{1}|)\theta(z_{s}-|z_{2}|)+\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-z_{s})\theta(2z_{s}-|z_{2}|)\right)\\ &+\hat{M}_{p}^{(1)}\left(z_{1},\text{sign}(z_{2})2z_{s},0,0,\mu\right)\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-2z_{s})+\hat{M}_{p}^{(1)}\left(\text{sign}(z_{1})2z_{s},z_{2},0,0,\mu\right)\theta(|z_{1}|-2z_{s})\theta(z_{s}-|z_{2}|)\\ &+\hat{M}_{p}^{(1)}\left(z_{s}+(z_{1}-z_{2})\theta(z_{1}-z_{2}),z_{s}+(z_{2}-z_{1})\theta(z_{2}-z_{1}),0,0,\mu\right)\theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s})\theta(z_{s}-|z_{1}-z_{2}|)\\ &+\hat{M}_{p}^{(1)}\left(z_{s}+(z_{1}+z_{2})\theta(z_{1}+z_{2}),-z_{s}+(z_{2}+z_{1})\theta(-z_{2}-z_{1}),0,0,\mu\right)\theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s})\theta(z_{s}-|z_{1}+z_{2}|)\\ &+\hat{M}_{p}^{(1)}\left(\text{sign}(z_{1})z_{s},\text{sign}(z_{2})2z_{s},0,0,\mu\right)\theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s})\theta(|z_{1}-z_{2}|-z_{s})\theta(|z_{1}+z_{2}|-z_{s}).\end{split} \tag{47}\] \(\hat{M}_{p}^{(1)}\left(z_{1},z_{2},0,0,\mu\right)\) is the NLO result of Eq. (15), \[\hat{M}_{p}^{(1)}\left(z_{1},z_{2},0,0,\mu\right)=\frac{\alpha_{s}C_{F}}{2\pi}\left[\frac{7}{8}\ln\left(\frac{z_{1}^{2}\mu^{2}e^{2\gamma_{E}}}{4}\right)+\frac{7}{8}\ln\left(\frac{z_{2}^{2}\mu^{2}e^{2\gamma_{E}}}{4}\right)+\frac{3}{4}\ln\left(\frac{(z_{1}-z_{2})^{2}\mu^{2}e^{2\gamma_{E}}}{4}\right)+4\right].\] One performs the Fourier transformation on the different regions and obtains the hybrid counterterm: \[\begin{split}\delta\mathcal{C}_{H}^{(1)}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)=(P^{z})^{2}\frac{\alpha_{s}C_{F}}{2\pi}\Bigg{[}&I_{\rm H}[(x_{1}-y_{1})P^{z},(x_{2}-y_{2})P^{z}]+I_{\rm HSI}[(x_{1}-y_{1})P^{z},(x_{2}-y_{2})P^{z}]\\ &+I_{\rm HSII}[(x_{1}-y_{1})P^{z},(x_{2}-y_{2})P^{z}]+I_{\rm HSIII}[(x_{1}-y_{1})P^{z},(x_{2}-y_{2})P^{z}]+I_{\rm HSIV}[(x_{1}-y_{1})P^{z},(x_{2}-y_{2})P^{z}]\\ &+I_{\rm S}[(x_{1}-y_{1})P^{z},(x_{2}-y_{2})P^{z}]+\delta[(x_{1}-y_{1})P^{z}]\delta[(x_{2}-y_{2})P^{z}]\left(\frac{5}{2}\ln\left(\frac{\mu^{2}e^{2\gamma_{E}}}{4}\right)+4\right)\Bigg{]},\end{split} \tag{3.22}\] where all the integrated formulas are collected in Appendix B. Since the normalization with respect to the local matrix element is performed and the ratio is taken with respect to the zero-momentum matrix element at short distances, the matching in the hybrid scheme preserves the normalization requirement. That is, if the light-cone DA is normalized, \(\int dy_{1}dy_{2}\Phi_{L}(y_{1},y_{2},\mu)=1\), the quasi-DA obtained through matching in the hybrid scheme is also normalized, \(\int dx_{1}dx_{2}\tilde{\Phi}_{H}(x_{1},x_{2},P^{z})=1\). As shown in Appendix C, if the normalization is preserved, the hybrid matching kernel at one loop is a pure double plus function as defined in Eq. (2.17), \[\mathcal{C}_{H}^{(1)}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)=\left[\mathcal{C}_{\overline{\rm MS}}^{(1)}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)-\delta\mathcal{C}_{H}^{(1)}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)\right]_{\oplus}. \tag{3.23}\] A necessary condition for a pure double plus function is that there are no linearly decaying terms \(\sim\frac{1}{|x_{1}^{\prime}|}\) and \(\frac{1}{|x_{2}^{\prime}|}\) for \(|x_{1}^{\prime}|\gg 1\) and \(|x_{2}^{\prime}|\gg 1\), respectively.
One can check that those linearly decaying terms in \(\mathcal{C}_{\overline{\rm MS}}^{(1)}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)\) are explicitly cancelled by the linearly decaying terms in \(\delta\mathcal{C}_{H}^{(1)}(x_{1},x_{2},y_{1},y_{2},P^{z},\mu)\), which is equivalent to the cancellation of the log terms \(\ln(z_{1}^{2})\), \(\ln(z_{2}^{2})\) and \(\ln((z_{1}-z_{2})^{2})\) through the ratio at short distance in coordinate space. For readers' convenience, a Mathematica notebook on the hybrid scheme matching kernel is attached to the source package on arXiv. ## 4 Summary To summarize, this paper continues the work on a direct method to extract the shape of the LCDA of a light baryon through the simulation of equal-time correlation functions within the framework of large-momentum effective theory. To clear the obstacles in renormalizing the quasi-DAs, we have developed a hybrid renormalization scheme. By combining the self-renormalization at large spatial separations with the ratio scheme at short spatial separations, the hybrid renormalization scheme removes the UV divergences without introducing extra nonperturbative effects. The corresponding equal-time correlation functions have been calculated in coordinate space to next-to-leading order, and the matching kernel between the quasi-DAs and the LCDAs has been derived. In the calculation, the \(\Lambda\) baryon has been taken as an example to demonstrate the scheme, but the scheme can be straightforwardly generalized to other octet and decuplet baryons. This approach offers a practical methodology for computing LCDAs that can be carried out in lattice calculations, and a preliminary analysis using this approach is now being conducted by members of the Lattice Parton Collaboration. ## Acknowledgments We would like to thank Jun Zeng, Zhifu Deng, Minhuan Chu, Jun Hua, and Qi-An Zhang for their insightful comments and invaluable discussions. This work is supported in part by the National Natural Science Foundation of China under Grants No. 12125503, No. U2032102, No. 12061131006, and No. 12335003. Y.S. is partially supported by the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, under contract No. DE-AC02-06CH11357. Part of the computations in this paper was run on the Siyuan-1 cluster supported by the Center for High Performance Computing at Shanghai Jiao Tong University. ## Appendix A One-loop calculation of the spatial correlator in coordinate space In this section, we present the one-loop results for the spatial correlator in dimensional regularization with \(\overline{\text{MS}}\) renormalization. The results are gauge invariant, and the Feynman gauge is adopted in the practical calculation. As shown in Fig. 1, there are twelve distinct diagrams to calculate, which can be divided into three categories: quark-quark, quark-Wilson line, and Wilson line-Wilson line. We take Fig. 1(e) as an example to illustrate the calculation, for which the one-loop correction reads \[\widetilde{O}_{e}=\left(\psi_{1}\left(z_{1}\right)\left(ig_{s}\int d^{d}\eta_{1}\bar{\psi}_{1}\left(\eta_{1}\right)\not{A}\left(\eta_{1}\right)\psi_{1}\left(\eta_{1}\right)\right)\right){}^{T}\left(-ig_{s}\int_{0}^{1}dt_{1}z_{1}\cdot A\left(t_{1}z_{1}\right)\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0). \tag{12}\] The color indices and the parameter \(\left(\frac{\mu^{2}}{e^{\ln(4\pi)-\gamma_{E}}}\right)^{\epsilon}\) are not written out explicitly.
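The derivation below repeatedly uses the Schwinger parameterization to exponentiate the propagator denominators. As a reminder, we quote the standard identity here for convenience (it is not specific to this paper): \[\frac{1}{A^{\alpha}}=\frac{1}{\Gamma(\alpha)}\int_{0}^{\infty}d\sigma\,\sigma^{\alpha-1}e^{-\sigma A},\qquad\operatorname{Re}A>0,\ \operatorname{Re}\alpha>0,\] applied to denominators of the form \(\left(-(x-y)^{2}+i\epsilon\right)^{\alpha}\); it turns the coordinate-space propagators into Gaussian integrals on which one can complete the square.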
The gluon and quark propagators in the coordinate space are \[G(x-y)=\frac{\Gamma(d/2-1)}{4\pi^{d/2}}\frac{-g_{\mu\nu}}{\left( -(x-y)^{2}+i\epsilon\right)^{d/2-1}}, \tag{13}\] \[Q(x-y)=\frac{\Gamma(d/2)}{2\pi^{d/2}}\frac{i\left(\not{x}-\not{ y}\right)}{\left(-(x-y)^{2}+i\epsilon\right)^{d/2}}. \tag{14}\] Following the tedious but standard routine: Schwinger parameterization, plus field Fourier transformation, completing the square, parameter shifting, Dirac algebra simplification, and parameter integration, one arrives at \[\widetilde{O}_{e}=g_{s}^{2}\frac{(-i)^{d/2-1}}{8\pi^{d/2}}\int d^{ d}k_{1}\int_{0}^{1}dt_{1}\int_{0}^{\infty}d\sigma_{1}\int_{0}^{\infty}d \sigma_{2}\sigma_{1}^{d/2-1}\sigma_{2}^{d/2-2}\left(\sigma_{1}+\sigma_{2} \right)^{-d/2} \tag{15}\] \[\times e^{\frac{i\left(4\left(\sigma_{1}+\sigma_{2}t_{1}\right)z_{1 }\left(k_{1}\cdot n_{z}\right)+k_{1}^{2}-4\sigma_{1}\sigma_{2}\left(t_{1}-1 \right)^{2}\left(-z_{1}^{2}\right)\right)}{4\left(\sigma_{1}+\sigma_{2} \right)}}\psi_{1}^{T}\left(k_{1}\right)\left(\left(-z_{1}^{2}\right)-\frac{z_{ 1}\left(k_{1}\cdot n_{z}\right)+\left(\sigma_{1}+\sigma_{2}t_{1}\right)\left( -z_{1}^{2}\right)}{\sigma_{1}+\sigma_{2}}\right)\Gamma\psi_{2}\left(z_{2} \right)\psi_{3}(0),\] where \(\sigma_{1}\) and \(\sigma_{2}\) are Schwinger parameters, and \(k_{1}\) is from the Fourier transformation of \(\psi_{1}^{T}(z_{1})\). Note that terms like \(k_{1}^{2}\) or \(\not{k}_{1}\psi(z)\) have been neglected in the calculation due to the equation of motion. By changing \(\left(\sigma_{1},\sigma_{2}\right)\) to \(\left(\sigma,\eta\right)\) with \(\sigma_{1}=\frac{\sigma}{\eta}\) and \(\sigma_{2}=\frac{\sigma}{1-\eta}\), the above result can be rearranged as \[\widetilde{O}_{e} = -g_{s}^{2}\frac{(-i)^{d/2-1}}{8\pi^{d/2}}\int d^{d}k_{1}\int_{0}^ {1}dt_{1}\int_{0}^{1}d\eta\int_{0}^{\infty}d\sigma\sigma^{\frac{d}{2}-3}\left( \left(1-\eta\right)z_{1}\left(k_{1}\cdot n_{z}\right)+\sigma\left(t_{1}-1 \right)\left(-z_{1}^{2}\right)\right) \tag{16}\] \[\times e^{-iz_{1}n_{z}\left(k_{1}\left(-\left(\eta(t_{1}-1) \right)-1\right)+\sigma\left(t_{1}-1\right)^{2}z_{1}n_{z}\right)}\psi_{1}^{T} \left(k_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0),\] and then one can calculate the two parts. For the term involving \(\left(-z_{1}^{2}\right)\) in Eq. (16), we further define \[\widetilde{O}_{e1}= -g_{s}^{2}\frac{1}{8\pi^{d/2}}\Gamma(d/2-1)\int d^{d}k_{1}\int_{ 0}^{1}dt_{1}\int_{0}^{1}d\eta\left(1-t_{1}\right)^{3-d}\left(z_{1}^{2} \right)^{2-d/2} \tag{17}\] \[\times e^{iz_{1}k_{1}\left(\eta(t_{1}-1)+1\right)}\psi_{1}^{T} \left(k_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0),\] and we can have the simplified form \[\widetilde{O}_{e1}=-g_{s}^{2}\frac{1}{8\pi^{d/2}}\Gamma(d/2-1)\left(z_{1}^{2} \right)^{2-d/2}\int_{0}^{1}dt_{0}\int_{0}^{1}d\eta\left(t_{0}\right)^{3-d}\psi_ {1}^{T}\left(\left(1-\eta t_{0}\right)z_{1}\right)\Gamma\psi_{2}\left(z_{2} \right)\psi_{3}(0), \tag{100}\] with \(t_{0}=1-t_{1}\). The \(t_{0}\to 0\) corresponds to a UV divergence since that divergence is regularized by \(d<4\) and one end of the Wilson line approaches \(z_{1}\) when \(t_{0}\to 0\). One can separate this divergence from the rest by using \(\psi^{T}((1-\eta t_{0})z_{1})=\left(\psi^{T}((1-\eta t_{0})z_{1})-\psi^{T}(z_{ 1})\right)+\psi^{T}(z_{1})\). 
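To make the isolation of the divergence explicit (a short added remark): the difference \(\psi^{T}((1-\eta t_{0})z_{1})-\psi^{T}(z_{1})\) vanishes linearly as \(t_{0}\to 0\) and thus tempers the \(t_{0}^{3-d}\) factor, while the remaining \(\psi^{T}(z_{1})\) piece carries the pure ultraviolet pole, \[\int_{0}^{1}dt_{0}\,t_{0}^{3-d}=\frac{1}{4-d}=\frac{1}{2\epsilon_{\rm UV}},\qquad d=4-2\epsilon_{\rm UV}.\]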
Then it is straightforward to obtain the results for these two parts \[\widetilde{O}_{e11} = \tag{101}\] \[\widetilde{O}_{e12} = \frac{\alpha_{s}C_{F}}{2\pi}\int_{0}^{1}d\eta\left(\frac{1-\eta} {\eta}\right)_{+}\left(\psi_{1}^{T}\left((1-\eta)z_{1}\right)\Gamma\psi_{2} \left(z_{2}\right)\psi_{3}(0). \tag{102}\] For the \(z_{1}(k_{1}\cdot n_{z})\) term in Eq. (100) \[\widetilde{O}_{e2}= -g_{s}^{2}\frac{(-i)^{d/2-1}}{8\pi^{d/2}}\int d^{d}k_{1}\int_{0}^ {1}dt_{1}\int_{0}^{1}d\eta\int_{0}^{\infty}d\sigma\sigma^{\frac{d}{2}-3}\left( 1-\eta\right)z_{1}(k_{1}\cdot n_{z}) \tag{103}\] \[\times e^{-iz_{1}n_{z}\cdot\left(k_{1}\left(-(\eta(t_{1}-1))-1 \right)+\sigma(t_{1}-1)^{2}z_{1}n_{z}\right)}\psi_{1}^{T}\left(k_{1}\right) \Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0),\] there is an IR divergence: \[\widetilde{O}_{e2}= -C_{F}\frac{g_{s}^{2}}{8\pi^{2}}\int_{0}^{1}d\eta\left(\left( \ln\left(\frac{1}{4}\mu^{2}z_{1}^{2}e^{2\gamma_{E}}\right)+\frac{1}{\epsilon _{\rm IR}}+2\right)\left(\frac{1-\eta}{\eta}\right)_{+}+\left(\frac{2\ln\eta }{\eta}\right)_{+}\right) \tag{104}\] \[\times\psi_{1}^{T}\left(z_{1}(1-\eta)\right)\Gamma\psi_{2}\left(z _{2}\right)\psi_{3}(0).\] Collecting all these pieces and removing the UV divergence in the \(\overline{\rm MS}\) scheme give the final result: \[\widetilde{O}_{e} = \tag{105}\] \[-\frac{\alpha_{s}C_{F}}{2\pi}\int_{0}^{1}d\eta\left(\frac{1-\eta} {\eta}\right)_{+}\left(\ln\left(\frac{1}{4}\mu^{2}_{\rm IR}z_{1}^{2}e^{2\gamma _{E}}\right)+\frac{1}{\epsilon_{\rm IR}}+1\right)\psi_{1}^{T}\left((1-\eta)z_ {1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0)\] \[-\frac{\alpha_{s}C_{F}}{\pi}\int_{0}^{1}d\eta\left(\frac{\ln\eta }{\eta}\right)_{+}\psi_{1}^{T}\left((1-\eta)z_{1}\right)\Gamma\psi_{2}\left(z _{2}\right)\psi_{3}(0),\] where \(\alpha_{s}=\frac{g_{s}^{2}}{4\pi}\). We have checked that after making a Fourier transformation, the above results are consistent with Ref. [13] in momentum space. 
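The plus distributions \(\left(\frac{1-\eta}{\eta}\right)_{+}\) and \(\left(\frac{\ln\eta}{\eta}\right)_{+}\) appearing above are straightforward to evaluate numerically via the subtraction in the plus-function definition, Eq. (12) of Sec. 2. A minimal sketch (the test profile \(F\) is our own toy stand-in, not the actual matrix element):

```python
import numpy as np

def plus_convolution(G, F, n=200_000):
    # int_0^1 du [G(u)]_+ F(u) = int_0^1 du G(u) * (F(u) - F(0)),
    # evaluated with a midpoint rule so that u = 0 is never sampled.
    u = (np.arange(n) + 0.5) / n
    return np.mean(G(u) * (F(u) - F(0.0)))

F = lambda u: np.cos(0.7 * u)  # smooth toy profile for the test function
print(plus_convolution(lambda u: (1.0 - u) / u, F))
print(plus_convolution(lambda u: np.log(u) / u, F))
```

The subtraction of \(F(0)\) renders both integrands integrable at \(\eta\to 0\), which is exactly what makes the plus-distribution terms in the one-loop results finite.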
Then, in the same manner, results for the quark-Wilson-line diagrams are derived as: \[\widetilde{O}_{d} =\frac{\alpha_{s}C_{F}}{8}\left(L_{12}^{\rm UV}-L_{1}^{\rm UV} \right)\psi_{1}^{T}\left(z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0) \tag{106}\] \[-\frac{\alpha_{s}C_{F}}{4\pi}\int_{0}^{1}d\eta\psi_{1}^{T}\left( (1-\eta)z_{1}+\eta z_{2}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0) \left\{\left(L_{12}^{\rm IR}+1+\frac{1}{\epsilon_{\rm IR}}\right)\left(\frac{1 -\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\}\] \[+\frac{\alpha_{s}C_{F}}{4\pi}\int_{0}^{1}d\eta\psi_{1}^{T}\left( (1-\eta)z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0)\left\{\left(L _{1}^{\rm IR}+1+\frac{1}{\epsilon_{\rm IR}}\right)\left(\frac{1-\eta}{\eta} \right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\},\] \[\widetilde{O}_{g} =\frac{\alpha_{s}C_{F}}{8}\left(L_{12}^{\text{UV}}-L_{2}^{\text{UV}} \right)\psi_{1}^{T}\left(z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0) \tag{14}\] \[-\frac{\alpha_{s}C_{F}}{4\pi}\int_{0}^{1}d\eta\psi_{1}^{T}\left(z _{1}\right)\Gamma\psi_{2}\left(\eta z_{1}+(1-\eta)z_{2}\right)\psi_{2}\left(z_ {2}\right)\psi_{3}(0)\left\{\left(L_{12}^{\text{IR}}+1+\frac{1}{\epsilon_{ \text{IR}}}\right)\left(\frac{1-\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{ \eta}\right)_{+}\right\}\] \[+\frac{\alpha_{s}C_{F}}{4\pi}\int_{0}^{1}d\eta\psi_{1}^{T}\left(z _{1}\right)\Gamma\psi_{2}\left((1-\eta)z_{2}\right)\psi_{3}(0)\left\{\left(L_ {2}^{\text{IR}}+1+\frac{1}{\epsilon_{\text{IR}}}\right)\left(\frac{1-\eta}{ \eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\},\] \[\widetilde{O}_{e} =\frac{\alpha_{s}C_{F}}{4\pi}L_{1}^{\text{UV}}\psi_{1}^{T}\left( z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0) \tag{15}\] \[-\frac{\alpha_{s}C_{F}}{2\pi}\int_{0}^{1}d\eta\psi_{1}^{T}\left( \left(1-\eta\right)z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0) \left\{\left(L_{1}^{\text{IR}}+1+\frac{1}{\epsilon_{\text{IR}}}\right)\left( \frac{1-\eta}{\eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\},\] \[\widetilde{O}_{h} =\frac{\alpha_{s}C_{F}}{8\pi}L_{1}^{\text{UV}}\psi_{1}^{T}\left( z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0) \tag{16}\] \[-\frac{\alpha_{s}C_{F}}{4\pi}\int_{0}^{1}d\eta\psi_{1}^{T}\left( z_{1}\right)\Gamma\psi_{2}\left((1-\eta)z_{2}\right)\psi_{3}(0)\left\{\left(L_ {2}^{\text{IR}}+1+\frac{1}{\epsilon_{\text{IR}}}\right)\left(\frac{1-\eta}{ \eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\},\] \[\widetilde{O}_{i} =\frac{\alpha_{s}C_{F}}{8\pi}L_{2}^{\text{UV}}\psi_{1}^{T}\left( z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0) \tag{17}\] \[-\frac{\alpha_{s}C_{F}}{2\pi}\int_{0}^{1}d\eta\psi_{1}^{T}\left( z_{1}\right)\Gamma\psi_{2}\left((1-\eta)z_{2}\right)\psi_{3}(0)\left\{\left(L_ {2}^{\text{IR}}+1+\frac{1}{\epsilon_{\text{IR}}}\right)\left(\frac{1-\eta}{ \eta}\right)_{+}+2\left(\frac{\ln\eta}{\eta}\right)_{+}\right\},\] It should be mentioned that there are both IR and UV singularities in cases (e, h, f, i), while there are no UV divergences for cases (d, g). The last category is the Wilson line-Wilson line vertex, corresponding to Fig- 1 (j, k, l). 
After similar calculations, the one-loop results can be given as \[\widetilde{O}_{k} =\frac{\alpha_{s}C_{F}}{2\pi}\left(L_{1}^{\text{UV}}+2\right)\psi _{1}^{T}\left(z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0), \tag{18}\] \[\widetilde{O}_{l} =\frac{\alpha_{s}C_{F}}{2\pi}\left(L_{2}^{\text{UV}}+2\right)\psi _{1}^{T}\left(z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right)\psi_{3}(0),\] (19) \[\widetilde{O}_{j} =-\frac{\alpha_{s}C_{F}}{4\pi}\left(L_{1}^{\text{UV}}+L_{2}^{ \text{UV}}-L_{12}^{\text{UV}}+2\right)\psi_{1}^{T}\left(z_{1}\right)\Gamma \psi_{2}\left(z_{2}\right)\psi_{3}(0). \tag{20}\] Note that in these cases, only UV divergences arise when the two ends of the gluons coincide with each other. The cases (e, f) and cases (d, g, h, i) have different color coefficients [13]. These color differences will also appear in the following cases. More precisely, for cases (e, f, k, l), the color algebra gives the same results as the meson case, which is \(C_{F}\). For other cases, the color parameter is \(-\frac{C_{F}}{2}\). The last pattern, quark-quark, is shown in Fig-1 (a, b, c). Following the same routine, these results can be written down directly \[\widetilde{O}_{a}= -\frac{\alpha_{s}C_{F}}{4\pi}\int_{0}^{1}d\eta_{1}\int_{0}^{1-\eta_ {1}}d\eta_{2}\left(L_{12}^{\rm IR}-3+\frac{1}{\epsilon_{\rm IR}}\right)\psi_{1 }^{T}\left(z_{1}\left(1-\eta_{1}\right)+z_{2}\eta_{1}\right)\Gamma\psi_{2} \left(z_{2}\left(1-\eta_{2}\right)+z_{1}\eta_{2}\right)\psi_{3}(0), \tag{11}\] \[\widetilde{O}_{b}= -\frac{\alpha_{s}C_{F}}{8\pi}\int_{0}^{1}d\eta_{1}\int_{0}^{1-\eta _{1}}d\eta_{2}\left(L_{1}^{\rm IR}-1+\frac{1}{\epsilon_{\rm IR}}\right)\psi_{1 }^{T}\left(\left(1-\eta_{1}\right)z_{1}\right)\Gamma\psi_{2}\left(z_{2}\right) \psi_{3}\left(\eta_{2}z_{1}\right),\] (12) \[\widetilde{O}_{c}= -\frac{\alpha_{s}C_{F}}{8\pi}\int_{0}^{1}d\eta_{1}\int_{0}^{1- \eta_{1}}d\eta_{2}\left(L_{2}^{\rm IR}-1+\frac{1}{\epsilon_{\rm IR}}\right) \psi_{1}^{T}\left(z_{1}\right)\Gamma\psi_{2}\left(\left(1-\eta_{1}\right)z_{2} \right)\psi_{3}\left(\eta_{2}z_{2}\right). \tag{13}\] The UV divergences have been subtracted in all these cases. It should be mentioned that case (a) has an extra finite part compared to case (b) and case (c). Putting all these contributions together and sandwiching them between the vacuum state \(\left\langle 0\right|\) and the lowest-order Fock state \(\left|uds\right\rangle\), one can obtain the results in Eq. (10). Additionally, the one-loop correction for the local matrix elements can also be given. In this case, only the quark-quark category needs to be considered. Since the \(z^{2}\) is equal to \(0\) in the local case, the local matrix elements of quasi-DA and LCDA can be obtained by a similar calculation procedure. The corresponding local matrix element can be given as \[M_{p}(0,0,0,P^{z},\mu)=\left(1-\frac{\alpha_{s}C_{F}}{4\pi}\frac{1}{\epsilon_{ \rm IR}}\right)M_{0}\left(0,0,0,P^{z},\mu\right), \tag{14}\] in which the UV divergences have been subtracted. ## Appendix B Explicit expressions for the hybrid counterterm In the hybrid renormalization scheme, the counterterm in the matching kernel can be split into different regions, such as \(I_{\rm H}\), \(I_{\rm HSI}\) and \(I_{\rm S}\) in Eq. (32). In this appendix, we provide the explicit expressions for these terms. Those expressions can also be found in a Mathematica notebook in the source package on arXiv. 
We first give the necessary master formulas: \[\begin{split}\mathrm{I}\left(\left\{L_{2},L_{1}\right\},p\right) \equiv&\int_{L_{1}}^{L_{2}}\frac{dz}{2\pi}e^{ipz}\ln[z^{2}]\\ =&-\frac{i\left(2\left(\gamma_{E}+\log\left(-iL_{2}p \right)\right)+\left(-1+e^{iL_{2}p}\right)\log\left(L_{2}^{2}\right)+2\Gamma \left(0,-ipL_{2}\right)\right)}{2\pi p}\\ &+\frac{i\left(2\left(\gamma_{E}+\log\left(-iL_{1}p\right)\right) +\left(-1+e^{iL_{1}p}\right)\log\left(L_{1}^{2}\right)+2\Gamma\left(0,-ipL_{1 }\right)\right)}{2\pi p},\end{split} \tag{15}\] \[\mathrm{I}0\left(\left\{L_{2},L_{1}\right\},p\right)\equiv\int_{L_{1}}^{L_{2}} \frac{dz}{2\pi}e^{ipz}=-\frac{i\left(-1+e^{iL_{2}p}\right)}{2\pi p}+\frac{i \left(-1+e^{iL_{1}p}\right)}{2\pi p}. \tag{16}\] One can obtain the following results based on the above master integrals for convenience (\(L>0\)) \[\begin{split}\mathrm{I}\mathrm{t}\left(\left\{L,-L\right\},p \right)&\equiv i[\mathrm{I}1\left(\left\{L,0\right\},p\right)- \mathrm{I}1\left(\left\{0,-L\right\},p\right)]\\ &=\frac{2(-\mathrm{Ci}(Lp)+\log(L)\cos(Lp)+\log(p)+\gamma_{E})}{ \pi p},\end{split} \tag{17}\] \[\mathrm{I}0\left(\left\{\infty,L\right\},p\right)=\frac{\delta(p)}{2}+\frac{i }{2\pi p}+\frac{i\left(-1+e^{iLp}\right)}{2\pi p}, \tag{18}\] \[I_{\rm HSI}[p_{1},p_{2}] \equiv\int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2\pi}e^{ip_{1}z_{1}+ip_{2} z_{2}}\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-2z_{s})\] (B.8) \[\qquad\qquad\times\left[\frac{7}{8}\ln\left(z_{1}^{2}\right)+ \frac{7}{8}\ln\left((2z_{s})^{2}\right)+\frac{3}{4}\ln\left((z_{1}-2z_{s}{\rm sign }[z_{2}])^{2}\right)\right]\] \[=\frac{1}{8}\left[6e^{2ip_{1}z_{s}}{\rm I}1\left(\left\{-z_{s},-3 z_{s}\right\},p_{1}\right){\rm I}0\left(\left\{\infty,2z_{s}\right\},p_{2} \right)+6e^{-2ip_{1}z_{s}}{\rm I}1\left(\left\{3z_{s},z_{s}\right\},p_{1} \right){\rm I}0\left(\left\{-2z_{s},-\infty\right\},p_{2}\right)\right.\] \[\left.+7\left({\rm I}0\left(\left\{\infty,-\infty\right\},p_{1} \right)-{\rm I}0\left(\left\{2z_{s},-2z_{s}\right\},p_{1}\right)\left(\log \left(4z_{s}^{2}\right){\rm I}0\left(\left\{z_{s},-z_{s}\right\},p_{2}\right)+ {\rm I}1\left(\left\{z_{s},-z_{s}\right\},p_{2}\right)\right)\right],\] \[I_{\rm HSI}[p_{1},p_{2}] \equiv\int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2\pi}e^{ip_{1}z_{1}+ ip_{2}z_{2}}\theta(z_{s}-|z_{1}|)\theta(|z_{2}|-2z_{s})\] (B.9) \[\qquad\qquad\times\left[\frac{7}{8}\ln\left(z_{1}^{2}\right)+ \frac{7}{8}\ln\left((2z_{s})^{2}\right)+\frac{3}{4}\ln\left((z_{1}-2z_{s}{\rm sign }[z_{2}])^{2}\right)\right]\] \[=\frac{1}{8}\left[6e^{2ip_{1}z_{s}}{\rm I}1\left(\left\{-z_{s},-3 z_{s}\right\},p_{1}\right){\rm I}0\left(\left\{\infty,2z_{s}\right\},p_{2} \right)+6e^{-2ip_{1}z_{s}}{\rm I}1\left(\left\{3z_{s},z_{s}\right\},p_{1} \right){\rm I}0\left(\left\{-2z_{s},-\infty\right\},p_{2}\right)\right.\] \[\left.+7\left({\rm I}0\left(\left\{\infty,-\infty\right\},p_{2} \right)-{\rm I}0\left(\left\{2z_{s},-2z_{s}\right\},p_{2}\right)\right)\left( \log\left(4z_{s}^{2}\right){\rm I}0\left(\left\{z_{s},-z_{s}\right\},p_{1} \right)+{\rm I}1\left(\left\{z_{s},-z_{s}\right\},p_{1}\right)\right)\right],\] \[I_{\rm HSI}[p_{1},p_{2}] \equiv\int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2\pi}e^{ip_{1}z_{1}+ ip_{2}z_{2}}\theta(|z_{1}|-2z_{s})\theta(z_{s}-|z_{2}|)\] (B.10) \[\times\left[\frac{7}{8}\ln\left((2z_{s})^{2}\right)+\frac{7}{8}\ln \left(z_{2}^{2}\right)+\frac{3}{4}\ln\left((\left\{\rm sign}[z_{1}|2z_{s}-z_{2} \right)^{2}\right)\right]\] \[=\frac{1}{8}\left[6e^{2ip_{1}z_{s}}{\rm I}1\left(\left\{-z_{s},-3 z_{s}\right\},p_{1}\right){\rm 
I}0\left(\left\{\infty,2z_{s}\right\},p_{2} \right)+6e^{-2ip_{1}z_{s}}{\rm I}1\left(\left\{3z_{s},z_{s}\right\},p_{1} \right){\rm I}0\left(\left\{-2z_{s},-\infty\right\},p_{2}\right)\right.\] \[\left.+7\left({\rm I}0\left(\left\{\infty,-\infty\right\},p_{1} \right)-{\rm I}0\left(\left\{2z_{s},-2z_{s}\right\},p_{1}\right)\left(\log \left(4z_{s}^{2}\right){\rm I}0\left(\left\{z_{s},-z_{s}\right\},p_{2}\right)+ {\rm I}1\left(\left\{z_{s},-z_{s}\right\},p_{1}\right)\right)\right],\] \[I_{\rm HSI}[p_{1},p_{2}] \equiv\int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2\pi}e^{ip_{1}z_{1}+ ip_{2}z_{2}}\theta(|z_{1}|-2z_{s})\theta(z_{s}-|z_{2}|)\] (B.11) \[\times\left[\frac{7}{8}\ln\left((2z_{s})^{2}\right)+\frac{7}{8}\ln \left(z_{2}^{2}\right)+\frac{3}{4}\ln\left((\left\left\{\rm sign}[z_{1}|2z_{s}- z_{2}\right)^{2}\right)\right]\right.\] \[\left.+7\left({\rm I}0\left(\left\{\infty,-\infty\right\},p_{1} \right)-{\rm I}0\left(\left\{2z_{s},-2z_{s}\right\},p_{1}\right)\left(\log \left(4z_{s}^{2}\right){\rm I}0\left(\left\{z_{s},-z_{s}\right\},p_{2}\right)+ {\rm I}1\left(\left\{z_{s},-z_{s}\right\},p_{2}\right)\right)\right],\] \[I_{\rm HSIII}[p_{1},p_{2}]\equiv\int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2 \pi}e^{ip_{1}z_{1}+ip_{2}z_{2}}\theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s})\theta(z _{s}-|z_{1}-z_{2}|) \tag{144}\] \[\times\left[\frac{7}{8}\ln\left(\left(z_{s}+\left(z_{1}-z_{2} \right)\theta\left(z_{1}-z_{2}\right)\right){}^{2}\right)+\frac{7}{8}\ln\left( \left(z_{s}+\left(z_{2}-z_{1}\right)\theta\left(z_{2}-z_{1}\right)\right){}^{2 }\right)+\frac{3}{4}\ln\left(\left(z_{1}-z_{2}\right){}^{2}\right)\right]\] \[=\frac{1}{8}{\rm I}0\left(\left\{\infty,-\infty\right\},p_{1}+p_ {2}\right)\left[7\log\left(z_{s}^{2}\right){\rm I}0\left(\left\{z_{s},-z_{s} \right\},\frac{1}{2}\left(p_{1}-p_{2}\right)\right)+6\,{\rm II}\left(\left\{z _{s},-z_{s}\right\},\frac{1}{2}\left(p_{1}-p_{2}\right)\right)\right.\] \[+7e^{\frac{1}{2}i\left(p_{1}-p_{2}\right)z_{s}}{\rm II}\left( \left\{-z_{s},-2z_{s}\right\},\frac{1}{2}\left(p_{1}-p_{2}\right)\right)+7e^{ -\frac{1}{2}i\left(p_{1}-p_{2}\right)z_{s}}{\rm II}\left(\left\{2z_{s},z_{s} \right\},\frac{1}{2}\left(p_{1}-p_{2}\right)\right)\right]\] \[+\frac{ie^{-i\left(p_{1}+p_{2}\right)z_{s}}}{16\pi\left(p_{1}+p_ {2}\right)}\left[-7\log\left(z_{s}^{2}\right){\rm I}0\left(\left\{0,-z_{s} \right\},p_{1}\right)+7e^{2i\left(p_{1}+p_{2}\right)z_{s}}\log\left(z_{s}^{2} \right){\rm I}0\left(\left\{0,-z_{s}\right\},-p_{2}\right)\right.\] \[+7e^{2i\left(p_{1}+p_{2}\right)z_{s}}\log\left(z_{s}^{2}\right){ \rm I}0\left(\left\{z_{s},0\right\},p_{1}\right)-7\log\left(z_{s}^{2}\right) {\rm I}0\left(\left\{z_{s},0\right\},-p_{2}\right)-6{\rm II}\left(\left\{0,-z _{s}\right\},p_{1}\right)\] \[+6e^{2i\left(p_{1}+p_{2}\right)z_{s}}{\rm II}\left(\left\{0,-z_{ s}\right\},-p_{2}\right)-7e^{ip_{1}z_{s}}{\rm II}\left(\left\{-z_{s},-2z_{s} \right\},p_{1}\right)+7e^{i\left(2p_{1}+p_{2}\right)z_{s}}{\rm II}\left(\left\{ -z_{s},-2z_{s}\right\},-p_{2}\right)\] \[+6e^{2i\left(p_{1}+p_{2}\right)z_{s}}{\rm II}\left(\left\{z_{s}, 0\right\},p_{1}\right)-6{\rm II}\left(\left\{z_{s},0\right\},-p_{2}\right)+7e ^{i\left(p_{1}+2p_{2}\right)z_{s}}{\rm II}\left(\left\{2z_{s},z_{s}\right\},p_ {1}\right)-7e^{ip_{2}z_{s}}{\rm II}\left(\left\{2z_{s},z_{s}\right\},-p_{2} \right)\right],\] \[I_{\rm HSIV}[p_{1},p_{2}]\equiv\int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2 \pi}e^{ip_{1}z_{1}+ip_{2}z_{2}}\theta(|z_{1}|-z_{s})\theta(|z_{2}|-z_{s})\theta( z_{s}-|z_{1}+z_{2}|) \tag{145}\] \[\times\left[\frac{7}{8}\ln\left(\left(z_{s}+\left(z_{1}+z_{2} 
\[\begin{split}I_{\rm S}[p_{1},p_{2}]\equiv&\int\frac{dz_{1}}{2\pi}\frac{dz_{2}}{2\pi}e^{ip_{1}z_{1}+ip_{2}z_{2}}\,\theta(|z_{1}|-z_{s})\,\theta(|z_{2}|-z_{s})\,\theta(|z_{1}-z_{2}|-z_{s})\,\theta(|z_{1}+z_{2}|-z_{s})\\ &\times\left[\frac{7}{8}\ln\left(z_{s}^{2}\right)+\frac{7}{8}\ln\left(\left(2z_{s}\right)^{2}\right)+\frac{3}{4}\ln\left(\left(\mathrm{sign}[z_{1}]\,z_{s}-\mathrm{sign}[z_{2}]\,2z_{s}\right)^{2}\right)\right]\\ =&\;\delta\left(p_{1}-p_{2}\right)\delta\left(p_{1}+p_{2}\right)\left(\frac{7}{4}\log\left(4z_{s}^{4}\right)+\frac{3}{4}\log\left(9z_{s}^{4}\right)\right)\\ &-\frac{\delta\left(p_{1}+p_{2}\right)\left(6\log\left(z_{s}^{2}\right)+7\log\left(4z_{s}^{4}\right)\right)\sin\left(\frac{1}{2}\left(p_{1}-p_{2}\right)z_{s}\right)}{4\pi\left(p_{1}-p_{2}\right)}\\ &-\frac{\delta\left(p_{1}-p_{2}\right)\left(6\log\left(9z_{s}^{2}\right)+7\log\left(4z_{s}^{4}\right)\right)\sin\left(\frac{1}{2}\left(p_{1}+p_{2}\right)z_{s}\right)}{4\pi\left(p_{1}+p_{2}\right)}\\ &-\frac{\delta\left(p_{2}\right)\left(20\log\left(z_{s}\right)+\log(3456)\right)\sin\left(p_{1}z_{s}\right)}{4\pi p_{1}}-\frac{\delta\left(p_{1}\right)\left(20\log\left(z_{s}\right)+\log(3456)\right)\sin\left(p_{2}z_{s}\right)}{4\pi p_{2}}\\ &-\frac{\left(20\log\left(z_{s}\right)+\log(128)\right)\left(p_{1}\cos\left(\left(p_{1}+2p_{2}\right)z_{s}\right)+p_{2}\cos\left(\left(2p_{1}+p_{2}\right)z_{s}\right)\right)}{8\pi^{2}p_{1}p_{2}\left(p_{1}+p_{2}\right)}\\ &+\frac{\left(20\log\left(z_{s}\right)+\log(93312)\right)\left(p_{1}\cos\left(\left(p_{1}-2p_{2}\right)z_{s}\right)-p_{2}\cos\left(\left(2p_{1}-p_{2}\right)z_{s}\right)\right)}{8\pi^{2}p_{1}\left(p_{1}-p_{2}\right)p_{2}}.\end{split} \tag{B.9}\]
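The elementary closed forms above are straightforward to check numerically. The following short Python sketch is our addition for illustration only (it is not part of the original derivation, and assumes NumPy and SciPy are available); it compares Eqs. (B.2) and (B.3) against direct numerical quadrature for arbitrary sample values of \(L\) and \(p\):

```python
# Hedged numerical sanity check of the master formulas (B.2) and (B.3);
# L and p are arbitrary sample values chosen for the test.
import numpy as np
from scipy.integrate import quad
from scipy.special import sici

def complex_quad(f, a, b):
    """Integrate a complex-valued function by splitting real and imaginary parts."""
    re, _ = quad(lambda z: f(z).real, a, b, limit=200)
    im, _ = quad(lambda z: f(z).imag, a, b, limit=200)
    return re + 1j * im

L, p = 1.7, 2.3

# (B.2): I0({L, -L}, p) against direct quadrature.
i0_num = complex_quad(lambda z: np.exp(1j * p * z) / (2 * np.pi), -L, L)
i0_form = (-1j * (-1 + np.exp(1j * L * p)) + 1j * (-1 + np.exp(-1j * L * p))) / (2 * np.pi * p)
print(abs(i0_num - i0_form))   # should be at machine precision

# (B.3): It({L, -L}, p) = i [I1({L,0},p) - I1({0,-L},p)].
i1 = lambda a, b: complex_quad(lambda z: np.exp(1j * p * z) * np.log(z ** 2) / (2 * np.pi), a, b)
it_num = 1j * (i1(0.0, L) - i1(-L, 0.0))
Ci = sici(L * p)[1]            # cosine integral Ci(Lp)
it_form = 2 * (-Ci + np.log(L) * np.cos(L * p) + np.log(p) + np.euler_gamma) / (np.pi * p)
print(abs(it_num - it_form))   # small, limited by the quadrature near z = 0
```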
## Appendix C Double plus function

In this appendix, we demonstrate that the matching kernel at the one-loop level can be properly expressed in the form of the double plus function in Eq. (3.23). This is equivalent to proving the following lemma:

**Lemma**.: _Consider a generalized function \(G(x_{1},x_{2},x_{10},x_{20})\) satisfying the following properties_ \[\begin{split}&\lim_{|x_{1}-x_{10}|\rightarrow+\infty}|x_{1}-x_{10}|\,G(x_{1},x_{2},x_{10},x_{20})=_{a.e.}0,\\ &\lim_{|x_{2}-x_{20}|\rightarrow+\infty}|x_{2}-x_{20}|\,G(x_{1},x_{2},x_{10},x_{20})=_{a.e.}0,\\ &\lim_{|x_{3}-x_{30}|\rightarrow+\infty}|x_{3}-x_{30}|\,G(x_{1},x_{2},x_{10},x_{20})=_{a.e.}0,\end{split} \tag{C.1}\] \[\begin{split}&\exists\,\epsilon>0,\;\left|\int_{x_{1}-\epsilon}^{x_{1}+\epsilon}dx_{10}\,|x_{1}-x_{10}|\,G(x_{1},x_{2},x_{10},x_{20})\right|<+\infty,\\ &\exists\,\epsilon>0,\;\left|\int_{x_{2}-\epsilon}^{x_{2}+\epsilon}dx_{20}\,|x_{2}-x_{20}|\,G(x_{1},x_{2},x_{10},x_{20})\right|<+\infty,\\ &\exists\,\epsilon>0,\;\left|\int_{x_{3}-\epsilon}^{x_{3}+\epsilon}dx_{30}\,|x_{3}-x_{30}|\,G(x_{1},x_{2},x_{10},x_{20})\right|<+\infty,\end{split} \tag{C.2}\] _and taking the specific form_ \[\begin{split}G(x_{1},x_{2},x_{10},x_{20})=&\,A(x_{1},x_{2},x_{10},x_{20})+\delta(x_{1}-x_{10})B(x_{2},x_{10},x_{20})+\delta(x_{2}-x_{20})C(x_{1},x_{10},x_{20})\\ &+\delta(x_{3}-x_{30})D(x_{1},x_{2},x_{10},x_{20})+\delta(x_{1}-x_{10})\delta(x_{2}-x_{20})E(x_{10},x_{20}),\end{split} \tag{C.3}\] _where \(x_{3}\) (\(x_{30}\)) is shorthand for \(1-x_{1}-x_{2}\) (\(1-x_{10}-x_{20}\)) and \(A,B,C,D,E\) can be expanded into power series with logarithmic terms, without Dirac delta functions. "\(a.e.\)" is shorthand for "almost everywhere", meaning that the results hold except at \(x_{1}=x_{10}\), \(x_{2}=x_{20}\) or \(x_{3}=x_{30}\). The lemma states that if \(\int dx_{1}dx_{2}\,G(x_{1},x_{2},x_{10},x_{20})=0\), the generalized function \(G(x_{1},x_{2},x_{10},x_{20})\) can be written as the double plus function defined in Eq. (2.17)._

_Proof._ Based on the condition that \(\int dx_{1}dx_{2}\,G(x_{1},x_{2},x_{10},x_{20})=0\), we obtain \[\begin{split}E(x_{10},x_{20})=&-\int A(x_{1},x_{2},x_{10},x_{20})\,dx_{1}dx_{2}-\int B(x_{2},x_{10},x_{20})\,dx_{2}-\int C(x_{1},x_{10},x_{20})\,dx_{1}\\ &-\int\delta(x_{3}-x_{30})D(x_{1},x_{2},x_{10},x_{20})\,dx_{1}dx_{2}.\end{split} \tag{C.4}\] Plugging \(E(x_{10},x_{20})\) into Eq. (C.3) gives: \[\begin{split}G(x_{1},x_{2},x_{10},x_{20})=&\,A(x_{1},x_{2},x_{10},x_{20})-\delta(x_{1}-x_{10})\delta(x_{2}-x_{20})\int A(y_{1},y_{2},x_{10},x_{20})\,dy_{1}dy_{2}\\ &+\delta(x_{1}-x_{10})B(x_{2},x_{10},x_{20})-\delta(x_{1}-x_{10})\delta(x_{2}-x_{20})\int\delta(y_{1}-x_{10})B(y_{2},x_{10},x_{20})\,dy_{1}dy_{2}\\ &+\delta(x_{2}-x_{20})C(x_{1},x_{10},x_{20})-\delta(x_{1}-x_{10})\delta(x_{2}-x_{20})\int\delta(y_{2}-x_{20})C(y_{1},x_{10},x_{20})\,dy_{1}dy_{2}\\ &+\delta(x_{3}-x_{30})D(x_{1},x_{2},x_{10},x_{20})-\delta(x_{1}-x_{10})\delta(x_{2}-x_{20})\int\delta(y_{3}-x_{30})D(y_{1},y_{2},x_{10},x_{20})\,dy_{1}dy_{2}\\ =&\left[A(x_{1},x_{2},x_{10},x_{20})+\delta(x_{1}-x_{10})B(x_{2},x_{10},x_{20})+\delta(x_{2}-x_{20})C(x_{1},x_{10},x_{20})\right.\\ &\left.+\delta(x_{3}-x_{30})D(x_{1},x_{2},x_{10},x_{20})\right]_{\oplus},\end{split} \tag{C.5}\] which is the double plus function defined in Eq. (2.17).
Because of Eqs. (C.1) and (C.2), one obtains a finite result after convoluting the generalized function \(G(x_{1},x_{2},x_{10},x_{20})\) with a Schwartz function \(\Phi(x_{10},x_{20})\), \[\int dx_{10}dx_{20}\,G(x_{1},x_{2},x_{10},x_{20})\,\Phi(x_{10},x_{20})<\infty. \tag{C.6}\]
2309.12804
Scalable Semantic 3D Mapping of Coral Reefs with Deep Learning
Coral reefs are among the most diverse ecosystems on our planet, and are depended on by hundreds of millions of people. Unfortunately, most coral reefs are existentially threatened by global climate change and local anthropogenic pressures. To better understand the dynamics underlying deterioration of reefs, monitoring at high spatial and temporal resolution is key. However, conventional monitoring methods for quantifying coral cover and species abundance are limited in scale due to the extensive manual labor required. Although computer vision tools have been employed to aid in this process, in particular SfM photogrammetry for 3D mapping and deep neural networks for image segmentation, analysis of the data products creates a bottleneck, effectively limiting their scalability. This paper presents a new paradigm for mapping underwater environments from ego-motion video, unifying 3D mapping systems that use machine learning to adapt to challenging conditions under water, combined with a modern approach for semantic segmentation of images. The method is exemplified on coral reefs in the northern Gulf of Aqaba, Red Sea, demonstrating high-precision 3D semantic mapping at unprecedented scale with significantly reduced required labor costs: a 100 m video transect acquired within 5 minutes of diving with a cheap consumer-grade camera can be fully automatically analyzed within 5 minutes. Our approach significantly scales up coral reef monitoring by taking a leap towards fully automatic analysis of video transects. The method democratizes coral reef transects by reducing the labor, equipment, logistics, and computing cost. This can help to inform conservation policies more efficiently. The underlying computational method of learning-based Structure-from-Motion has broad implications for fast low-cost mapping of underwater environments other than coral reefs.
Jonathan Sauder, Guilhem Banc-Prandi, Anders Meibom, Devis Tuia
2023-09-22T11:35:10Z
http://arxiv.org/abs/2309.12804v1
# Scalable Semantic 3D Mapping of Coral Reefs with Deep Learning

###### Abstract

Coral reefs are among the most diverse ecosystems on our planet, and essential to the livelihood of hundreds of millions of people who depend on them for food security, income from tourism and coastal protection. Unfortunately, most coral reefs are existentially threatened by global climate change and local anthropogenic pressures. To better understand the dynamics underlying deterioration of reefs, monitoring at high spatial and temporal resolution is key. However, conventional monitoring methods for quantifying coral cover and species abundance are limited in scale due to the extensive manual labor required. Although computer vision tools have been employed to aid in this process, in particular Structure-from-Motion (SfM) photogrammetry for 3D mapping and deep neural networks for image segmentation, analysis of the data products creates a bottleneck, effectively limiting their scalability. This paper presents a new paradigm for mapping underwater environments from ego-motion video, unifying 3D mapping systems that use machine learning to adapt to challenging conditions under water, combined with a modern approach for semantic segmentation of images. The method is exemplified on coral reefs in the northern Gulf of Aqaba, Red Sea, demonstrating high-precision 3D semantic mapping at unprecedented scale with significantly reduced labor costs: a 100 m video transect acquired within 5 minutes of diving with a cheap consumer-grade camera can be fully automatically transformed into a semantic point cloud within 5 minutes. We demonstrate the spatial accuracy of our method and the semantic segmentation performance, and publish a large dataset of ego-motion videos from the northern Gulf of Aqaba, along with a dataset of video frames annotated for dense semantic segmentation of benthic classes. Our approach significantly scales up coral reef monitoring by taking a leap towards fully automatic analysis of video transects. The method democratizes coral reef transects by reducing the labor, equipment, logistics, and computing cost. This can help to inform conservation policies more efficiently. The underlying computational method of learning-based Structure-from-Motion has broad implications for fast low-cost mapping of underwater environments other than coral reefs.

**Keywords:** Artificial Intelligence, Computer Vision, Coral Ecology, Coral Reefs, Machine Learning, Structure From Motion, Monocular Depth Estimation, Visual Odometry, Semantic Segmentation, 3D Vision

## Introduction

Coral reefs are among the most diverse ecosystems on the planet: despite covering less than 0.1% of the planet's surface area, they host at least 32% of known marine species (Fisher et al., 2015). Up to half a billion people worldwide rely on the services provided by coral reefs, which include food security and tourism (NOAA, 2022). Coral reefs are in decline worldwide (Souter et al., 2021), as they locally suffer from detrimental human activities, and are globally threatened by increasingly warm oceans, which can cause corals to bleach and eventually die (Hughes et al., 2017; Knowlton and Jackson, 2008). Under current greenhouse gas emission trajectories (Masson-Delmotte et al., 2021), almost all warm-water coral reefs are projected to suffer significant losses of area or local extinction, even if global warming is limited to 1.5\({}^{\circ}\)C (Masson-Delmotte et al., 2018).
Therefore, coral reefs are among the ecosystems that are most vulnerable to climate change. The frequency of mass bleaching events, in which vast areas of reefs bleach at once, will increase in the future (Dixon et al., 2022), giving many reefs little hope to recover in between. However, there is an extremely high variability in resilience to stresses between various regions, species, and even genotypes of the same species. Remarkably, regions with reefs that could withstand end-of-century ocean temperatures and acidity have been identified (Beyer et al., 2018). For example, in the Gulf of Aqaba in the northern Red Sea, prominent species of corals exhibit an exceptionally high thermal tolerance, promising to withstand sea temperature increases of more than 5\({}^{\circ}\)C (Fine et al., 2013; Voolstra et al., 2021; Krueger et al., 2017; Osman et al., 2018; Savary et al., 2021). Therefore, for corals in such refugia, the most imminent threats are local stresses, caused by destructive fishing practices, overtourism, urbanization of the coastlines, and associated local pollution. To ensure the survival of coral reefs until the end of the century and beyond, it is imperative to gain a better understanding of the dynamics of how global temperature rise and local anthropogenic pollution damage coral reefs, and to evaluate if and how the reefs recover from them. This necessitates efficient methods for large-scale monitoring at high spatial and temporal resolution.

Conventional methods for monitoring corals are often inadequate in terms of scalability because they are highly labor-intensive. The most universally recognized and applied method consists of line transects & photo quadrats (Obura et al., 2019; Prasil Delaval et al., 2021), in which photos of the seafloor inside a square frame of known reference size are taken along a straight transect line. Experts then analyze the photos, determining the presence of species and their health status, and subsequently extrapolate to larger areas. The results can be heavily biased by the exact locations of the quadrats, details in the photo quadrat protocol, and the human analysts processing the photos. This makes a direct comparison across studies challenging (Souter et al., 2021), with scientists essentially only agreeing on coarse metrics, such as the percentage of live coral cover. Most importantly, the involved manual effort of accurately managing and analyzing the photo quadrats, even for a relatively small transect, is very large. Nonetheless, transect lines can be rapidly deployed by divers and the GPS coordinates of the end-points accurately determined by means of marker buoys. These logistical considerations make data acquisition along transects the de facto standard for coral reef monitoring. To collect data even faster and with less logistical overhead than photo quadrats, video transects (Carleton and Done, 1995) can also be acquired. However, determining benthic cover from video is even more cumbersome for analysts due to the lack of normalized reference objects and possible overlaps between video frames. Coral monitoring tools of the future absolutely must automate the labor-intensive process of analyzing transect data. A vast proportion of coral reefs are found in countries with restricted research resources. To date, coral reef monitoring efforts have heavily focused on accessible reefs in wealthy countries.
One key aspect for scalable monitoring tools is that their costs in terms of required human resources to analyze the data and in terms of logistics of diving operations should be as low as possible. Furthermore, the required equipment should be available worldwide within a reasonable budget. While hyperspectral sensors can facilitate identification of corals (Chennu et al., 2017; Schurholz and Chennu, 2023; Asner et al., 2020), their price can be prohibitive for widespread use. On the other hand, the cost of underwater color cameras has dramatically fallen, which suggests that computer vision tools can be applied to successfully scale automated semantic segmentation of living corals from color camera imagery.

While many computer vision tools have been proposed to aid in coral reef monitoring, the scalability in terms of fully analyzed transects per unit of time and money remains limited. The main lines of computer vision work can be summarized into computer-aided mapping, commonly realized with Structure-from-Motion (SfM) photogrammetry (Burns et al., 2015; Leon et al., 2015; Storlazzi et al., 2016; Alonso et al., 2019; Bongaerts et al., 2021; Raoult et al., 2017; Hopkinson et al., 2020; Urbina-Barreto et al., 2021), and benthic cover analysis systems, which use machine learning to recognize coral types and other benthic classes in images (Beijbom et al., 2012, 2015; Williams et al., 2019; Chen et al., 2021; Schurholz and Chennu, 2023). However, underwater environments pose particular challenges to computer vision methods due to difficult lighting conditions and diffraction effects, caustics, non-linear attenuation and scenes with many dynamic objects. This implies that computer vision algorithms often only work reliably under controlled conditions. SfM mapping techniques, for example, are often brittle. They require carefully curated high-resolution image collections for 3D reconstruction or they will fail to create coherent 3D models, as shown in Figure 1. At the same time, they are limited in scale due to the involved computational cost: at a resolution allowing identification of individual coral colonies, the largest high-resolution 3D reconstructions cover 60 m in length, while taking days of computation time (Bongaerts et al., 2021). Similarly, systems for benthic cover classification are commonly restricted to photo quadrats or orthomosaics as opposed to general images of reef scenes, and are often only trained on datasets with sparse pixel annotations (Beijbom et al., 2012; Chen et al., 2021; Yuval et al., 2021). The state-of-the-art is far from general-purpose semantic segmentation systems for coral reef scenes that work reliably across reef scenarios and conditions.

Figure 1: Existing conventional SfM fails to produce a coherent point cloud from uncurated image collections such as video frames. This example shows the point clouds from a video transect in the King Abdullah Reef in Aqaba, Jordan. Leftmost panel: our proposed method creates a coherent point cloud for the whole transect in 310 seconds. Rightmost panel: Agisoft Metashape (AgiSoft, 2022) fails to capture a globally coherent structure and only produces a point cloud of a small section of the transect (aligning \(69\) out of \(1889\) images) after 2 hours and 32 minutes, albeit at higher resolution than our method for the part successfully reconstructed. The colorful markers in the two zoomed-in versions (center panels) show the same spatial features in the two point clouds.
Furthermore, it can be challenging to transfer results from benthic cover estimation into 3D reconstructions made with photogrammetry (Hopkinson et al., 2020). These limitations, and the manual labor to overcome them, have impeded computer vision tools from being applied under water at the same scale as in terrestrial settings.

In this paper, we present an approach for 3D mapping and simultaneous semantic segmentation of coral reef areas that is significantly more scalable than existing approaches. The underlying paradigm of learning-based SfM photogrammetry uses deep neural networks that learn to adapt to environments that are challenging for computer vision algorithms, such as those found under water. In particular, we show that it is possible to create 3D maps of large areas of reef at the resolution of individual coral colonies using a single affordable consumer-grade camera filming while being moved around the scene ('ego-motion' video). Our method requires no expensive computing infrastructure: on a computer with a single Graphics Processing Unit (GPU), the semantic segmentation and 3D reconstruction can be obtained in real video time. From a single SCUBA dive, it is possible to obtain a 3D point cloud of more than 1 km length at the resolution of individual coral colonies, as shown in Figure 2.

Figure 2: Example excerpts of 3D point clouds of different reef scenarios in their original RGB color, next to the points colorized by their predicted benthic class (top). A 100 m transect (bottom) can be covered by a diver in less than five minutes: the length of the created point clouds is limited only by the diver’s air availability and the camera’s battery capacity.

The proposed 3D mapping approach enjoys a powerful synergy with image-based semantic segmentation systems, which assign each pixel in an image to a specific semantic class. The semantic information can be directly transferred to the 3D models, which in turn enables automated computation of ecological measures of interest, such as the area covered by each benthic class. We exemplify our approach, which we name DeepReefMap, on reef areas in the northern Gulf of Aqaba, publishing a large-scale dataset of ego-motion videos taken by divers in the area. Furthermore, we publish a dataset of video frames of reef scenes annotated for pixel-wise semantic segmentation of benthic classes. We train neural networks to reconstruct the 3D geometry from the videos and for semantic segmentation of 20 benthic classes. We find that the accuracy of the estimated 3D geometry from our method is competitive with a state-of-the-art conventional SfM pipeline, whereas our approach is more robust and two orders of magnitude faster. The semantic segmentation system is evaluated on three scenes that were not seen during training, where 84.1% of pixels are correctly classified. Our implementation is open source, alleviating the reliance on proprietary SfM software.

## Materials and Methods

DeepReefMap creates 3D point clouds from uncurated ego-motion videos by using deep learning-based SfM and leveraging its synergy with semantic segmentation of benthic classes. An overview is shown in Figure 3. The remainder of this section describes in detail the data collection process, the learning-based SfM system and the semantic segmentation system.
### Collection of Ego-Motion Videos

The collection of data from which 3D maps are created is particularly straightforward: a diver swims forward between 1 and 5 meters above a reef area, filming with a consumer-grade action camera as illustrated in Figure 3 (left). From a technical side, our method is agnostic to the video camera used. Consumer-grade action cameras were chosen because they are the cheapest option. The distance covered is only limited by logistical constraints such as the diver's air reserves, the battery and memory capacity of the camera, the length of the reef, or the distance to the diving boat. Within a single SCUBA dive, a video covering more than 1 km of distance can routinely be taken.

For this work, the dataset of ego-motion underwater video was collected by divers at six reef sites in Eilat (Israel) and Aqaba (Jordan) between July 22, 2022 and Aug 25, 2022, on an expedition of the Transnational Red Sea Center 1. The videos cover a diverse set of scenes: from reefs with high structural complexity, over patchy reefs in seagrass meadows, to sites located close to human settlements, which are heavily exposed to human activity and the resulting pollution. The videos were collected on reefs ranging from 3 to 10 m in depth, with no particular constraint on the swim speed. Three 100 m transects were filmed for benchmarking purposes, while the remaining videos were taken without any reference objects placed on the substrate, in a free-roaming fashion.

Footnote 1: [http://trsc.org](http://trsc.org)

The videos were captured with GoPro Hero 10 cameras in the linear field-of-view mode at 1080p resolution and 30 frames per second, with the stabilization setting on 'smooth'. In order to increase the amount of video data captured during each dive, three cameras were attached side-by-side onto a rigid pole, with about 1 m of space between each camera. In total, there are 19 hours and 49 minutes of ego-motion video. The cameras were attached using simple handlebar mounts, and re-attached by hand before every dive without highly precise angle measurement, leading to a slight variation in camera angles between dives. Up to the precision of eyeballing the angles, the roll and yaw angles of the cameras were fixed at \(0^{\circ}\). The pitch angle was deliberately varied between approximately \(-5^{\circ}\) and \(-40^{\circ}\) between dives in order to include diverse settings in the training dataset.

### Deep Learning-Based SfM

The 3D mapping in our approach is realized using deep learning-based SfM (Zhou et al., 2017; Bian et al., 2021), in which neural networks are trained from example videos to specialize SfM photogrammetry to the challenging conditions to which it is exposed in a reef environment. As opposed to conventional SfM, where strong assumptions on the color consistency of image features are made, leading to brittle reconstruction behavior, learning-based SfM can adapt to the challenges of underwater scenes.

Figure 3: An overview of the main method (DeepReefMap): a diver swims over a reef while taking video with a consumer-grade action camera. On the uncurated frames of this video, the learning-based SfM system is used to reconstruct the 3D geometry of the reef in real time. A semantic segmentation system is used to identify benthic classes, which can directly be transferred into the point cloud.
Learning-based SfM leverages the geometric relationship between overlapping images to formulate an unsupervised learning objective, meaning that it suffices to use a large dataset of unannotated video for training such a system, without ground-truth camera position or depth data. The underlying principle of learning-based SfM is to estimate the rigid transformation between pairs of images, formulating a differentiable loss function that provides a learning signal for a neural network. A schematic overview of a learning-based SfM system is shown in Figure 4. Consider two overlapping images \(I_{a}\) and \(I_{b}\) taken with the same camera at different locations and orientations. Let \(\hat{T}_{a,b}\) denote an estimate of the 6D camera pose transform (the 3D translation and 3D rotation) between the camera poses of the two images. Using an estimate of the depth \(\hat{D}_{a}\) of an image, i.e. the distance between the camera and each pixel, and choosing an appropriate camera model, one can take an image and its depth estimate to reproject \(I_{a}\) to the camera position of \(I_{b}\), obtaining a reprojected image \(\hat{I}_{b}\):

\[\hat{I}_{b}=\text{Reproject}(I_{a},\hat{D}_{a},\hat{T}_{a,b}) \tag{1}\]

The reprojected image can then be compared to the original image, using a differentiable photometric loss function between the images in RGB space, denoted \(\mathcal{L}_{P}\). This reprojection loss is calculated in both directions, by using \(\hat{D}_{b}\) and the inverse camera pose transform \(\hat{T}_{b,a}=\hat{T}_{a,b}^{-1}\). In a similar fashion, the estimated depths can be reprojected via the estimated pose transform, and a regularization term \(\mathcal{R}_{G}\) that enforces their geometric consistency is added to the loss function (Bian et al., 2021). A smoothness regularization term \(\mathcal{R}_{S}\) on the estimated depths is added to deal with low-texture regions (Zhou et al., 2017). In state-of-the-art systems, a similar image-pair unsupervised formulation of optical flow is included in the training procedure, which we omit here for the sake of simplicity. The estimates of the depth and camera pose transform are predicted from the pair of images by a neural network \(f\), which has a set of learnable parameters \(\theta\):

\[\hat{D}_{a},\hat{D}_{b},\hat{T}_{a,b}=f_{\theta}(I_{a},I_{b}) \tag{2}\]

Using the photometric loss and the regularization terms, the parameters of the neural network are then trained by gradient-based optimization to minimize the total expected loss over the data, which is empirically realized by training on the dataset of available pairs of overlapping images:

\[\min_{\theta}\mathbb{E}_{I_{a},I_{b}}\Big{[}\mathcal{L}_{P}(I_{a},\hat{I}_{a})+\mathcal{R}_{S}(I_{a},\hat{D}_{a})+\mathcal{L}_{P}(I_{b},\hat{I}_{b})+\mathcal{R}_{S}(I_{b},\hat{D}_{b})+\mathcal{R}_{G}(\hat{D}_{a},\hat{D}_{b},\hat{T}_{a,b})\Big{]} \tag{3}\]

Training a neural network on a sufficiently large and diverse dataset of videos enables accurate depth and pose estimation for the scenarios depicted in the training videos: the SfM system learns from examples to be robust against the challenging conditions posed by coral reef settings. For training our learning-based SfM system, video frames were extracted from the dataset of ego-motion videos at a resolution of \(608\times 352\) px at 20 frames per second, leading to a total of 1.4 million video frames for training the system.
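To make the reprojection in Eqs. (1)-(3) concrete, the following is a minimal PyTorch-style sketch of the warping step and the photometric loss. It is an illustration rather than the authors' implementation: a simple pinhole camera with intrinsics \(K\) and a \(4\times 4\) homogeneous pose matrix are assumed, and only the plain L1 photometric term is shown.

```python
# Illustrative sketch of Eq. (1): warp image a into view b using b's depth
# and the relative pose, then compare photometrically (L1 term of Eq. (3)).
# Assumes a pinhole camera model; this is not the authors' implementation.
import torch
import torch.nn.functional as F

def reproject(img_a, depth_b, T_b_to_a, K):
    """img_a: (B,3,H,W); depth_b: (B,1,H,W); T_b_to_a: (B,4,4); K: (3,3)."""
    B, _, H, W = img_a.shape
    v, u = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)], dim=0).float().reshape(3, -1)
    # Back-project the pixels of view b into 3D using its estimated depth.
    cam_b = (K.inverse() @ pix).unsqueeze(0) * depth_b.reshape(B, 1, -1)
    cam_b = torch.cat([cam_b, torch.ones(B, 1, H * W)], dim=1)  # homogeneous
    # Transform the points into camera a's frame and project them with K.
    cam_a = (T_b_to_a @ cam_b)[:, :3]
    proj = K @ cam_a
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)
    # Normalize the coordinates to [-1, 1] and bilinearly sample img_a.
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).reshape(B, H, W, 2)
    return F.grid_sample(img_a, grid, align_corners=True)

def photometric_loss(img_b, img_b_hat):
    # Plain L1 distance in RGB space; practical systems add SSIM and masking.
    return (img_b - img_b_hat).abs().mean()
```

In training, this loss would be evaluated in both directions and combined with the smoothness and geometric-consistency regularizers of Eq. (3).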
Our implementation is based on SC-SfMLearner (Bian et al., 2021), with the neural network architecture chosen to be a U-Net (Ronneberger et al., 2015) with a ResNet34 (He et al., 2016) backbone, which is selected due to its large receptive field size but fast inference speed. This neural network is trained for 1 million steps with batch size 5 using the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.0001. More implementation & training details can be found in the appendix.

Using a trained learning-based SfM system to create a 3D point cloud can be realized by iterating through all the frames of a video, selecting some or all pixels of each frame, projecting them out into 3D space using the estimated depth and the camera intrinsics, and finally updating the camera position using the estimated camera pose transform. This can be done essentially at the speed of the forward pass of the trained neural networks: a U-Net with a ResNet-34 backbone can create a 3D point cloud from a video at 18 frames per second - essentially real video time. For better visual quality, the frames are added to the point cloud via integration through a truncated signed distance field (Curless and Levoy, 1996), a common technique in dense monocular 3D mapping. Naively integrating frames of an underwater video with this iterative technique will, however, lead to strong undesirable artifacts from the water surface, the background water column, and dynamic objects, such as fish and divers, as displayed in Figure 5. As a remedy, semantic segmentation can be employed.

### Semantic Segmentation

There is a strong synergy between learning-based SfM systems and image-based semantic segmentation: information about the benthic composition can be transferred to the point cloud, and unwanted classes that lead to artifacts in the point clouds can be removed. This Section describes our semantic segmentation system and dataset, as well as its interaction with the learning-based SfM system.

Figure 4: Schematic overview of a deep learning-based SfM setup. A neural network takes two overlapping images \(I_{a}\) and \(I_{b}\) and estimates their respective depths as well as the camera pose transformation between them. These estimates are used to project one image onto the other and compute a regularized photometric reprojection error, which acts as an unsupervised learning signal to train the involved neural networks.

We created a dataset of video frames annotated with 20 benthic classes of interest. In total, 1997 patches of size \(800\times 500\) px were annotated for semantic segmentation with over 10000 polygons using the AIDE annotation software 2.0 (Kellenberger et al., 2020), which extracts patches from the full \(1920\times 1080\) px video frames. Example patches and their labels are shown in Figure 6, showing how polygons are drawn around objects while some areas remain unlabeled. Details on the label distribution of the 20 classes are provided in the appendix. Ours is the first dataset containing polygon-level benthic segmentation labels of 'general-purpose' reef scenes: existing datasets in the coral reef domain have either focused on specific subdomains, such as photo quadrats (Beijbom et al., 2012) or orthomosaics (Yuval et al., 2021), or only provided sparse point labels instead of segmentation labels (Beijbom et al., 2012).
The dataset of video frame patches contains patches from the three evaluation transects, where the patches were chosen to maximize spatial coverage and minimize spatial overlap between frames for each transect. The dataset also includes patches chosen from various scenes, which were selected manually to get a good coverage of the label classes and of the diversity of the scenes from which video was captured, but which have no spatial overlap with the evaluation transects.

Figure 5: Using a trained semantic segmentation system, unwanted classes such as the background (dark blue) and human (in yellow) can be masked out in the point cloud creation process (top). This can alleviate artifacts from these classes, as illustrated by this example from the Japanese Garden site in Eilat, showing the point cloud made with all classes (bottom left), the background removed (bottom center), and both unwanted classes removed (bottom right).

#### Inherent Ambiguity

While some classes, such as colorful live corals or fish, have distinct features that can be labeled consistently, other classes have a strong inherent ambiguity. In particular, it can be extremely hard even for expert human annotators to draw the line between rock, macroalgae-covered substrate, dead corals, and rubble. Furthermore, while some live coral is obviously alive and some dead coral is obviously dead, there are cases in which it is difficult to assign a health label to damaged corals. While maximal accuracy would necessitate an underwater side-by-side comparison of the pigmentation of the coral with a printed color chart and a lookup in benchmark pigmentation charts of the exact species at hand, the annotation was instead performed with the best judgment of the annotators in such ambiguous cases. Lastly, many animals, such as fish and sea urchins, take shelter in holes. Due to the limited lighting, often only parts of the animal are discernible, and it is up to the annotator's judgment where the animal ends and the class 'dark' begins.

Figure 6: Example video frame patches of size \(800\times 500\) px and their annotations. In many video frames with annotations, portions of the image remain unlabeled.

#### Training the Neural Network

On the dataset of annotated video frames, we train a semantic segmentation neural network. In particular, we use the U-Net (Ronneberger et al., 2015) architecture with a ResNeXt50-32x4d (Xie et al., 2017) backbone. The input and output resolutions are chosen to be \(416\times 416\) px. The backbone is initialized with weights pre-trained on the large-scale dataset ImageNet (Deng et al., 2009). The U-Net is trained to minimize the pixel-wise cross-entropy between the classes using the Adam (Kingma and Ba, 2015) optimizer with a learning rate of 0.0001 and a batch size of 32. Unlabeled pixels are omitted from the learning objective. Data augmentations are used to artificially increase the size of the dataset: images are randomly flipped horizontally, and randomly resized crops with scales between 0.5 and 1.4 and aspect ratio modifications between 0.7 and 1.4 are chosen. When using the trained semantic segmentation network to calculate the benthic classes of images, the full \(1920\times 1080\) px image is divided into overlapping patches of size \(800\times 500\) px, which are resized to \(416\times 416\) px before being fed into the neural network, obtaining segmentations of the same size. The segmentation predictions are resized back to \(800\times 500\) px and re-stitched together (in image regions covered by overlapping predicted patches, the softmax class probabilities are averaged) to yield a semantic segmentation at the original video resolution.
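As a concrete illustration of this tiled inference, the following minimal Python sketch divides a full frame into overlapping patches, runs the network at \(416\times 416\) px, and averages the softmax probabilities where patches overlap. The stride values and the `model` callable are illustrative assumptions, not the released implementation.

```python
# Sketch of overlapping-patch inference with softmax averaging. The model,
# patch grid and strides are illustrative assumptions, not the released code.
import torch
import torch.nn.functional as F

@torch.no_grad()
def segment_full_frame(model, frame, n_classes, patch=(500, 800), stride=(290, 560)):
    """frame: (3, 1080, 1920) tensor; returns an (H, W) map of class ids."""
    _, H, W = frame.shape
    ph, pw = patch
    probs = torch.zeros(n_classes, H, W)
    counts = torch.zeros(1, H, W)
    ys = list(range(0, H - ph + 1, stride[0]))
    xs = list(range(0, W - pw + 1, stride[1]))
    if ys[-1] != H - ph: ys.append(H - ph)   # cover the bottom border
    if xs[-1] != W - pw: xs.append(W - pw)   # cover the right border
    for y in ys:
        for x in xs:
            tile = frame[None, :, y:y + ph, x:x + pw]
            tile = F.interpolate(tile, size=(416, 416), mode="bilinear", align_corners=False)
            p = model(tile).softmax(dim=1)                       # (1, C, 416, 416)
            p = F.interpolate(p, size=(ph, pw), mode="bilinear", align_corners=False)[0]
            probs[:, y:y + ph, x:x + pw] += p                    # accumulate softmax
            counts[:, y:y + ph, x:x + pw] += 1
    return (probs / counts).argmax(dim=0)                        # averaged, then argmax
```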
#### Use within learning-based SfM

The creation of 3D point clouds from images using a learning-based SfM system iterates through the video frames, projecting image pixels with their depth into 3D space. With a trained semantic segmentation system, unwanted classes such as divers, fish, and background can simply be excluded during this procedure by masking them out at the image level, as shown in Figure 5. For the remaining benthic classes of interest, the benthic class of the pixel is attached to the respective point in the 3D point cloud, facilitating automation of downstream ecological analysis.

### Evaluation

#### Spatial Accuracy

To evaluate the spatial accuracy of the 3D point clouds produced by DeepReefMap, another \(38\) m long video transect was collected on Aug 4, 2023 in the Japanese Garden in Eilat with placed ground markers. Divers measured the ground-truth distances between targets. We evaluate the accuracy of our method against the ground truth and compare against the research standard COLMAP (Schonberger and Frahm, 2016) and the industry software Agisoft Metashape (AgiSoft, 2022). In particular, ten markers were placed and twelve distances between 70 cm and 370 cm were measured by hand. Following this, an ego-motion video of this transect was taken with the default setup of our method, leading to a video of 2 minutes and 24 seconds. Then, top-down pictures of the same transect were taken with the GoPro camera in linear photo mode (at \(5568\times 4176\) px resolution), following the protocol from Raoult et al. (2016). The top-down image acquisition took 6 minutes. These photos are taken in a way that excludes background haze, other divers, or large fish. DeepReefMap takes images of size \(608\times 352\) px as input, so the top-down images are scaled and cropped accordingly (maintaining the aspect ratio) to be used as input. We provide quantitative results as the mean absolute relative error from the ground truth distances:

\[\text{Mean Absolute Relative Error}=\frac{1}{|D|}\sum_{i,j\in D}\left|d_{ij}-\hat{d}_{ij}\right|\big{/}d_{ij}.\]

Here \(D\) is the set of pairwise ground truth distances, \(d_{ij}\) is the ground truth distance between target \(i\) and target \(j\), and \(\hat{d}_{ij}\) is the measured distance using the ruler tool in Metashape. Before this, the coordinates of the point cloud from DeepReefMap are multiplied by a scalar factor of \(\text{mean}(\hat{d})/\text{mean}(d)\) in order to scale the point clouds to a comparable size.
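The metric above can be written compactly; the following NumPy sketch is an illustrative version (the function name and the choice to rescale the measured distances toward the ground-truth scale are ours for the example):

```python
# Minimal sketch of the evaluation metric defined above. d holds the
# ground-truth distances and d_hat the distances measured in the
# reconstruction; both are assumed to be matched 1D arrays.
import numpy as np

def mean_absolute_relative_error(d, d_hat):
    d, d_hat = np.asarray(d, float), np.asarray(d_hat, float)
    d_hat = d_hat * d.mean() / d_hat.mean()   # align the overall scale first
    return np.mean(np.abs(d - d_hat) / d)
```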
#### Semantic Segmentation

To evaluate the semantic segmentation system of our method, we evaluate on three 100 m line transects: two transects from different sites in the King Abdullah Reef in Jordan, and one from the Japanese Garden reef in Israel. We start by evaluating the semantic segmentation system both qualitatively and quantitatively, followed by a qualitative analysis of the 3D reconstructions and the respective automatic downstream ecological analysis. We separate the annotated video frames into a train and test dataset for each of the three evaluation transects, with the test set being formed by all annotated frames of the transect. This way, the transect on which we evaluate is unseen during training and there is no overlap between the train and test images, thus giving an estimate of the system's generalization performance on new data.

#### Ortho-Projection

Ortho-projected 2D maps from the 3D point clouds are created as follows: first, a 2D occupancy grid for which the gravity vector (obtained from the camera's inertial measurement unit) is the normal vector is chosen with a suitable grid cell size. In each 2D cell, the 30% of 3D points in the cell with the highest z-value (the lowest depth) are selected. To increase robustness against noisy points, the benthic class for the grid cell is chosen by (hard) majority voting of the points present in the grid cell, whereas the RGB value and z-value for the grid cell are obtained by averaging the points. The true scale of the objects in the point cloud and 2D map can be obtained by scaling with objects of known reference size, for example a transect line.

## Results

**Spatial accuracy.** We present large-scale semantic 3D maps of coral reefs from six sites in the northern Gulf of Aqaba, created from video taken with one consumer-grade action camera. The output 3D point clouds have the semantic segmentation transferred directly from the video frame pixels to the 3D points, which allows automatic downstream benthic cover estimation. Each 3D map is produced directly from a single video in real time, speeding up the analysis of video transects by orders of magnitude compared to previous methods.

A quantitative evaluation of the spatial accuracy is shown in Table 1. DeepReefMap is the only method that produces a point cloud encompassing all frames in the low-resolution setting. The error of DeepReefMap in the ego-motion video setting is lower than in the top-down setting and is comparable to that of COLMAP in the high-resolution top-down setting, which, however, took more than \(100\) times longer to compute. Note that both Metashape and COLMAP fail to reconstruct the point cloud in the low-resolution and ego-motion settings, despite the high overlap between images and the relatively small area covered (\(38\) m). This is further highlighted in Figure 7, where the resulting point clouds are visualized for all methods. DeepReefMap and Metashape in the high-resolution settings are the only ones providing both locally and globally coherent reconstructions. Metashape with the high-resolution images produces the best final spatial accuracy (see Table 1), at the cost of needing high-resolution images and very long computation times, while DeepReefMap has much faster data acquisition and reconstruction time, and allows directly transferring semantic segmentation from the video frames to the 3D point cloud. COLMAP in the high-resolution setting and DeepReefMap in the top-down low-resolution setting produce locally coherent point clouds that are not globally coherent, and are twisting inside themselves. The top-down images have an overlap of around 80-90%, making the pose between frames much larger than for subsequent video frames. When the camera intrinsics do not account for the diffraction caused under water even by linear cameras (if no dome port is used), the re-projection error is minimized by overestimating the rotation, leading to the characteristic rounded shape. For COLMAP, this happens despite the camera intrinsics being set to a radial model. DeepReefMap for now assumes a linear camera model, and the gravity vector, which could be used to alleviate the twisting, is not saved in GoPro images, unlike in videos.
For a qualitative assessment, close-up screenshots from point clouds reconstructed by DeepReefMap are shown in Figure 8, demonstrating the level of detail that can be captured and showing the diversity of scenes in which reconstruction works reliably.

| Method | MARE, Ego-Motion | MARE, Top-Down (Low-Res) | MARE, Top-Down (High-Res) | Time (s), Ego-Motion | Time (s), Top-Down (Low-Res) | Time (s), Top-Down (High-Res) |
| --- | --- | --- | --- | --- | --- | --- |
| DeepReefMap (ours) | 7.84% | 9.84%\({}^{*}\) | - | **320** | **100** | - |
| Agisoft Metashape | Fail | Fail | **1.85%** | 9 870 | 1 800 | 21 850 |
| COLMAP | Fail | Fail | 6.58%\({}^{*}\) | 75 660 | 1 550 | 47 520 |

Table 1: Spatial accuracy (MARE: mean absolute relative error) and processing time with regard to the ground-truth markers placed in the Japanese Garden video of Aug. 4, 2023. The cells marked with \({}^{*}\) denote point clouds which are locally coherent and complete, but not globally coherent.

**Semantic segmentation.** To evaluate the semantic segmentation, we start with a qualitative assessment: in Figure 9, we show video frames with their respective predicted benthic classes, their ground-truth annotations, and the correctly and falsely predicted pixels. Most polygons are entirely classified correctly. The largest visible misclassifications come from assigning the wrong class to an entire large area, such as misclassifying a patch of rubble as sand, or classifying a sea urchin tucked into a hole into the 'dark' class.

Figure 8: Screenshots of various 3D reconstructions of reef scenes from DeepReefMap. The top left screenshot is a close-up of the low-res top-down point cloud from Figure 7, demonstrating local coherence even when global coherence is not recovered due to the absence of gravity vectors from the camera’s inertial measurement unit. The remaining images are from various ego-motion video scenes from the dataset.

Figure 7: Comparison of 3D reconstructions using different input imagery. The top-down images are either of high resolution (last column, \(5568\times 4176\) px), or are downscaled to the lower input resolution of DeepReefMap (center column, \(608\times 352\) px). In general, only DeepReefMap and Metashape produced globally coherent point clouds. When comparing low-resolution imagery, only DeepReefMap on ego-motion video yields satisfactory results.

To quantify the accuracy of the semantic segmentation system, we report as a metric the percentage of annotated pixels for which the neural network predicts the correct class. Computed accuracies for the individual label classes, as well as averaged over all classes and over all pixels, are displayed in Table 2. All in all, the results are largely consistent for the three transects: the mean class accuracy is comparable between the three transects.

Figure 9: Example video frames from the three evaluation transects (top row), their respective benthic class predictions by the neural network (second row), the ground truth annotations (third row), and the mask of pixels which were labeled correctly and incorrectly (bottom row).
Similarly, there is a large consistency between the classes that are accurately classified: live massive coral, live branching coral, sand, and transect line are all classified with at least 80% accuracy on all three evaluation transects. On the other hand, the classes with the lowest accuracy are consistently trash and dead coral. The total accuracy is higher than the mean class accuracy because classes that are easily segmented, such as background or sand, have more and larger annotated polygons. The main result is that, even with only a few hundred annotated image patches, the main benthic classes of interest can be detected with satisfactory accuracy using frame-wise dense segmentation neural networks.

Table 2: Total accuracy, mean class accuracy, and per-class accuracies of the semantic segmentation on the three evaluation transects.

To better understand the misclassification errors at the image level, a confusion matrix of the union of the three evaluation transects is shown in Figure 10a. Over the three transects in total, a mean class accuracy of 68.6% is reached, with 84.1% of pixels being correctly classified. For the 'trash' class, which has the second lowest accuracy, many pixels are misclassified as fish, sand or rock. Marine litter is a strongly heterogeneous class: from aluminum cans that share bright colors with some fish, to pale-colored bits of cloth or plastic lying partially covered by sandy or rocky substrate. Most errors in the sea urchin and fish classes are misclassifications to 'dark'. The majority of the remaining errors stem from the dead coral classes being confused with rock and rubble. To summarize, we find that a large portion of misclassifications are consistent with the challenges of inherent ambiguity we faced while annotating the frames, as described above.

The ability to distinguish the main classes of live coral from dead substrates in order to calculate the live coral cover is particularly important from an ecological perspective. This should be evaluated at the point cloud level, where unwanted classes and noisy or uncertain points have been removed, rather than at the image level. When rock, sand, rubble, macroalgae, and the dead coral classes are summarized into a non-live-coral substrate class, the mean class accuracy at the point cloud level is 80.2%, with 92.6% of points correctly classified. The remaining misclassifications are mostly from sea urchins being classified as dark, and trash being misclassified, as highlighted in Figure 10b.

Figure 11: Birds-eye view of the three evaluation transects of 100 m length with their RGB color, their semantic class, and their z-value (left), as well as a pie chart showing the percentage cover of the benthic substrate classes (right).

We show ortho-projections of the 3D models of the three evaluation transects in their RGB color, colorized by their benthic class and their z-value, along with the area covered by each static benthic class, in Figure 11, with a chosen grid cell size of 5 cm. The benthic cover percentages are computed directly from the ortho-projection by normalizing by the number of occupied 2D grid cells. In photogrammetry, occlusion within the image frames creates 'shadows' in the point cloud, which can form holes in the 2D ortho-projection. For all three evaluated point clouds, holes that are completely enclosed in the 2D map account for less than 0.2% of the area. This means that they have almost no impact on the benthic cover analysis. The width of the point clouds depends on the height of the diver above the seafloor and the angle of the camera for the individual videos.
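The ortho-projection and benthic cover computation described above can be sketched in a few lines. The following illustrative NumPy version (array layouts, function names, and the dictionary-based grid are our assumptions for the example) grids gravity-aligned points, keeps the top 30% of points per cell, and derives per-cell class, color, and height:

```python
# Illustrative sketch of the ortho-projection: 2D gridding of gravity-aligned
# points, top-30% selection per cell, majority-vote class, averaged RGB and z.
import numpy as np

def ortho_project(xyz, rgb, cls, cell=0.05):
    """xyz: (N,3) gravity-aligned points; rgb: (N,3); cls: (N,) integer labels."""
    ij = np.floor(xyz[:, :2] / cell).astype(np.int64)
    ij -= ij.min(axis=0)                         # shift grid indices to >= 0
    keys = ij[:, 0] * (ij[:, 1].max() + 1) + ij[:, 1]
    grid = {}
    for key in np.unique(keys):
        pts = np.where(keys == key)[0]
        k = max(1, int(0.3 * len(pts)))          # top 30% of points by height
        top = pts[np.argsort(xyz[pts, 2])[-k:]]
        votes = np.bincount(cls[top])            # hard majority vote per cell
        grid[key] = (votes.argmax(), rgb[top].mean(axis=0), xyz[top, 2].mean())
    return grid                                  # cell -> (class, mean RGB, mean z)

def benthic_cover(grid):
    """Percentage cover per class, normalized by the occupied cells."""
    labels = np.array([v[0] for v in grid.values()])
    classes, counts = np.unique(labels, return_counts=True)
    return dict(zip(classes.tolist(), (100.0 * counts / len(labels)).tolist()))
```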
## Discussion

This paper presents a novel approach for rapid 3D semantic mapping of coral reefs from only ego-motion videos taken with a single action camera. This approach significantly scales up the analysis of video transects, has broad implications for coral reef monitoring, and may inspire new research directions for mapping underwater environments in general. This Section discusses limitations as well as the main future developments that will ultimately determine the impact of the method.

For coral reef ecology purposes, any analysis largely depends on the quality of the segmentation system. While our system can demonstrably separate the main classes of interest with a high accuracy, there is large potential to improve the semantic segmentation to unlock a more fine-grained and accurate ecological analysis. In particular, for a more precise ecological analysis that is applicable to reef areas beyond the northern Red Sea, a much larger and more comprehensive dataset of semantic segmentation annotations must be created. This dataset should include scenes from more diverse reef environments, and annotations at the most precise taxonomic rank that is determinable from the images. With annotations down to the coral genus level and a sufficiently large dataset size, a higher level of sophistication in the deep learning process will become possible, with more involved neural network architectures, training procedures, and the integration of information from multiple subsequent frames, which has the potential to lead to more precise semantic segmentation.

The main methodological contribution of this work introduces the paradigm of learning-based SfM to underwater ecosystem mapping, which promises to tackle the challenging conditions of underwater environments by learning from large video collections. From a computer vision perspective, the underlying methods are rapidly evolving: learning-based SfM methods are improving dramatically, with novel neural network architectures and better loss formulations (Martin et al., 2022) leading to better depth and pose estimates. Currently, while being significantly faster and more robust, learning-based SfM systems commonly do not reach the same accuracy as conventional SfM systems: global pose graph optimization is generally omitted in learning-based SfM, and images fed into deep learning systems have to be significantly reduced in resolution due to limited GPU memory. Commonly, to match the final accuracy of conventional SfM, the estimated depths are scaled up using super-resolution procedures (Teed and Deng, 2020, 2021) and subsequently fed into conventional SfM systems along with the pose transformation estimates to be fine-tuned, at a substantial computational cost (Bian et al., 2021; Zhou et al., 2017).
Nonetheless, while conventional SfM photogrammetry is an established research field dating back decades (Schonberger and Frahm, 2016), the advances in learning-based SfM have been staggering, promising to tackle current limitations in the near future. To the best of our knowledge, no previously existing conventional SfM system can reliably deliver 3D reconstructions from ego-motion videos of reef scenes, especially not at a comparable computational cost.

Furthermore, like all SfM systems, learning-based SfM suffers from the fact that tiny errors in the camera pose transformations compound multiplicatively: while the 3D reconstructions look coherent on a local scale, the global trajectory of the point cloud commonly becomes increasingly erroneous as the covered distance increases. For terrestrial systems, the remedy is given by precise GPS devices. Even so, detecting when the camera revisits a place in 3D space, i.e. detecting so-called _loop closures_, remains extremely challenging (Schonberger and Frahm, 2016; Chen et al., 2021). Without an accurate positioning system under water, the described method is limited to 3D reconstructions without loop closures. Such positioning systems exist, but are orders of magnitude more expensive than the cameras we used, defeating the purpose of democratizing coral reef monitoring. In our proposed low-cost approach, just as in conventional coral reef monitoring methods, accurate geolocalization is thus only possible when markers with known GPS coordinates, such as transect lines, are visible. Nonetheless, as learning-based SfM methods evolve and improve, the effect of tiny inaccuracies in the pose transformations will become less detrimental.

Another line of future work is the fusion of point clouds from multiple video mappings of the same areas, making it possible to fill the 'shadows' in occluded views and laying the foundation for more precise ecological analysis. This involves either merging point clouds taken from cameras with different viewing angles, for example one front-facing and one rear-facing camera, or from multiple passes back and forth with one camera. Point clouds without occlusion holes could, in principle, enable volumetric ecological analysis: beyond the areas covered by benthic classes, estimates for the biomass or the volume distribution of corals could be calculated. Without changing the underlying computational methods, our approach could be extended to other underwater environments, for example the submerged ecosystems of mangrove forests (Giardino et al., 2015), or to guide deep sea exploration & salvage operations (Jakobsson et al., 2021). With the availability of sizeable ego-motion video data from such scenarios, including a reasonably sized dataset of annotated video frames with the classes of interest, our approach should transfer seamlessly to such domains.

## Acknowledgments

We thank Dr. Ali Al-Sawalmih and Tariq Al-Salman (Marine Science Station, Aqaba), Prof. Maoz Fine, Nahum Sela (InterUniversity Institute of Marine Science, Eilat), Dr. Assaf Zvuloni (Nature Reserve Authority Israel), and the Aqaba Special Economic Zone Authority for their support in enabling us to collect the videos. Freya Behrens is thanked for her help on computational 3D geometry. The data of this study were collected in the framework of the Transnational Red Sea Center hosted by the Laboratory for Biological Geochemistry at EPFL. This work was funded in part by FNS grant 205321_212614, as well as EPFL and the Transnational Red Sea Center.
## Author Contributions Jonathan Sauder and Devis Tuia conceived the ideas and designed methodology; Jonathan Sauder, Guilhem Banc-Prandi and Anders Meibom collected the data; Jonathan Sauder wrote the source code and analyzed the data, and led the writing of the manuscript; all authors contributed substantially to writing and revising the manuscript and gave final approval for publication. ## Data Availability Data is made available upon request. Upon final peer-reviewed publication, the data and source code will be made fully available.
2309.12810
StyloMetrix: An Open-Source Multilingual Tool for Representing Stylometric Vectors
This work aims to provide an overview of the open-source multilanguage tool called StyloMetrix. It offers stylometric text representations that cover various aspects of grammar, syntax and lexicon. StyloMetrix covers four languages: Polish as the primary language, English, Ukrainian and Russian. The normalized output of each feature can become a fruitful source for machine learning models and a valuable addition to the embeddings layer of any deep learning algorithm. We strive to provide a concise, but exhaustive overview of the application of the StyloMetrix vectors as well as explain the sets of the developed linguistic features. The experiments have shown promising results in supervised content classification with simple algorithms such as Random Forest Classifier, Voting Classifier, Logistic Regression and others. The deep learning assessments have unveiled the usefulness of the StyloMetrix vectors at enhancing an embedding layer extracted from Transformer architectures. StyloMetrix has proven itself to be a formidable source for machine learning and deep learning algorithms to execute different classification tasks.
Inez Okulska, Daria Stetsenko, Anna Kołos, Agnieszka Karlińska, Kinga Głąbińska, Adam Nowakowski
2023-09-22T11:53:47Z
http://arxiv.org/abs/2309.12810v1
# StyloMetrix: An Open-Source Multilingual Tool for Representing Stylometric Vectors ###### Abstract This work aims to provide an overview of the open-source multilanguage tool called StyloMetrix. It offers stylometric text representations that cover various aspects of grammar, syntax and lexicon. StyloMetrix covers four languages: Polish as the primary language, English, Ukrainian and Russian. The normalized output of each feature can become a fruitful source for machine learning models and a valuable addition to the embeddings layer of any deep learning algorithm. We strive to provide a concise, but exhaustive overview of the application of the StyloMetrix vectors as well as explain the sets of the developed linguistic features. The experiments have shown promising results in supervised content classification with simple algorithms such as Random Forest Classifier, Voting Classifier, Logistic Regression and others. The deep learning assessments have unveiled the usefulness of the StyloMetrix vectors at enhancing an embedding layer extracted from Transformer architectures. StyloMetrix has proven itself to be a formidable source for machine learning and deep learning algorithms to execute different classification tasks. ## 1 Introduction Parallel to the rise of large language models and complex transformer architectures, there is a growing interest in widely explainable solutions, aligning with the trend towards responsible and transparent AI. Soon, model explainability will not only reflect the curiosity of researchers or an engineer's goodwill towards users, but will also become a top-down requirement, for instance, within the EU due to the forthcoming AI Act. Domain knowledge is simultaneously gaining prominence, as it is the key to a profound understanding of data and emerging patterns. Moreover, it thereby enables an informed choice of data representations that support the desired transparency and facilitate explainability. StyloMetrix, the open-source multilanguage stylometric tool presented in this paper, combines both approaches - the potential for model explainability (the interpretability of the features), and the utilization of domain expertise. It allows for highly effective feature engineering and expert analysis of the results. **Interpretable StyloMetrix vectors serve two functions - 1) they can be used as input to explainable classification models and 2) they enable new knowledge discovery through stylometric analysis of the corpora included in the model's classes**. Previous experiments have demonstrated the effectiveness of these vectors in content (including malicious content) classification, genre identification, style analysis, and authorship attribution. Combined with neural embeddings like BERT-based models, stylometric vectors enhance classification accuracy. **Currently, StyloMetrix is available in four languages: English, Polish, Ukrainian, and Russian**, but its design allows for the rapid and convenient expansion of this set to include additional languages. ## 2 Related Work Stylometry (or computational stylistics) is a broad field with a number of studies that involve the analysis of linguistic features extracted from a collection of texts in order to characterize the style of an author, document, or group of documents (Eder, 2014).
The use of statistical techniques makes it possible to draw out the often subtle differences and similarities between texts that are invisible to the naked eye, and thus to delineate groups of texts based on their degree of linguistic affinity (Piasecki et al., 2018). Stylometric methods are successfully used in author authentication (e.g. Jankowska et al. (2014), Shrestha et al.), text classification (e.g. Brocardo et al., Jockers et al., Koppel et al.), active author verification (e.g. Fridman et al., Gray and Juola), genre analysis (e.g. Sarawgi et al., Stamatatos et al.), and other tasks. The development of methods and approaches in these fields has translated only to a limited extent into the development of systems supporting stylometric analysis. Worth mentioning here are 'Stylometry with R' (stylo) (Eder et al., 2016) and WebSty (Piasecki et al., 2018; Eder et al., 2017; Walkowiak, 2018), open tools that enable quantitative analysis of texts in various languages, including Polish. Stylo is a user-friendly and straightforward tool for unsupervised multivariate analysis; its classification function is developed using supervised learning. WebSty is an easily accessible open-source tool for stylometric analysis and a part of the CLARIN-PL research infrastructure. WebSty covers a wide range of languages, provides grammatical, lexical, and thematic parameters to analyze the text, and enables the user to select relevant features manually. However, unlike the StyloMetrix vectors, WebSty lacks metrics for syntactic structures and falls short in usability: the tool is only available online through a website with a limited range of pre-set options. Machine learning is used extensively in stylometric text analysis and authorship attribution. The word2vec algorithm (Mikolov et al., 2013) looks at the surrounding context to construct word embeddings, while GloVe builds a more comprehensive vector based on the global statistics of word co-occurrence (Pennington et al., 2014). FastText (Bojanowski et al., 2017) is another popular method, where a word vector is constructed as the sum of associated character n-grams. These word embedding approaches are further extended to document embeddings (Doc2Vec; Le and Mikolov, 2014) and are commonly used in modern text analysis. However, their interpretability remains unclear, and they seem like a black box to the user. We share Yu's (Yu, 2008) perspective on the feature extraction process. The scholar claims that the feature vector should be plausible and easily interpretable before it is employed in any machine learning or deep learning algorithm. Therefore, the purpose of StyloMetrix is primarily to be a syntactic and lexical vector representation that can be interpreted by any expert linguist. The developed metrics are common and widely utilized in the linguistic community; however, some of them are more elaborate and task-specific, which enables a user to choose the exact set of metrics that is needed for their purposes. Although StyloMetrix is one of the rare multilingual comprehensive corpus analysis tools, the first commonly used apparatus for an exhaustive readability analysis was developed in 2006. The tool, called Coh-Metrix1, was developed by a group of four researchers from the Institute for Intelligent Systems, The University of Memphis, and released to the public via an online platform. In September 2022, the Coh-Metrix project was revived and transformed into Coh-Metrix 3.0 (McNamara et al., 2014).
The number of metrics has increased to 200. However, the size of an analyzed text is still limited to 1000 words. Moreover, Coh-Metrix offers a choice of statistical estimation (mean or standard deviation). Coh-Metrix encompasses a wide array of metrics covering the grammar, syntax, and semantics of the English language. It analyzes texts on over 50 types of cohesion relations and over 200 measures of language, text, and readability. Coh-Metrix has been widely used by about 6000 researchers from all over the world. It has also been a source of inspiration for analyzing languages other than English: there are documented versions of Coh-Metrix for Spanish (Quispesaravia et al., 2016), Portuguese2, and Chinese (Ye, 2013); however, no versions tailored for the Ukrainian, Polish, or Russian languages have yet been developed. The StyloMetrix metrics described further in this paper share a conceptual similarity but diverge in their intended applications. While Coh-Metrix primarily focuses on coherence, cohesion, and readability scores, StyloMetrix vectors are designed to encompass a wide and unconstrained spectrum of grammatical and syntactic distinctions found across different text genres. Footnote 1: [http://cohmetrix.memphis.edu/cohmetrixhome/](http://cohmetrix.memphis.edu/cohmetrixhome/) Footnote 2: fw.nilc.icmc.uscp.br:2338@cohmetrixport ## 3 StyloMetrix: Architecture and Design Drawing inspiration from stylistic features for stylometric analysis, we have developed metrics grounded in grammar, syntax, and lexis. Each metric calculates the total count of tokens adhering to specific linguistic rules. The outcome of the StyloMetrix tool is a normalized vector for each input text. This facilitates the comparison of texts of varying lengths within the same genre, or of texts authored by the same individual, among other potential use cases. The StyloMetrix Python package is readily available as an open-source tool on GitHub3, accessible to all. Furthermore, the tool's architecture is designed to be customizable by computational linguists, who can either modify existing metric sets or introduce their own. Footnote 3: [https://github.com/ZLIiAT-NASK](https://github.com/ZLIiAT-NASK) The foundation of the StyloMetrix tool is rooted in the open-source spaCy models developed by Explosion AI (Honnibal and Montani, 2017). spaCy models have established themselves as front-runners, showcasing state-of-the-art performance across a spectrum of tasks including tokenization, parsing, tagging, and named entity recognition, among others. The English, Ukrainian, and Russian StyloMetrix pipelines draw upon the output generated by spaCy models. Unlike those, the Polish StyloMetrix utilizes a model developed by the Institute of Computer Science (Polish Academy of Sciences), which incorporates a wider range of morphological features compared to the standard spaCy version. Further, attributes such as the part-of-speech label, the tag, and a set of morphological characteristics are leveraged to create higher-level syntactic and grammatical metrics. The input text is analyzed according to the hand-crafted rules and settings. As a result, the user receives as output a csv file with normalized statistics for each metric (see Fig. 1). ## 4 Potential for Explainability A pivotal objective ingrained in the design of StyloMetrix is the interpretability of the vectors it generates. Interpretation is possible at different levels, depending on the specific task or the level of technical expertise of the users.
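To make the metric design concrete, and to show why each vector entry remains directly interpretable, the sketch below expresses the core idea in plain spaCy: count the tokens that satisfy a linguistic rule and normalize by the length of the document, so the value always falls in [0, 1]. The metric name and rule are illustrative only, not part of StyloMetrix's actual feature set, and the real StyloMetrix metric API may differ.

```python
import spacy

# The English StyloMetrix builds on the spaCy English transformer pipeline;
# any spaCy pipeline with POS and morphology works for this illustration.
nlp = spacy.load("en_core_web_trf")

def past_tense_incidence(text: str) -> float:
    """Hypothetical StyloMetrix-style metric: share of past-tense verbs.

    The raw count of rule-matching tokens is normalized by the total
    token count, which is what keeps every metric in the range [0, 1].
    """
    doc = nlp(text)
    hits = sum(1 for tok in doc
               if tok.pos_ == "VERB" and "Past" in tok.morph.get("Tense"))
    return hits / max(len(doc), 1)
```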
Beyond delivering normalized statistics, the tool extends its utility by providing debug files as an auxiliary output format. These debug csv files encapsulate each token captured by each metric, allowing the user to delve deeper into linguistic analysis. For comparative analysis in computational linguistics, statistical tests can be used to determine the significance of different features in discriminating between certain groups of texts. For different text classification tasks, both the most important features and the correlations between pairs of these features can be used to gain insight into the model's decision making. To explain the model's decisions in a more complex way, i.e. taking into account the interaction of pairs of features and also the decomposition of the model's decisions into features that influenced the decision in a certain way or introduced uncertainty, one can use the open-source Artemis library4. It builds upon the Dalex package (Biecek, 2018), which is also suitable for visualizing other types of model explanations such as Shapley values (see Fig. 2). Footnote 4: [https://github.com/pyartemis/artemis](https://github.com/pyartemis/artemis) ## 5 StyloMetrix Metrics ### Metrics for Polish The Polish language, which is classified as a Western or sometimes Northern Slavic branch of the wider Indo-European language family, is distinguished by its synthetic and inflectional or fusional nature (Smoczynska, 1985; Reid and Marslen-Wilson, 2000; Lewandowska-Tomaszczyk and Wilson, 2022). A comprehensive set of 172 metrics has been developed specifically for the Polish language. These metrics encompass various linguistic aspects, including: 1. Grammatical forms 2. Punctuation 3. Syntax 4. Inflection 5. Graphical representation 6. Lexical attributes 7. Psycholinguistic features 8. Descriptive characteristics All of these metrics have been meticulously crafted by experts to address the unique challenges faced by morphologizers, parsers, and taggers based on the spaCy framework. The linguistic information associated with each token is structured across six levels: (1) Lemmas (the base or root forms of words), (2) Part-of-Speech (POS) tagging (assigning grammatical categories to words), (3) Morphology (examining the internal structure and form of words), (4) Tags (utilizing the National Corpus of Polish tagset for tagging)5, (5) Dependency (analyzing the syntactic relationships between words), (6) Entity label (identifying named entities and categorizing them). Figure 1: The concept of StyloMetrix. The choice of which linguistic description levels to consider in the programming code depends on the specific metric type. This strategic approach aims to capture the intended linguistic properties effectively while mitigating potential inconsistencies arising from errors made by the spaCy tools. Footnote 5: The National Corpus of Polish (NKJP): [http://nkjp.pl/](http://nkjp.pl/). See: Lewandowska-Tomaszczyk et al. (2012). #### 5.1.1 Grammatical Forms The Grammatical Forms class has been crafted to encompass the prevalence of distinct parts of speech. This comprehensive classification begins with nouns, verbs, adjectives, and adverbs, and extends to a spectrum of diverse pronouns, including personal, demonstrative, possessive, total, interrogative, indefinite, and negative ones.
Additionally, it encompasses particles, adpositions, coordinating and subordinating conjunctions, numerals, collective numerals, interjections, symbols, abbreviations, and other linguistic elements, primarily targeting foreign lexemes that may not be readily recognized by Polish language processing tools. #### 5.1.2 Inflection The most extensive class is primarily dedicated to the intricate aspects of inflection, which are of utmost importance to the Polish language. This category encompasses a total of 64 distinct metrics, covering the nuances of noun and pronoun declension, the comparison of adjectives and adverbs, and the conjugation patterns of verbs, providing a comprehensive framework for the analysis of grammatical inflection. 7 metrics are focused on the number and gender of nouns, encompassing both masculine and non-masculine personal genders. In addition to addressing standard verb conjugation, this set also encompasses 24 metrics devoted to specific properties of verbs. These encompass a wide range, including the presence of finite and infinitive verbs, the utilization of present, past, and future tenses, the application of active and passive voices, the consideration of perfect and imperfect aspects, and the incorporation of imperative and conditional moods. Additionally, noun-like gerunds (nominalized verbs), quasi-verbs (mostly impersonal non-inflected forms functioning like verbs), distinct types of participles (adverbial and adjectival ones), and impersonal verb forms in the perfective and imperfective aspect are included. #### 5.1.3 Syntax In the present iteration of StyloMetrix, the Syntax class encompasses a total of 19 metrics. However, it is essential to note that the development of syntax-oriented metrics for Polish is an ongoing endeavor. This ongoing effort is driven by the inherent challenges posed by the complex and flexible nature of Polish syntax, which lacks strict rules governing word order. Additionally, the intricacies of dependency tagging for Polish further contribute to the complexity of this task. Figure 2: Shapley values for the StyloMetrix features visualized with the Dalex package in a classification task of Polish poetry. StyloMetrix works effectively with easy-to-explain models such as Random Forest, and since the features translate directly into specific grammar-related patterns, explaining the model's decisions also allows for linguistic interpretation of the analyzed samples. In the current state, these metrics are designed to analyze various aspects of longer text units. They include the identification of words within declaratory, exclamatory, interrogative, negative, ellipsis-ended, and nominal sentences, as well as words enclosed in quotation marks. Specific considerations are made for colloquial speech, particularly in instances where infinitive verbs are used to convey an imperative-like intention. Acknowledging the unique characteristics of Polish syntax, the metrics encompass nominal predicates, Object-Verb-Subject (OVS) word order, and inverted epithets. Additionally, they extend to cover nominal phrases, words situated within modifiers, flat multiword expressions (FME), and appositional modifiers. Two additional metrics have been devised to identify similes, with one specifically targeting constructions in which nouns and pronouns function as objects of the comparison. The other metric is dedicated to the analysis of adjectival comparisons.
An additional set of metrics, specifically concentrating on subordinate and compound sentences, including various types of adverbial clauses such as causal, conditional, and concessive clauses, is currently in development and being closely monitored. Once the testing phase is successfully completed, these metrics will be made available on our GitHub account for public access and use. #### 5.1.4 Lexical attributes A total of 34 metrics have been designed to delve into the lexical aspects of texts. Among these, 12 metrics cover proper names, including masculine and feminine forms, and named entities identified using the Named Entity Recognition (NER) component integrated with the Polish spaCy model6. These metrics cover a wide range of entities, including person names in masculine and feminine forms, organization names, and dates. Furthermore, these metrics exhibit a nuanced approach by combining entity labels with animacy status, as provided by morphological analysis. This combination enables the creation of distinct metrics that can capture place and geographical names on one hand, and ethnonyms and demonyms, which are lexemes related to humans, on the other hand. Additionally, the metrics encompass a category of adjectives derived from place and geographical names. Footnote 6: [https://spacy.io/api/entityrecognizer](https://spacy.io/api/entityrecognizer) 15 lexical metrics have been curated to capture the diversity of the vocabulary used in texts. Among these metrics, four are dedicated to the measurement of word length, a characteristic quantified by syllable count. For this purpose, we used Spacy Syllables7, another component of the spaCy pipeline that adds multilingual syllable annotations to tokens8. Concurrently, other lexical metrics are devoted to the distinction between content words and function words, considering both lemmatized and non-lemmatized forms. Furthermore, a dedicated metric scrutinizes the prevalence of stop words, drawing from the spaCy stop words list tailored for the Polish language. The Type-Token Ratio, crucial for assessing the lexical richness and diversity of a text [17] and reflected in the ratio between tokens and word classes (types) in a corpus, is analyzed in two dimensions, including both lemmatised and non-lemmatised tokens. Additionally, two metrics facilitate the identification and quantification of the 1% and 5% most frequently occurring token types within the text. Footnote 8: It is worth noting that word length, measured in syllables, is considered an important indicator of text comprehensibility and is used in various readability measures (usually the ratio of words with 3 or more syllables to all words or, in the case of measures developed specifically for Polish, the ratio of words with 4 or more syllables to all words is the basis for estimating the level of text difficulty, see [10, 11]). In future iterations of StyloMetrix, various readability measures will be incorporated within the lexical metrics class. Within this subset of 6 lexical metrics, a dictionary-based approach is employed for linguistic analysis. The metric for vulgarisms focuses on the detection of vulgar language and expressions by utilizing a predefined dictionary containing inflected words derived from the most common formative Polish vulgar roots. Another metric is designed to pinpoint frequently occurring linguistic errors and relies on a dedicated dictionary that has been compiled in reference to an annual report detailing the "100 most frequent errors on the Internet."
The growing prevalence of Greek-origin prefixes, such as _hiper_, _mega_, or _super_, which primarily intensify adjectives but can also affect other parts of speech, is discernible through the lexical metrics. To accommodate the diversity of non-standard word-formation spellings, apart from defining a comprehensive list of these intensifiers, an auxiliary dictionary of exceptions was compiled to exclude fossilized lexemes that commence with prefixes like _giga_, etc. Tracking the occurrence of fixed adverbial phrases within the text is also possible owing to a list of such phrases sourced from Wikisłownik, the Polish edition of Wiktionary. Furthermore, three dictionary-based metrics account for the incidence of adverbs of time, duration, and frequency. #### 5.1.5 Psycholinguistic features Psycholinguistic features related to emotional affect were extracted from an experimental study on affective norms for Polish words [16, 17]. A dictionary comprising 2,650 words, developed by Imbir et al., was employed. In the study, individual words underwent evaluation by a total of 1,380 participants using self-report scales in three two-dimensional spaces: valence (which encompassed dimensions of positivity and negativity), origin (including automaticity and reflectiveness), and activation (involving subjective significance and arousal). In designing the metrics for the Psycholinguistic class, we took into account the six dimensions emphasised in the study, with each dimension further broken down into two metrics: the first metric counted the proportion of words exceeding the mean value for a given feature (e.g., positivity), while the second metric counted the proportion of words falling below the mean value for that feature. In total, 12 psycholinguistic metrics were created. #### 5.1.6 Descriptive characteristics The Descriptive Characteristics class was designed to detect intricate linguistic patterns, encompassing the precise utilization of adjectives and adverbs in descriptions of qualities, and the examination of adverb pairs and phrases where an adverb precedes an adjective. This category also places particular emphasis on complex apostrophes, incorporating either adjectives or verbs, as well as encompassing longer nominal phrases. Furthermore, it accounts for the distinctive use of nouns and pronouns in the vocative case. #### 5.1.7 Punctuation and the Graphical representation The Punctuation and Graphical Representation class addresses various aspects of textual structure. The punctuation metrics are tailored to record the frequency of punctuation marks, also in proximity to nouns or verbs. On the other hand, the graphical properties metrics focus on identifying capitalized tokens and recognizing distinctive features commonly found in social media texts, such as emojis, emoticons, Lenny faces, URLs, hashtags, and mentions with "@" symbols. For the detection of emojis comprising one or more Unicode characters, we employed spacymoji, an extension and pipeline component integrated with spaCy9. Emoticons were identified using the Emoticon Dictionary, comprising 225 of the most frequently used emoticons along with their textual representations [1]. Lenny faces, URLs, hashtags, and mentions with "@" symbols were extracted through the application of custom-designed regular expressions.
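For illustration, patterns of the kind just mentioned could look as follows; these are simplified placeholders, and the actual StyloMetrix regular expressions are more elaborate.

```python
import re

# Simplified stand-ins for the custom-designed patterns described above.
HASHTAG = re.compile(r"#\w+")
MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

def social_media_counts(text: str) -> dict:
    """Raw counts of social-media markers; a StyloMetrix-style metric
    would additionally normalize these by the token count of the text."""
    return {
        "hashtags": len(HASHTAG.findall(text)),
        "mentions": len(MENTION.findall(text)),
        "urls": len(URL.findall(text)),
    }
```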
Footnote 9: [https://pypi.org/project/spacymoji](https://pypi.org/project/spacymoji) The succinct overview of distinct metric groups already underscores the notion that various features have been tailored to capture characteristics specific to different types of texts and genres. While metrics aimed at grammatical forms, inflection, or syntax are universally applicable to all types of texts, spanning from literary to practical, others have been devised to detect attributes unique to social media posts (as exemplified by those encompassed in Graphical Representation) or, in a broader context, to informal speech patterns, encompassing online discourse. While the presented set of metrics is by no means exhaustive, particularly in its coverage of the plethora of non-normative linguistic patterns, a substantial number of metrics have been thoughtfully curated by linguistic experts who have drawn upon their extensive experience in the analysis of informal Polish discourse, encompassing both commercial and non-commercial blog posts, as well as offensive or harmful content on social media platforms. ### Metrics for English The StyloMetrix model for English is built upon the spaCy [15] English transformer pipeline (RoBERTa-based) with the components: transformer, tagger, parser, NER, attribute ruler, and lemmatizer. The choice of the model is driven by its accuracy evaluation. The English version of StyloMetrix leverages POS tags from the Penn Treebank, dependency labels, and morphological features. Although the spaCy English parser performs well, we encountered some drawbacks while designing the metrics. To implement the extension for the present tenses, we had to take incorrect sentence parsing into consideration. For instance, in the sentence _Diego's trying to splash water onto her back_, "'s" is tagged as the auxiliary verb in the passive voice, when it should be the auxiliary verb in the active voice. The spaCy English transformer pipeline performs well on sentences without contractions, where each word is disambiguated. Moreover, as sentence length increases, the parser's performance decreases due to long-distance dependencies and projectivity. We found it essential to compare existing parser tools with the spaCy transformer, to assure that spaCy provides the most rigorous results. To perform the comparative analysis we applied the Stanford (De Marneffe et al., 2006), Berkeley (Johnson and Ural, 2010), and spaCy English parsers. As predicted, the spaCy parser showed the best performance on the task of word disambiguation, where a sentence incorporates contractions such as _we'd_ or _I'd_ that can be parsed in two ways: as the verb _had_ or the modal verb _would_. After the preliminary stage of parser and tagger analysis, we crafted the metrics based on the same principles which were applied for the Polish language version. Hence, we will not repeat the shortcomings and obstacles encountered during the process of creating the rules. In total the English version covers 196 metrics, divided into the following groups: 1. Detailed grammatical forms 2. General grammar forms 3. Detailed lexical forms 4. Additional lexical items 5. Parts of speech 6. Social media 7. Syntactic forms 8. General text statistics #### 5.2.1 Detailed and general grammatical forms The detailed grammar group is also the most extensive one. It incorporates 55 rules which cover most of the English tenses.
That is, it encompasses the present, past, and future tenses in their various verb forms: the present / past / future simple, continuous, perfect, and perfect continuous, in the active or passive voice. Modal verbs also belong to this group. StyloMetrix covers most of the frequently used modal verbs, such as can / could, may / might, shall / should, must, and would. Each of them is evaluated in the present, continuous, and perfect forms, in the active and passive voice. Throughout, we rely on the spaCy morphology and dependency parsers and implement extensions for each rule, which allows adding metrics as a custom component to the main pipeline. General grammar forms are a consolidation of the principal grammatical rules. Under this category fall the present tenses; the past tenses; the future tenses; infinitive forms; modal verbs in the simple form; modal verbs in the continuous form; modal verbs in the perfect form; verbs in the active voice; and verbs in the passive voice. Therefore, a user is able to choose which group of grammar metrics is more pertinent to their task. #### 5.2.2 Detailed lexical forms and Additional lexical items The second largest group is the detailed lexical one. There are 48 metrics that mostly cover different types of pronouns. The subsets include subject, object, possessive, and reflexive pronouns, and four groups that incorporate the first person singular pronouns, the second person pronouns, and the third person singular and plural pronouns. During the experiment stages we found that some lexical items contribute to the decisions made by a classification model to discern text genres and detect hate speech and abusive language; hence the HurtLex dictionary was added to the group of additional lexical items10, which encompasses a range of rude, abusive and hurtful words that are present on mass media platforms. These groups of lexemes help to differentiate abusive or offensive language from non-offensive language. This is a separate subgroup and can be turned on or off based on the user's needs. To the additional lexical group belong punctuation (the same instances as described for the Polish language), retweets, URLs, mentions with an "@" sign, hashtags, content words and function words. Plural, singular, and proper nouns, personal names, noun phrases and three forms of adjectives and adverbs constitute this group as well. Furthermore, we created a dictionary which helps to count words that denote common patterns such as: time and location; manner; cause and purpose; condition; limitation and contradiction; example; agreement and similarity; and effect and consequence. We have observed that these linking expressions are useful for genre and author identification and can be a worthwhile extension to the existing lexical sample. #### 5.2.3 Parts of speech and Social media The Parts of speech and Social media groups are not as numerous as the previous ones. The first is primarily oriented toward calculating the general frequency of a specific POS in the text. These 23 metrics fully rely on the POS tagger by spaCy which, in turn, utilizes the universally standardized Penn Treebank annotation. The social media group consists of 7 metrics, two of which calculate the positive or negative sentiment of the text with the open-source library VaderSentiment11. Other metrics aim to trace lexical intensifiers, e.g. _utterly_, _tremendous_, and nomenclature words, e.g. _occasionally_, _little_, _marginally_. Examples of masked words (e.g. f**k) and digits (e.g.
4 u) also belong to the social media part. Footnote 11: [https://vadersentiment.readthedocs.io/en/latest/](https://vadersentiment.readthedocs.io/en/latest/) #### 5.2.4 Syntactic forms and General text statistics 22 metrics comprise the syntactic group, which covers five types of questions: general, special, tag, alternative and negative; coordinate and subordinate sentences and the number of punctuation marks in them; narrative and negative sentences as well as direct speech. Some figures of speech are added to the group. Fronting refers to a construction where a group of constituents that usually follows a verb precedes it instead. This figure of speech is also called preposing or front-focusing. For instance, _"Carefully, a baby is sleeping."_ or _"On the corner stood a little shop."_12. Syntactic irritation is built upon the rule that the continuous form of any tense together with intensifiers such as "constantly", "continuously", "always", "all the time", or "every time" creates the effect of irritation or dissatisfaction. For example, _"She's always coming late."_; _"He was constantly losing his temper in public"_. Syntactic intensifiers affirm the sentence meaning and are presented in the form of the auxiliary verbs do, did, and does. An example can be found in the sentence _"I do love dogs"_. Simile as a figure of speech is also implemented in this group; it covers instances such as _"She looks like her mother."_ and _"He's as busy as a bee"_. Inverse sentences aim to catch such statements as _"You will find only what you bring in."_ and _"Once you start down the dark path, forever will it dominate your destiny"_13. Footnote 12: [https://dictionary.cambridge.org/grammar/british-grammar/fronting](https://dictionary.cambridge.org/grammar/british-grammar/fronting) General text statistics incorporates 12 metrics. A large number of them are designed to calculate the distance between specific nodes, for instance, the distance between noun, verb, adverbial, prepositional and adjectival phrases within one text. Three types of the type-token ratio are intended to show the readability score of the text and its cohesiveness. Repetitions of words and sentences also serve as markers of text cohesion. ### Metrics for Ukrainian and Russian The sets of metrics for the Ukrainian and Russian languages are at the development stage; at this moment they encompass 104 metrics. These metrics are subdivided into four main clusters: 1. Lexical forms 2. Parts of speech 3. Syntactic forms 4. Verb forms There is one additional group for the Ukrainian StyloMetrix that covers readability scores, which are identical to the English version, so we will not concentrate much attention on them. As Ukrainian and Russian are fusional languages, they have similar syntactic structures and grammatical characteristics. The main goal of developing both languages is to enable analysis for disinformation detection, as it is one of the most relevant tasks in NLP these days. The sets of metrics for the Ukrainian and Russian languages are built on the same principles and assumptions that were applied for the Polish and English languages. Hence, the lexical metrics provide information on plural and singular nouns; moreover, we extended them to cover more morphological features such as animacy (animate/inanimate) and gender (feminine, masculine and neuter), and to distinguish between first and last names. We also added diminutives, as they serve as a distinctive feature of these languages.
Direct and indirect objects as well as cardinal and ordinal numerals are in the general set. We strive to pinpoint distinctive lexical forms such as the seven cases in Ukrainian vs. six in Russian; demonstrative, personal, total, relative and indexical pronouns; as well as qualitative, quantitative, relative, direct and indirect adjectives. These are unique features that bring not only semantic but also syntactic information about the text and help to classify genre peculiarities. Other lexical subgroups, such as punctuation, direct speech and three types of adverb and adjective comparison, are similar to the English version. The Parts of speech class relies on the assumptions utilized for the Polish and English languages. On the syntactic level we created an extension for noun phrases, a common rule to detect direct speech, as well as narrative, negative, and interrogative sentences. Among the most prominent figures of speech that can be traced in both languages are parataxis, ellipsis and positioning. Parataxis is a syntactic means based on the omission of conjunctions14. For example, _"I came, I saw, I conquered."_ instead of _"I came, and I saw, and I conquered."_. Ellipsis is an omission of words, represented as three dots; it aims to show a pause, or suggests there is something left unsaid15. For instance, _"She remained silent... then things suddenly changed."_. Typically, ellipses are common in poetic or fictional texts and hence are a fruitful way of genre discrimination. Positioning is pertinent only to the Ukrainian language; it is a special way of constructing a phrase where the first part (usually an adjective) describes the second one (usually a noun). Unlike in other languages, in Ukrainian such cases are written using a dash (-) and count as obsolete units. The described figures of speech are effective in classification tasks such as author identification or genre differentiation. An example will be provided in the next section. Footnote 14: [https://www.litcharts.com/literary-devices-and-terms/parataxis](https://www.litcharts.com/literary-devices-and-terms/parataxis) Footnote 15: [https://www.grammarly.com/blog/ellipsis/](https://www.grammarly.com/blog/ellipsis/) The last group consists mainly of verb forms, covering past, present and future tense forms as well as verbs in the perfect and imperfect aspects, transitive and intransitive forms, and participles. The Ukrainian language differs from Russian in the presence of four conjugation groups, which are constructed as custom extensions in the pipeline. Moreover, metrics for the incidence of adverbial perfect and imperfect participles are built for the Ukrainian StyloMetrix. As mentioned earlier, the Ukrainian and Russian versions of StyloMetrix are at a preliminary stage and still developing. The main emphasis lies on syntactic features, such as figures of speech, which are common and prominent in these languages. ## 6 Applications and Use Cases We designed StyloMetrix to be a versatile text representation that can be utilized together with machine learning models. **The StyloMetrix output is calculated by a general statistical formula estimating a mean value, i.e. the count of words which fall under a specific rule is divided by the total number of words in the text. Hence, each metric always lies in the range [0, 1]**. The procedure for obtaining the StyloMetrix outputs is rather straightforward.
One can download and install the tool via GitHub, then run the package on a local machine and generate the csv output. The inputs can be texts of any length. StyloMetrix does not require any additional preprocessing, e.g. URL omission or hashtag elimination, as these features are covered on the metrics level. An important thing to remember is that the analyzed documents should be stored together in a separate folder for the tool to produce the desired output. After generating the csv file, a user can proceed to utilizing the StyloMetrix vectors as inputs to machine learning and deep learning algorithms. In this section we present a couple of example settings that cover all languages provided by StyloMetrix. ### Integration with Machine Learning Classifiers The StyloMetrix vectors prove to be a robust and exhaustive means to classify news genres and sources. We conducted these experiments on Ukrainian and Russian sources. As Ukrainian is a low-resource language and more datasets are yet to be developed, we have chosen a benchmark news corpus provided by Panchenko et al. (2022). It was compiled from seven Ukrainian news websites: BBC News Ukraine, New Voice Ukraine, Ukrainian Pravda, Economic Pravda, European Pravda, Life Pravda, and Unian. We strive to show how the StyloMetrix vectors can handle multi-class data when used only as input to a Random Forest classifier. Figure 3 displays a confusion matrix for the seven news classes. We can deduce that most of the sources pass the 50% prediction threshold, which is a significant accomplishment for the StyloMetrix tool, considering that we utilized a simple Random Forest classifier. It is worth noticing that, as we dig deeper into the texts belonging to each news source, we discover that classes such as Ukrainian Pravda and Economic Pravda belong to one publisher, and therefore the style of writing is rather similar. Hence, we can see the confusion between classes 4 (Ukrainian Pravda) and 6 (Economic Pravda). To compare with the output for the Ukrainian test, we conducted a similar experiment for the Russian language. The benchmark news corpus provided by Hugging Face16 offers five classes that cover different news topics: culture, economics, politics, social, and sport. Unlike the Ukrainian experiment, this one focuses on topic classification. The test settings remained the same: a Random Forest classifier with the StyloMetrix vectors as input. Footnote 16: [https://huggingface.co/datasets/mlsum](https://huggingface.co/datasets/mlsum) Taking into consideration that both languages capture similar features and syntactic patterns, we expect the Russian model to perform on the same level as the Ukrainian model, regardless of the difference in task characteristics. Figure 4 presents the outcome of the Russian news classification. The general tendency of the model's performance is above the 50% threshold, except for the economics class, which gets confused with the social data. The high number of false positives can be driven by the similarity in syntactic constructions which are typically utilized in these types of texts. Overall, the StyloMetrix vectors prove their relevance for classification tasks in the Ukrainian and Russian languages. Although the total number of metrics for each language is quite low, only 104, they indeed help to differentiate the stylometric nuances of text genres and discern the source of each text.
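The setup used in these experiments can be reproduced in a few lines of scikit-learn. The sketch below assumes the StyloMetrix csv output has been merged with a user-added `label` column; the file and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Hypothetical file: StyloMetrix csv output plus a "label" column.
df = pd.read_csv("stylometrix_news.csv")
y = df["label"]
X = df.drop(columns=["label"]).select_dtypes("number")  # keep metric columns only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```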
The third experiment concentrated on hate speech identification in the English language. A lot of experiments have been offered to differentiate offensive and neutral language; however, most of them applied large Transformer or deep learning models. Unlike those, we tested a Voting Classifier on the Gab (Qian et al., 2019) dataset. The model is grounded on the Random Forest classifier, Logistic Regression and SVM algorithms. The StyloMetrix vectors managed to reach an overall performance close to the state-of-the-art deep learning models. Table 1 compares the Voting Classifier to the results of the deep learning algorithms obtained by Qian et al. (2019). As can be inferred from our experiment, the potential of syntactic representations to distinguish hate speech from neutral language is not a far-fetched idea, and can be realized with a simple rule-based approach like StyloMetrix. Figure 3: A confusion matrix for the Ukrainian news classification task. Figure 4: A confusion matrix for the Russian news classification task. The Speakleash library17, an open dataset for the Polish language systematically collected by the Speakleash (a.k.a. Spichlerz) association, comprises over 300 GB of data (more than 54 million documents) from various categories. One of the significant advantages of this dataset lies in its immense diversity. Six categories were randomly selected (Job offers, Literature, News, Style blogs, Web articles and Wikipedia), with 30 texts randomly sampled from each. The Random Forest model was trained with 30 important features selected with the topK algorithm. Footnote 17: [https://github.com/speakleash/speakleash](https://github.com/speakleash/speakleash) Even with such a small number of samples, using the StyloMetrix vector representation, these categories could be clearly distinguished with high accuracy, confirming the hypothesis that specific text types differ in grammatical structures and combinations. This example also illustrates applications where StyloMetrix vectors outperform large models like BERT-based ones, as the limited sample size per class would not suffice for their fine-tuning. Another clear advantage of StyloMetrix vectors is their independence from text length. Since the metrics are normalized by the number of tokens in the document, always yielding values between 0 and 1, the issue of overfitting due to varying document lengths between classes is not directly encountered. ### Combination with Deep Learning Embeddings One of the most exhaustive analyses has been conducted utilizing the English version of the tool. Our primary concern is to track whether the additional information from StyloMetrix can contribute to the semantic embeddings produced by Transformers, such as RoBERTa (Zhu et al., 2020) or BERT, to classify offensive or hate speech. The null hypothesis claims that the StyloMetrix vectors can enhance the semantic embeddings and provide a more rigorous classification of abusive content. With this in mind, at the preliminary stage, we developed experiments that leverage standard machine learning algorithms such as Random Forest and Logistic Regression, before switching to Transformer models such as HateBERT18 and RoBERTa. The full list of hyper-parameters and models' layers can be found in Appendix A.
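A minimal PyTorch sketch of the embedding fusion examined in this subsection is given below: the StyloMetrix vector is concatenated with the pooled transformer output before the classification head (cf. Fig. 7). The pooling, ReLU activation, and single-logit output follow the Polish setup described below, and `sm_dim=109` is inferred from the reported input length 1133 = 1024 + 109; the exact layer layout of the original models may differ.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class FusionClassifier(nn.Module):
    """Transformer embedding + StyloMetrix vector fusion (sketch of Fig. 7)."""

    def __init__(self, backbone: str = "roberta-large", sm_dim: int = 109):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        hidden = self.encoder.config.hidden_size  # 1024 for roberta-large
        # Concatenated input of length hidden + sm_dim (1133 in the Polish
        # setup), activated with ReLU and passed to a fully connected layer;
        # train with nn.BCEWithLogitsLoss on the single output logit.
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(hidden + sm_dim, 1))

    def forward(self, input_ids, attention_mask, sm_vec):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state.mean(dim=1)  # average of last hidden state
        return self.head(torch.cat([pooled, sm_vec], dim=-1)).squeeze(-1)
```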
Footnote 18: [https://huggingface.co/GroNLP/hateBERT](https://huggingface.co/GroNLP/hateBERT) As for the datasets, we have evaluated two of the most widespread corpora for hate speech detection: the ETHOS19 and Reddit (Israeli and Tsur, 2022) datasets. Footnote 19: The ETHOS hate speech detection dataset. Table 2 presents the general tendency of the StyloMetrix vectors to enhance the performance of any classification model regardless of the dataset. The only exception is the RoBERTa model on the Reddit corpus, where StyloMetrix decreased the F1 score on hate detection, nevertheless improving the weighted F1 average for both classes. \begin{table} \begin{tabular}{l|l|l|l|l} \hline **Model** & **Prec.** & **Rec.** & **F1 weighted** & **F1 hate** \\ \hline Voting: StyloMetrix & 0.83 & 0.84 & 0.84 & 0.84 \\ \hline CNN & 0.95 & 0.96 & 0.90 & - \\ \hline RNN & 0.95 & 0.95 & 0.89 & - \\ \hline \end{tabular} \end{table} Table 1: Voting Classifier with the StyloMetrix vectors for the hate speech detection task. Figure 5: Confusion matrix for the text classification task on six categories from the Speakleash library. Figure 6: The same six text genre categories from the Speakleash library clustered with t-SNE using StyloMetrix vectors. An analogous experiment regarding cyberbullying detection was also conducted in the Polish language. The dataset included posts and comments gathered from the Wykop.pl platform, often referred to as the Polish equivalent of Reddit. A detailed description of the dataset is available here [1]. The following experiments were conducted using a transformer model and StyloMetrix vectors: 1. The output probabilities from the Polish RoBERTa large model were used as input to the Logistic Regression (LR) model. 2. Then a fine-tuned RoBERTa large with an additional classification layer was considered. The model used the average of the last hidden state from RoBERTa as a pooled output. This output was activated with ReLU and went to a fully connected layer. The loss function used was BCEWithLogitsLoss (combining a Sigmoid layer and the BCELoss function). The learning rate used for the experiment was 1.5e-6. 3. Finally, a modification of the fine-tuned RoBERTa with the exact same settings except for the embedding size was considered. Now the StyloMetrix vector was concatenated with the pooled RoBERTa output, yielding inputs of length 1133 (instead of 1024) for the last 2 layers, as shown in Fig. 7. Additionally, a set of new metrics has been developed dedicated to offensive comments, addressing different types of apostrophe (the figure of speech). As shown in Table 3, concatenating the RoBERTa embeddings with the StyloMetrix vectors increases the overall metrics of the model also in the Polish language. This concise description of experiments presents one of many possibilities to employ the StyloMetrix vectors with machine learning and deep learning algorithms, enhancing their operation and yielding stronger predictions. ## 7 Previous Applications of StyloMetrix StyloMetrix was initially applied in NLP tasks to classify text genres with and without adult content and to analyze sentiment in customer reviews [1]. Experiments using StyloMetrix vectors as input for Random Forests were conducted on small datasets.
In the case of genre classification, the results outperformed classification results obtained with deep learning models on the same data, achieving a mean accuracy of 93.5% and balanced results across classes. \begin{table} \begin{tabular}{|l|l|l|} \hline **Model** & **weigh. F1** & **avg. F1** \\ \hline \multicolumn{3}{|c|}{**ETHOS**} \\ \hline RoBERTa only & 0.73 & 0.63 \\ \hline RoBERTa with SM & 0.75 & 0.67 \\ \hline HateBERT only & 0.77 & 0.70 \\ \hline HateBERT with SM & 0.81 & 0.74 \\ \hline \multicolumn{3}{|c|}{**Reddit**} \\ \hline RoBERTa only & 0.73 & 0.61 \\ \hline RoBERTa with SM & 0.77 & 0.60 \\ \hline HateBERT only & 0.79 & 0.61 \\ \hline HateBERT with SM & 0.80 & 0.63 \\ \hline \end{tabular} \end{table} Table 2: Comparative evaluation of the transformer models with and without the StyloMetrix vectors. \begin{table} \begin{tabular}{|l|l|l|} \hline **Model** & **Rec** & **F1** \\ \hline \hline Pre-trained RoBERTa & 0.88 & 0.90 \\ RoBERTa fine-tuned & 0.93 & 0.93 \\ RoBERTa with SM & **0.94** & **0.94** \\ \hline \end{tabular} \end{table} Table 3: Results of the models for the Wykop content moderation task: Recall and weighted F1 score. Figure 7: RoBERTa and HateBERT models' layers with StyloMetrix vectors. Text representations generated with StyloMetrix were used alongside HerBERT-based embeddings to detect genres and persuasion techniques in Polish (Modzelewski et al., 2023). The experiments employed classical machine learning models: LightGBM, XGBoost, and logistic regression. The approach adopted for detecting persuasion techniques resulted in a third-place finish in SemEval-2023 Task 3 (Piskorski et al., 2023). In the realm of computational linguistics, StyloMetrix text analysis was used for an in-depth linguistic analysis of offensive language and hate speech from the Wykop.pl web service (Okulska and Kolos, 2023). The tool was also used for the task of author classification in Polish poetry (Okulska et al., 2023). These applications underscore the tool's utility in addressing complex societal issues and fostering a deeper understanding of online language dynamics, as well as its adaptability to diverse text analysis challenges for literary research purposes. The efficiency of StyloMetrix for the Ukrainian language was analyzed in detail by Stetsenko and Okulska (2023). The authors provide a more in-depth analysis of the language characteristics and some limitations of the tool, and offer an overview of StyloMetrix usage as an explainable model for text classification. ## 8 Limitations The reasons for divergences in the tool's output may be several. spaCy models have limited accuracy, and errors in model performance may propagate and skew metric values (although some mistakes are corrected as an element of the StyloMetrix pipeline). Language models do not perform well on contaminated data: typos, imperfect data sources (e.g., OCR), colloquial language or inconsistent grammar may result in incorrect labeling and disrupt dependency parsing. Since the metric calculations rely on substantive knowledge, they have to be implemented into the system by an expert in the field, which opens the possibility for human error. Therefore, in sensitive (e.g., jurisdictional) applications, it is recommended to use the debugging information output for manual checks. In the phase of data analysis, vectors of short texts may turn out to be too sparse in non-empty values to represent a writing style, and therefore the classification may be ineffective.
Low accuracy could also occur when different texts have a similar structure. There will potentially be classes where a solely stylometric representation will still require support from a semantic classifier to yield fully satisfying results. ## 9 Conclusion There are many different methods for vectorizing textual data, and it is crucial to select the appropriate approach for a given task. The stylometric approach shows great potential for solving various NLP problems. The open-source multilingual StyloMetrix vectors presented in this paper offer: 1. Normalized advanced stylometric statistics with a fixed length regardless of the input text size. 2. The ability to serve as input for text classification tasks, such as content, genre, authorship, or style classification. They are also effective in semantic classification, for instance, in detecting hate speech. 3. The potential to enhance existing text embeddings, leading to higher classification effectiveness. 4. Model explainability, as they perform well on simple tree-based models like Random Forest, with each value in the StyloMetrix vector (each model feature) directly explaining a specific grammatical or lexical pattern. 5. An open-source resource for advanced linguistic analysis of entire text corpora. 6. Availability for four languages: English, Polish, Ukrainian, and Russian.
2308.16787
Exploring the data of blockchain-based metaverses
In recent years the concept of the metaverse has evolved in an attempt to define richer immersive and interactive environments supporting various types of virtual experiences and interactions among users. This has led to the emergence of various metaverse platforms that utilize blockchain technology and non-fungible tokens (NFTs) to establish ownership of metaverse elements and attach features and information to them. This article delves into the heterogeneity of the data involved in these metaverse platforms and highlights some of their dynamics and features. Moreover, the paper introduces a metaverse analysis tool developed by the authors, which leverages machine learning techniques to collect and analyze daily data, including blockchain transactions, platform-specific metadata, and social media trends. Experimental results are presented for a use-case scenario focused on the trading of digital parcels, commonly referred to as metaverse real estate.
Simone Casale-Brunet, Leonardo Chiariglione, Marco Mattavelli
2023-08-31T15:03:44Z
http://arxiv.org/abs/2308.16787v1
# Exploring the data of blockchain-based metaverses ###### Abstract In recent years the concept of the metaverse has evolved in an attempt to define richer immersive and interactive environments supporting various types of virtual experiences and interactions among users. This has led to the emergence of various metaverse platforms that utilize blockchain technology and non-fungible tokens (NFTs) to establish ownership of metaverse elements and attach features and information to them. This article delves into the heterogeneity of the data involved in these metaverse platforms and highlights some of their dynamics and features. Moreover, the paper introduces a metaverse analysis tool developed by the authors, which leverages machine learning techniques to collect and analyze daily data, including blockchain transactions, platform-specific metadata, and social media trends. Experimental results are presented for a use-case scenario focused on the trading of digital parcels, commonly referred to as metaverse real estate. metaverse, blockchain, NFT, machine learning ## I Introduction The metaverse can be defined as a digital platform consisting of virtual environments and virtual worlds enabled by various technologies. These virtual environments can be created and customized by individuals or organizations for a variety of purposes, including entertainment, education, communication, and business. The metaverse can consist of multiple virtual layers, which can be connected through metachains and secured through the use of blockchain technology [1, 2, 3]. Its implementation may also require the use of technologies such as virtual reality, augmented reality, and artificial intelligence, depending on the specific use case [4], with the human experience and interaction remaining a key component [5]. This paper focuses on a specific type of metaverse platform based on the concept of virtual land parcels, in which blockchain technology is used to enable ownership and representation of these digital assets. In other words, these platforms implement the concept of real estate in a distributed, trustless, and interactive environment. A comprehensive understanding of this new digital asset class requires knowledge of topics such as traditional real estate and financial markets, blockchain technology, cryptocurrency assets, and non-fungible tokens (NFTs). Studies on blockchain-based metaverse platforms, such as Decentraland and The Sandbox Game, have shown that the location of virtual parcels is a key factor in determining their value, and that the market for digital land represented as NFTs is similar to the market for physical real estate [6, 7, 8]. This paper presents a technical analysis of the key components of blockchain-based metaverse platforms based on virtual land parcels. It illustrates and demonstrates how various data points, such as blockchain transactions, the number of users connected to the platform, and social media engagement, can be collected and effectively used to create accurate statistical models for determining the value of each individual parcel within each metaverse.
In contrast to the state of the art, where studies focus on a specific platform and generally only consider transactions on the blockchain, this study presents a cross-sectional analysis of the top five Ethereum-based metaverses in which all collected heterogeneous data is analyzed on a daily basis, giving users the ability to assess the economic value of the parcels in these platforms. The paper is structured as follows: Section II provides an overview of the main technical components of blockchain-based metaverse platforms based on virtual land parcels and NFTs; Section III illustrates the different types of data that can be extracted and collected from these platforms; Section IV analyzes the collected data and demonstrates how it can be used to build effective statistical models based on machine learning techniques to estimate the fair economic value of each parcel; finally, Section V concludes the paper and discusses future research directions. ## II Blockchain-based metaverse environments Digital real estate markets within metaverses, also known as virtual worlds, often exhibit characteristics similar to traditional real estate markets, such as limited availability of land and the inability to easily move or transfer ownership of property [6]. However, these markets utilize decentralized technologies, such as blockchain and smart contracts, to facilitate secure and trustless transactions. This means that individuals can directly participate in the economy and own digital real estate, referred to as digital parcels, without the need for a central authority to verify or mediate the transaction. In this article, we examined five Ethereum-based platforms, selected based on their popularity, trading volume, and our own expertise. These are: Voxels, Decentraland, The Sandbox Game, Somnium Space, and Otherside. The contents of this list are derived entirely from the knowledge and expertise of the authors; the list should not be interpreted as providing any form of financial advice, as it is intended for informational purposes only. ## III Data collection To fully understand and evaluate the value of virtual worlds in metaverse platforms, it is important to consider the types of data that can be analyzed for each environment. These data can be classified into two categories: on-chain data, i.e., financial transactions involving the parcels' NFTs, which are stored on the blockchain, and off-chain data, such as parcel descriptions (e.g., location, size) and utilization (e.g., traffic patterns), which are generally not persistent and are served by centralized servers. These data must be aggregated and carefully organized in order to be analyzed effectively. In the following sections, we will explore the main types of data that make up these metaverses and how we have integrated these data into our daily data acquisition and analysis tool, shown in Figure 1. We have made this tool publicly accessible [9] and we have also developed an API that allows users to retrieve heterogeneous data from various metaverses with common semantics, ensuring that the data is always up-to-date. In the following, we present a discussion on the various types of data. a) _Metaverse-specific data_: information about each parcel in a virtual world, including its location and maximum build size, is often stored on centralized servers and represented as JSON files. 
This information, known as metadata, is usually encoded according to the _de-facto_ ERC721 metadata standard or the Enjin metadata recommendations. More advanced information about a parcel can typically be obtained through the metaverse platform's public API (e.g., see [10] for Decentraland, and [11] for Voxels). b) _Blockchain transactions_: the data stored on public blockchains, which are publicly accessible by design, can be efficiently accessed when structured in formats such as SQL [12]. The metaverse environments in this study are primarily based on Ethereum (with some secondary use of Polygon). The data collection techniques used are the ones we described in [13]. c) _NFT exchange-specific data_: parcels can be traded on exchanges specific to the platform, such as the Decentraland Marketplace, and on more broadly deployed exchanges like OpenSea, LooksRare, and others. The information about the parcels that are on sale, including their price, is not stored on the blockchain but rather on the exchange website. To keep track of this information, it may be necessary to use the exchange API (if available, e.g., OpenSea API [14]) or web scraping techniques. Two interesting insights to consider when looking at this list of parcels for sale are the lowest price on the list, also known as the floor price, and the size of the list. The size of the list, along with the number of daily transactions, can give an indication of how 'liquid' the collection trading is. d) _Media and social media popularity_: the popularity of cryptocurrencies and NFT assets in mainstream and social media, and the communities around them, are very important factors. In fact, studies such as [15] have emphasized this phenomenon. It is therefore important to monitor the sentiment on the main social media platforms (e.g., Twitter, Reddit, Google). This can provide insight into the popularity of each metaverse platform and the broader concept of the metaverse, which, as we will see in the next sections, are correlated with the average price of the parcels. ## IV Data Analysis and Visualisation In the following, we describe how we analyzed the market for the five metaverses described in Section II for the period from January 1, 2021 to November 30, 2022. First, we will describe the techniques implemented for data collection, followed by the types of analysis carried out: starting with a global market analysis, then for each separate platform, and ending with the implementation of a machine learning model where, using the available data, it has been possible to define a suitable value for each parcel in the various metaverses. ### _Dataset_ We obtained information on the blockchain transactions of the parcel NFTs and the average daily price of cryptocurrencies related to each metaverse (e.g., SAND and MANA) using the Dune platform [16]. This platform provides SQL-structured and queryable data on major blockchains, including raw transaction data and aggregated views (e.g., NFT trades). Using the official Twitter API, we collected data on social trends by gathering all tweets mentioning the accounts of each project and those containing the "#metaverse" hashtag, as well as the Google Trends data for the term "metaverse". For each metaverse platform, specific information was then gathered based on the information available from their metadata. This is summarized in Table I. All of the resulting data and metadata we obtained were saved in our local database, as illustrated in Figure 1 where the metaverse analysis framework we developed is shown. 
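As a concrete illustration of the metadata retrieval described above, the following minimal Python sketch resolves a parcel NFT to its off-chain JSON metadata through the standard ERC-721 `tokenURI` pattern. This is our own sketch rather than the paper's actual tooling; the RPC endpoint and contract address are placeholders.

```python
# Minimal sketch (ours): resolve a parcel NFT to its ERC-721 JSON metadata.
# The RPC endpoint and contract address below are placeholders.
import requests
from web3 import Web3  # web3.py v6

RPC_URL = "https://example-ethereum-node.invalid"               # hypothetical
PARCEL_CONTRACT = "0x0000000000000000000000000000000000000000"  # placeholder

# Only the two read-only ERC-721 calls needed here.
ERC721_ABI = [
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
parcels = w3.eth.contract(address=Web3.to_checksum_address(PARCEL_CONTRACT),
                          abi=ERC721_ABI)

def parcel_metadata(token_id: int) -> dict:
    """Fetch the off-chain metadata JSON (name, description, attributes, ...)."""
    uri = parcels.functions.tokenURI(token_id).call()
    meta = requests.get(uri, timeout=30).json()
    meta["owner"] = parcels.functions.ownerOf(token_id).call()
    return meta
```

In practice the returned JSON would then be normalized into the common schema of the local database before analysis.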
Table II summarizes the volumes of trades in USD and the number of tweets. For the purpose of this study, we considered only transactions with an economic value (i.e., not those where only the owner of the token associated with the parcel has changed without a corresponding economic exchange). We also filtered these transactions by eliminating, for each project, those above the 99th percentile. According to the table, the total volume of transactions that we considered was approximately USD 1,500M and included approximately 160k transactions (with 10% of the total volume and 1% of the transactions already subtracted). At this stage, we did not perform any filtering on the collected tweets. Fig. 1: Metaverse analysis framework developed in the paper. ### _Metaverse market trends_ During the period we studied, several notable events occurred: Facebook rebranded itself to Meta in late October 2021, leading to a surge in mainstream interest in the term "metaverse"; the rapid growth of the cryptocurrency market, driven primarily by Ethereum and Bitcoin [17], reached all-time highs in November 2021; the Otherside platform was launched on 30th April 2022; subsequently, the market as a whole saw a contraction and crash in both equities and cryptocurrencies due to challenging macroeconomic conditions. These events likely had an impact on the trend in digital land sales for the five metaverses we analyzed, as shown in Figure 2. ### _Platform-specific market trends_ We can further delve into which metaverse platform had the most success in terms of trade volume and social media engagement by examining Figure 2. We can see that all collections saw the number of transactions and their average value increase following the explosion of interest in the metaverse topic in November 2021, and then followed the downward trend that began in spring 2022 (Figures 2(b) and 2(a)). The overall market considering all five projects might not appear to have been negatively impacted; however, this is only due to the launch of "Otherside" by the creators of the BAYC (one of the most successful and influential collections in the NFT market today). In fact, "Otherside" has managed to become one of the metaverse projects with the most traded volume in a short period of time (see Table II). It is interesting to see the distribution of daily transactions versus average daily price illustrated in Figure 2(c): from here, we can see that the market is clustered into two main groups, with "The Sandbox Game" and "Otherside" forming one group and the remaining collections forming the other. By analyzing the exchanges where these transactions take place, we estimated that approximately 88% of the USD transaction volume occurs on OpenSea, while the next two most-used exchanges are x2y2 and the Decentraland (DCL) marketplace (note that only Decentraland parcels can be traded on the latter), with approximately 6% and 3.6%, respectively. We also find that ETH and WETH are the most common cryptocurrencies used for trading, accounting for 80% and 10% of the total USD volume, respectively. WETH, an ERC-20 token that represents ETH 1:1, is often used to purchase parcels (and other NFTs) through a bidding process. Bids are usually placed below the current lowest price of the collection, known as the floor price. Once a parcel has been acquired, it may be resold in an attempt to make a (quick) profit. This is known as flipping. 
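The transaction-cleaning step described at the top of this subsection (keeping only sales with economic value and dropping, per project, trades above the 99th percentile) can be sketched in a few lines of pandas. The column names below are illustrative assumptions, not the paper's actual schema.

```python
# Sketch of the cleaning step: keep sales with positive USD value and drop,
# per project, trades above the 99th percentile. Column names are assumed.
import pandas as pd

def clean_trades(trades: pd.DataFrame) -> pd.DataFrame:
    # trades: one row per parcel sale, columns ["project", "timestamp", "usd_value"]
    sales = trades[trades["usd_value"] > 0].copy()
    p99 = sales.groupby("project")["usd_value"].transform(lambda s: s.quantile(0.99))
    return sales[sales["usd_value"] <= p99]

def daily_summary(sales: pd.DataFrame) -> pd.DataFrame:
    # Daily trade count and average sale price per project.
    day = pd.to_datetime(sales["timestamp"]).dt.floor("D")
    return (sales.assign(day=day)
                 .groupby(["project", "day"])["usd_value"]
                 .agg(n_trades="count", avg_price_usd="mean")
                 .reset_index())
```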
During times when the market is experiencing a negative trend, such as a liquidation phase, there may be an increase in the number of accepted bids for WETH. This can be seen in Figure 2(d), which shows the ratio (represented by the green line) between the daily trading volume of WETH and other currencies. This ratio tends to increase significantly when the market is experiencing a negative trend and average parcel prices are declining. ### _Parcels position, geometry and traffic_ In the previous section, we analyzed various metaverses individually, examining the average daily price of parcels sold. If we instead focus on individual parcels, recent studies have shown that location is a key factor that can significantly impact the parcel value when compared to the average value. For example, studies [6] and [8] on Decentraland and The Sandbox Game, respectively, have both concluded that, despite the absence of travel distance in the metaverse, location is extremely important. These studies, however, focus on two specific platforms where the size of each parcel is uniform. In the more general case of Voxels and Somnium Space, parcel size may also affect the price of a parcel. Therefore, the framework we implemented (shown in Figure 1) also gathers the metadata for each parcel, including information about the available area and volume for construction on the parcel. In addition, for Decentraland, Somnium Space, and Voxels, we have also collected information about the traffic on each parcel. In the following, we analyze the information we have collected for each individual parcel in addition to its geographical location, as shown in Table I. a) _Voxels_: each parcel has different dimensions, with associated height and area limits for building. For each parcel, we are able to obtain the daily cumulative number of unique users who have logged in. b) _Decentraland_: all the parcels have the same size of 16m x 16m, but adjacent parcels can be grouped into estates. As of now, there are approximately 2,160 estates. For each parcel, we are able to collect the number of concurrent users connected per hour: Figure 3 shows the maximum number of users connected to the platform from June 2022 to the end of November 2022 (the period for which we have data). c) _The Sandbox Game_: all the parcels have the same size of 96m x 96m, but adjacent parcels can be grouped into estates in fixed sizes of 3x3, 6x6, 12x12, and 24x24 parcels. d) _Somnium Space_: there are three types of parcels with different sizes: 'S' (2,000m\({}^{3}\)), 'M' (15,000m\({}^{3}\)), and 'XL' (75,000m\({}^{3}\)). For each plot, we collect the number of connected users per hour, distinguishing between spectators and players. e) _Otherside_: for each parcel, we identify sediments, artifacts, and the possible presence of one of the 10,000 Koda NFTs. ### _Machine learning models of metaverses_ To examine potential correlations and build a statistical model to determine the economic value of each parcel, we first collected and organized data from various sources and at different levels. 
We then used a Spearman correlation analysis to analyze the following variables: daily average price, volume, and number of parcel sales; metaverse topic popularity on Google (measured through Google Trends); daily tweets related to the specific metaverse platform and the metaverse topic in general; and the daily dollar price of ETH and any platform's cryptocurrency. The results are displayed in Figure 4. In order to improve the accuracy of the analysis, we first removed any seasonal components from each time series. We can see that the average price and trading volume are strongly correlated with the number of tweets for the Otherside platform (see Figure 4(b)), while for the other projects, there seems to be a stronger link with popularity on other channels, as indicated by the correlation with Google Trends (e.g., see Figure 4(a)). This probably indicates that Twitter is less influential for NFT metaverse projects than what was observed, for example, in [15] for NFT profile picture (PFP) projects. We believe that the current nature of Otherside's trading and its underdeveloped gaming environment make it more akin to a PFP project than a metaverse one. The second step was to understand in more detail which variables most influence the selling price of a plot. To do this, we used XGBoost (eXtreme Gradient Boosting) [18], a widely used machine learning algorithm that is particularly effective for regression and classification tasks. We conducted separate experiments for each platform, training the model to predict the prices of the plots based on the other available data. We randomly divided the dataset described in Table II into two parts: a training set containing 80% of the transactions for each platform, and a test set containing the remaining 20%. We then evaluated the model's accuracy and reliability using the test set by comparing its predictions to the actual sale prices of each plot transaction (e.g., see Figure 5(a)). A randomized search for hyperparameter tuning was used to identify the best parameter configuration for each model. The number of features and accuracy of each metaverse model are summarized in Table III. In general, we found that parcel location (in terms of x, y coordinates) is the factor that most influences the sale price on each metaverse, as already demonstrated in [6, 7, 8]. However, we can add that other factors with a significant influence on the selling price of a parcel are the average daily price of other plots sold, the daily price of ETH (which can also serve as a general crypto market indicator), and the level of activity on a parcel, as for example in the case of Decentraland (see Figure 5(b)). The results of this study indicate that user traffic on a parcel is not a significant determinant of its price. Instead, factors related to the revenue-generating potential of the parcel are more likely to play a role. In our opinion, this is because we are currently in an exploratory phase of the market, where individuals and organizations investing in digital parcels are primarily focused on acquiring strategic locations as a form of marketing investment. The visualization interface of our tool allows users to browse the different platforms at various levels, displaying various types of information directly on the plots. 
These include: 1) Land view, which colors parcels based on their characteristics in the metaverse (e.g., size); 2) Trading view, which highlights parcels for sale on different exchanges with different colors based on their sale price; 3) Last price view, which colors parcels based on the last sale price; 4) Value view, which colors parcels based on the ratio between the sale price and our estimated price; 5) Fair value view, which colors parcels based on our estimated fair value price; 6) Flip view, which uses color to indicate how many times the parcel has been traded. Depending on the structure of a particular metaverse, there may also be specific metrics such as: 7) Traffic view, which uses color to highlight the most heavily trafficked parcels; and 8) Resources view, which uses color to indicate the availability of different resources. ## V Conclusions In this article, we conducted a technical analysis of five major blockchain-based metaverse platforms that implement the concept of digital land and real estate in a decentralized digital environment. We described the various technological components and how various types of data, such as blockchain transactions, parcel traffic, and engagement on social networks, can be effectively extracted and analyzed from these platforms. The results obtained are: 1) the development of the first cross-platform metaverse data collection and analysis tool, whose data are now accessible through a public and unified API; 2) the systematic creation of machine learning models that, through data fusion and curation, are able to estimate a fair value of each individual parcel in each metaverse; 3) the verification, thanks to these models, that location is a generally fundamental factor in determining the value of a parcel, as already demonstrated by some state-of-the-art work. In comparison to these studies, which only focus on two specific platforms, our work has been performed on the five main Ethereum-based platforms. Future studies will aim to improve the accuracy of these estimation models and study more complex traffic patterns, for example by testing whether it is possible to distinguish between real users and bots.
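To make the modeling pipeline of Section IV concrete, the following sketch reproduces its two steps under stated assumptions: Spearman correlations on deseasonalized daily series, and an XGBoost price regressor tuned by randomized search on an 80/20 split. Feature names and hyperparameter ranges are illustrative; the paper's actual feature sets are platform-specific.

```python
# Sketch (ours) of Section IV's modeling steps: (i) Spearman correlation on
# deseasonalized daily series, (ii) XGBoost tuned by randomized search.
import pandas as pd
from scipy.stats import spearmanr
from statsmodels.tsa.seasonal import seasonal_decompose
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from xgboost import XGBRegressor

def deseasonalized_spearman(a: pd.Series, b: pd.Series, period: int = 7) -> float:
    # Remove a (here: weekly) seasonal component before correlating.
    ra = seasonal_decompose(a, period=period).resid.dropna()
    rb = seasonal_decompose(b, period=period).resid.dropna()
    idx = ra.index.intersection(rb.index)
    rho, _ = spearmanr(ra.loc[idx], rb.loc[idx])
    return rho

def fit_price_model(X: pd.DataFrame, y: pd.Series) -> XGBRegressor:
    # X: per-sale features (e.g., x/y coordinates, daily ETH price, daily
    # average parcel price, parcel traffic); y: sale price in USD.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    search = RandomizedSearchCV(
        XGBRegressor(objective="reg:squarederror"),
        param_distributions={"n_estimators": [200, 500, 1000],
                             "max_depth": [4, 6, 8],
                             "learning_rate": [0.01, 0.05, 0.1]},
        n_iter=10, cv=3, random_state=0)
    search.fit(X_tr, y_tr)
    model = search.best_estimator_
    print("held-out R^2:", model.score(X_te, y_te))
    return model
```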
2309.08848
On Effective Sato-Tate Distributions for Surfaces Arising from Products of Elliptic Curves
We prove, with an unconditional effective error bound, the Sato-Tate distributions for two families of surfaces arising from products of elliptic curves, namely a one-parameter family of K3 surfaces and double quadric surfaces. To prove these effective Sato-Tate distributions, we prove an effective form of the joint Sato-Tate distribution for two twist-inequivalent elliptic curves, along with an effective form of the Sato-Tate distribution for an elliptic curve for primes in arithmetic progressions. The former completes the previous work of Thorner by including the cases in which one of the elliptic curves has CM.
Quanlin Chen, Eric Shen
2023-09-16T02:59:33Z
http://arxiv.org/abs/2309.08848v1
# Effective Sato-Tate distributions for surfaces arising from products of elliptic curves ###### Abstract. We prove, with an unconditional effective error bound, the Sato-Tate distributions for two families of surfaces arising from products of elliptic curves, namely a one-parameter family of K3 surfaces and double quadric surfaces. To prove these effective Sato-Tate distributions, we prove an effective form of the joint Sato-Tate distribution for two twist-inequivalent elliptic curves, along with an effective form of the Sato-Tate distribution for an elliptic curve for primes in arithmetic progressions. The former completes the previous work [11] of Thorner by including the cases in which one of the elliptic curves has CM. ## 1. Introduction and Statement of Result Let \(E/\mathbb{Q}\) be an elliptic curve over \(\mathbb{Q}\) of conductor \(N_{E}\) without complex multiplication (CM). Hasse proved that for each prime \(p\), the group \(E(\mathbb{F}_{p})\) of \(\mathbb{F}_{p}\)-rational points on the reduction of \(E\) modulo \(p\) satisfies the bound \[|p+1-\#E(\mathbb{F}_{p})|<2\sqrt{p}.\] For \(p\nmid N_{E}\), we define \(a_{E}(p):=p+1-\#E(\mathbb{F}_{p})\). Hasse's bound implies that there exists \(\theta_{E}(p)\in[0,\pi]\) such that \[a_{E}^{*}(p):=\frac{a_{E}(p)}{\sqrt{p}}=2\cos\theta_{E}(p).\] In 2011, Barnet-Lamb, Geraghty, Harris, and Taylor [1] proved the celebrated Sato-Tate conjecture, which states that for a fixed subinterval \([a,b]\subseteq[-2,2]\), we have that \[\lim_{x\to\infty}\frac{\#\left\{p\leq x:a_{E}^{*}(p)\in[a,b]\right\}}{\#\{p\leq x\}}=\frac{1}{\pi}\int_{a}^{b}\sqrt{1-\left(\frac{t}{2}\right)^{2}}dt.\] Thorner [11] quantified the rate of convergence with effective dependence on \(E\). In particular, for every subinterval \([a,b]\subset[-2,2]\), we have that (in this paper, all implied constants are positive, absolute, and effectively computable, and \(c_{1},c_{2},c_{3},\ldots\) denotes a sequence of certain positive, absolute, and effectively computable constants) \[\left|\frac{\#\left\{p\leq x:a_{E}^{*}(p)\in[a,b]\right\}}{\#\{p\leq x\}}-\frac{1}{\pi}\int_{a}^{b}\sqrt{1-\left(\frac{t}{2}\right)^{2}}dt\right|\ll\frac{\log(N_{E}\log x)}{\sqrt{\log x}},\qquad x\geq 3.\] The proof crucially relies on the work of Newton and Thorne [14, 15], proving that for all integers \(m\geq 1\), the \(m\)-th symmetric power \(L\)-function \(L(s,\operatorname{Sym}^{m}E)\) is the \(L\)-function of a unitary cuspidal automorphic representation of \(\operatorname{GL}_{m+1}\). If for each \(m\geq 1\) the generalized Riemann hypothesis is known for \(L(s,\operatorname{Sym}^{m}E)\), then one can prove a more rapid rate of convergence. For a closed subinterval \(I\subseteq[0,\pi]\), a change of variables yields the equivalent statement \[\left|\frac{\#\left\{p\leq x:\theta_{E}(p)\in I\right\}}{\#\{p\leq x\}}-\frac{2}{\pi}\int_{I}(\sin\theta)^{2}d\theta\right|\leq c_{1}\frac{\log(N_{E}\log x)}{\sqrt{\log x}},\qquad x\geq 3.\] Note that \(\frac{2}{\pi}(\sin\theta)^{2}d\theta\) is the pushforward of the Haar measure on \(\operatorname{SU}_{2}(\mathbb{C})\) under the trace map. This observation leads to a natural generalization of the Sato-Tate conjecture to other abelian varieties \(A/\mathbb{Q}\). One might hope to find a suitable topological group \(\operatorname{ST}(A)\) (the Sato-Tate group of \(A\)) such that the sequence \((x_{p})\) of conjugacy classes of normalized images of Frobenius elements in \(\operatorname{ST}(A)\) at the primes of good reduction is equidistributed with respect to the measure induced on the conjugacy classes of \(\operatorname{ST}(A)\) by its Haar measure.
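As a quick numerical illustration of the quantities above (our sketch, not part of the paper), one can compute \(a_{E}(p)\) by a direct character sum for a sample curve and compare the empirical frequency of \(a_{E}^{*}(p)\in[a,b]\) with the semicircle mass of \([a,b]\); the curve \(y^{2}=x^{3}+x+1\) below is an arbitrary choice.

```python
# Sketch: a_E(p) = p + 1 - #E(F_p) via the character sum
# #E(F_p) = p + 1 + sum_x chi_p(x^3 + A x + B), chi_p = Legendre symbol mod p.
import math
from sympy import primerange
from scipy.integrate import quad

A, B = 1, 1  # arbitrary sample curve y^2 = x^3 + x + 1

def a_E(p: int) -> int:
    s = 0
    for x in range(p):
        v = (x * x * x + A * x + B) % p
        if v:  # chi_p(0) = 0
            s += 1 if pow(v, (p - 1) // 2, p) == 1 else -1
    return -s  # a_E(p) = -sum_x chi_p(x^3 + A x + B)

disc = -16 * (4 * A ** 3 + 27 * B ** 2)
traces = [a_E(p) / math.sqrt(p) for p in primerange(5, 2000) if disc % p]

a, b = 0.0, 1.0
empirical = sum(a <= t <= b for t in traces) / len(traces)
semicircle = quad(lambda t: math.sqrt(1 - (t / 2) ** 2) / math.pi, a, b)[0]
print(empirical, semicircle)  # agreement up to the slow error rate above
```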
[The remainder of the introduction is unrecoverably garbled in the source extraction; the lost passage defines the one-parameter family of K3 surfaces \(X_{\lambda}\) with normalized trace of Frobenius \(a^{*}_{X_{\lambda}}(p)\), the associated Clausen elliptic curve \(E^{\mathrm{Cl}}_{-\lambda/(\lambda+1)}\) in equation (1.1), the trace relations (1.2) and (1.3), and the "Batman" distribution \(B(t)\), all of which are referenced below.] **Theorem 1.1**.: _Let \(\lambda\in\mathbb{Q}\) satisfy_ \[\lambda\notin\{r\in\mathbb{Q}\colon\sqrt{r+1}\in\mathbb{Q}\}\cup\{-64,-4,-\tfrac{1}{4},-\tfrac{1}{64},\tfrac{1}{8},1\}.\] _Let \(\lambda_{1},\lambda_{2}\in\mathbb{Z}\) satisfy \(\gcd(\lambda_{1},\lambda_{2})=1\) and \(\lambda+1=\frac{\lambda_{1}}{\lambda_{2}}\). Let \(q_{\lambda}\) be the squarefree part of \(\lambda_{1}\lambda_{2}\). Let \(N_{\lambda}\) be the conductor of the Clausen elliptic curve \(E^{\mathrm{Cl}}_{-\lambda/(\lambda+1)}\) defined by (1.1). Then there exists an absolute constant \(c_{2}\) such that if \(x\geq 3\) and \([a,b]\subseteq[-3,3]\), then_ \[\Big{|}\frac{\#\{p\leq x:a^{*}_{X_{\lambda}}(p)\in[a,b]\}}{\#\{p\leq x\}}-\int_{a}^{b}B(t)dt\Big{|}\ll x^{-c_{2}/\sqrt{q_{\lambda}}}+\frac{\log(N_{\lambda}q_{\lambda}\log x)}{\sqrt{\log x}}.\] **Theorem 1.2**.: _Let \(\lambda\in\{r\in\mathbb{Q}\colon\sqrt{r+1}\in\mathbb{Q}\}-\{0,-1,8\}\), and let \(N_{\lambda}\) be the conductor of the Clausen elliptic curve \(E^{\mathrm{Cl}}_{-\lambda/(\lambda+1)}\) defined by (1.1). 
If \(x\geq 3\) and \([a,b]\subseteq[-3,3]\), then_ \[\Big{|}\frac{\#\{p\leq x:a^{*}_{X_{\lambda}}(p)\in[a,b]\}}{\#\{p\leq x\}}-\int_{a}^{b}\frac{1}{2\pi}\sqrt{\frac{3-t}{1+t}}dt\Big{|}\ll\frac{\log(N_{\lambda}\log x)}{\sqrt{\log x}}.\] _Remark 1.3_.: The "Batman" distribution and the distribution given by \(\frac{1}{2\pi}\sqrt{\frac{3-t}{1+t}}\) are the pushforwards of the Haar measures on the corresponding Sato-Tate groups \(\mathrm{O}_{3}(\mathbb{R})\) and \(\mathrm{SO}_{3}(\mathbb{R})\) under the trace map. _Remark 1.4_.: The corresponding results for all other \(\lambda\in\mathbb{Q}-\{0,-1\}\) are given in Theorem 2.1. Now, consider any two twist-inequivalent elliptic curves \[E:y^{2}z=x^{3}+\Lambda_{1}xz^{2}+\Lambda_{2}z^{3}\] and \[E^{\prime}:y^{\prime 2}z^{\prime}=x^{\prime 3}+\Lambda_{3}x^{\prime}z^{\prime 2}+\Lambda_{4}z^{\prime 3}\] over \(\mathbb{Q}\). The second family of surfaces that we consider are the _double quadric surfaces_ \[\mathcal{Z}=\mathcal{Z}(E,E^{\prime}):y^{2}=zz^{\prime}(x^{3}+\Lambda_{1}xz^{2}+\Lambda_{2}z^{3})(x^{\prime 3}+\Lambda_{3}x^{\prime}z^{\prime 2}+\Lambda_{4}z^{\prime 3}).\] The normalized trace of Frobenius for \(\mathcal{Z}\) is \[a^{*}_{\mathcal{Z}}(p)=\frac{1}{p}\sum_{x,x^{\prime}}\left(\frac{x^{3}+\Lambda_{1}x+\Lambda_{2}}{p}\right)\left(\frac{x^{\prime 3}+\Lambda_{3}x^{\prime}+\Lambda_{4}}{p}\right)=a^{*}_{E}(p)a^{*}_{E^{\prime}}(p).\] Our effective form of the Sato-Tate distribution for \(\mathcal{Z}\) may be stated as follows. **Theorem 1.5**.: _Let \(E\) and \(E^{\prime}\) be two twist-inequivalent non-CM elliptic curves with conductors \(N_{E}\) and \(N_{E^{\prime}}\), respectively. Let \(C_{1}(t)\) be given by_ \[C_{1}(t)=\frac{2}{\pi^{2}}\int_{|t|/2}^{2}\frac{1}{u}\sqrt{\left(1-\left(\frac{u}{2}\right)^{2}\right)\left(1-\left(\frac{|t|}{2u}\right)^{2}\right)}du.\] _If \(x\geq 16\) and \([a,b]\subset[-4,4]\), then_ \[\Big{|}\frac{\#\left\{p\leq x:a^{*}_{\mathcal{Z}}(p)\in[a,b]\right\}}{\#\{p\leq x\}}-\int_{a}^{b}C_{1}(t)dt\Big{|}\ll\frac{\sqrt{\log(N_{E}N_{E^{\prime}}\log\log x)}}{\sqrt[4]{\log\log x}}.\] _Remark 1.6_.: The corresponding results in the cases where \(E,E^{\prime}\) are possibly CM are given in Theorem 2.2. In order to prove Theorems 1.1, 1.2, and 1.5, we make effective the joint Sato-Tate distribution for any pair of elliptic curves \(E\) and \(E^{\prime}\). While generically \(E\) will not be a twist of \(E^{\prime}\), the effective joint Sato-Tate distribution for \(E\) and \(E^{\prime}\) when they are twist-equivalent can be recovered by understanding the effective Sato-Tate distribution for \(E\) with the primes restricted to certain arithmetic progressions. With this reduction in mind, we state our main result as being a classification of the effective joint Sato-Tate distributions for arbitrary pairs of elliptic curves. We will deduce Theorems 1.1, 1.2, and 1.5 as corollaries. To state our main result, we first introduce some notation. Given coprime integers \(a,q\geq 1\), define \[\pi(x):=\#\{p\leq x\}\] \[\pi(x;q,a):=\#\{p\leq x:p\equiv a\bmod q\}.\] Now, let \(E\) and \(E^{\prime}\) be twist-inequivalent elliptic curves of conductors \(N_{E}\) and \(N_{E^{\prime}}\), respectively. Given any interval \(I\) and real number \(r\), let \(\mathbf{1}_{I}\) be the indicator function for \(I\) and let \(\mathbf{1}_{r\in I}:=\mathbf{1}_{I}(r)\). 
Assuming \(I,I^{\prime}\subseteq[0,\pi]\), define \[\pi_{E,I}(x):=\sum_{p\leq x}\mathbf{1}_{I}(\theta_{E}(p)),\] \[\pi_{E,I}(x;q,a):=\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p)),\] \[\pi_{E,E^{\prime},I,I^{\prime}}(x):=\sum_{p\leq x}\mathbf{1}_{I}(\theta_{E}(p))\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p)).\] Define \[\mu_{\text{CM}}(I):=\frac{|I|}{2\pi}+\frac{1}{2}\mathbf{1}_{\frac{\pi}{2}\in I},\qquad\mu_{\text{ST}}(I)=\frac{2}{\pi}\int_{I}(\sin\theta)^{2}d\theta.\] Finally, if \(E\) has complex multiplication (CM) over an imaginary quadratic field \(K\), then let \(D_{K}\) be the discriminant of \(K\) and define \[\delta(K,q)=\begin{cases}1&\text{if }D_{K}|\,q,\\ 0&\text{otherwise}.\end{cases}\] We may now state our main result as follows. **Theorem 1.7**.: _Let \(E,E^{\prime}\) be two twist-inequivalent elliptic curves over \(\mathbb{Q}\). Let \(a,q\) be coprime positive integers, let \(I=[\alpha,\beta]\subseteq[0,\pi]\), and let \(I^{\prime}=[\alpha^{\prime},\beta^{\prime}]\subseteq[0,\pi]\). Finally, if \(E,E^{\prime}\) both have complex multiplication over a (possibly distinct) imaginary quadratic field, denote those two fields as \(K\) and \(K^{\prime}\), respectively._ 1. _The following are true for all_ \(x\geq 16\)_._ (a) _If_ \(E\) _and_ \(E^{\prime}\) _are both non-CM,_ \[\left|\pi_{E,E^{\prime},I,I^{\prime}}(x)-\mu_{\text{ST}}(I)\mu_{\text{ST}}(I^{\prime})\pi(x)\right|\ll\pi(x)\frac{\log(N_{E}N_{E^{\prime}}\log\log x)}{\sqrt{\log\log x}}.\] (b) _If_ \(E\) _is CM but_ \(E^{\prime}\) _is non-CM,_ \[\left|\pi_{E,E^{\prime},I,I^{\prime}}(x)-\mu_{\text{CM}}(I)\mu_{\text{ST}}(I^{\prime})\pi(x)\right|\ll\pi(x)\frac{\log(N_{E})^{4}\log(N_{E^{\prime}}\log x)}{\sqrt{\log x}}.\] (c) _If_ \(E,E^{\prime}\) _are both CM:_ (i) _When the discriminants of_ \(K\) _and_ \(K^{\prime}\) _are coprime, there exists an absolute constant_ \(c_{3}\) _such that_ \[\left|\pi_{E,E^{\prime},I,I^{\prime}}(x)-\mu_{\text{CM}}(I)\mu_{\text{CM}}(I^{\prime})\pi(x)\right|\ll\pi(x)\exp\Big{(}\frac{-c_{3}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\Big{)}.\] (ii) _When the discriminants of_ \(K\) _and_ \(K^{\prime}\) _are not coprime, there exists an absolute constant_ \(c_{4}\) _such that_ \[\left|\pi_{E,E^{\prime},I,I^{\prime}}(x)-\frac{1}{2}\Big{(}\frac{|I||I^{\prime}|}{\pi^{2}}+\mathbf{1}_{\frac{\pi}{2}\in I}\mathbf{1}_{\frac{\pi}{2}\in I^{\prime}}\Big{)}\pi(x)\right|\ll\pi(x)\exp\Big{(}\frac{-c_{4}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\Big{)}.\] 2. _The following are true for all_ \(x\geq 16\)_._ (a) _If_ \(E\) _is non-CM, then there exists an absolute constant_ \(c_{5}\) _such that_ \[\left|\pi_{E,I}(x;q,a)-\mu_{\text{ST}}(I)\frac{\pi(x)}{\varphi(q)}\right|\ll\frac{\pi(x)}{\varphi(q)}\Big{(}x^{-c_{5}/\sqrt{q}}+\frac{\log(N_{E}q\log x)}{\sqrt{\log x}}\Big{)}.\] (b) _If_ \(E\) _is CM, then there exists an absolute constant_ \(c_{6}\) _such that_ \[\Big{|}\pi_{E,I}(x;q,a)-\mu_{\mathrm{CM}}(I)\frac{\pi(x)}{\varphi(q)}-\chi_{K}(a)\delta(K,q)\frac{\pi(x)}{2\varphi(q)}\Big{(}\frac{|I|}{\pi}-\mathbf{1}_{\pi/2\in I}\Big{)}\Big{|}\ll\frac{\pi(x)}{\varphi(q)}\Big{(}x^{-c_{5}/\sqrt{q}}+(\log x)^{9/2}\exp\Big{(}\frac{-c_{6}\log x}{\sqrt{\log x}+\log(N_{E}q)}\Big{)}\Big{)}.\] This paper is organized as follows. In Section 2 we state and prove the general results for the effective Sato-Tate distributions for K3 surfaces and double quadric surfaces, including Theorem 1.1, Theorem 1.2 and Theorem 1.5. 
The proof applies Theorem 1.7, which will be proven later. In Section 3, we present background information on Beurling-Selberg polynomials, elliptic curves, and symmetric power \(L\)-functions that will be required to prove Theorem 1.7. In Sections 4 and 5, we prove the zero-free regions and prime number theorems, respectively, used in the proof of Theorem 1.7. In Sections 6, 7, 8, 9, we prove parts 2.a, 2.b, 1.b, and 1.c of Theorem 1.7, respectively. In Appendix A, we plot the Sato-Tate distributions for all surfaces covered in Theorem 2.1 and Theorem 2.2 against numerical examples. ## Acknowledgements The authors would like to thank Jesse Thorner for advising this project and for many helpful comments and discussions, and Ken Ono and Hasan Saad for many helpful discussions and valuable comments. The authors were participants in the 2022 UVA REU in Number Theory. They are grateful for the support of grants from the National Science Foundation (DMS-2002265, DMS-2055118, DMS-2147273), the National Security Agency (H98230-22-1-0020), and the Templeton World Charity Foundation. The authors used Wolfram Mathematica for computations. ## 2. Proofs of Theorem 1.1, Theorem 1.2, and Theorem 1.5 ### Statement of the general results We will start by stating the most general results for the effective Sato-Tate distributions of K3 surfaces and double quadric surfaces. Throughout this section, let \(\lambda\in\mathbb{Q}-\{0,-1\}\) denote a rational number. Let \(\lambda_{1},\lambda_{2}\in\mathbb{Z}\) satisfy \(\gcd(\lambda_{1},\lambda_{2})=1\) and \(\lambda+1=\frac{\lambda_{1}}{\lambda_{2}}\), and let \(q_{\lambda}\) be the squarefree part of \(\lambda_{1}\lambda_{2}\). Let \(N_{\lambda}\) be the conductor of the Clausen elliptic curve \(E^{\mathrm{Cl}}_{-\lambda/(\lambda+1)}\) defined in (1.1). For a real \(r\in\mathbb{R}\) and closed interval \(I=[a,b]\subset\mathbb{R}\), define \(\mathbf{1}_{r\in I}\) to be \(1\) if \(r\in I\) and \(0\) otherwise. Let \(E\) and \(E^{\prime}\) be two elliptic curves of conductors \(N_{E}\) and \(N_{E^{\prime}}\), respectively. Finally, define the "flying Batman" distribution \(B_{1}(x)\) by \[B_{1}(x)=\begin{cases}\frac{1}{4\pi\sqrt{3-2x-x^{2}}}+\frac{1}{4\pi\sqrt{3+2x-x^{2}}}&|x|<1,\\ \frac{1}{4\pi\sqrt{3+2|x|-x^{2}}}&1\leq|x|\leq 3,\\ 0&\text{otherwise}.\end{cases}\] **Theorem 2.1**.: _Fix the notation in Section 1. Then there exist absolute constants \(c_{5},c_{6}\) such that the following are true for all \(x\geq 16\)._ 1. _If_ \(\lambda\not\in(\mathbb{Q}^{2}-1)\cup\{1/8,1,-1/4,-1/64,-4,-64\}\)_, then for every subinterval_ \([a,b]\subset[-3,3]\)_,_ \[\Big{|}\frac{\#\left\{p\leq x:a^{*}_{X_{\lambda}}(p)\in[a,b]\right\}}{\#\{p\leq x\}}-\int_{a}^{b}B(t)dt\Big{|}\ll x^{-\frac{c_{5}}{\sqrt{q}}}+\frac{\log(N_{E}q\log x)}{\sqrt{\log x}}.\] 2. _If_ \(\lambda\in(\mathbb{Q}^{2}-1)-\{0,-1,8\}\)_, then for every subinterval_ \([a,b]\subset[-1,3]\)_,_ \[\Big{|}\frac{\#\left\{p\leq x:a^{*}_{X_{\lambda}}(p)\in[a,b]\right\}}{\#\{p\leq x\}}-\int_{a}^{b}\frac{1}{2\pi}\sqrt{\frac{3-t}{1+t}}dt\Big{|}\ll\frac{\log(N_{E}\log x)}{\sqrt{\log x}}.\] 3. _If_ \(\lambda\in\{1/8,1,-1/4,-1/64\}\)_, then for every subinterval_ \([a,b]\subset[-3,3]\)_,_ \[\Big{|}\frac{\#\left\{p\leq x:a_{X_{\lambda}}^{*}(p)\in[a,b]\right\}}{\#\{p\leq x\}}-\frac{\mathbf{1}_{1\in[a,b]}+\mathbf{1}_{-1\in[a,b]}}{4}-\int_{a}^{b}B_{1}(t)dt\Big{|}\] \[\ll x^{-c_{5}/\sqrt{q}}+(\log x)^{9/2}\exp\Big{(}\frac{-c_{6}\log x}{\sqrt{\log x}+\log(N_{E}q)}\Big{)}.\] 4. 
_If_ \(\lambda\in\{-4,-64\}\)_, then for every subinterval_ \([a,b]\subset[-1,3]\)_,_ \[\Big{|}\frac{\#\left\{p\leq x:a_{X_{\lambda}}^{*}(p)\in[a,b] \right\}}{\#\{p\leq x\}}-\frac{\mathbf{1}_{1\in[a,b]}}{2}-\int_{a}^{b}\frac{ dt}{2\pi\sqrt{3+2t-t^{2}}}\Big{|}\] \[\ll x^{-c_{5}/\sqrt{q}}+(\log x)^{9/2}\exp\Big{(}\frac{-c_{6}\log x }{\sqrt{\log x}+\log(N_{E}q)}\Big{)}.\] 5. _If_ \(\lambda=8\)_, then for every subinterval_ \([a,b]\subset[-1,3]\)_,_ \[\Big{|}\frac{\#\left\{p\leq X:a_{X_{\lambda}}^{*}(p)\in[a,b] \right\}}{\#\{p\leq X\}}-\frac{\mathbf{1}_{-1\in[a,b]}}{2}-\int_{a}^{b}\frac{ dt}{2\pi\sqrt{3+2t-t^{2}}}\Big{|}\] \[\ll x^{-c_{5}/\sqrt{q}}+(\log x)^{9/2}\exp\Big{(}\frac{-c_{6}\log x }{\sqrt{\log x}+\log(N_{E}q)}\Big{)}.\] To state the Sato-Tate distributions for double quadric surfaces, we define the following functions \[C_{1}(t) =\frac{2}{\pi^{2}}\int_{|t|/2}^{2}\frac{1}{u}\sqrt{\left(1-\left( \frac{u}{2}\right)^{2}\right)\left(1-\left(\frac{|t|}{2u}\right)^{2}\right)}du,\] \[C_{2}(t) =\frac{1}{2\pi^{2}}\int_{|t|/2}^{2}\frac{1}{u}\sqrt{\frac{1-(u/2) ^{2}}{1-(|t|/2u)^{2}}}du,\] \[C_{3}(t) =\frac{1}{8\pi^{2}}\int_{|t|/2}^{2}\frac{1}{u\sqrt{(1-(u/2)^{2})( 1-(|t|/2u)^{2})}}du.\] **Theorem 2.2**.: _Let \(E\) and \(E^{\prime}\) be two twist-inequivalent elliptic curves of conductors \(N_{E}\) and \(N_{E^{\prime}}\). The following are true for all \(x\geq 16\)._ 1. _If_ \(E\) _and_ \(E^{\prime}\) _are both non-CM, then_ \[\Big{|}\frac{\#\left\{p\leq x:a_{\mathcal{Z}}^{*}(p)\in[a,b]\right\}}{\#\{p \leq x\}}-\int_{a}^{b}C_{1}(t)dt\Big{|}\ll\frac{\sqrt{\log(N_{E}N_{E^{\prime}} \log\log x)}}{\sqrt[4]{\log\log x}}.\] 2. _If_ \(E\) _is non-CM and_ \(E^{\prime}\) _is CM, then_ \[\Big{|}\frac{\#\left\{p\leq x:a_{\mathcal{Z}}^{*}(p)\in[a,b]\right\}}{\#\{p \leq x\}}-\int_{a}^{b}C_{2}(t)dt-\frac{1}{2}\mathbf{1}_{0\in I}\Big{|}\ll \frac{\sqrt[3]{\log(N_{E})^{4}\log(N_{E^{\prime}}\log x)}}{\sqrt[4]{\log x}}.\] 3. _If_ \(E,E^{\prime}\) _are both CM, and the discriminants of their CM fields are coprime, then there exists an absolute constant_ \(c_{7}\) _such that_ \[\Big{|}\frac{\#\left\{p\leq x:a_{\mathcal{Z}}^{*}(p)\in[a,b]\right\}}{\#\{p \leq x\}}-\int_{a}^{b}C_{3}(t)dt-\frac{3}{4}\mathbf{1}_{0\in I}\Big{|}\ll \exp\left(\frac{-c_{7}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\right).\] 4. _If_ \(E,E^{\prime}\) _are both CM elliptic curves, and the discriminants of their CM fields are not coprime, then there exists an absolute constant_ \(c_{8}\) _such that_ \[\Big{|}\frac{\#\left\{p\leq x:a_{\mathcal{Z}}^{*}(p)\in[a,b]\right\}}{\#\{p \leq x\}}-2\int_{a}^{b}C_{3}(t)dt-\frac{1}{2}\mathbf{1}_{0\in I}\Big{|}\ll \exp\left(\frac{-c_{8}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\right).\] It is clear that Theorem 1.1 and 1.2 are special cases of Theorem 2.1, and Theorem 1.5 is a special case of Theorem 2.2. In the rest of this section, we will present proofs for Theorem 2.1 and Theorem 2.2 using Theorem 1.7, which will be proven in Sections 6 to 9. ### Proof of Theorem 2.1 For simplicity, we only present the proof of Theorem 2.1 for the intervals \([a,b]\subset[0,3]\); the other cases are proved in the same way. In this proof, we fix \(\lambda\) and denote \[E:=E_{-\lambda/(\lambda+1)}^{\mathrm{Cl}},\qquad a(p):=a_{E_{-\lambda/(\lambda+ 1)}^{\mathrm{Cl}}}^{*}(p),\qquad q:=q_{\lambda}.\] Let \(p\) be a prime that does not divide \(q\) and let \(\phi_{p}\) be the unique quadratic character modulo \(p\). 
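The proofs below repeatedly split the primes according to the value of the quadratic character \(\phi_{p}(\lambda+1)\), using that this value is periodic in \(p\). The following sketch (ours; the sample values of \(m\) are arbitrary) numerically checks the underlying fact that, for a fixed integer \(m\), the Legendre symbol \((m/p)\) depends only on \(p\bmod 4|m|\) for primes \(p\nmid 4m\).

```python
# Numerical illustration (not from the paper): for fixed m, the Legendre
# symbol (m/p) depends only on p mod 4|m| for primes p not dividing 4m.
from sympy import primerange

def legendre(m: int, p: int) -> int:
    """Legendre symbol (m/p) for an odd prime p, via Euler's criterion."""
    s = pow(m % p, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def check_periodicity(m: int, bound: int = 20000) -> bool:
    T = 4 * abs(m)
    seen = {}  # residue class mod T -> observed symbol value
    for p in primerange(3, bound):
        if (4 * m) % p == 0:
            continue
        r = p % T
        if r in seen and seen[r] != legendre(m, p):
            return False
        seen[r] = legendre(m, p)
    return True

# e.g. m = lambda_1 * lambda_2 for lambda + 1 = lambda_1 / lambda_2:
assert check_periodicity(6)
assert check_periodicity(-15)
```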
Proof of Theorem 2.1 (1).: In this case, we have \[\lambda\notin\{r\in\mathbb{Q}\colon\sqrt{r+1}\in\mathbb{Q}\}\cup\{-64,-4,-\frac{1}{4},-\frac{1}{64},\frac{1}{8},1\}. \tag{2.1}\] From [10, Page 191] we find that condition (2.1) implies \(E\) is a non-CM elliptic curve. Note that \(\lambda\notin\{r\in\mathbb{Q}\colon\sqrt{r+1}\in\mathbb{Q}\}\) implies \(q>1\). Applying the relations between \(X_{\lambda}\) and its Clausen elliptic curve \(E\) ((1.2) and (1.3)), we have \[a_{X_{\lambda}}^{*}(p)=\phi_{p}(\lambda+1)(a(p)^{2}-1).\] Note that when \(\phi_{p}(\lambda+1)=1\), \(a_{X_{\lambda}}^{*}(p)\in[a,b]\) is equivalent to \[a(p)\in\Big{[}\sqrt{1+a},\sqrt{1+b}\Big{]}\] and when \(\phi_{p}(\lambda+1)=-1\), \(a_{X_{\lambda}}^{*}(p)\in[a,b]\) is equivalent to \[a(p)\in\Big{[}\sqrt{1-b},\sqrt{1-a}\Big{]},\] where each endpoint is set to be \(0\) if it is not real. By quadratic reciprocity, \(\phi_{p}(\lambda+1)\) as a function of \(p\) is periodic with period dividing \(4q\) when \(p>4q\). Since \(q\neq 1\) is square-free, exactly half of the classes \(j\in(\mathbb{Z}/4q\mathbb{Z})^{\times}\) have \(\phi_{p}(\lambda+1)=1\) for primes \(p\equiv j\bmod 4q\), and the other half have \(\phi_{p}(\lambda+1)=-1\). Apply Theorem 1.7 (2.a) to modulus \(4q\); with a simple calculation, we obtain \[\frac{\#\left\{p\leq x:a_{X_{\lambda}}^{*}(p)\in[a,b]\right\}}{\#\{p\leq x\}}=\int_{a}^{b}B(t)dt+O\Big{(}x^{-\frac{c_{5}}{\sqrt{q}}}+\frac{\log(N_{E}q\log x)}{\sqrt{\log x}}\Big{)}.\] Proof of Theorem 2.1 (2).: This proof differs from the proof of Theorem 2.1 (1) only in that \(\lambda+1\) is a rational square, and thus \(\phi_{p}(\lambda+1)=1\) for all primes \(p\nmid\lambda_{1}\lambda_{2}\). Hence we may apply [14, Theorem 1.1] to derive that for all \(x\geq 3\), \[\frac{\#\left\{p\leq x:a_{X_{\lambda}}^{*}(p)\in[a,b]\right\}}{\#\{p\leq x\}}=\frac{1}{2\pi}\int_{a}^{b}\sqrt{\frac{3-t}{1+t}}dt+O\Big{(}\frac{\log(N_{E}\log x)}{\sqrt{\log x}}\Big{)}.\] Proof of Theorem 2.1 (3, 4, 5).: By [10, Page 191], when \(\lambda\in\{8,1/8,1,-4,-1/4,-64,-1/64\}\), \(E\) is a CM elliptic curve. Let \(K\) be the CM field of \(E\) and \(D_{K}\) the discriminant of \(K\). Also, let \(T\) be the period of \(\phi_{p}(\lambda+1)\). All the possible values of \(\lambda\) and the corresponding \(T\) and \(D_{K}\) are presented in Table 1. The proof now proceeds in the same way as that of Theorem 2.1 (1). The only difference is that we apply Theorem 1.7 (2.b) to modulus \(T\), instead of (2.a) as in the above two cases. ### Proof of Theorem 2.2 Let \(E\) and \(E^{\prime}\) be two twist-inequivalent elliptic curves. For all subintervals \(I\subset[-2,2]\), define the semicircular measure \(S(I)\) as \[S(I)=\int_{I}\frac{1}{\pi}\sqrt{1-(t/2)^{2}}dt,\] and the reciprocal \(T(I)\) of the semicircular distribution as \[T(I)=\int_{I}\frac{1}{2\pi\sqrt{1-(t/2)^{2}}}dt.\] Consider \[R=\{(x,y)\in[-2,2]\times[-2,2]:a\leq xy\leq b\}.\] For an odd integer \(L>0\) that will be specified differently in each case, consider the \(2L-2\) lines \(x=-2+4k/L\) and \(y=-2+4k/L\) for \(1\leq k\leq L-1\). They divide the square \([-2,2]\times[-2,2]\) into the \(L^{2}\) smaller cells \[\left[-2+\frac{4i-4}{L},-2+\frac{4i}{L}\right]\times\left[-2+\frac{4i^{\prime}-4}{L},-2+\frac{4i^{\prime}}{L}\right],\qquad 1\leq i,i^{\prime}\leq L.\] Since \(a_{E}^{*}(p)\) and \(a_{E^{\prime}}^{*}(p)\) are both irrational whenever they are nonzero, no point \((a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\) lies on the boundary of any of the above \(L^{2}\) cells. 
Proof of Theorem 2.2 (1).: Consider any \(1\leq i,i^{\prime}\leq L\), and define \(I=[-2+(4i-4)/L,-2+4i/L]\) and \(I^{\prime}=[-2+(4i^{\prime}-4)/L,-2+4i^{\prime}/L]\). Applying Theorem 1.7 (1.a) to \(I\) and \(I^{\prime}\), we have \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in I\times I^{\prime}\}=S(I)S(I^{\prime})\pi(x)+O\bigg{(}\pi(x)\frac{\log(N_{E}N_{E^{\prime}}\log\log x)}{\sqrt{\log\log x}}\bigg{)} \tag{2.2}\] \[\ll\pi(x)\bigg{(}\frac{1}{L^{2}}+\frac{\log(N_{E}N_{E^{\prime}}\log\log x)}{\sqrt{\log\log x}}\bigg{)}.\] Let \(\mathcal{S}\) be the union of all the smaller cells whose interior is contained in the interior of \(R\). Let \(\mathcal{T}\) be the union of all the smaller cells whose interior has non-empty intersection with \(\{xy=a\}\cup\{xy=b\}\). It is easy to check that \(\mathcal{T}\) contains \(O(L)\) of the smaller cells. Now, we have \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in R\}=\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in\mathcal{S}\}+\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in\mathcal{T}\cap R\}.\] The \(L-1\) vertical lines divide \(\mathcal{S}\) into at most \(L\) vertical strips each with width \(4/L\). Applying Theorem 1.7 (1.a) to those strips of \(\mathcal{S}\), we obtain \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in\mathcal{S}\}=\pi(x)\int_{\mathcal{S}}\frac{1}{\pi^{2}}\sqrt{\bigg{(}1-\bigg{(}\frac{x}{2}\bigg{)}^{2}\bigg{)}\bigg{(}1-\bigg{(}\frac{y}{2}\bigg{)}^{2}\bigg{)}}dxdy+O\bigg{(}\pi(x)\bigg{(}L\frac{\log(N_{E}N_{E^{\prime}}\log\log x)}{\sqrt{\log\log x}}\bigg{)}\bigg{)}. \tag{2.3}\] Note that \[\int_{R-\mathcal{S}}\frac{1}{\pi^{2}}\sqrt{\bigg{(}1-\bigg{(}\frac{x}{2}\bigg{)}^{2}\bigg{)}\bigg{(}1-\bigg{(}\frac{y}{2}\bigg{)}^{2}\bigg{)}}dxdy\ll\frac{1}{L}. \tag{2.4}\] Applying (2.2) to each cell of \(\mathcal{T}\), we have \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in\mathcal{T}\cap R\}\ll\pi(x)\bigg{(}L\frac{\log(N_{E}N_{E^{\prime}}\log\log x)}{\sqrt{\log\log x}}+\frac{1}{L}\bigg{)}. \tag{2.5}\] Collating (2.3), (2.4), and (2.5), we obtain \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in R\}=\pi(x)\int_{a}^{b}C_{1}(t)dt+O\bigg{(}\pi(x)\bigg{(}L\frac{\log(N_{E}N_{E^{\prime}}\log\log x)}{\sqrt{\log\log x}}+\frac{1}{L}\bigg{)}\bigg{)}.\] Taking \(L=2\left\lfloor\frac{\sqrt[4]{\log\log x}}{\sqrt{\log(N_{E}N_{E^{\prime}}\log\log x)}}\right\rfloor+1\) concludes the proof. Proof of Theorem 2.2 (2).: The proof proceeds similarly to that of Theorem 2.2 (1). For simplicity, we only present the proof in the case that \(0\notin[a,b]\); the full theorem may be proven in the same way. When \(0\notin I^{\prime}\), we have that by applying Theorem 1.7 (1.b) and elementary calculus, \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in I\times I^{\prime}\}=S(I)T(I^{\prime})\pi(x)+O\bigg{(}\pi(x)\frac{\log(N_{E})^{4}\log(N_{E^{\prime}}\log x)}{\sqrt{\log x}}\bigg{)}\ll\pi(x)\bigg{(}\frac{1}{L^{3/2}}+\frac{\log(N_{E})^{4}\log(N_{E^{\prime}}\log x)}{\sqrt{\log x}}\bigg{)}.\] Define \(R\) similarly as in case (1). Then by a similar calculation as in case (1), \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in R\}=\pi(x)\int_{a}^{b}C_{2}(t)dt+O\bigg{(}\pi(x)\bigg{(}L\frac{\log(N_{E})^{4}\log(N_{E^{\prime}}\log x)}{\sqrt{\log x}}+\frac{1}{\sqrt{L}}\bigg{)}\bigg{)}.\] Taking \(L=2\left\lfloor\frac{\sqrt[3]{\log x}}{\sqrt[3]{\log(N_{E})^{8}\log(N_{E^{\prime}}\log x)^{2}}}\right\rfloor+1\) now concludes the proof. 
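The density \(C_{1}\) appearing above is exactly the law of a product of two independent semicircle-distributed traces, which is what the cell-counting argument quantifies. Here is a Monte Carlo sanity check (our illustration, not part of the proof):

```python
# Monte Carlo check (ours): if X, Y are independent with the semicircle
# density (1/pi) sqrt(1 - (t/2)^2) on [-2, 2], then XY has density C_1.
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)

def semicircle_sample(n: int) -> np.ndarray:
    # Accept-reject under the box [-2, 2] x [0, 1/pi].
    out = np.empty(0)
    while out.size < n:
        t = rng.uniform(-2.0, 2.0, 2 * n)
        u = rng.uniform(0.0, 1.0 / np.pi, 2 * n)
        out = np.concatenate([out, t[u < np.sqrt(1 - (t / 2) ** 2) / np.pi]])
    return out[:n]

def C1(t: float) -> float:
    f = lambda u: (1 / u) * np.sqrt((1 - (u / 2) ** 2)
                                    * (1 - (abs(t) / (2 * u)) ** 2))
    return (2 / np.pi ** 2) * quad(f, abs(t) / 2, 2)[0]

xy = semicircle_sample(10 ** 6) * semicircle_sample(10 ** 6)
a, b = 0.5, 1.5
print(np.mean((a <= xy) & (xy <= b)),  # empirical mass of [a, b]
      quad(C1, a, b)[0])               # mass predicted by C_1
```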
Proof of Theorem 2.2 (3), (4).: The proofs of cases (3) and (4) proceed in exactly the same way, so we only present the proof for case (3). Define \(\mathcal{T}\) and \(R\) similarly as in the previous cases. Similar to case (2), we only consider the case when \(0\notin[a,b]\). Now when \(0\notin I\) and \(0\notin I^{\prime}\), we have \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in I\times I^{\prime}\}=T(I)T(I^{\prime})\pi(x)+O\bigg{(}\pi(x)\exp\bigg{(}\frac{-c_{3}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\bigg{)}\bigg{)}\ll\pi(x)\bigg{(}\frac{\min\{T(I),T(I^{\prime})\}}{\sqrt{L}}+\exp\bigg{(}\frac{-c_{3}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\bigg{)}\bigg{)}.\] Now, consider any \(c\in(-4,4)\). Let \(\mathcal{T}_{c}\) be the set of cells that have a nonempty intersection with the curve \(\{xy=c\}\); evidently \(|\mathcal{T}_{c}|\ll L(4-|c|)\). By elementary calculus, for any \(I\times I^{\prime}\in\mathcal{T}_{c}\), \[\min\{T(I),T(I^{\prime})\}\ll\frac{1}{L\sqrt{4-\Big{(}\sqrt{|c|}+\frac{1}{L}\Big{)}^{2}}}.\] Thus \[\sum_{I\times I^{\prime}\in\mathcal{T}_{c}}\frac{\min\{T(I),T(I^{\prime})\}}{\sqrt{L}}\ll\frac{1}{\sqrt{L}}.\] Hence, we have that \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in\mathcal{T}\cap R\}\ll\pi(x)\bigg{(}L\exp\bigg{(}\frac{-c_{3}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\bigg{)}+\frac{1}{\sqrt{L}}\bigg{)}.\] Thus, \[\#\{p\leq x:(a_{E}^{*}(p),a_{E^{\prime}}^{*}(p))\in R\}=\pi(x)\bigg{(}\int_{a}^{b}C_{3}(t)dt+\frac{3}{4}\mathbf{1}_{0\in[a,b]}\bigg{)}+O\bigg{(}\pi(x)\bigg{(}L\exp\bigg{(}\frac{-c_{3}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\bigg{)}+\frac{1}{\sqrt{L}}\bigg{)}\bigg{)}.\] Taking \(L=2\left\lfloor\exp\bigg{(}\frac{2c_{3}\log x}{3\sqrt{\log x}+9\log N_{E}N_{E^{\prime}}}\bigg{)}\right\rfloor+1\) concludes the proof. ## 3. Preliminaries for the Proof of Theorem 1.7 In this section we introduce preliminary information that is needed for the proof of Theorem 1.7. Throughout this section, when \(E\) (resp. \(E^{\prime}\)) is an elliptic curve with complex multiplication over an imaginary quadratic field \(K\) (resp. \(K^{\prime}\)), let \(\chi_{K}\) (resp. \(\chi_{K^{\prime}}\)) and \(D_{K}\) (resp. \(D_{K^{\prime}}\)) respectively denote the Kronecker character and the absolute discriminant of \(K\) (resp. \(K^{\prime}\)). Finally, for an interval \(I\), let \(\chi_{I}\) denote the characteristic function of \(I\). ### Beurling-Selberg polynomials As important technical tools, we will need both the original one-dimensional Beurling-Selberg polynomials, along with a two-dimensional analogue. The following two lemmas follow from [14, Theorem 1] and [15, Theorem 2]. **Lemma 3.1**.: _Let \(I\subseteq[0,\pi]\) be a closed subinterval and \(M\) a positive integer. Then there exists an absolute constant \(c_{9}>0\), and two polynomials_ \[F_{I,M}^{\pm}(\theta)=\sum_{0\leq m\leq M}\hat{F}_{I,M}^{\pm}(m)\cos(m\theta)\] _such that_ 1. _for all_ \(\theta\in[0,\pi]\)__ \[F_{I,M}^{-}(\theta)\leq\mathbf{1}_{I}(\theta)\leq F_{I,M}^{+}(\theta);\] 2. _we have that_ \[\left|\hat{F}_{I,M}^{\pm}(0)-\frac{|I|}{\pi}\right|\leq\frac{c_{9}}{M};\] 3. _if_ \(m\neq 0\)_, then_ \[\left|\hat{F}_{I,M}^{\pm}(m)\right|\leq\frac{c_{9}}{m}.\] **Lemma 3.2**.: _Let \(I,I^{\prime}\subseteq[0,\pi]\) be two closed subintervals and \(M\) a positive integer. 
Then there exists an absolute constant \(c_{10}>0\), and two polynomials_ \[F_{I,I^{\prime},M}^{\pm}(\theta,\theta^{\prime})=\sum_{0\leq m,m^{\prime}\leq M}\hat{F}_{I,I^{\prime},M}^{\pm}(m,m^{\prime})\cos(m\theta)\cos(m^{\prime}\theta^{\prime})\] _such that_ 1. _for all_ \(\theta,\theta^{\prime}\in[0,\pi]\)_,_ \[F_{I,I^{\prime},M}^{-}(\theta,\theta^{\prime})\leq\chi_{I}(\theta)\chi_{I^{\prime}}(\theta^{\prime})\leq F_{I,I^{\prime},M}^{+}(\theta,\theta^{\prime});\] 2. _we have that_ \[\left|\hat{F}_{I,I^{\prime},M}^{\pm}(0,0)-\frac{|I||I^{\prime}|}{\pi^{2}}\right|\leq\frac{c_{10}}{M};\] 3. _if_ \(m\neq 0\)_, then_ \[\left|\hat{F}_{I,I^{\prime},M}^{\pm}(m,0)\right|,\ \left|\hat{F}_{I,I^{\prime},M}^{\pm}(0,m)\right|\leq\frac{c_{10}}{m};\] 4. _if_ \(mm^{\prime}\neq 0\)_, then_ \[\left|\hat{F}_{I,I^{\prime},M}^{\pm}(m,m^{\prime})\right|\leq\frac{c_{10}}{mm^{\prime}}.\] The above two lemmas can be viewed as Fourier expansions of the characteristic functions with respect to the basis \(\cos(m\theta)\). When we deal with elliptic curves without complex multiplication, it will be useful to change the basis of trigonometric polynomials in Lemma 3.1 to the basis of the \(m\)-th Chebyshev polynomials of the second kind \(U_{m}(\cos(\theta))\), which form an orthonormal basis for \(L^{2}([0,\pi],\mu_{ST})\) with respect to the usual inner product \(\langle f,g\rangle=\int_{0}^{\pi}f(\theta)g(\theta)d\mu_{ST}\). For a demonstration of said base change, see [11, Section 3]. ### Newforms Here we briefly recall the notion of newforms. For a complete treatment, see Section 2.5 in [10]. For a positive integer \(N\), recall that the level \(N\) congruence subgroup \(\Gamma_{0}(N)\) is defined by \[\Gamma_{0}(N):=\left\{\begin{pmatrix}a&b\\ c&d\end{pmatrix}\in\operatorname{SL}_{2}(\mathbb{Z}):c\equiv 0\bmod N\right\}.\] Throughout the rest of the paper, let \(\mathcal{M}_{k}(\Gamma_{0}(N),\chi)\) denote the space of modular forms of weight \(k\), level \(N\), and nebentypus \(\chi\), and let \(S_{k}(\Gamma_{0}(N),\chi)\subset\mathcal{M}_{k}(\Gamma_{0}(N),\chi)\) denote the subspace of cusp forms, both with respect to \(\Gamma_{0}(N)\). When \(\chi\) is trivial, write \(\mathcal{M}_{k}(\Gamma_{0}(N),\chi)=\mathcal{M}_{k}(\Gamma_{0}(N))\), \(\mathcal{S}_{k}(\Gamma_{0}(N),\chi)=\mathcal{S}_{k}(\Gamma_{0}(N))\). Now, given a positive integer \(d\), define the \(V\)-operator \(V(d)\) as \[\Bigg{(}\sum_{n\geq n_{0}}c(n)q^{n}\Bigg{)}|V(d):=\sum_{n\geq n_{0}}c(n)q^{dn}.\] If \(f(z)\in S_{k}(\Gamma_{0}(N))\) and \(d>1\), then both \(f(z)\) and \(f(dz)=f(z)|V(d)\) are in \(S_{k}(\Gamma_{0}(dN))\) ([10, Proposition 2.22]). Thus there are at least two natural ways for a function in \(S_{k}(\Gamma_{0}(dN))\) to come from lower levels. Motivated by this, we define the _space of oldforms_ \(S_{k}^{\mathrm{old}}(\Gamma_{0}(N))\subset S_{k}(\Gamma_{0}(N))\) as \[S_{k}^{\mathrm{old}}(\Gamma_{0}(N)):=\bigoplus_{\begin{subarray}{c}dM|N\\ M\neq N\end{subarray}}S_{k}(\Gamma_{0}(M))|V(d),\] where we sum over pairs of positive integers \((d,M)\) satisfying \(dM|N\) and \(M\neq N\). Now, let \(\mathfrak{F}_{N}\) denote the fundamental domain for the action of \(\Gamma_{0}(N)\) on the upper half of the complex plane, and recall the _Petersson inner product_ between cusp forms \(f(z),g(z)\in S_{k}(\Gamma_{0}(N))\), defined as \[\langle f,g\rangle:=\frac{1}{[\mathrm{SL}_{2}(\mathbb{Z}):\Gamma_{0}(N)]}\int_{\mathfrak{F}_{N}}f(z)\overline{g(z)}y^{k-2}dxdy\] where \(z=x+iy\). 
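The orthonormality of \(U_{m}(\cos\theta)\) with respect to \(\mu_{ST}\), invoked in the base change above, is easy to confirm numerically; the following short check is our illustration, not part of the paper.

```python
# Numerical check (ours) that U_m(cos(theta)) are orthonormal in
# L^2([0, pi], mu_ST), with d mu_ST = (2/pi) sin^2(theta) d theta.
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_chebyu

def st_inner(m: int, n: int) -> float:
    f = lambda th: (eval_chebyu(m, np.cos(th)) * eval_chebyu(n, np.cos(th))
                    * (2 / np.pi) * np.sin(th) ** 2)
    return quad(f, 0, np.pi)[0]

for m in range(4):
    for n in range(4):
        assert abs(st_inner(m, n) - float(m == n)) < 1e-8
```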
**Definition 3.3**.: Define \(S_{k}^{\mathrm{new}}(\Gamma_{0}(N))\), the _space of newforms_, to be the orthogonal complement of \(S_{k}^{\mathrm{old}}(\Gamma_{0}(N))\) in \(S_{k}(\Gamma_{0}(N))\) with respect to the Petersson inner product. **Definition 3.4**.: A _newform_ in \(S_{k}^{\mathrm{new}}(\Gamma_{0}(N))\) is a normalized cusp form that is an eigenform for all the Hecke operators on \(S_{k}^{\mathrm{new}}(\Gamma_{0}(N))\), the Atkin-Lehner involution \(|_{k}W(N)\), and all of the Atkin-Lehner involutions \(|_{k}W(Q_{p})\) for each prime \(p|N\) (for more details, see [10, Section 2.5]). ### CM elliptic curves, Grossencharacters, and automorphic induction Let \(K\) be an imaginary quadratic field with absolute discriminant \(D_{K}\), ring of integers \(\mathcal{O}_{K}\), and absolute norm \(\mathrm{N}=\mathrm{N}_{K/\mathbb{Q}}\) defined as \(\mathrm{N}\mathfrak{a}=|\mathcal{O}_{K}/\mathfrak{a}|\) for all nonzero ideals \(\mathfrak{a}\subset\mathcal{O}_{K}\). Let \(\chi_{K}\) be the Kronecker character associated to \(K\); note that for all primes \(p\nmid D_{K}\), \[\chi_{K}(p)=\begin{cases}1&p\text{ splits in }K\\ -1&p\text{ inert in }K.\end{cases}\] **Definition 3.5** ([14, Section 12.2]).: Given \(u_{\xi}\in\mathbb{Z}\), define \(\xi_{\infty}:K^{\times}\to S^{1}\) to be the group homomorphism satisfying \[\xi_{\infty}(a)=\bigg{(}\frac{a}{|a|}\bigg{)}^{u_{\xi}}.\] Let \(\mathfrak{m}\) be an ideal of \(\mathcal{O}_{K}\). Define a _Hecke Grossencharacter_ \(\xi\) with modulus \(\mathfrak{m}\) to be a group homomorphism from \[I_{\mathfrak{m}}=\{\mathfrak{a}\text{ fractional ideal in }\mathcal{O}_{K}:(\mathfrak{a},\mathfrak{m})=1\}\] to \[\{z\in\mathbb{C}:|z|=1\}\] that agrees with \(\xi_{\infty}\) on \[P_{\mathfrak{m}}=\{a\mathcal{O}_{K}:a\in K^{\times},a\equiv 1\bmod\mathfrak{m}\}.\] Note that there may exist another Grossencharacter \(\xi^{*}\) of \(K\) with modulus \(\mathfrak{m}^{*}\supset\mathfrak{m}\) which satisfies \(\xi^{*}(\mathfrak{a})=\xi(\mathfrak{a})\) for all \(\mathfrak{a}\in I_{\mathfrak{m}}\). The largest ideal \(\mathfrak{n}\) which is a modulus of such a Grossencharacter is defined as the conductor of \(\xi\). If \(\mathfrak{n}=\mathfrak{m}\), then \(\xi\) is called primitive. The following theorem characterizes the \(L\)-function \(L(s,E)\) of an elliptic curve \(E/\mathbb{Q}\) with CM over an imaginary quadratic field \(K\) as the \(L\)-function of a Grossencharacter defined over \(K\). **Theorem 3.6** ([14, Theorem II.10.5]).: _Fix the notation above. Let \(E/\mathbb{Q}\) be an elliptic curve with complex multiplication over an imaginary quadratic field \(K\) with absolute discriminant \(D_{K}\). Then there exists a primitive Grossencharacter \(\xi\) with conductor \(\mathfrak{m}\subset\mathcal{O}_{K}\) and \(u_{\xi}=1\) such that \(\mathrm{N}\mathfrak{m}=N_{E}/D_{K}\) and \(L(s,E)=L(s,\xi)\)._ It will often be convenient to interpret the \(L\)-functions of Hecke Grossencharacters as \(L\)-functions associated with modular forms via automorphic induction. **Theorem 3.7** ([14, Theorem 12.5]).: _Let \(K\) be an imaginary quadratic field with absolute discriminant \(D_{K}\) and let \(\xi\) be a primitive Grossencharacter with conductor \(\mathfrak{m}\) and \(u_{\xi}\) non-negative. Let \(\chi\) be the Dirichlet character given by \(\chi(n)=\chi_{K}(n)\xi((n))\) for all \(n\in\mathbb{Z}^{+}\).
Then the modular form_ \[f(z)=\sum_{\mathfrak{a}}\xi(\mathfrak{a})(\mathrm{N}\mathfrak{a})^{u_{\xi}/2}e(z\mathrm{N}\mathfrak{a})\in\mathcal{M}_{u_{\xi}+1}(\Gamma_{0}(D_{K}\mathrm{N}\mathfrak{m}),\chi)\] _is a newform and satisfies_ \[L(s,f)=L(s,\xi).\] _When \(u_{\xi}>0\), \(f\) is a cusp form._ Now, given a CM elliptic curve \(E\), consider \(\xi\) as defined in Theorem 3.6. Define \(\xi_{m}(\mathrm{mod}\ \mathfrak{m}_{m})\) to be the primitive Grossencharacter which induces \(\xi^{m}\), and let \(\chi_{m}\) be the Dirichlet character given by \(\chi_{m}(n)=\chi_{K}(n)\xi_{m}((n))\) for all \(n\in\mathbb{Z}^{+}\). Note that \(D_{K}\mathrm{N}\mathfrak{m}_{m}|N_{E}\). Let \(f_{m}\) be the holomorphic cusp form associated with \(\xi_{m}\) as in Theorem 3.7. Then \(f_{m}\in S_{m+1}(\Gamma_{0}(|D_{K}|\mathrm{N}\mathfrak{m}_{m}),\chi_{m})\). ### Automorphic \(L\)-functions Let \(\mathbb{A}_{\mathbb{Q}}\) be the adele ring of \(\mathbb{Q}\). Let \(\mathfrak{F}_{m}\) denote the set of all cuspidal representations of \(\mathrm{GL}_{m}(\mathbb{A}_{\mathbb{Q}})\) with unitary central character that are trivial on the diagonally embedded copy of the positive real numbers. For \(\pi\in\mathfrak{F}_{m}\), let \(\widetilde{\pi}\) denote the representation contragredient to \(\pi\) and let \(q_{\pi}\) denote the conductor of \(\pi\). There is a standard \(L\)-function \(L(s,\pi)\) attached to \(\pi\), whose conductor is \(q_{\pi}\). The local parameters of this \(L\)-function are known as the Satake parameters, and for each prime \(p\) the Satake parameters \(\alpha_{1,\pi}(p),\dots,\alpha_{m,\pi}(p)\in\mathbb{C}\) satisfy that when \(p\nmid q_{\pi}\), \(\alpha_{j,\pi}(p)\neq 0\) for all \(j\). For every \(n\in\mathbb{Z}^{+}\), denote the \(n\)th Dirichlet coefficient of \(L(s,\pi)\) as \(a_{\pi}(n)\). The Dirichlet series of \(L(s,\pi)\) is given by the formula \[L(s,\pi)=\prod_{p}\prod_{j=1}^{m}\Big{(}1-\frac{\alpha_{j,\pi}(p)}{p^{s}}\Big{)}^{-1}=\sum_{n=1}^{\infty}\frac{a_{\pi}(n)}{n^{s}},\] which converges absolutely when \(\mathrm{Re}(s)>1\). Define \[\Gamma_{\mathbb{R}}(s)=\pi^{-s/2}\Gamma(s/2),\] where \(\Gamma\) is the usual gamma function. The local parameters at infinity, \(\mu_{\pi}(1),\dots,\mu_{\pi}(m)\), of \(L(s,\pi)\) are known as the Langlands parameters. The gamma factor \(\gamma(s,\pi)\) of \(L(s,\pi)\) is by definition \[\gamma(s,\pi)=\prod_{j=1}^{m}\Gamma_{\mathbb{R}}(s+\mu_{\pi}(j)).\] Note that all of the relevant automorphic representations within this paper will satisfy the Generalized Ramanujan conjecture, so we have the following bounds on the Satake and Langlands parameters: \[|\alpha_{j,\pi}(p)|\leq 1,\qquad\mathrm{Re}(\mu_{\pi}(j))\geq 0. \tag{3.1}\] Note that as particular examples of automorphic \(L\)-functions, any Dirichlet \(L\)-function \(L(s,\chi)\) corresponds to a one-dimensional cuspidal representation \(\chi\) of \(\mathrm{GL}_{1}(\mathbb{A}_{\mathbb{Q}})\) and thus is an automorphic \(L\)-function. Recall that \(L(s,\pi)\) is always meromorphic; if \(\pi\) is the trivial representation \(\mathbb{1}\) of \(\mathrm{GL}_{1}(\mathbb{A}_{\mathbb{Q}})\) then \(L(s,\pi)\) has a simple pole at \(s=1\), otherwise \(L(s,\pi)\) is entire. Denote the order of the pole of \(L(s,\pi)\) at \(s=1\) as \(r_{\pi}\). Then \(r_{\pi}=1\) when \(\pi=\mathbb{1}\) and \(r_{\pi}=0\) otherwise. Now, the complete \(L\)-function \[\Lambda(s,\pi)=(s(s-1))^{r_{\pi}}q_{\pi}^{s/2}L(s,\pi)\gamma(s,\pi)\] is entire of order \(1\).
Moreover, there exists a \(W(\pi)\in\mathbb{C}\) satisfying \(|W(\pi)|=1\), for which the functional equation \[\Lambda(s,\pi)=W(\pi)\Lambda(1-s,\widetilde{\pi}),\] holds true. Note that \(q_{\widetilde{\pi}}=q_{\pi}\), and the following sets are equal: \[\{\alpha_{j,\widetilde{\pi}}(p)\}=\{\overline{\alpha_{j,\pi}(p)}\},\qquad\{ \mu_{\widetilde{\pi}}(j)\}=\{\overline{\mu_{\pi}(j)}\}.\] The analytic conductor of \(L(s,\pi)\) is defined as \[C(\pi)=q_{\pi}\prod_{j=1}^{m}(3+|\mu_{\pi}(j)|).\] Note that throughout this paper, we will use \(C(\cdot)\) to generally refer to the analytic conductor of an \(L\)-function (see [14, Page 95] for the definition). Moreover, we will define \(\Lambda_{\pi}(n)\) to be the \(n\)th coefficient of the logarithmic derivative of \(-L(s,\pi)\). Specifically, we have the formula \[\sum_{n=1}^{\infty}\frac{\Lambda_{\pi}(n)}{n^{s}}=-\frac{L^{\prime}}{L}(s,\pi )=\sum_{p}\sum_{\ell=1}^{\infty}\frac{\sum_{j=1}^{m}\alpha_{j,\pi}(p)^{\ell} \log p}{p^{\ell s}}.\] ### Rankin-Selberg \(\boldsymbol{L}\)-functions Given any two \(\pi\in\mathfrak{F}_{m}\) and \(\pi^{\prime}\in\mathfrak{F}_{m^{\prime}}\), it is often useful to consider the Rankin-Selberg convolution of their \(L\)-functions. This convolution is an \(L\)-function itself, with Satake parameters denoted as \(\alpha_{j,j^{\prime},\pi\times\pi^{\prime}}(p)\). A complete description of these parameters is given in [13, Appendix]. Note that if \(m^{\prime}=m\) and \(\pi^{\prime}=\widetilde{\pi}\), then we call the resulting Rankin-Selberg convolution the Rankin-Selberg square of \(\pi\). Note that for any prime \(p\nmid q_{\pi}q_{\pi^{\prime}}\), we have that \[\{\alpha_{j,j^{\prime},\pi\times\pi^{\prime}}(p)\}=\{\alpha_{j,\pi}(p)\alpha_ {j^{\prime},\pi^{\prime}}(p)\}.\] The Dirichlet series of the Rankin-Selberg convolution of \(L(s,\pi)\) and \(L(s,\pi^{\prime})\) is given by \[L(s,\pi\times\pi^{\prime})=\prod_{p}\prod_{j=1}^{m}\prod_{j^{\prime}=1}^{m^{ \prime}}\Big{(}1-\frac{\alpha_{j,j^{\prime},\pi\times\pi^{\prime}}(p)}{p^{s}} \Big{)}^{-1}=\sum_{n=1}^{\infty}\frac{a_{\pi\times\pi^{\prime}}(n)}{n^{s}},\] is associated to the tensor product \(\pi\times\pi^{\prime}\), and converges absolutely for \(\mathrm{Re}(s)>1\). Let \(q_{\pi\times\pi^{\prime}}\) be the conductor of \(L(s,\pi\times\pi^{\prime})\). Note that \(q_{\pi\times\pi^{\prime}}|q_{\pi}^{m^{\prime}}q_{\pi^{\prime}}^{m}\) (see [1]). Now, denote the Langlands parameters of \(L(s,\pi\times\pi^{\prime})\) as \(\mu_{\pi\times\pi^{\prime}}(j,j^{\prime})\in\mathbb{C}\); a complete description of these Langlands parameters can be found in [13, Proof of Lemma 2.1]. By definition, the gamma factor of \(L(s,\pi\times\pi)\) is given by \[\gamma(s,\pi\times\pi^{\prime})=\prod_{j=1}^{m}\prod_{j^{\prime}=1}^{m^{ \prime}}\Gamma_{\mathbb{R}}(s+\mu_{\pi\times\pi^{\prime}}(j,j^{\prime})).\] It is known that \(L(s,\pi\times\pi^{\prime})\) is entire if \(\pi^{\prime}\neq\widetilde{\pi}\). If \(\pi^{\prime}=\widetilde{\pi}\), \(L(s,\pi\times\pi^{\prime})\) has a simple pole at \(s=1\) and is holomorphic elsewhere. Let \(r_{\pi\times\pi^{\prime}}\) denote the order of the pole of \(L(s,\pi\times\pi^{\prime})\) at \(s=1\); then \(r_{\pi\times\pi^{\prime}}=1\) if \(\pi\) and \(\pi^{\prime}\) are dual to each other and \(r_{\pi\times\pi^{\prime}}=0\) otherwise. 
The completed \(L\)-function of \(L(s,\pi\times\pi^{\prime})\) is given by \[\Lambda(s,\pi\times\pi^{\prime})=(s(s-1))^{r_{\pi\times\pi^{\prime}}}q_{\pi\times\pi^{\prime}}^{s/2}L(s,\pi\times\pi^{\prime})\gamma(s,\pi\times\pi^{\prime}).\] Note that \(\Lambda(s,\pi\times\pi^{\prime})\) is entire and of order \(1\). Moreover, there exists a \(W(\pi\times\pi^{\prime})\in\mathbb{C}\) satisfying \(|W(\pi\times\pi^{\prime})|=1\) such that the functional equation \[\Lambda(s,\pi\times\pi^{\prime})=W(\pi\times\pi^{\prime})\Lambda(1-s,\widetilde{\pi}\times\widetilde{\pi^{\prime}})\] holds. Define the analytic conductor of \(L(s,\pi\times\pi^{\prime})\) as \[C(\pi\times\pi^{\prime})=q_{\pi\times\pi^{\prime}}\prod_{j=1}^{m}\prod_{j^{\prime}=1}^{m^{\prime}}(3+|\mu_{\pi\times\pi^{\prime}}(j,j^{\prime})|).\] By the work of Bushnell and Henniart [1] and Brumley [12, Appendix], we have the following bound for the analytic conductor: \[\log C(\pi\times\pi^{\prime})\ll m^{\prime}\log C(\pi)+m\log C(\pi^{\prime}). \tag{3.2}\] Finally, we denote the \(n\)th coefficient of the negative log derivative of \(L(s,\pi\times\pi^{\prime})\) as \(\Lambda_{\pi\times\pi^{\prime}}(n)\). By definition, we have \[\sum_{n=1}^{\infty}\frac{\Lambda_{\pi\times\pi^{\prime}}(n)}{n^{s}}=-\frac{L^{\prime}}{L}(s,\pi\times\pi^{\prime})=\sum_{p}\sum_{\ell=1}^{\infty}\frac{\sum_{j=1}^{m}\sum_{j^{\prime}=1}^{m^{\prime}}\alpha_{j,j^{\prime},\pi\times\pi^{\prime}}(p)^{\ell}\log p}{p^{\ell s}}.\] ### Isobaric sums Here we recall some basic facts about the isobaric sum operation \(\boxplus\), first introduced by Langlands in [14]. Let \(k\geq 1\) be an integer, let \(m_{1},\ldots,m_{k}\geq 1\) be integers, let \(\pi_{i}\in\mathfrak{F}_{m_{i}}\), let \(r=\sum_{i=1}^{k}m_{i}\), and let \(t_{1},\ldots,t_{k}\in\mathbb{R}\). Consider the isobaric automorphic representation \(\Pi\) of \(\operatorname{GL}_{r}(\mathbb{A})\), defined by \[\Pi=\pi_{1}\otimes|\det|^{it_{1}}\boxplus\ldots\boxplus\pi_{k}\otimes|\det|^{it_{k}}.\] The \(L\)-function associated to \(\Pi\) is \[L(s,\Pi)=\prod_{j=1}^{k}L(s+it_{j},\pi_{j}).\] The analytic conductor of this \(L\)-function is defined as \[C(\Pi,t)=\prod_{j=1}^{k}C(\pi_{j},t+t_{j}),\qquad C(\Pi)=C(\Pi,0).\] Now, let \(k^{\prime}\geq 1\) be an integer, let \(m^{\prime}_{1},\ldots,m^{\prime}_{k^{\prime}}\geq 1\) be integers, let \(\pi^{\prime}_{i}\in\mathfrak{F}_{m^{\prime}_{i}}\), let \(r^{\prime}=\sum_{i=1}^{k^{\prime}}m^{\prime}_{i}\), and let \(t^{\prime}_{1},\ldots,t^{\prime}_{k^{\prime}}\in\mathbb{R}\).
Consider the isobaric automorphic representation \(\Pi^{\prime}\) of \(\operatorname{GL}_{r^{\prime}}(\mathbb{A})\), defined by \[\Pi^{\prime}=\pi^{\prime}_{1}\otimes|\det|^{it^{\prime}_{1}}\boxplus\ldots\boxplus\pi^{\prime}_{k^{\prime}}\otimes|\det|^{it^{\prime}_{k^{\prime}}}.\] Then, the Rankin-Selberg convolution of \(L(s,\Pi)\) and \(L(s,\Pi^{\prime})\) is given by \[L(s,\Pi\times\Pi^{\prime})=\prod_{j=1}^{k}\prod_{j^{\prime}=1}^{k^{\prime}}L(s+it_{j}+it^{\prime}_{j^{\prime}},\pi_{j}\times\pi^{\prime}_{j^{\prime}})\] and has analytic conductor \[C(\Pi\times\Pi^{\prime},t)=\prod_{j=1}^{k}\prod_{j^{\prime}=1}^{k^{\prime}}C(\pi_{j}\times\pi^{\prime}_{j^{\prime}},t+t_{j}+t^{\prime}_{j^{\prime}}),\qquad C(\Pi\times\Pi^{\prime})=C(\Pi\times\Pi^{\prime},0).\] In our establishment of zero-free regions below we will implicitly need the following lemma, which is evident from [13, Section A]: **Lemma 3.8**.: _Given any unitary isobaric automorphic representation \(\Pi\) with \(L\)-function \(L(s,\Pi)\), the Dirichlet coefficients of \(-\frac{L^{\prime}}{L}(s,\Pi\times\widetilde{\Pi})\) are all nonnegative._ ### Symmetric power \(L\)-functions A particular type of \(L\)-function of interest to us is the symmetric power \(L\)-function. Consider a non-CM elliptic curve \(E/\mathbb{Q}\). Recall that by the modularity theorem [1], there exists a non-CM cusp form \(f(z)\in S_{2}^{\text{new}}(\Gamma_{0}(N_{E}))\) of weight \(2\) and level \(N_{E}\) corresponding to \(E\). Now, let the Fourier expansion of \(f\) be \(f(z)=\sum_{n=1}^{\infty}a_{f}(n)e^{2\pi inz}\). We have that \(a_{f}(p)\) agrees with the trace of Frobenius of \(E\) at \(p\), so we may write \(a_{f}(p)=2\sqrt{p}\cos\theta_{E}(p)\) (where \(\theta_{E}(p)\) is as defined in Section 1). For each non-negative integer \(m\) and prime \(p\) consider the Satake parameters \(\alpha_{0,\operatorname{Sym}^{m}E}(p),\dots,\alpha_{m,\operatorname{Sym}^{m}E}(p)\in\mathbb{C}\). When \(p\nmid N_{E}\), the Satake parameters satisfy the useful identity \[\{\alpha_{0,\operatorname{Sym}^{m}E}(p),\dots,\alpha_{m,\operatorname{Sym}^{m}E}(p)\}=\{e^{i(m-2j)\theta_{E}(p)}\colon 0\leq j\leq m\}.\] A complete description of the values of \(\alpha_{j,\operatorname{Sym}^{m}E}(p)\) can be derived from [14, Appendix]. Now, the \(m\)th symmetric power \(L\)-function associated to \(f\) (denoted \(L(s,\operatorname{Sym}^{m}f)=L(s,\operatorname{Sym}^{m}E)\)) is the \(L\)-function with local parameters \[\{\alpha_{0,\operatorname{Sym}^{m}E}(p),\dots,\alpha_{m,\operatorname{Sym}^{m}E}(p)\}.\] If we denote the \(n\)th Dirichlet coefficient of the \(m\)th symmetric power \(L\)-function as \(a_{\operatorname{Sym}^{m}E}(n)\), then by definition we have the Euler product and Dirichlet series expansions \[L(s,\operatorname{Sym}^{m}E)=\prod_{p}\prod_{j=0}^{m}\left(1-\frac{\alpha_{j,\operatorname{Sym}^{m}E}(p)}{p^{s}}\right)^{-1}=\sum_{n=1}^{\infty}\frac{a_{\operatorname{Sym}^{m}E}(n)}{n^{s}},\] which converge for \(\operatorname{Re}(s)>1\). When \(p\nmid N_{E}\), we may readily compute \(a_{\operatorname{Sym}^{m}E}(p)=U_{m}(\cos\theta_{E}(p))\), where \(U_{m}\) is the \(m\)-th Chebyshev polynomial of the second kind. Note that \(L(s,\operatorname{Sym}^{m}E)\) is a self-dual \(L\)-function. Define \(\Gamma_{\mathbb{C}}(s):=\Gamma_{\mathbb{R}}(s)\Gamma_{\mathbb{R}}(s+1)=2(2\pi)^{-s}\Gamma(s)\), let \(q_{\operatorname{Sym}^{m}E}\) be the conductor of \(L(s,\operatorname{Sym}^{m}E)\), and for even \(m\), let \(r\in\{0,1\}\) be such that \(r\equiv m/2\bmod 2\).
The gamma factor of \(L(s,\operatorname{Sym}^{m}E)\) is given by \[\gamma(s,\operatorname{Sym}^{m}E)=\begin{cases}\prod_{j=1}^{(m+1)/2}\Gamma_{\mathbb{C}}(s+(j-\tfrac{1}{2}))&\text{if $m$ is odd}\\ \Gamma_{\mathbb{R}}(s+r)\prod_{j=1}^{m/2}\Gamma_{\mathbb{C}}(s+j)&\text{if $m$ is even}.\end{cases}\] One may define the analytic conductor \(C(\operatorname{Sym}^{m}E)\) of \(L(s,\operatorname{Sym}^{m}E)\) similarly to the previous subsections. Note that the complete \(L\)-function of \(L(s,\operatorname{Sym}^{m}E)\) is entire of order \(1\), and is given by \[\Lambda(s,\operatorname{Sym}^{m}E)=q_{\operatorname{Sym}^{m}E}^{s/2}\gamma(s,\operatorname{Sym}^{m}E)L(s,\operatorname{Sym}^{m}E).\] Also, there exists a \(W(\operatorname{Sym}^{m}E)\in\mathbb{C}\) satisfying \(|W(\operatorname{Sym}^{m}E)|=1\) for which the functional equation \[\Lambda(s,\operatorname{Sym}^{m}E)=W(\operatorname{Sym}^{m}E)\Lambda(1-s,\operatorname{Sym}^{m}E)\] holds. Let \(\pi_{f}\in\mathfrak{F}_{2}\) be the cuspidal representation corresponding to \(f\). Then as detailed in [11, Theorem 6.1], due to the work by Newton and Thorne ([14, Theorem B] and [14, Theorem A]) and the work in [13] and [15], we know that \(L(s,\operatorname{Sym}^{m}E)\) is the standard \(L\)-function associated to the representation \(\operatorname{Sym}^{m}\pi_{f}\in\mathfrak{F}_{m+1}\) with the same gamma factor, complete \(L\)-function, and functional equation (thus henceforth we will often write \(\operatorname{Sym}^{m}E=\operatorname{Sym}^{m}\pi_{f}\)). Now, by [1, Section A.2], we have \[\log q_{\operatorname{Sym}^{m}E}\ll m\log N_{E}.\] Through a straightforward calculation using Stirling's formula and the above properties, we may now obtain \[\log C(\operatorname{Sym}^{m}E)\ll m\log(N_{E}m). \tag{3.3}\] ## 4. Zero-free regions Throughout this section we will fix the notation from Sections 1 and 3. The following proposition, taken from [10], will be useful in establishing many of the zero-free regions used in this paper. **Proposition 4.1** ([10, Proposition 4.1]).: _Let \(\Pi\) be an isobaric automorphic representation of \(\operatorname{GL}_{r}(\mathbb{A})\). If \(L(s,\Pi\times\widetilde{\Pi})\) has a pole of order \(r_{\Pi\times\widetilde{\Pi}}\geq 1\) at \(s=1\), then \(L(1,\Pi\times\widetilde{\Pi})\neq 0\), and there exists a constant \(c_{11}\) such that \(L(s,\Pi\times\widetilde{\Pi})\) has at most \(r_{\Pi\times\widetilde{\Pi}}\) real zeroes in the interval_ \[s\geq 1-\frac{c_{11}}{(r_{\Pi\times\widetilde{\Pi}}+1)\log C(\Pi\times\widetilde{\Pi})}.\] We will also use the following zero-free region throughout the proof of Theorem 1.7. **Theorem 4.2** ([10, Corollary 4.2]).: _Let \(\pi\in\mathfrak{F}_{m}\) and \(\pi^{\prime}\in\mathfrak{F}_{m^{\prime}}\). Suppose that both \(\pi\) and \(\pi^{\prime}\) are self-dual. There exists a constant \(c_{12}>0\) for which the following results hold._ 1. \(L(s,\pi)\neq 0\) _in the region_ \[\operatorname{Re}(s)\geq 1-\frac{c_{12}}{m\log(C(\pi)(3+|\mathrm{Im}(s)|))}\] _apart from at most one zero. If the exceptional zero exists, then it is real and simple._ 2. \(L(s,\pi\times\pi^{\prime})\neq 0\) _in the region_ \[\operatorname{Re}(s)\geq 1-\frac{c_{12}}{(m+m^{\prime})\log(C(\pi)C(\pi^{\prime})(3+|\mathrm{Im}(s)|)^{\min(m,m^{\prime})})}\] _apart from at most one zero. If the exceptional zero exists, then it is real and simple._ To remove the possibility of an exceptional zero, we will frequently use the following proposition.
**Proposition 4.3**.: _Let \(f\) be a holomorphic newform with complex multiplication, and let \(\chi\) be the Dirichlet character such that \(f=f\otimes\chi\). Let \(\pi_{f}\in\mathfrak{F}_{2}\) be the representation corresponding to \(f\). Let \(\pi\in\mathfrak{F}_{m}\) be a cuspidal automorphic representation, and suppose that \(\pi\otimes\chi\neq\pi\). Then there exists an absolute constant \(c_{13}\) such that \(L(s,\pi_{f}\times\pi)\) has no zeroes in the interval_ \[\operatorname{Re}(s)\geq 1-\frac{c_{13}}{m\log(C(\pi_{f})C(\pi)(|\mathrm{Im}(s)|+3))}.\] Proof.: Suppose for the sake of contradiction that \(L(s,\pi_{f}\times\pi)\) has a zero \(\beta+i\gamma\) in the interval specified above. Consider the isobaric sum \[\Pi=\pi\boxplus\pi\otimes\chi\boxplus\widetilde{\pi}_{f}\otimes|\cdot|^{-i\gamma}.\] By hypothesis, \(\pi_{f}\otimes\chi=\pi_{f}\) and \(\pi\otimes\chi\neq\pi\). This implies that \(\widetilde{\pi}_{f}\otimes\bar{\chi}=\widetilde{\pi}_{f}\) and \(\pi\otimes\bar{\chi}\neq\pi\). It follows that \(L(s,\Pi\times\widetilde{\Pi})\) factors as \[L(s,\pi_{f}\times\widetilde{\pi}_{f})L(s,\pi\times\widetilde{\pi})^{2}L(s,\pi\times(\widetilde{\pi}\otimes\chi))L(s,\pi\times(\widetilde{\pi}\otimes\overline{\chi}))L(s+i\gamma,\pi\times\pi_{f})L(s-i\gamma,\widetilde{\pi}\times\widetilde{\pi}_{f})\] \[\times L(s+i\gamma,\pi\times(\pi_{f}\otimes\chi))L(s-i\gamma,\widetilde{\pi}\times(\widetilde{\pi}_{f}\otimes\overline{\chi}))\] \[=L(s,\pi_{f}\times\widetilde{\pi}_{f})L(s,\pi\times\widetilde{\pi})^{2}L(s,\pi\times(\widetilde{\pi}\otimes\chi))L(s,\pi\times(\widetilde{\pi}\otimes\overline{\chi}))L(s+i\gamma,\pi\times\pi_{f})^{2}L(s-i\gamma,\widetilde{\pi}\times\widetilde{\pi}_{f})^{2}.\] This \(L\)-function has a pole of order \(3\) at \(s=1\) (due to the \(L(s,\pi_{f}\times\widetilde{\pi}_{f})\) and \(L(s,\pi\times\widetilde{\pi})\) terms). Moreover, if \(\beta\) is a real zero of \(L(s+i\gamma,\pi\times\pi_{f})\), then \(\beta\) is also a real zero of \(L(s-i\gamma,\widetilde{\pi}\times\widetilde{\pi}_{f})\). Hence the zero \(\beta+i\gamma\) of \(L(s,\pi\times\pi_{f})\) would imply a real zero of order \(4\) of \(L(s,\Pi\times\widetilde{\Pi})\) at \(\beta\). Since \(\beta+i\gamma\) lies in the interval specified above, this contradicts Proposition 4.1 by a simple conductor calculation using (3.2). Note that the contribution from \(C(\chi)\) may be neglected as the conductor of \(\chi\) divides the square of the conductor of \(\pi_{f}\), by [11, Theorem A]. Note that in both applications of this proposition within this paper, the newform \(f\) is taken to correspond to an elliptic curve with CM. Hence there are finitely many choices for \(\chi\) (as it must be the Kronecker character of the field over which the elliptic curve has CM), and so the contribution of \(C(\chi)\) above can be absorbed into the constant \(c_{13}\). ### Non-CM newforms twisted by Dirichlet characters **Lemma 4.4**.: _Let \(\pi\in\mathfrak{F}_{m+1}\) (\(m\geq 1\)), and let \(\chi\) be a primitive Dirichlet character. Suppose that \(\pi\) is self-dual. Then there exists an absolute constant \(c_{14}\) such that \(L(s,\pi\otimes\chi)\neq 0\) for all \(s\) in the region_ \[\operatorname{Re}(s)\geq 1-\frac{c_{14}}{m\log(C(\pi)C(\chi)(3+|\operatorname{Im}(s)|))},\] _apart from at most one Siegel zero. If such a Siegel zero exists, it is real and simple, and \(\chi\) is self-dual._ Proof.: When \(\chi\) is real, this is a special case of Theorem 4.2. Hence assume \(\chi\) is not real. Suppose for the sake of contradiction that \(\rho=\sigma+it\) is a zero in that region.
Let \(\Pi=\chi|\cdot|^{it}\boxplus\overline{\chi}|\cdot|^{-it}\boxplus\pi\). If \(\psi\) denotes the primitive Dirichlet character that induces \(\chi^{2}\), then \[L(s,\Pi\times\overline{\Pi})=\zeta(s)^{2}L(s,\pi\times\overline{\pi})L(s+it,\pi\otimes\chi)^{2}L(s-it,\pi\otimes\overline{\chi})^{2}L(s+2it,\psi)L(s-2it,\overline{\psi}),\] which has a pole of order exactly \(3\) at \(s=1\). However, since \(\pi\) is self-dual, both \(L(s+it,\pi\otimes\chi)\) and \(L(s-it,\pi\otimes\overline{\chi})\) have a zero at \(\sigma\), so \(L(s,\Pi\times\widetilde{\Pi})\) has a zero of order at least four at \(\sigma\). Combined with the bound on the conductor in (3.2), this now contradicts Proposition 4.1. Note that this proof works even when \(t=0\); hence a Siegel zero cannot exist in this case. **Lemma 4.5**.: _Consider a non-CM elliptic curve \(E\), and let \(f\) be the cusp form corresponding to \(E\). Let \(\pi_{f}\) be the representation corresponding to \(f\), and let \(\pi=\operatorname{Sym}^{m}\pi_{f}\in\mathfrak{F}_{m+1}\). Then we may replace \(c_{14}\) in Lemma 4.4 with an absolute constant \(c_{15}\) such that Lemma 4.4 holds true for \(\pi\) without the possibility of a Siegel zero._ Proof.: We only need to consider the case when \(\chi\) is self-dual. Consider the isobaric representation \(\Pi_{m}=\mathbf{1}\boxplus\operatorname{Sym}^{2}\pi_{f}\boxplus(\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\). Recall the identities \[L(s,\operatorname{Sym}^{m}\pi_{f}\times\operatorname{Sym}^{m}\pi_{f})=\zeta(s)\prod_{j=1}^{m}L(s,\operatorname{Sym}^{2j}\pi_{f}),\] \[\operatorname{Sym}^{m}\pi_{f}\times\operatorname{Sym}^{2}\pi_{f}=\boxplus_{j=0}^{2}\operatorname{Sym}^{m+2-2j}\pi_{f}.\] These identities imply that \[L(s,\Pi_{m}\times\overline{\Pi}_{m})=\zeta(s)^{3}L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)^{4}L(s,\operatorname{Sym}^{2}\pi_{f})^{3}L(s,\operatorname{Sym}^{4}\pi_{f})L(s,\operatorname{Sym}^{m+2}\pi_{f}\otimes\chi)^{2}\] \[\times L(s,\operatorname{Sym}^{m-2}\pi_{f}\otimes\chi)^{2}\prod_{j=1}^{m}L(s,\operatorname{Sym}^{2j}\pi_{f}).\] Now, this function has a pole of order \(3\) at \(s=1\). If \(L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\) has a Siegel zero at \(\rho\), then \(L(s,\Pi_{m}\times\overline{\Pi}_{m})\) will have a zero of order at least \(4\) at \(\rho\). Combined with the bound on the conductor in (3.2), this would again contradict Proposition 4.1. ### CM newforms twisted by Dirichlet characters Consider an elliptic curve \(E\) which has complex multiplication over an imaginary quadratic field \(K\). Let \(m\geq 1\) be an integer, \(\xi\) denote the primitive Grossencharacter which satisfies \(L(s,E)=L(s,\xi)\), \(f_{m}\) denote the cusp form induced by \(\xi_{m}\) (where \(\xi_{m}\) is as defined in Section 3.3), and \(\pi_{f_{m}}\in\mathfrak{F}_{2}\) denote the representation corresponding to \(f_{m}\). Finally, let \(\chi\) be a primitive Dirichlet character that induces a character mod \(q\). **Lemma 4.6**.: _There exists an absolute constant \(c_{16}>0\) such that \(L(s,\pi_{f_{m}}\otimes\chi)\) has no zeroes in the region_ \[\operatorname{Re}(s)\geq 1-\frac{c_{16}}{\log(N_{E}qm(3+|\operatorname{Im}(s)|))}.\] Proof.: By [13, Theorem 7.5], there exists an integer \(M\leq N_{E}q^{2}\) such that \(f_{m}\otimes\chi\) is a cusp newform in \(S_{m+1}(\Gamma_{0}(M),\chi_{m}\chi^{2})\) (where \(\chi_{m}\) is as defined in Section 3.3). Hence we have that \(C(\pi_{f_{m}}\otimes\chi)\ll N_{E}q^{2}m^{2}\).
Moreover, by [11, Section 5.11], we have that the Rankin-Selberg convolutions \(L(s,(\pi_{f_{m}}\otimes\chi)\times(\pi_{f_{m}}\otimes\chi))\) and \(L(s,(\pi_{f_{m}}\otimes\chi)\times\overline{(\pi_{f_{m}}\otimes\chi)})\) both exist, with the former being entire and the latter having a simple pole at \(s=1\). Now, using the information above and from Section 3.3, we may apply [13, Theorem 5.10], which proves the above except for the possibility of an exceptional zero. We may now remove this possibility using [12, Theorem C]. ### Rankin-Selberg convolutions of a CM and Non-CM newform Consider two elliptic curves \(E,E^{\prime}\), the former having complex multiplication over an imaginary quadratic field \(K\), and the latter without complex multiplication over any imaginary quadratic field. Let \(\xi\) denote the primitive Grossencharacter which satisfies \(L(s,E)=L(s,\xi)\), \(g_{m}\) (instead of \(f_{m}\)) denote the cusp form induced by \(\xi_{m}\), and \(\pi_{g_{m}}\in\mathfrak{F}_{2}\) denote the representation corresponding to \(g_{m}\). Also let \(f^{\prime}\) denote the cusp form which corresponds to \(E^{\prime}\), and let \(\pi_{f^{\prime}}\in\mathfrak{F}_{2}\) be the representation which corresponds to \(f^{\prime}\). **Lemma 4.7**.: _There exists an absolute constant \(c_{17}>0\) such that for all integers \(1\leq m,m^{\prime}\leq M\), \(L(s,\pi_{g_{m}}\times\operatorname{Sym}^{m^{\prime}}\pi_{f^{\prime}})\neq 0\) in the region_ \[\operatorname{Re}(s)\geq 1-\frac{c_{17}}{M^{2}\log(N_{E}N_{E^{\prime}}(3+|\operatorname{Im}(s)|))}.\] Proof.: When \(\operatorname{Im}(s)\neq 0\), we may apply Theorem 4.2, while in the case where \(\operatorname{Im}(s)=0\), we may apply Proposition 4.3. In both cases, the result follows from a conductor calculation using (3.3) and the properties discussed in Section 3.3. ### Rankin-Selberg convolutions of two CM newforms Consider two twist-inequivalent elliptic curves \(E,E^{\prime}\) having complex multiplication over two imaginary quadratic fields \(K,K^{\prime}\) (respectively). Let \(m,m^{\prime}\geq 1\) be integers. Consider the Grossencharacters \(\xi,\xi^{\prime}\) corresponding to \(E,E^{\prime}\), let \(f=f_{m},f^{\prime}=f^{\prime}_{m^{\prime}}\) be the cusp forms corresponding to \(\xi_{m},\xi^{\prime}_{m^{\prime}}\) (as defined in Section 3.3), and let \(\pi_{f},\pi_{f^{\prime}}\in\mathfrak{F}_{2}\) denote the representations corresponding to \(f,f^{\prime}\) respectively. Finally, let \(\mathfrak{q}=N_{E}N_{E^{\prime}}mm^{\prime}\). **Lemma 4.8**.: _There exists an absolute constant \(c_{18}>0\) such that \(L(s,\pi_{f}\times\pi_{f^{\prime}})\neq 0\) in the region_ \[\operatorname{Re}(s)\geq 1-\frac{c_{18}}{\log(\mathfrak{q}(3+|\operatorname{Im}(s)|))}.\] Proof.: Note that as a special case of the work due to Bushnell and Henniart [1], we know that \(q(\pi_{f}\times\pi_{f^{\prime}})\) divides \(q(\pi_{f})^{2}q(\pi_{f^{\prime}})^{2}\) (where \(q(\cdot)\) denotes the conductor), and hence \(C(\pi_{f}\times\pi_{f^{\prime}})\ll\mathfrak{q}^{4}\). Now, since \(\pi_{f},\pi_{f^{\prime}}\) are both self-dual, Lemma 4.8 follows from Theorem 4.2 with the exception of a possible Siegel zero. We may remove this possibility using Proposition 4.3. ## 5. Prime Number Theorems Throughout this section, fix the notation from Sections 1 and 3. ### Representations of \(\operatorname{GL}_{m}(\mathbb{A})\) twisted by Dirichlet characters **Proposition 5.1**.: _Let \(\pi\in\mathfrak{F}_{m}\) be self-dual, and let \(\chi\) be a primitive Dirichlet character.
Let \(\beta_{1}\) denote the possible Siegel zero from Lemma 4.4. For each prime \(p\), let \(a_{\pi}(p)\) be the coefficient of \(p^{-s}\) in the Dirichlet series expansion of \(L(s,\pi)\). Suppose that \(\pi\) satisfies the generalized Ramanujan conjecture at all primes \(p\nmid C(\pi)\) and all its Langlands parameters satisfy either \(\mu_{\pi}(j)=0\) or \(\operatorname{Re}(\mu_{\pi}(j))\geq\frac{1}{2}\) for each \(j\). Then there exists an absolute constant \(c_{19}>2\) such that if \(2\leq(C(\pi)C(\chi))^{m}\leq x^{1/c_{19}}\) and \(c_{19}mx^{-\frac{1}{c_{19}m}}<\frac{1}{4}\), we have_ \[\left|\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\nmid q_{\pi}q_{\chi}\end{subarray}}a_{\pi}(p)\chi(p)\log p\bigg{)}-r_{\pi\otimes\chi}x+\frac{x^{\beta_{1}}}{\beta_{1}}\right|\ll m^{2}x^{1-\frac{1}{c_{19}m}}+m^{2}x\Big{(}\exp\Big{(}-\frac{c_{14}\log x}{2m\log C(\pi)C(\chi)}\Big{)}+\exp\Big{(}-\frac{\sqrt{c_{14}\log x}}{2\sqrt{m}}\Big{)}\Big{)},\] _where the \(\frac{x^{\beta_{1}}}{\beta_{1}}\) term is omitted if \(\beta_{1}\) does not exist._ Proof.: The proof is identical to the proof of [14, Proposition 5.1], except that the zero-free region [14, Corollary 4.2] is replaced by Lemma 4.4. Note that by definition, the coefficient \(a_{\pi\otimes\chi}(p)\) of \(p^{-s}\) in the Dirichlet series expansion of \(L(s,\pi\otimes\chi)\) is precisely \(a_{\pi}(p)\chi(p)\) for \(p\nmid q_{\pi}q_{\chi}\). **Lemma 5.2**.: _Let \(E\) be a non-CM elliptic curve, and let \(\chi\) be a Dirichlet character which induces a character of modulus \(q\). Then, there exists a sufficiently small absolute constant \(c_{20}\) such that if \(M=\frac{c_{20}\sqrt{\log x}}{\log(N_{E}q\log x)}\), then for all \(1\leq m\leq M\),_ \[\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}\end{subarray}}\chi(p)U_{m}(\cos\theta_{E}(p))\log p\ll m^{2}x\Big{(}x^{-\frac{1}{c_{19}m}}+\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2m^{2}\log(N_{E}qm)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{m}}\Big{)}\Big{)}\Big{)}.\] Proof.: Consider any integer \(m\geq 1\). Let \(f\) be the cusp form corresponding to \(E\), and let \(\pi_{f}\in\mathfrak{F}_{2}\) be the representation corresponding to \(f\). We first establish the following properties of \(L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\). 1. The conductor of \(\operatorname{Sym}^{m}\pi_{f}\otimes\chi\) satisfies \(\log C(\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\ll m\log(N_{E}qm)\). 2. All of the Langlands parameters at infinity of \(L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\) are nonnegative, and either integers or half integers. 3. \(L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\) is the \(L\)-function of a cuspidal automorphic representation in \(\mathfrak{F}_{m+1}\). 4. \(L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\) is entire for \(m\geq 1\). 5. \(L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\) has no zero in the region \[\operatorname{Re}(s)\geq 1-\frac{c_{15}}{m^{2}\log(N_{E}qm(3+|\mathrm{Im}(s)|))}.\] 6. \(L(s,\operatorname{Sym}^{m}\pi_{f}\otimes\chi)\) satisfies the generalized Ramanujan conjecture (GRC). (1) follows from (3.2) and (3.3). (2), (3), and (4) all follow from the work due to Newton and Thorne in [13] and [13]. (5) follows from Lemma 4.4 and Lemma 4.5. (6) follows from the definition of the symmetric power \(L\)-function, given in Section 3.7.
Now, given (1)-(6), we have that evidently there exists a sufficiently small absolute constant \(c_{20}\) such that if \(M=\frac{c_{20}\sqrt{\log x}}{\log(N_{E}q\log x)}\), then for all \(1\leq m\leq M\), \(\operatorname{Sym}^{m}\pi_{f}\otimes\chi\in\mathfrak{F}_{m+1}\) satisfies the conditions in Proposition 5.1 for all \(x\geq 3\). Applying the proposition, we thus obtain \[\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}\end{subarray}}\chi(p)U_{m}(\cos\theta_{E}(p))\log p\ll m^{2}x^{1-\frac{1}{c_{19}m}}+m^{2}x\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2m^{2}\log(N_{E}qm)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{m}}\Big{)}\Big{)},\] as desired. ### CM newforms twisted by Dirichlet characters Consider an elliptic curve \(E\) which has complex multiplication over an imaginary quadratic field \(K\). Let \(m\geq 1\) be an integer, \(\xi\) denote the primitive Grossencharacter which satisfies \(L(s,E)=L(s,\xi)\), \(f_{m}\) denote the cusp form induced by \(\xi_{m}\) (where \(\xi_{m}\) is as defined in Section 3.3), and \(\pi_{f_{m}}\in\mathfrak{F}_{2}\) denote the representation corresponding to \(f_{m}\). **Lemma 5.3**.: _Given any positive integer \(m\), there exists an absolute constant \(c_{21}>0\) such that for all \(x\geq N_{E}\),_ \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\log p\ll x\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qm)}\Big{)}(\log(xN_{E}qm))^{4}.\] Proof.: Consider a primitive Dirichlet character \(\chi\) which induces a Dirichlet character mod \(q\). We first establish a prime number theorem for \(L(s,f_{m}\otimes\chi)\). Let \(\Lambda(n)\) denote the von Mangoldt function. Note that by [14, Section 12.3], we have that if we write the logarithmic derivative of \(L(s,f_{m}\otimes\chi)\) as \[-\frac{L^{\prime}(s,f_{m}\otimes\chi)}{L(s,f_{m}\otimes\chi)}=\sum_{n}a_{m,\chi}(n)\Lambda(n)n^{-s},\] then \[a_{m,\chi}(p)=\begin{cases}2\chi(p)\cos(m\theta_{E}(p))&p\text{ splits in }K\\ 0&\text{otherwise}\end{cases}\] and \(|a_{m,\chi}(p^{j})|\leq 2\) for all prime powers \(p^{j},j\geq 2\). Note that when \(\chi\) is trivial, we will write \(a_{m,\chi}=a_{m}\). Now, this evidently implies that \(\sum_{n\leq x}|a_{m,\chi}(n)\Lambda(n)|^{2}\ll x\log^{2}(x)\). Hence, given the above and Lemma 4.6, we may apply [13, Theorem 5.13] and obtain that there exists an absolute constant \(c_{21}>0\) for which \[\sum_{n\leq x}a_{m,\chi}(n)\Lambda(n)\ll x\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qm)}\Big{)}(\log(xN_{E}qm))^{4}. \tag{5.1}\] Now, note that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\log p-\frac{1}{2}\sum_{\begin{subarray}{c}n\leq x\\ n\equiv a(q)\end{subarray}}a_{m}(n)\Lambda(n)=-\frac{1}{2}\sum_{\begin{subarray}{c}p^{r}\leq x\\ r\geq 2\\ p^{r}\equiv a(q)\end{subarray}}a_{m}(p^{r})\log p-\frac{1}{2}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p|N_{E}\end{subarray}}a_{m}(p)\log p\] \[\ll\sqrt{N_{E}}\log N_{E}+\sqrt{x}\log x.\] Here we used the fact that, by the case formula for \(a_{m,\chi}\) above, \(a_{m}(p)=2\cos(m\theta_{E}(p))\) for split \(p\nmid N_{E}\), \(a_{m}(p)=0\) for inert \(p\), and \(|a_{m}(p)|\leq 2\sqrt{p}\) for all \(p\).
Hence we have that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\log p\ll\sqrt{N_{E}}\log N_{E}+\sqrt{x}\log x+\sum_{\begin{subarray}{c}n\leq x\\ n\equiv a(q)\end{subarray}}a_{m}(n)\Lambda(n)\] \[\ll\sqrt{N_{E}}\log N_{E}+\sqrt{x}\log x+\frac{1}{\varphi(q)}\sum_{\chi(q)}\overline{\chi(a)}\sum_{n\leq x}a_{m,\chi}(n)\Lambda(n)\] \[\ll x\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qm)}\Big{)}(\log(xN_{E}qm))^{4}\] when \(x\geq N_{E}\), as desired. Note that in the summation \(\chi\) ranges over all the primitive characters which induce the characters \(\bmod q\), while to get the last inequality we replace these characters with all the (possibly imprimitive) characters \(\bmod q\). This generates a negligible error which is absorbed into other error terms. For the rest of the paper, the symbol \(\chi(q)\) in the subscript for a summation means that the sum ranges through all Dirichlet characters \(\chi\) with modulus \(q\). Again, we will often replace these characters by the primitive characters that induce them; in every case the error will be negligible. For the proof of Theorem 1.7 (2.b), it will also be necessary to derive an estimate for the number of primes in an arithmetic progression which split or remain inert in \(K\). Throughout the following proof, let \(D_{K}\) be the discriminant of \(K\). **Lemma 5.4**.: _There exist absolute constants \(c_{22},c_{23}\) such that_ 1. \(\#\{p\leq x:p\equiv a\bmod q,p\text{ splits in }K\}\) \[=\frac{\pi(x)}{2\varphi(q)}\big{(}1+\chi_{K}(a)\delta(K,q)\big{)}+O\Big{(}\pi(x)\Big{(}\frac{x^{-c_{22}/\sqrt{q|D_{K}|}}}{\varphi(q)}+\exp\Big{(}\frac{-c_{23}\log x}{\sqrt{\log x}+3\log(q|D_{K}|)}\Big{)}(\log(xq|D_{K}|))^{4}\Big{)}\Big{)}.\] 2. \(\#\{p\leq x:p\equiv a\bmod q,p\text{ inert in }K\}\) \[=\frac{\pi(x)}{2\varphi(q)}\big{(}1-\chi_{K}(a)\delta(K,q)\big{)}+O\Big{(}\pi(x)\Big{(}\frac{x^{-c_{22}/\sqrt{q|D_{K}|}}}{\varphi(q)}+\exp\Big{(}\frac{-c_{23}\log x}{\sqrt{\log x}+3\log(q|D_{K}|)}\Big{)}(\log(xq|D_{K}|))^{4}\Big{)}\Big{)}.\] Proof.: The two estimates are proved analogously, so we only prove (1). First, note that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}1=\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\frac{\chi_{K}(p)+1}{2}=\frac{1}{2}\pi(x;q,a)+\frac{1}{2}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\chi_{K}(p).\] Now, given the possibility of a Siegel zero, we have that by [14, Section 11.3] and [15, Theorem 5.28], there exist absolute constants \(c_{22}\) and \(c_{24}\) such that \(\pi(x;q,a)\) satisfies the following: \[\pi(x;q,a)=\frac{\operatorname{Li}(x)}{\varphi(q)}+O\Big{(}\frac{x}{\log x}\Big{(}\frac{x^{-c_{22}/\sqrt{q}}}{\varphi(q)}+(\log x)e^{-c_{24}\sqrt{\log x}}\Big{)}\Big{)}, \tag{5.2}\] where \(\operatorname{Li}(x):=\int_{2}^{x}\frac{dt}{\log t}\). Note that by [14, Theorem 6.14], we may replace the \(\operatorname{Li}(x)\) in (5.2) with \(\pi(x)\) and generate an error that is absorbed into the other error terms. Now, consider any primitive character \(\chi\) that induces \(\chi^{\prime}\chi_{K}\) for some \(\chi^{\prime}\bmod q\). Define \(\delta_{\chi}\) as \(1\) if \(\chi\) is trivial and \(0\) otherwise.
Then, by [15, Theorem 5.13, Theorem 5.28], there exists an absolute constant \(c_{25}\) such that \[\sum_{n\leq x}\chi(n)\Lambda(n)=\delta_{\chi}x+O\Big{(}x^{1-c_{22}/\sqrt{q}}+x\exp\Big{(}\frac{-c_{25}\log x}{\sqrt{\log x}+3\log q}\Big{)}(\log xq)^{4}\Big{)}.\] Here we adjust \(c_{22}\) if necessary to absorb the contribution from \(\sqrt{|D_{K}|}\), as \(D_{K}\) can only take finitely many values. Also note that by [15, Theorem 5.28], the Siegel zero term \(x^{1-c_{22}/\sqrt{q}}\) can only exist for at most one of the primitive \(\chi\) we consider. Now, note that \[\sum_{p\leq x}\chi(p)\log p=\sum_{n\leq x}\chi(n)\Lambda(n)+O(\sqrt{x}\log x).\] We thus obtain \[\sum_{p\leq x}\chi(p)\log p=\delta_{\chi}x+O\Big{(}x^{1-c_{22}/\sqrt{q}}+x\exp\Big{(}\frac{-c_{25}\log x}{\sqrt{\log x}+3\log q}\Big{)}(\log xq)^{4}\Big{)}. \tag{5.3}\] We now bound \(\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\chi_{K}(p)\log p\). We split into two cases. Note that by [14, Theorem 9.13], we have that \(\chi_{K}\) is primitive with conductor \(|D_{K}|\). Hence, when \(\delta(K,q)=0\), we have that \(\chi^{\prime}\chi_{K}\) is nontrivial for all \(\chi^{\prime}\bmod q\) with conductor dividing \(q|D_{K}|\), so by (5.3), \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\chi_{K}(p)\log p=\frac{1}{\varphi(q)}\sum_{\chi(q)}\overline{\chi(a)}\sum_{p\leq x}\chi(p)\chi_{K}(p)\log p\] \[\ll\frac{x^{1-c_{22}/\sqrt{q|D_{K}|}}}{\varphi(q)}+x\exp\Big{(}\frac{-c_{25}\log x}{\sqrt{\log x}+3\log(q|D_{K}|)}\Big{)}(\log(xq|D_{K}|))^{4}.\] When \(\delta(K,q)=1\), \(\chi^{\prime}\chi_{K}\) is trivial for exactly one character \(\chi^{\prime}\bmod q\). For this \(\chi^{\prime}\), we have \(\overline{\chi^{\prime}(a)}=1/\overline{\chi_{K}(a)}=\chi_{K}(a)\). By (5.3), we thus have \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\chi_{K}(p)\log p=\frac{1}{\varphi(q)}\sum_{\chi(q)}\overline{\chi(a)}\sum_{p\leq x}\chi(p)\chi_{K}(p)\log p=\frac{\chi_{K}(a)}{\varphi(q)}x+O\Big{(}\frac{x^{1-c_{22}/\sqrt{q}}}{\varphi(q)}+x\exp\Big{(}\frac{-c_{25}\log x}{\sqrt{\log x}+3\log q}\Big{)}(\log xq)^{4}\Big{)}.\] We now collate the bounds above. By partial summation, we have \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\chi_{K}(p)=\frac{\pi(x)}{\varphi(q)}\chi_{K}(a)\delta(K,q)+O\bigg{(}\frac{1}{\log x}\Big{(}\frac{x^{1-c_{22}/\sqrt{q|D_{K}|}}}{\varphi(q)}+x\exp\Big{(}\frac{-c_{25}\log x}{\sqrt{\log x}+3\log(q|D_{K}|)}\Big{)}(\log(xq|D_{K}|))^{4}\Big{)}\bigg{)}.\] Hence, there exists an absolute constant \(c_{23}\) such that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}1=\frac{1}{2}\pi(x;q,a)+\frac{1}{2}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\end{subarray}}\chi_{K}(p)=\frac{\pi(x)}{2\varphi(q)}\left(1+\chi_{K}(a)\delta(K,q)\right)+O\Big{(}\pi(x)\Big{(}\frac{x^{-c_{22}/\sqrt{q|D_{K}|}}}{\varphi(q)}+\exp\Big{(}\frac{-c_{23}\log x}{\sqrt{\log x}+3\log(q|D_{K}|)}\Big{)}(\log(xq|D_{K}|))^{4}\Big{)}\Big{)}.\] ### Rankin-Selberg convolutions of a CM and Non-CM newform Consider two elliptic curves \(E,E^{\prime}\), the former having complex multiplication over an imaginary quadratic field \(K\), and the latter without complex multiplication over any imaginary quadratic field.
Let \(\xi\) denote the primitive Grossencharacter which satisfies \(L(s,E)=L(s,\xi)\), \(g_{m}\) (instead of \(f_{m}\)) denote the cusp form induced by \(\xi_{m}\), and \(\pi_{g_{m}}\in\mathfrak{F}_{2}\) denote the representation corresponding to \(g_{m}\). Also let \(f^{\prime}\) denote the cusp form which corresponds to \(E^{\prime}\), and let \(\pi_{f^{\prime}}\in\mathfrak{F}_{2}\) be the representation which corresponds to \(f^{\prime}\). **Lemma 5.5**.: _There exists an absolute constant \(c_{26}>0\) such that if \(M=\frac{c_{26}\sqrt{\log x}}{\log(N_{E}N_{E^{\prime}}\log x)}\), then for all \(1\leq m,m^{\prime}\leq M\),_ \[\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}N_{E^{\prime}}\end{subarray}}\cos(m\theta_{E}(p))U_{m^{\prime}}(\cos\theta_{E^{\prime}}(p))\log p\ll m^{\prime 2}x\Big{(}x^{-\frac{1}{32c_{19}M^{2}}}+\exp\Big{(}-\frac{c_{17}\log x}{4M^{2}\log(N_{E}N_{E^{\prime}}M)}\Big{)}+\exp\Big{(}-\frac{\sqrt{c_{17}\log x}}{2M}\Big{)}\Big{)}.\] Proof.: By the information in Sections 3.3 and 3.7, along with [14, Equation 5.86] and [19, Equation 3.3], we have that the Langlands parameters \(\mu_{\pi_{g_{m}}\times\operatorname{Sym}^{m^{\prime}}\pi_{f^{\prime}}}\) of \(L(s,\pi_{g_{m}}\times\operatorname{Sym}^{m^{\prime}}\pi_{f^{\prime}})\) satisfy that \(\mu_{\pi_{g_{m}}\times\operatorname{Sym}^{m^{\prime}}\pi_{f^{\prime}}}=0\) or \(\operatorname{Re}(\mu_{\pi_{g_{m}}\times\operatorname{Sym}^{m^{\prime}}\pi_{f^{\prime}}})\geq\frac{1}{2}\). Combining this with (3.1), we find that there exists an absolute constant \(c_{26}>0\) such that if \(M\) is defined as above, then \(\pi_{g_{m}}\times\operatorname{Sym}^{m^{\prime}}\pi_{f^{\prime}}\) satisfies the conditions of [13, Proposition 5.1] for all \(1\leq m,m^{\prime}\leq M\). Applying the proposition, we find that \[\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}N_{E^{\prime}}\end{subarray}}\cos(m\theta_{E}(p))U_{m^{\prime}}(\cos\theta_{E^{\prime}}(p))\log p\ll m^{\prime 2}x^{1-\frac{1}{32c_{19}M^{2}}}+m^{\prime 2}x\Big{(}\exp\Big{(}-\frac{c_{17}\log x}{4M^{2}\log(N_{E}N_{E^{\prime}}M)}\Big{)}+\exp\Big{(}-\frac{\sqrt{c_{17}\log x}}{2M}\Big{)}\Big{)},\] as desired. ### Rankin-Selberg convolutions of two CM newforms Consider two twist-inequivalent elliptic curves \(E,E^{\prime}\) having complex multiplication over two imaginary quadratic fields \(K,K^{\prime}\) (respectively). Let \(m,m^{\prime}\geq 1\) be integers. Consider the Grossencharacters \(\xi,\xi^{\prime}\) corresponding to \(E,E^{\prime}\), let \(f=f_{m},f^{\prime}=f^{\prime}_{m^{\prime}}\) be the cusp forms corresponding to \(\xi_{m},\xi^{\prime}_{m^{\prime}}\) (as defined in Section 3.3), and let \(\pi_{f},\pi_{f^{\prime}}\in\mathfrak{F}_{2}\) denote the representations corresponding to \(f,f^{\prime}\) respectively. Finally, let \(\mathfrak{q}=N_{E}N_{E^{\prime}}mm^{\prime}\). **Lemma 5.6**.: _There exists an absolute constant \(c_{27}>0\) such that_ \[\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\cos(m^{\prime}\theta_{E^{\prime}}(p))\log p\ll x\exp\bigg{(}\frac{-c_{27}\log x}{\sqrt{\log x}+3\log\mathfrak{q}}\bigg{)}(\log x\mathfrak{q})^{4}.\] Proof.: This is trivial for \(x<\mathfrak{q}\); thus assume \(x\geq\mathfrak{q}\). Let \(\Lambda(n)\) denote the von Mangoldt function, and let the Dirichlet series expansion of \(L(s,\pi_{f}\times\pi_{f^{\prime}})\) be \(\sum_{n\geq 1}a_{\pi_{f}\times\pi_{f^{\prime}}}(n)n^{-s}\).
Define \[\psi(x,\pi_{f}\times\pi_{f^{\prime}})=\sum_{n\leq x}a_{\pi_{f}\times\pi_{f^{\prime}}}(n)\Lambda(n).\] Note that by definition \(a_{\pi_{f}\times\pi_{f^{\prime}}}(p)=\cos(m\theta_{E}(p))\cos(m^{\prime}\theta_{E^{\prime}}(p))\) when \(p\nmid N_{E}N_{E^{\prime}}\), and moreover this quantity equals \(0\) when \(p\) is inert in either \(K\) or \(K^{\prime}\). Hence \[\psi(x,\pi_{f}\times\pi_{f^{\prime}})=\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}N_{E^{\prime}}\\ p\text{ splits in }K\text{ and }K^{\prime}\end{subarray}}\cos(m\theta_{E}(p))\cos(m^{\prime}\theta_{E^{\prime}}(p))\log p+O\bigg{(}\sum_{p^{j}\leq x,\,j\geq 2}\log p+\sum_{p\mid N_{E}N_{E^{\prime}}}\sqrt{p}\log p\bigg{)}\] \[=\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}N_{E^{\prime}}\\ p\text{ splits in }K\text{ and }K^{\prime}\end{subarray}}\cos(m\theta_{E}(p))\cos(m^{\prime}\theta_{E^{\prime}}(p))\log p+O\left(\sqrt{x}\log x+\sqrt{N_{E}N_{E^{\prime}}}\log(N_{E}N_{E^{\prime}})\right).\] Now, note that [16, Equation 5.48] holds trivially for \(L(s,\pi_{f}\times\pi_{f^{\prime}})\). Hence, given the information above along with Lemma 4.8, we may apply [16, Theorem 5.13] (substituting [16, Equation 5.39] with Lemma 4.8) and get that there exists an absolute constant \(c_{27}>0\) such that \[\psi(x,\pi_{f}\times\pi_{f^{\prime}})\ll x\exp\bigg{(}-\frac{c_{27}\log x}{\sqrt{\log x}+3\log\mathfrak{q}}\bigg{)}(\log x\mathfrak{q})^{4}.\] Substituting in \(\psi\), we now get our desired result. ## 6. Proof of Theorem 1.7 (2.a) Fix the notation from Sections 1, 3, and 4.1. In particular, let \(a,q\) be coprime positive integers, and consider an elliptic curve \(E\) without complex multiplication over any imaginary quadratic field. Proof of Theorem 1.7 (2.a).: We prove the theorem for \(x\geq N_{E}\); for \(x<N_{E}\) the result is trivial. We start with some preliminary manipulations. Let \(U_{m}\) denote the \(m\)th Chebyshev polynomial of the second kind. By Lemma 3.1, we have \[\pi_{I}(x;q,a)=\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a(q)\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))\leq\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}\\ p\equiv a(q)\end{subarray}}\bigg{(}\mu_{\mathrm{ST}}(I)+c_{9}\bigg{(}\frac{1}{M}+\sum_{m=1}^{M}\bigg{(}\frac{1}{m}U_{m}(\cos\theta_{E}(p))\bigg{)}\bigg{)}\bigg{)}=\mu_{\mathrm{ST}}(I)\pi(x;q,a)+O\Bigg{(}\frac{\pi(x;q,a)}{M}+\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a(q)\end{subarray}}\sum_{m=1}^{M}\bigg{(}\frac{1}{m}U_{m}(\cos\theta_{E}(p))\bigg{)}\Bigg{)}.\] We may similarly bound \(\pi_{I}(x;q,a)\) from below to obtain that \[\pi_{I}(x;q,a)=\mu_{\mathrm{ST}}(I)\pi(x;q,a)+O\Bigg{(}\frac{\pi(x;q,a)}{M}+\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a(q)\end{subarray}}\sum_{m=1}^{M}\bigg{(}\frac{1}{m}U_{m}(\cos\theta_{E}(p))\bigg{)}\Bigg{)}. \tag{6.1}\]
Moreover, note that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a(q)\end{subarray}}\sum_{m=1}^{M}\frac{1}{m}U_{m}(\cos\theta_{E}(p))=\frac{1}{\varphi(q)}\sum_{\chi\bmod q}\overline{\chi(a)}\Bigg{(}\sum_{m=1}^{M}\frac{1}{m}\bigg{(}\sum_{p\leq x}\chi(p)U_{m}(\cos\theta_{E}(p))\bigg{)}\Bigg{)}.\] Now, by Lemma 5.2, we have that there exists a sufficiently small absolute constant \(c_{20}\) such that if \(M=\frac{c_{20}\sqrt{\log x}}{\log(N_{E}q\log x)}\), then for all \(1\leq m\leq M\), \[\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}\end{subarray}}\chi(p)U_{m}(\cos\theta_{E}(p))\log p\ll m^{2}x\Big{(}x^{-\frac{1}{c_{19}m}}+\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2m^{2}\log(N_{E}qm)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{m}}\Big{)}\Big{)}\Big{)}.\] Applying partial summation and inserting the contributions from primes dividing \(N_{E}\) (which are negligible for \(x>N_{E}\)), we obtain \[\sum_{p\leq x}\chi(p)U_{m}(\cos\theta_{E}(p))\ll\frac{m^{2}x}{\log x}\Big{(}x^{-\frac{1}{c_{19}m}}+\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2m^{2}\log(N_{E}qm)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{m}}\Big{)}\Big{)}\Big{)}.\] Substituting this into (6.1), we obtain that \[\pi_{I}(x;q,a)-\mu_{\mathrm{ST}}(I)\pi(x;q,a)-\frac{\pi(x)}{\varphi(q)M}\ll\frac{1}{\varphi(q)}\sum_{\chi\bmod q}\overline{\chi(a)}\sum_{m=1}^{M}\bigg{(}\frac{mx}{\log x}\Big{(}x^{-\frac{1}{c_{19}m}}+\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2m^{2}\log(N_{E}qm)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{m}}\Big{)}\Big{)}\Big{)}\bigg{)}\] \[\ll\frac{M^{2}x}{\log x}\Big{(}x^{-\frac{1}{c_{19}M}}+\exp\Big{(}-\frac{c_{15}\log x}{2M^{2}\log(N_{E}qM)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{M}}\Big{)}\Big{)}.\] Adjusting \(c_{20}\) to be suitably small, we find that for all \(x\geq 2\), \[\pi(x)\bigg{(}\frac{1}{\varphi(q)M}+M^{2}\Big{(}x^{-\frac{1}{c_{19}M}}+\exp\Big{(}-\frac{c_{15}\log x}{2M^{2}\log(N_{E}qM)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{M}}\Big{)}\Big{)}\bigg{)}\ll\pi(x;q,a)\frac{\log(N_{E}q\log x)}{\sqrt{\log x}}.\] Finally, recall that by (5.2), \[\pi(x;q,a)=\frac{\pi(x)}{\varphi(q)}+O\bigg{(}\frac{x}{\log x}\Big{(}\frac{x^{-c_{22}/\sqrt{q}}}{\varphi(q)}+(\log x)e^{-c_{24}\sqrt{\log x}}\Big{)}\bigg{)}.\] Collating the above bounds, we get our desired estimate. ## 7. Proof of Theorem 1.7 (2.b) Throughout this section, we will fix the notation from Sections 1, 3, and 4.2. In particular, we will consider an elliptic curve \(E\) which has complex multiplication over an imaginary quadratic field \(K\), with its theta values \(\theta_{E}(p)\) corresponding to its traces of Frobenius modulo \(p\). Proof of Theorem 1.7 (2.b).: The bound is trivially true in the case \(x<N_{E}q\), so we only prove the case \(x\geq N_{E}q\). Note that \[\pi_{I}(x;q,a)=\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))+\mathbf{1}_{\pi/2\in I}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ inert in }K\end{subarray}}1. \tag{7.1}\] We first estimate the first term on the right side of (7.1).
By Lemma 3.1, we have that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))\leq\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\bigg{(}\frac{|I|}{\pi}+c_{9}\bigg{(}\frac{1}{M}+\sum_{1\leq m\leq M}\frac{\cos(m\theta_{E}(p))}{m}\bigg{)}\bigg{)}=\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\bigg{(}\frac{|I|}{\pi}+O\Big{(}\frac{1}{M}\Big{)}\bigg{)}+O\bigg{(}\sum_{1\leq m\leq M}\frac{1}{m}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\bigg{)}.\] We may similarly bound the left hand side from below, to obtain that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))=\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\bigg{(}\frac{|I|}{\pi}+O\Big{(}\frac{1}{M}\Big{)}\bigg{)}+O\bigg{(}\sum_{1\leq m\leq M}\frac{1}{m}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\bigg{)}. \tag{7.2}\] Now, using Lemma 5.3 and partial summation, we find that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\ll\frac{1}{\log x}\Big{(}x\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qm)}\Big{)}(\log(xN_{E}qm))^{4}\Big{)}+\Big{(}\int_{1}^{x}(\log t)^{2}\exp\Big{(}\frac{-c_{21}\log t}{\sqrt{\log t}+3\log(N_{E}qm)}\Big{)}(\log(N_{E}qm))^{4}dt\Big{)} \tag{7.3}\] \[\ll\frac{1}{\log x}\Big{(}x\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qm)}\Big{)}(\log(xN_{E}qm))^{4}\Big{)}.\] Here we use that for \(x\geq N_{E}q\), \[\int_{1}^{x}(\log t)^{2}\exp\Big{(}\frac{-c_{21}\log t}{\sqrt{\log t}+3\log(N_{E}qm)}\Big{)}(\log(N_{E}qm))^{4}dt=\int_{0}^{\log x}\exp\Big{(}t-\frac{c_{21}t}{\sqrt{t}+3\log(N_{E}qm)}\Big{)}t^{2}(\log(N_{E}qm))^{4}dt\] \[\ll x(\log x)^{3}\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qm)}\Big{)}(\log(N_{E}qm))^{4}.\] We now use (7.3) and Lemma 5.4 to estimate the error in (7.2). We first investigate the contribution from the \(p\) which split in \(K\). Recall that \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))-\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\frac{|I|}{\pi}\ll\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\frac{1}{M}+\sum_{1\leq m\leq M}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\frac{1}{m}\cos(m\theta_{E}(p)).\] Hence by Lemma 5.4 (1), we have \[\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ splits in }K\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))-\frac{|I|}{\pi}\frac{\pi(x)}{2\varphi(q)}\Big{(}1+\chi_{K}(a)\delta(K,q)\Big{)}\ll\frac{x}{\log x}\Big{(}\frac{x^{-c_{22}/\sqrt{q}}}{\varphi(q)}+\exp\Big{(}\frac{-c_{23}\log x}{\sqrt{\log x}+3\log q}\Big{)}(\log xq)^{4}\Big{)}+\frac{\pi(x)}{M\varphi(q)}+\sum_{1\leq m\leq M}\frac{x}{m\log x}\Big{(}\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qm)}\Big{)}(\log(xN_{E}qm))^{4}\Big{)}.\] Note that since there are finitely many possibilities for \(K\), we may drop the \(|D_{K}|\) contribution to the error term.
Similarly, by Lemma 5.4 (2) we may estimate the contribution from the \(p\) which remain inert in \(K\) as \[\mathbf{1}_{\pi/2\in I}\sum_{\begin{subarray}{c}p\leq x\\ p\equiv a\bmod q\\ p\text{ inert in }K\end{subarray}}1=\mathbf{1}_{\pi/2\in I}\Bigg{(}\frac{\pi(x)}{2\varphi(q)}\left(1-\chi_{K}(a)\delta(K,q)\right)+O\Big{(}\pi(x)\Big{(}\frac{x^{-c_{22}/\sqrt{q}}}{\varphi(q)}+\exp\Big{(}\frac{-c_{23}\log x}{\sqrt{\log x}+3\log q}\Big{)}(\log xq)^{4}\Big{)}\Big{)}\Bigg{)}.\] Now, collating the bounds in the previous two equations gives \[\pi_{I}(x;q,a)-\frac{|I|}{\pi}\frac{\pi(x)}{2\varphi(q)}\Big{(}1+\chi_{K}(a)\delta(K,q)\Big{)}-\mathbf{1}_{\pi/2\in I}\Big{(}\frac{\pi(x)}{2\varphi(q)}\Big{(}1-\chi_{K}(a)\delta(K,q)\Big{)}\Big{)}\ll\frac{x}{\log x}\Big{(}\frac{x^{-c_{22}/\sqrt{q}}}{\varphi(q)}+\exp\Big{(}\frac{-c_{23}\log x}{\sqrt{\log x}+3\log q}\Big{)}(\log xq)^{4}\Big{)}+\frac{\pi(x)}{M\varphi(q)}+\frac{x\log M}{\log x}\Big{(}\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}qM)}\Big{)}(\log(xN_{E}qM))^{4}\Big{)}.\] Setting \(M=\exp(\sqrt{\log x})\) now gives us that there exists an absolute constant \(c_{28}\) such that \[\bigg{|}\pi_{I}(x;q,a)-\frac{\pi(x)}{2\varphi(q)}\bigg{(}\frac{|I|}{\pi}+\mathbf{1}_{\pi/2\in I}+\chi_{K}(a)\delta(K,q)\Big{(}\frac{|I|}{\pi}-\mathbf{1}_{\pi/2\in I}\Big{)}\bigg{)}\bigg{|}\ll\pi(x)\Big{(}\frac{x^{-c_{22}/\sqrt{q}}}{\varphi(q)}+(\log x)^{9/2}\exp\Big{(}\frac{-c_{28}\log x}{\sqrt{\log x}+\log(N_{E}q)}\Big{)}\Big{)},\] as desired. ## 8. Proof of Theorem 1.7 (1.b) Throughout this section, we will fix the notation from Sections 1, 3, and 4.3. In particular, we will consider two elliptic curves \(E,E^{\prime}\), the former having complex multiplication over an imaginary quadratic field \(K\), the latter without complex multiplication over any imaginary quadratic field. Moreover, let \(\xi\) denote the primitive Grossencharacter which satisfies \(L(s,E)=L(s,\xi)\), and let \(g_{m}\) denote the cusp form induced by \(\xi_{m}\) (where \(\xi_{m}\) is as defined in Section 3.3). Proof of Theorem 1.7 (1.b).: We have \[\pi_{I,I^{\prime}}(x)=\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p))+\mathbf{1}_{\frac{\pi}{2}\in I}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ inert in }K\end{subarray}}\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p)). \tag{8.1}\] We first estimate the contribution from the splitting primes to (8.1). Applying Lemma 3.2, we obtain \[\bigg{|}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits}\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p))-\frac{|I|}{\pi}\mu_{\text{ST}}(I^{\prime})\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits}\end{subarray}}1\bigg{|}\ll\frac{\pi(x)}{M}+\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits}\end{subarray}}\bigg{(}\sum_{m=1}^{M}\frac{\cos(m\theta_{E}(p))+U_{m}(\cos\theta_{E^{\prime}}(p))}{m}+\sum_{1\leq m,m^{\prime}\leq M}\frac{\cos(m\theta_{E}(p))U_{m^{\prime}}(\cos\theta_{E^{\prime}}(p))}{mm^{\prime}}\bigg{)}. \tag{8.2}\] We first bound the third term on the right side of (8.2).
By Lemma 5.5, there exists an absolute constant \(c_{26}>0\) such that if \(M=\frac{c_{26}\sqrt{\log x}}{\log(N_{E}N_{E^{\prime}}\log x)}\), then for all \(1\leq m,m^{\prime}\leq M\), \[\sum_{\begin{subarray}{c}p\leq x\\ p\nmid N_{E}N_{E^{\prime}}\end{subarray}}\cos(m\theta_{E}(p))U_{m^{\prime}}(\cos\theta_{E^{\prime}}(p))\log p\] \[\ll m^{\prime 2}x\Big{(}x^{-\frac{1}{32c_{26}M^{2}}}+\exp\Big{(}-\frac{c_{17}\log x}{4M^{2}\log(N_{E}N_{E^{\prime}}M)}\Big{)}+\exp\Big{(}-\frac{\sqrt{c_{17}\log x}}{2M}\Big{)}\Big{)}.\] Now, by partial summation, we have \[\sum_{p\leq x}\sum_{1\leq m,m^{\prime}\leq M}\frac{\cos(m\theta_{E}(p))U_{m^{\prime}}(\cos\theta_{E^{\prime}}(p))}{mm^{\prime}}\] \[\ll M^{2}\log M\Big{(}\frac{x}{\log x}\Big{)}\Big{(}x^{-\frac{1}{32c_{26}M^{2}}}+\exp\Big{(}-\frac{c_{17}\log x}{4M^{2}\log(N_{E}N_{E^{\prime}}M)}\Big{)}+\exp\Big{(}-\frac{\sqrt{c_{17}\log x}}{2M}\Big{)}\Big{)}.\] Hence, we may estimate the first and third terms on the right side of (8.2) as follows: \[\frac{\pi(x)}{M}+\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits}\end{subarray}}\sum_{1\leq m,m^{\prime}\leq M}\frac{\cos(m\theta_{E}(p))U_{m^{\prime}}(\cos\theta_{E^{\prime}}(p))}{mm^{\prime}}\] \[\ll\pi(x)\bigg{(}\frac{1}{M}+M^{2}\log M\Big{(}x^{-\frac{1}{32c_{26}M^{2}}}+\exp\Big{(}-\frac{c_{17}\log x}{4M^{2}\log(N_{E}N_{E^{\prime}}M)}\Big{)}+\exp\Big{(}-\frac{\sqrt{c_{17}\log x}}{2M}\Big{)}\Big{)}\bigg{)}\] \[\ll\pi(x)\frac{\log(N_{E}N_{E^{\prime}}\log x)}{\sqrt{\log x}}. \tag{8.3}\] We now bound the second term on the right side of (8.2). Note that \[\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits}\end{subarray}}\sum_{m=1}^{M}\frac{\cos(m\theta_{E}(p))+U_{m}(\cos\theta_{E^{\prime}}(p))}{m}=\sum_{p\leq x}\bigg{(}\frac{1+\chi_{K}(p)}{2}\bigg{)}\sum_{m=1}^{M}\frac{\cos(m\theta_{E}(p))+U_{m}(\cos\theta_{E^{\prime}}(p))}{m}.\] Now, by Lemma 5.2 and partial summation, we have that \[\sum_{p\leq x}\bigg{(}\frac{1+\chi_{K}(p)}{2}\bigg{)}U_{m}(\cos\theta_{E^{\prime}}(p))\ll\frac{m^{2}x}{\log x}\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2m^{2}\log(N_{E}m)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{m}}\Big{)}\Big{)}.\] Note that since the possibilities for \(\chi_{K}\) are finite, we may omit both the contribution from \(q\) and the possible Siegel zero. Moreover, by using (5.1) and applying partial summation, we may obtain \[\sum_{p\leq x}\bigg{(}\frac{1+\chi_{K}(p)}{2}\bigg{)}\cos(m\theta_{E}(p))\ll\frac{x}{\log x}\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}m)}\Big{)}(\log(xN_{E}m))^{4}.\] Note that the contribution from the prime powers to (5.1) can be absorbed into the other error terms (by the same reasoning as in the proof of Lemma 5.3), and hence has been neglected above.
Now, collating the above bounds and substituting in the value of \(M\), we have that \[\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits}\end{subarray}}\sum_{m=1}^{M}\frac{\cos(m\theta_{E}(p))+U_{m}(\cos\theta_{E^{\prime}}(p))}{m}\] \[\ll\sum_{m=1}^{M}\frac{mx}{\log x}\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2m^{2}\log(N_{E}m)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{m}}\Big{)}\Big{)}\] \[+\frac{x}{m\log x}\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}m)}\Big{)}(\log(xN_{E}m))^{4}\] \[\ll\frac{M^{2}x}{\log x}\Big{(}\exp\Big{(}-\frac{c_{15}\log x}{2M^{2}\log(N_{E}M)}\Big{)}+\exp\Big{(}-\frac{c_{15}\sqrt{\log x}}{2\sqrt{M}}\Big{)}\Big{)}\] \[+\frac{x\log M}{\log x}\exp\Big{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}M)}\Big{)}(\log(xN_{E}M))^{4}\] \[\ll\pi(x)\frac{(\log N_{E})^{4}\log(N_{E^{\prime}}\log x)}{\sqrt{\log x}}. \tag{8.4}\] We now estimate the contribution from inert primes to (8.1). However, note that if \(D_{K}\) is the conductor of \(\chi_{K}\), then exactly half of the residues in \((\mathbb{Z}/D_{K}\mathbb{Z})^{\times}\) correspond to inert primes. Hence, by Theorem 1.7 (2.a) we have \[\mathbf{1}_{\frac{\pi}{2}\in I}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ inert}\end{subarray}}\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p))=\mathbf{1}_{\frac{\pi}{2}\in I}\Bigg{(}\frac{1}{2}\mu_{\text{ST}}(I^{\prime})\pi(x)+O\bigg{(}\pi(x)\frac{\log(N_{E^{\prime}}\log x)}{\sqrt{\log x}}\bigg{)}\Bigg{)}. \tag{8.5}\] Combining (8.3), (8.4), and (8.5) now concludes the proof.

## 9. Proof of Theorem 1.7 (1.c)

Throughout this section, we will fix the notation from Sections 1, 3, and 4.4. In particular, we will consider two elliptic curves \(E,E^{\prime}\) having complex multiplication over two imaginary quadratic fields \(K,K^{\prime}\) (respectively). Proof of Theorem 1.7 (1.c). The proof is trivial when \(x<N_{E}N_{E^{\prime}}\); hence assume \(x\geq N_{E}N_{E^{\prime}}\). First, recall that by Lemmas 5.3 and 5.6 and applying partial summation, we have that for all positive integers \(m,m^{\prime}\), \[\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\ll\bigg{(}\frac{x}{\log x}\bigg{)}\exp\bigg{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log(N_{E}m)}\bigg{)}(\log(xN_{E}m))^{4} \tag{9.1}\] and \[\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\end{subarray}}\cos(m\theta_{E}(p))\cos(m^{\prime}\theta_{E^{\prime}}(p))\ll\bigg{(}\frac{x}{\log x}\bigg{)}\exp\bigg{(}\frac{-c_{27}\log x}{\sqrt{\log x}+3\log\mathfrak{q}}\bigg{)}(\log x\mathfrak{q})^{4}.
\tag{9.2}\] Now we consider \[\begin{split}\pi_{I,I^{\prime}}(x)&=\sum_{p\leq x}\mathbf{1}_{I}(\theta_{E}(p))\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p))\\ &=\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ splits in }K^{\prime}\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p))+\mathbf{1}_{\frac{\pi}{2}\in I^{\prime}}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ inert in }K^{\prime}\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))\\ &+\mathbf{1}_{\frac{\pi}{2}\in I}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ inert in }K\\ p\text{ splits in }K^{\prime}\end{subarray}}\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p))+\mathbf{1}_{\frac{\pi}{2}\in I}\mathbf{1}_{\frac{\pi}{2}\in I^{\prime}}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ inert in }K\\ p\text{ inert in }K^{\prime}\end{subarray}}1+O(N_{E}N_{E^{\prime}}).\end{split} \tag{9.3}\] Here \(O(N_{E}N_{E^{\prime}})\) comes from the contribution of the omitted primes \(p\) which divide \(N_{E}N_{E^{\prime}}\). Now, (9.3) splits \(\pi_{I,I^{\prime}}(x)\) into 4 terms. We now simplify the first 3 terms.

**Term 1.** By Lemma 3.2, we have \[\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ splits in }K^{\prime}\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))\mathbf{1}_{I^{\prime}}(\theta_{E^{\prime}}(p)) \tag{9.4}\] is equal to \[\begin{split}\frac{|I||I^{\prime}|}{\pi^{2}}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ splits in }K^{\prime}\end{subarray}}&1+O\bigg{(}\frac{\pi(x)}{M}+\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ splits in }K^{\prime}\end{subarray}}\bigg{(}\sum_{m=1}^{M}\frac{\cos(m\theta_{E}(p))+\cos(m\theta_{E^{\prime}}(p))}{m}\\ &+\sum_{1\leq m,m^{\prime}\leq M}\frac{\cos(m\theta_{E}(p))\cos(m^{\prime}\theta_{E^{\prime}}(p))}{mm^{\prime}}\bigg{)}\bigg{)}.\end{split}\] By (9.1) and (9.2), the error term above is bounded by \[\begin{split}&O\bigg{(}\frac{x}{\log x}\bigg{(}\frac{1}{M}+\log M\exp\bigg{(}\frac{-c_{21}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}M}\bigg{)}(\log(xN_{E}N_{E^{\prime}}M))^{4}\\ &+(\log M)^{2}\exp\bigg{(}\frac{-c_{27}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}M}\bigg{)}(\log(xN_{E}N_{E^{\prime}}M))^{4}\bigg{)}\bigg{)}.\end{split} \tag{9.5}\] Hence, there exists an absolute constant \(c_{29}\) such that the term in (9.5) is bounded by \[O\bigg{(}\frac{x}{\log x}\bigg{(}\frac{1}{M}+(\log M)^{2}\exp\bigg{(}\frac{-c_{29}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}M}\bigg{)}(\log(xN_{E}N_{E^{\prime}}M))^{4}\bigg{)}\bigg{)}.\] Taking \(M=\exp(\sqrt{\log x})\), we obtain that there exists an absolute constant \(c_{30}\) such that (9.4) is equal to \[\frac{|I||I^{\prime}|}{\pi^{2}}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ splits in }K^{\prime}\end{subarray}}1\bigg{)}+O\bigg{(}\frac{x}{\log x}\exp\bigg{(}\frac{-c_{30}\log x}{\sqrt{\log x}+3\log N_{E}N_{E^{\prime}}}\bigg{)}\bigg{)}.\]

**Term 2.** When \(\frac{\pi}{2}\) is not in \(I^{\prime}\) or \(K\sim K^{\prime}\), the second term is automatically 0. Hence, assume \(\frac{\pi}{2}\in I^{\prime}\) and \(K\not\sim K^{\prime}\).
In this case, \[\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ inert in }K^{\prime}\end{subarray}}\mathbf{1}_{I}(\theta_{E}(p))=\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\end{subarray}}\frac{1-\chi_{K^{\prime}}(p)}{2}\mathbf{1}_{I}(\theta_{E}(p)).\] By the proof of Theorem 1.7 (2.b), we have that there exists an absolute constant \(c_{31}\) for which the above equals \[\frac{|I|}{2\pi}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\end{subarray}}1\bigg{)}+O\bigg{(}\frac{x}{\log x}\exp\bigg{(}\frac{-c_{31}\log x}{\sqrt{\log x}+\log N_{E}}\bigg{)}\bigg{)}.\] Here the Siegel zero contribution is omitted as there are only finitely many possible moduli for \(\chi_{K^{\prime}}\). Now, term 3 can be dealt with in the same way as term 2. Term 4 can be estimated using (5.2). Collating the above results, we have that when \(K\not\sim K^{\prime}\), there exists an absolute constant \(c_{32}\) such that \[\pi_{I,I^{\prime}}(x)=\frac{|I||I^{\prime}|}{\pi^{2}}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\\ p\text{ splits in }K^{\prime}\end{subarray}}1\bigg{)}+\frac{|I|\mathbf{1}_{\frac{\pi}{2}\in I^{\prime}}}{2\pi}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\end{subarray}}1\bigg{)}\] \[+\frac{|I^{\prime}|\mathbf{1}_{\frac{\pi}{2}\in I}}{2\pi}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K^{\prime}\end{subarray}}1\bigg{)}+\mathbf{1}_{\frac{\pi}{2}\in I}\mathbf{1}_{\frac{\pi}{2}\in I^{\prime}}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ inert in }K\\ p\text{ inert in }K^{\prime}\end{subarray}}1\bigg{)}\] \[+O\bigg{(}x(\log x)^{4}\exp\bigg{(}\frac{-c_{30}\log x}{\sqrt{\log x}+\log N_{E}N_{E^{\prime}}}\bigg{)}\bigg{)}\] \[=\mu_{\text{ST}}^{1}(I)\mu_{\text{ST}}^{2}(I^{\prime})\pi(x)+O\bigg{(}x(\log x)^{4}\exp\bigg{(}\frac{-c_{32}\log x}{\sqrt{\log x}+\log N_{E}N_{E^{\prime}}}\bigg{)}\bigg{)}.\] When \(K\sim K^{\prime}\), there exists an absolute constant \(c_{33}\) such that \[\pi_{I,I^{\prime}}(x)=\frac{|I||I^{\prime}|}{\pi^{2}}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ splits in }K\sim K^{\prime}\end{subarray}}1\bigg{)}+\mathbf{1}_{\frac{\pi}{2}\in I}\mathbf{1}_{\frac{\pi}{2}\in I^{\prime}}\bigg{(}\sum_{\begin{subarray}{c}p\leq x\\ p\text{ inert in }K\sim K^{\prime}\end{subarray}}1\bigg{)}\] \[+O\bigg{(}x(\log x)^{4}\exp\bigg{(}\frac{-c_{30}\log x}{\sqrt{\log x}+\log N_{E}N_{E^{\prime}}}\bigg{)}\bigg{)}\] \[=\frac{1}{2}\bigg{(}\frac{|I||I^{\prime}|}{\pi^{2}}+\mathbf{1}_{\frac{\pi}{2}\in I}\mathbf{1}_{\frac{\pi}{2}\in I^{\prime}}\bigg{)}\pi(x)+O\bigg{(}x(\log x)^{4}\exp\bigg{(}\frac{-c_{33}\log x}{\sqrt{\log x}+\log N_{E}N_{E^{\prime}}}\bigg{)}\bigg{)}.\] Hence both the desired bounds have been proved.

## Appendix A The Sato-Tate distributions of Double Quadric and K3 Surfaces

Figure 5. Graph of \(a_{X_{\lambda}}^{*}\), \(\lambda=8,p\leq 5\times 10^{5}\). We overlay this with the graph of \(\frac{1}{2\pi\sqrt{3+2x-x^{2}}}\).

Figure 7. Graph of \(a^{*}_{\mathcal{Z}}\), where the two elliptic curves are \(x_{1}^{3}+7x_{1}z_{1}^{2}+13z_{1}^{3}\) and \(x_{1}^{3}-99x_{1}z_{1}^{2}+378z_{1}^{3}\), and \(p\leq 5\times 10^{5}\). We overlay this with the graph of \(C_{2}\).
2307.16664
Generative models for wearables data
Data scarcity is a common obstacle in medical research due to the high costs associated with data collection and the complexity of gaining access to and utilizing data. Synthesizing health data may provide an efficient and cost-effective solution to this shortage, enabling researchers to explore distributions and populations that are not represented in existing observations or difficult to access due to privacy considerations. To that end, we have developed a multi-task self-attention model that produces realistic wearable activity data. We examine the characteristics of the generated data and quantify its similarity to genuine samples with both quantitative and qualitative approaches.
Arinbjörn Kolbeinsson, Luca Foschini
2023-07-31T13:44:29Z
http://arxiv.org/abs/2307.16664v1
# Generative models for wearables data

###### Abstract

Data scarcity is a common obstacle in medical research due to the high costs associated with data collection and the complexity of gaining access to and utilizing data. Synthesizing health data may provide an efficient and cost-effective solution to this shortage, enabling researchers to explore distributions and populations that are not represented in existing observations or difficult to access due to privacy considerations. To that end, we have developed a multi-task self-attention model that produces realistic wearable activity data. We examine the characteristics of the generated data and quantify its similarity to genuine samples with both quantitative and qualitative approaches.

## 1 Introduction

High-quality health data is a vital yet scarce resource in modern healthcare. Raw data collection is expensive and time-consuming, labelling requires expert knowledge, and storage poses privacy concerns. As a result, most health datasets fail to capture the true distribution of the underlying population, particularly in the tails, which contain rare conditions and underrepresented attributes (Ganapathi et al., 2022). Extending these data by generating unseen yet realistic instances can augment the downstream task to allow for novel analyses and hypothesis generation. For downstream tasks to be representative, it is crucial that the generated samples remain realistic and reflective of the data intended for study. However, maintaining realism is a difficult task and must be finely balanced with the requirement to generate new samples instead of simply recreating those seen in the training set. In other fields where data generation is used, the same principle applies. In state-of-the-art image generation (Ramesh et al., 2022; Rombach et al., 2022) this trade-off has been finely balanced: image quality has reached almost impeccable realism, yet the models are able to create almost completely novel outputs. In code generation and completion, the value of quality (code that compiles and suits the context) is higher than the value of novelty. This has resulted in issues with models perfectly reconstructing samples from the training set. Text generation (Brown et al., 2020), a sequence generation task, is more similar to wearable data generation. These systems typically make use of autoregressive methods trained to predict the next word in the training set. The model can then be run on new input data and the next-word prediction used for generation instead. Data generation for healthcare is an emerging field. Due to the potentially high-risk nature of its applications, data realism is even more of a concern than in other domains. Additionally, privacy concerns have historically limited access to large datasets to enable training of realistic generative models. Methods for time-series generation exist in the literature. Kang et al. (2020) presented an approach using mixture autoregressive (MAR) models which can be configured to give the time series certain characteristics. The model was released as a Shiny app where the properties can be configured. One drawback of this approach is that the specific characteristics, such as seasonal strength and stability, need to be quantified and cannot be inferred from the context, such as a medical condition. For healthcare data, Norgaard et al. (2018) presented a Generative Adversarial Network (GAN) for accelerometer and exercise data.
Dash et al. (2020) also used GANs for generation of hospital time-series based on the MIMIC-III dataset. More recently, outside healthcare applications, Srinivasan and Knottenbelt (2022) and Li et al. (2022) have proposed a general architecture based on transformers that is trained using the GAN framework. In this work, we focus on personal health data, specifically multi-modal resting heart rate, sleep and step data, generated by consumer wearable devices. Applications of such data in the health domain are still emerging, detection of flu and COVID-19 being one example (Shapiro et al., 2021; Merrill and Althoff, 2022). Our approach features a multi-task self-attention model for wearable activity data synthesis. In summary, our contributions are:

* A synthetic data generator based on self-attention for wearables data
* Demonstration that the model can predict future activity through self-supervised learning of over 2 million activity days
* Evaluation of the generative model with qualitative and quantitative comparisons to genuine real-world data

## 2 Data for training

**Dataset.** All models were trained and evaluated on the same set of activity data acquired using wearable FitBit trackers, collected as part of the DiSCover (Digital Signals in Chronic Pain) Project, a 1-year longitudinal study (ClinicalTrials.gov identifier: NCT03421223) (Lee et al., 2021). The dataset contained day-level data from 10 000 individuals who gave permission for use of their data for the purpose of health research. Data were collected over one year, resulting in a total of 2 737 500 person-days of activity data. The data contain three signals: resting heart rate (beats per minute), total sleep (minutes), and total steps (step count). The mean age of the participants was 37.3 (SD=10.5, range: 18 to 85), with 72.15% of participants female and primarily Non-Hispanic White (80.5%).

**Pre-processing.** Day-level aggregates were calculated from the minute-level raw data by summing all minutes spent sleeping per day, summing all steps per day and taking the mean resting heart rate per day. Only days with \(>80\%\) coverage were included in the analysis. Missing data were imputed with the mean feature values per individual. Each feature was then scaled to \([0,1]\). We then divide the year-long sequences into shorter sequences with a length of 21 days for use as inputs. Although this is much shorter than sequences used with most transformers, we keep this short for the following reason: every source sequence is of length 365, corresponding to each day in the year for an individual. If we used a larger window of, e.g., 100, we could only create three non-overlapping sequences per individual. The shorter sequence length gives us a more diverse set of samples while still capturing a representative time period on the scale of human activity (three weeks). Although the labels are continuous values, we convert them to a one-hot encoding of 100 evenly-spaced bins. We do this to model the outputs as a softmax distribution. As described by Van Oord et al. (2016), this removes any assumptions about the shape of the distribution and is therefore highly compatible with neural networks, and it has also been used for audio generation in Wavenet (Oord et al., 2016).
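The windowing and binning steps above are straightforward to express in code. The following is a minimal sketch under the stated pre-processing choices (21-day windows, min-max scaling, 100 evenly-spaced bins); the function and variable names are illustrative and not taken from the study's codebase.

```python
import numpy as np

def make_windows(days: np.ndarray, window: int = 21) -> np.ndarray:
    """Split a (365, 3) array of day-level features (resting HR, sleep,
    steps) into non-overlapping windows of `window` days."""
    n = (len(days) // window) * window
    return days[:n].reshape(-1, window, days.shape[1])

def scale_and_bin(x: np.ndarray, n_bins: int = 100):
    """Min-max scale each feature to [0, 1], then quantize into
    `n_bins` evenly spaced bins (class indices for the softmax heads)."""
    lo = x.min(axis=(0, 1), keepdims=True)
    hi = x.max(axis=(0, 1), keepdims=True)
    scaled = (x - lo) / (hi - lo + 1e-8)
    bins = np.minimum((scaled * n_bins).astype(int), n_bins - 1)
    return scaled, bins
```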
## 3 Model and learning

**Embeddings.** The three input channels (resting heart rate, sleep minutes and step count) are embedded in a 64-dimensional space through learned embedding weights. As the sequences are temporally ordered, it is important to preserve their positional relationships. To do that, they are positionally encoded with learned positional weights that are added to the embedded inputs.

**Transformer.** The embeddings are passed into a transformer (Vaswani et al., 2017) that consists only of decoder layers. Self-attention is calculated as \(\text{attention}(Q,K,V)=\text{softmax}(QK^{T}/\sqrt{d_{k}})V\), where \(Q\), \(K\) and \(V\) are the query, key and value matrices, respectively, and \(d_{k}\) is the dimensionality of the keys. Decoder-only transformers have been shown to perform well in autoregressive tasks, like next-word prediction (Brown et al., 2020; Rae et al., 2021), and in joint learning of multiple tasks (Reed et al., 2022). Each transformer block begins with layer normalization to stabilize gradient updates and training. As this is an autoregressive task, we ensure future information is not used by causal masking, i.e. confining each position to previous positions or the current position. This is implemented by masking the upper-right triangle of the attention weight matrix. Finally, each block is completed by a feed-forward network of two dense layers of dimensionality 256 with GeLU activation and dropout probability of 0.1 during training. We stack three of these blocks, each with four attention heads, to form the core of the model. It is followed by a feed-forward network to an output of three 100-unit vectors, corresponding to the three tasks and 100 bins. A softmax activation is applied to each one to obtain the logits used for loss calculation. This results in a causally-masked multi-head multi-task self-attention model that can be trained to model and forecast activity time series.

**Loss.** As described in detail earlier, we use a softmax distribution of outputs. Then we can minimize the cross-entropy loss between the predicted and true values. We learn the three outputs (resting heart rate, daily steps and sleep minutes) jointly with separate feed-forward network heads. The individual losses are added through shake-shake regularization (Gastaldi, 2017), a stochastic affine combination. The combined loss which we minimize is then defined as \[\mathcal{L}_{combined}=\sum_{i=1}^{N}\alpha_{i}\mathcal{L}_{i}\] where \(\mathbf{\alpha}\) is a random vector of unit length and \(\mathcal{L}_{i}\) are individual losses. In our case, \(N=3\).

**Training.** We minimize the loss using Adam (Kingma and Ba, 2014) and an initial learning rate of \(10^{-3}\), reducing it by a factor of 10 every 5 epochs, with a total of 15 training epochs. The model and training were implemented in PyTorch (Paszke et al., 2019), along with NumPy (Harris et al., 2020) and SciPy (Virtanen et al., 2020), and visualizations in Matplotlib (Hunter, 2007). We train four different models to compare the effect of an increased number of training points on the quality of generated samples. The largest model is trained on \(2\,029\,230\) days, which represent 100% of the available training data. We then train three smaller models with 10%, 1% and 0.5% of the available training data, respectively.

**Generating new samples.** With the autoregressive model already trained to predict next-day values, synthesizing new sequences is straightforward. We start with a prompt sequence fragment, taken from a held-out set, and input it into the trained model. Then, we recursively remove the first day of the sequence and append the next-day predictions to the end. Scaling the temperature of the logits gave more consistent results for steps and sleep, for which we used a temperature of 2, while resting heart rate was kept at a temperature of 1. The three softmax distributions of the output were sampled independently to obtain the next-day value.
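The sampling loop above can be sketched as follows. This is a minimal illustration, assuming a trained `model` that maps a \((1, 21, 3)\) window to a tuple of three \((1, 100)\) next-day logit vectors ordered as (resting heart rate, sleep, steps); the names and exact output shapes are assumptions, not the released implementation.

```python
import torch

@torch.no_grad()
def generate(model, prompt, n_days=120, temps=(1.0, 2.0, 2.0), n_bins=100):
    """Autoregressively extend a (21, 3) prompt window by `n_days`,
    sampling each channel independently with its own temperature."""
    window = prompt.clone()
    generated = []
    for _ in range(n_days):
        logits = model(window.unsqueeze(0))      # tuple of three (1, n_bins)
        next_day = torch.stack([
            torch.multinomial(torch.softmax(l[0] / t, dim=-1), 1).float() / n_bins
            for l, t in zip(logits, temps)
        ]).squeeze(-1)                           # (3,) values mapped back to [0, 1]
        generated.append(next_day)
        # slide the window: drop the first day, append the prediction
        window = torch.cat([window[1:], next_day.unsqueeze(0)], dim=0)
    return torch.stack(generated)                # (n_days, 3)
```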
## 4 Results and evaluation

We evaluate the model on four criteria: 1) the prediction accuracy of the model; 2) qualitative visualization analysis of the generated sequences; 3) quantitative evaluation of distance measures and similarity scores between real and generated sequences; and 4) comparison of real and generated sequences on a lower-dimensional manifold.

### Activity modelling

We begin by comparing the accuracy of the next-day predictions with the ground-truth real-world data. These results are highlighted in Table 1. Increasing the number of training samples has a strong effect, particularly on resting heart rate prediction, where the mean absolute error (MAE) is reduced to 1.21 BPM in the case of 2 million training samples. Given only 0.5% of the data, the accuracy is far lower, and increasing the amount of data consistently results in a marked increase in accuracy. Increased data has a different effect on steps and sleep minutes: going from \(\sim 20\)k days to \(\sim 200\)k days has a far greater effect than the next order of magnitude, which shows no marked difference.

### Visual comparisons

Next, we perform a qualitative visual comparison of the generated and real data. In Figure 1 we highlight and compare examples of real and generated activity data across three different channels: resting heart rate, steps taken and minutes spent sleeping. We plot this over three months (120 days) to inspect both short-term and long-term trends. The generated sequences (two rightmost columns of Figure 1) are visually similar to the real examples (two left columns). The model clearly captures the individual properties of the three different modalities. Resting heart rate remains relatively stable without spikes or clear trends. Recorded and generated steps are highly variable, with differences over orders of magnitude between consecutive days and spikes representing very-high-step days.

### Distance and similarity measures

No standard collection of methods exists for scoring differences between time series. However, we make use of two common metrics: cosine similarity and dynamic time warping (DTW) distance. For cosine similarity, we follow the approach of Norgaard et al. (2018) and compare the mean pairwise cosine similarity statistics between real sequences and generated ones, where the cosine similarity statistic between two sequences \(X\) and \(Y\) is defined as their normalized dot product \(K(X,Y)=\frac{(X\cdot Y)}{\|X\|\|Y\|}\). The mean pairwise cosine similarity score between real sequences in the dataset is \(0.873\), providing an optimal value for this metric on the dataset. This captures the intra-dataset variation of the real data distribution. In further analysis, we calculate the mean pairwise DTW distance (Bundy and Wallen, 1984) using the DTAIDistance library (Wannesm et al., 2022). The mean pairwise DTW distance in the real dataset is \(27\,897\), which provides the optimal measure for comparing the distances to the generated data. Figure 2 illustrates the results of this comparison. Increasing the amount of training data has a significant impact on the similarity between generated and real sequences. When only \(0.5\%\) of the total available data is used for training (\(10\,146\) days), the mean pairwise cosine similarity is \(0.666\). When \(1\%\) of the data is used, the score increases to \(0.726\), and when \(10\%\) is used, it reaches \(0.773\). The full dataset of over \(2\) million days yielded the best-trained model with a score of \(0.810\), which is close to the intra-similarity of the real data (\(0.873\)). In Figure 3 we see that increasing the size of the training data results in a model that produces sequences much closer to the real data. The increase appears nearly asymptotic to the intra-distance of real data, which is \(27\,897\), compared to \(29\,028\) for data generated from the model trained on the full dataset. The agreement between the cosine similarity and DTW distance measures provides further evidence that the model is able to capture the inherent properties of the data and generate similar sequences.
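Both measures above are simple to reproduce; a minimal sketch follows, with sequences stored as rows of NumPy arrays. The DTW call uses the public `dtw.distance` function of the DTAIDistance library; the helper names are illustrative.

```python
import numpy as np
from itertools import combinations
from dtaidistance import dtw

def mean_pairwise_cosine(seqs: np.ndarray) -> float:
    """Mean normalized dot product K(X, Y) over all pairs of sequences."""
    flat = seqs.reshape(len(seqs), -1)
    sims = [float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
            for a, b in combinations(flat, 2)]
    return float(np.mean(sims))

def mean_pairwise_dtw(seqs: np.ndarray) -> float:
    """Mean dynamic time warping distance over all pairs of 1-D series."""
    dists = [dtw.distance(a, b) for a, b in combinations(seqs, 2)]
    return float(np.mean(dists))
```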
\begin{table} \begin{tabular}{r r r r} \hline \hline **Training size** & **MAE Resting HR** & **MAE Sleep** & **MAE Steps** \\ **(Days)** & **(BPM)** & **(Minutes)** & **(Count)** \\ \hline \(10\,146\) (\(0.5\%\)) & \(31.9\) & \(135.9\) & \(4922\) \\ \(20\,292\) (\(1\%\)) & \(18.6\) & \(137.2\) & \(4444\) \\ \(202\,923\) (\(10\%\)) & \(3.31\) & \(58.6\) & \(2627\) \\ \(2\,029\,230\) (\(100\%\)) & \(1.21\) & \(56.2\) & \(2830\) \\ \hline \hline \end{tabular} \end{table}
Table 1: Comparison of mean absolute errors (MAE) of next-day resting heart rate (HR), sleep and steps with respect to the size of the training set. There is a marked difference in terms of accuracy as the number of training samples increases.

Figure 1: Comparison of real and generated wearable activity data. Each subplot represents a single individual. The two left columns show real data sequences collected from a wearable FitBit device. The two right columns show synthetic sequences generated by our model. Resting heart rate is shown in the top three rows (green), steps taken per day in the three center rows (black) and total minutes spent sleeping per day in the bottom three rows (purple).

Figure 2: Mean pairwise cosine similarity measure of models trained with different training set sizes, compared with real data. Models trained with more data have more similarity with genuine data. The model trained with over 2 million days achieves a score of over \(0.810\) with the intra-similarity of real data being \(0.873\).

Figure 3: Mean pairwise dynamic time warping distance of models trained with different training set sizes, compared with real data. The mean distance from the model trained with over 2 million days to the real data is \(29\,028\) with the intra-distance of real data being \(27\,897\).

### Manifold comparisons with UMAP

In our final set of comparisons, we compare the real and generated distributions as transformed onto a learned low-dimensional manifold using UMAP (McInnes et al., 2018; Becht et al., 2019). The UMAP manifold is trained on a set of real sequences from the test set using a minimum distance of 0.1 and the cosine distance measure. A set of generated sequences from the model trained on the full dataset was then transformed onto the learned manifold. Figure 4 visualizes this comparison. The generated samples, represented in orange, overlap very well with the real samples, represented in blue. Not only does the distribution of generated data fall within the distribution of real data, but the generated data also covers almost the entire surface which the real data spans. However, the densities of the two distributions appear different. One reason for this is that the generator is sampling from the correct distribution but with a biased sampling regime. Further experiments which investigate the relationship between accuracy and concentration in the distributions could help illuminate this artifact.

Figure 4: A UMAP manifold with real data (blue) and generated data (orange). The representation is learned with real data from the test set not seen during training. Then, generated samples are transformed and plotted simultaneously. This highlights the general landscape of the two distributions and demonstrates visually that the generated data overlaps considerably with the real data distribution.
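The manifold comparison above can be reproduced with the public umap-learn API. In this minimal sketch, `real_seqs` and `gen_seqs` stand in for arrays of flattened activity windows; the variable names are illustrative.

```python
import numpy as np
import umap
import matplotlib.pyplot as plt

# Illustrative stand-ins for flattened (n, 21 * 3) activity windows:
real_seqs = np.random.rand(500, 63)
gen_seqs = np.random.rand(500, 63)

reducer = umap.UMAP(min_dist=0.1, metric="cosine", random_state=0)
real_2d = reducer.fit_transform(real_seqs)   # learn the manifold on real data only
gen_2d = reducer.transform(gen_seqs)         # project generated data onto it

plt.scatter(real_2d[:, 0], real_2d[:, 1], s=4, label="real")
plt.scatter(gen_2d[:, 0], gen_2d[:, 1], s=4, label="generated")
plt.legend()
plt.show()
```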
## 5 Discussion

We have presented a new class of activity time-series generators capable of synthesizing realistic resting heart rate, step and sleep records at the population level. It sets out the necessary groundwork for conditional generators that can be controlled to output sequences with highly specific activity data properties. While synthetic activity data is an emerging field with few existing works to compare to, we note that transformers have previously been used for learning from wearables data, including by Merrill and Althoff (2022), who use minute-level data to perform influenza and COVID-19 prediction, while Kolbeinsson et al. (2021) compare the performance of transformer models using different pre-training tasks. Through our experiments we have shown that the generated data is highly similar to genuine data. The model trained on the complete set of available data (2 million days) was able to predict next-day resting heart rate with an MAE of less than 2 BPM, which is impressive. Next-day sleep was predicted to within one hour of actual sleep time and steps to within 3 000. Furthermore, the DTW distance measures, in addition to the mean pairwise cosine similarity, demonstrated quantitatively that the generated sequences were similar to real data. Synthetic wearable data has a number of applications ranging from study simulations to data visualization and quality control. Personal health research requires significant amounts of data and careful study design (Huang et al., 2007; Orloff et al., 2009). Testing different studies and possible data collection outcomes in a simulated environment can guide study designers to set up experiments with a higher chance of success. Similarly, synthesized data can aid in the development and testing of new analysis tools. Generated data can be modulated to allow testing of edge cases and rare conditions not observed in the original real-world cohorts, without generating any privacy concerns. Generated data could also be used in privacy-sensitive research. In many environments the risk of data incidents, such as leaks or hacks during collaborations with a large number of researchers across institutions, is too great for real data testing to be viable. In such cases, data like those presented here can be generated on-the-fly. However, recent reports have highlighted the risks involving authorship (McCormack et al., 2019; Dehouche, 2021), and further research into these matters is required before such systems are deployed in practice. One limitation of the presented approach is that generated sequences depend only on the previous 21 days. Therefore, there is no direct method of interacting with the generator to request specific properties of the generated sequence, such as better representing a certain fitness level.
As a future research direction, we note that a slight modification to the architecture and learning process can make the model conditional, in a process similar to text-conditional image generation (Ramesh et al., 2022). It would then be easy to request a sequence with properties that the model has learned during the training process, such as age, physical fitness, and any relevant conditions such as sleep irregularities or arrhythmia. Learning these requires them to be present in the training set. While simpler generation approaches (e.g., sampling from a statistically matched distribution) would likely give similar _unconditional_ results to those presented here, we see the proposed architecture as groundwork for interactive generators made conditional on specific characteristics of interest. A researcher designing a study on insomnia should be able to query that ideal interactive generator for \(1\,000\) participants aged 20-66 with BMI 22-30, half of whom sleep less than 5 hours per night. Another limitation of our work is the relatively small training dataset with respect to the general model class, transformers, which typically excel with enormous amounts of data and more parameters. More training data would allow us to scale up the model size even further, with evidence from other domains suggesting that scale and parameter count are a powerful tool for learning richer representations (Brown et al., 2020). Future work should focus on giving provable privacy guarantees on the generated sequences, preventing individual information from the training data from being leaked in the generated sequences (McCormack et al., 2019; Dehouche, 2021). Additionally, biases from the training data and other sources (Bender et al., 2021) highlight the need for standard reporting, like model cards (Mitchell et al., 2019), for investigating and preventing risks of applied systems. Although the data generation is sufficiently fast for offline experiments with thousands of samples, many applications would benefit from increased efficiency and parallelisation of the generation function. Finally, we believe that further research should go towards devising accepted benchmarks for generators such as the one presented. Unlike for images, text and code, the quality of the output cannot be easily evaluated. In addition to averages and standard deviations as proposed, a more complete suite of statistical tests should be developed to evaluate good matching, e.g., including higher-order moments, tail tests, and matching in transformed spaces, such as Fourier or Haar spaces.

## 6 Conclusion

This work furthers the exploration of methods for generating synthetic personal health data. It provides researchers with the ability to craft datasets according to their needs while reducing privacy concerns, making study design more efficient and enabling the development of analysis at a faster rate. Moreover, it helps to identify issues before they affect real-world deployment. Our work adds to the existing literature on synthetic data across multiple fields and underscores the potential of generating realistic person-generated health data to enhance and improve health research.
2309.14050
NNgTL: Neural Network Guided Optimal Temporal Logic Task Planning for Mobile Robots
In this work, we investigate task planning for mobile robots under linear temporal logic (LTL) specifications. This problem is particularly challenging when robots navigate in continuous workspaces due to the high computational complexity involved. Sampling-based methods have emerged as a promising avenue for addressing this challenge by incrementally constructing random trees, thereby sidestepping the need to explicitly explore the entire state-space. However, the performance of this sampling-based approach hinges crucially on the chosen sampling strategy, and a well-informed heuristic can notably enhance sample efficiency. In this work, we propose a novel neural-network guided (NN-guided) sampling strategy tailored for LTL planning. Specifically, we employ a multi-modal neural network capable of extracting features concurrently from both the workspace and the B\"{u}chi automaton. This neural network generates predictions that serve as guidance for random tree construction, directing the sampling process toward more optimal directions. Through numerical experiments, we compare our approach with existing methods and demonstrate its superior efficiency, requiring less than 15% of the time of the existing methods to find a feasible solution.
Ruijia Liu, Shaoyuan Li, Xiang Yin
2023-09-25T11:24:40Z
http://arxiv.org/abs/2309.14050v2
# NNgTL: Neural Network Guided Optimal Temporal Logic Task Planning for Mobile Robots

###### Abstract

In this work, we investigate task planning for mobile robots under linear temporal logic (LTL) specifications. This problem is particularly challenging when robots navigate in continuous workspaces due to the high computational complexity involved. Sampling-based methods have emerged as a promising avenue for addressing this challenge by incrementally constructing random trees, thereby sidestepping the need to explicitly explore the entire state-space. However, the performance of this sampling-based approach hinges crucially on the chosen sampling strategy, and a well-informed heuristic can notably enhance sample efficiency. In this work, we propose a novel _neural-network guided_ (NN-guided) sampling strategy tailored for LTL planning. Specifically, we employ a multi-modal neural network capable of extracting features concurrently from both the workspace and the Buchi automaton. This neural network generates predictions that serve as guidance for random tree construction, directing the sampling process toward more optimal directions. Through numerical experiments, we compare our approach with existing methods and demonstrate its superior efficiency, requiring less than 15% of the time of the existing methods to find a feasible solution.

## I Introduction

With the ongoing development and widespread deployment of mobile robots, there has been an increasing focus on path planning for high-level tasks. Among the formal languages used for specifying such complex tasks, Linear Temporal Logic (LTL) stands out as a widely adopted choice. LTL provides a structured means for users to articulate complex requirements, such as navigating a robot to a target region without collision with obstacles or ensuring specific locations are visited infinitely often [1, 2, 3, 4]. In recent years, the field of robot path planning for LTL tasks has witnessed extensive investigation, spurred by its broad applications. These applications encompass a wide range of scenarios, such as environmental surveillance [5], search and rescue missions [6], and intelligent warehousing systems [7, 8]. In the context of LTL path planning, one of the most foundational methods is the automata-theoretic approach based on finite abstractions [9, 10, 11, 5, 12]. This approach involves the creation of a discrete abstraction of the workspace, which effectively captures the robot's mobility constraints. Subsequently, the discrete abstraction is synchronized with an automaton representation of the LTL task. This synchronization enables the formulation of the planning problem as a graph-search problem within the product space. However, this graph-search approach, although conceptually powerful, faces a significant computational challenge. In particular, as the system's dimensionality increases, the state-space of the finite abstraction grows exponentially, rendering the graph-search problem infeasible. Sampling-based methods, such as rapidly-exploring random trees (RRT), have emerged as a promising solution to tackle the computational challenges associated with path planning in continuous state-spaces [13]. More recently, sampling-based algorithms have been introduced to enhance the computational efficiency of solving LTL planning problems [14, 15, 16, 17, 18]. For example, in [16], a sampling-based algorithm, inspired by RRT*, is proposed to find optimal paths that satisfy LTL tasks with a prefix-suffix structure.
This algorithm circumvents the necessity of explicitly exploring the entire state-space by incrementally constructing random trees over the product state-space. Building upon this advancement, [17] further introduces a biased sampling strategy that leverages automata information to significantly improve sample efficiency. Moreover, in [18], the authors extend the methods in [17] to continuous workspaces without discretization. Specifically, they introduce an abstraction-free planning algorithm that directly samples within a continuous space and integrates these samples with automata states. In sampling-based LTL planning, one of the key factors is how each new state is sampled. While biased sampling strategies offer an enhanced approach compared to uniform sampling, they primarily rely on distance information within the Buchi automaton. This approach neglects valuable insights from the workspace, such as the physical feasibility of task progression. However, integrating continuous workspace information with the Buchi automaton is very challenging due to the inherently heterogeneous structures of these components. Recently, there has been notable progress in leveraging the power of neural networks for expediting solutions to intricate problems. For instance, within the domain of path planning, neural networks have been employed to predict the probability distribution of optimal paths [19, 20, 21, 22]. Moreover, neural networks have been applied, in an end-to-end fashion, for generating solutions to temporal logic tasks [23, 24, 25, 26]. In this paper, we introduce a novel neural-network guided (NN-guided) sampling strategy designed specifically for LTL planning. Our approach builds upon the basic architecture of the sampling-based algorithm for continuous workspaces, as established in [18]. However, instead of relying solely on automata information, we incorporate a multi-modal neural network capable of jointly extracting features from both the workspace and the Buchi automaton. This neural network offers predictive capabilities to steer the sampling process in directions that are more likely to yield optimal solutions, ones that are not only task-progressive but also feasible within the workspace. To demonstrate the efficiency of our approach, we compare the proposed NN-guided sampling strategy with existing sampling strategies on a set of randomly generated problem instances. The statistical results underscore the effectiveness of our new strategy, as it requires less than 15% of the time needed by the existing strategies to find a feasible solution.

## II Problem Formulation

### _System Model_

We consider a scenario where a single robot navigates in a two-dimensional continuous workspace represented by a compact subset \(\mathcal{W}\subseteq\mathbb{R}^{2}\). The workspace is assumed to be partitioned as \(\mathcal{W}=\mathcal{W}_{\text{free}}\cup\mathcal{O}\), where \(\mathcal{O}\) is an open subset representing obstacle regions and \(\mathcal{W}_{\text{free}}\) is the free region in which the robot can move. The free workspace is further partitioned into \(m\) labeled regions of interest and a non-labeled region: \(\mathcal{W}_{\text{free}}=\mathcal{R}_{1}\dot{\cup}\cdots\dot{\cup}\mathcal{R}_{m}\dot{\cup}\mathcal{R}_{non}\).
The mobility of the robot can be captured by a weighted transition system \[TS=(\mathcal{W},\mathbf{x}_{0},\rightarrow_{T},\mathcal{AP},L,C),\] where \(\mathcal{W}\) is the (infinite) set of positions of the robot; \(\mathbf{x}_{0}\) is its initial position; \(\rightarrow_{T}\subseteq\mathcal{W}\times\mathcal{W}\) is the transition relation such that, for any \(\mathbf{x},\mathbf{x}^{\prime}\in\mathcal{W}\), we have \((\mathbf{x},\mathbf{x}^{\prime})\in\rightarrow_{T}\) if the straight line between \(\mathbf{x}\) and \(\mathbf{x}^{\prime}\) (i) does not intersect with \(\mathcal{O}\) and (ii) crosses any boundary of labeled regions at most once [18]; \(\mathcal{AP}=\{\pi^{1},\ldots,\pi^{m}\}\) is the set of atomic propositions with labeling function \(L:\mathcal{W}\rightarrow\mathcal{AP}\) such that \(L(\mathbf{x})=\pi^{i}\) iff \(\mathbf{x}\in\mathcal{R}_{i}\); and \(C:\mathcal{W}\times\mathcal{W}\rightarrow\mathbb{R}^{+}\) is the weight function given by the Euclidean distance, i.e., \(C(\mathbf{x}_{1},\mathbf{x}_{2})=\|\mathbf{x}_{1}-\mathbf{x}_{2}\|_{2}\). An infinite _path_ is a sequence \(\tau=\tau(1)\tau(2)\cdots\in\mathcal{W}^{\omega}\) such that \(\tau(1)=\mathbf{x}_{0}\) and \((\tau(k),\tau(k+1))\in\rightarrow_{T},k=1,2,\ldots\). The _trace_ of an infinite path \(\tau\) is \(\operatorname{trace}(\tau)=L(\tau(1))L(\tau(2))\cdots\in(\mathcal{AP})^{\omega}\). We say a path \(\tau\) is finite if it is a finite sequence of points, and we denote by \(|\tau|\) its _length_. For a finite path \(\tau\), its cost \(J(\tau)\) is defined as the cumulative distance between consecutive states, i.e., \(J(\tau)=\sum_{k=1}^{|\tau|-1}C(\tau(k),\tau(k+1))\).

### _Temporal Logic Tasks_

The formal specification of the robot is captured by a linear temporal logic formula without next (\(\operatorname{LTL}_{-\bigcirc}\)), which is widely used in robot task planning in continuous workspaces [27]. The syntax of \(\operatorname{LTL}_{-\bigcirc}\) is given as follows: \[\phi::=\text{true}\mid\pi\in\mathcal{AP}\mid\neg\phi\mid\phi_{1}\wedge\phi_{2}\mid\phi_{1}\mathcal{U}\phi_{2},\] where \(\neg\) and \(\wedge\) are the Boolean operators "negation" and "conjunction", respectively, and \(\mathcal{U}\) is the temporal operator "until", which further induces the temporal operators "eventually" \(\Diamond\) and "always" \(\square\). An \(\operatorname{LTL}\) formula \(\phi\) is evaluated over an infinite word \(\sigma=\sigma(1)\sigma(2)\cdots\in\mathcal{AP}^{\omega}\). We denote by \(\sigma\models\phi\) that word \(\sigma\) satisfies \(\operatorname{LTL}\) formula \(\phi\); the reader is referred to [28] for the formal semantics. We denote by \(\operatorname{Words}(\phi)\) the set of all words satisfying formula \(\phi\). It is well-known that, for any \(\operatorname{LTL}\) formula \(\phi\), \(\operatorname{Words}(\phi)\) can be accepted by a non-deterministic Buchi automaton (NBA) [28]. Formally, an NBA is a tuple \(B=(\mathcal{Q}_{B},\mathcal{Q}_{B}^{0},\Sigma,\rightarrow_{B},\mathcal{Q}_{B}^{F})\), where \(\mathcal{Q}_{B}\) is the set of states, \(\mathcal{Q}_{B}^{0}\) is the set of initial states, \(\Sigma=\mathcal{AP}\) is the alphabet, \(\rightarrow_{B}\subseteq\mathcal{Q}_{B}\times\Sigma\times\mathcal{Q}_{B}\) is the transition relation, and \(\mathcal{Q}_{B}^{F}\) is the set of accepting states. For simplicity, we assume that the initial state is unique, i.e., \(\mathcal{Q}_{B}^{0}=\{q_{B}^{0}\}\).
An infinite run \(\rho_{B}\) of \(B\) over an infinite word \(\sigma=\pi_{0}\pi_{1}\cdots\in(\mathcal{AP})^{\omega}\) is a sequence \(\rho_{B}=q_{B}^{0}q_{B}^{1}q_{B}^{2}\cdots\) such that \(q_{B}^{0}\in\mathcal{Q}_{B}^{0}\) and \((q_{B}^{i},\pi_{i},q_{B}^{i+1})\in\rightarrow_{B},\forall i=0,1,\ldots\). We say an infinite run \(\rho_{B}\) is accepting if it contains accepting states an infinite number of times; we say a word is accepting if it induces an accepting run, and we denote by \(\mathcal{L}_{B}\) the set of all accepting words of \(B\). Hereafter, \(B\) is referred to as the NBA that accepts \(\phi\), i.e., \(\mathcal{L}_{B}=\operatorname{Words}(\phi)\).

### _LTL Task Planning Problem_

To fulfill an \(\operatorname{LTL}_{-\bigcirc}\) formula \(\phi\), the robot needs to execute an infinite path. For the purpose of planning, it suffices to consider paths in the "prefix-suffix" structure \(\tau=\tau^{\operatorname{pre}}[\tau^{\operatorname{suf}}]^{\omega}\), where the prefix part \(\tau^{\operatorname{pre}}\) repeats only once and the suffix part \(\tau^{\operatorname{suf}}\) repeats indefinitely [9]. The cost of a prefix-suffix path \(\tau\) is defined by \(J(\tau)=\lambda J(\tau^{\operatorname{pre}})+(1-\lambda)J(\tau^{\operatorname{suf}})\), where \(\lambda\in[0,1]\) is a user-defined weight coefficient. Then our objective is to find a plan for the robot in the prefix-suffix structure such that a given \(\operatorname{LTL}\) formula is fulfilled with minimum cost.

**Problem 1**: _Given \(\operatorname{LTL}_{-\bigcirc}\) formula \(\phi\), determine a prefix-suffix path \(\tau\) in transition system \(TS\) such that (i) \(\operatorname{trace}(\tau)\models\phi\); and (ii) for any prefix-suffix path \(\tau^{\prime}\) such that \(\operatorname{trace}(\tau^{\prime})\models\phi\), we have \(J(\tau)\leq J(\tau^{\prime})\)._

## III Sampling-Based Task Planning

To solve Problem 1, a typical approach is to perform a graph search on the _product_ of \(TS\) and \(B\) [5]. However, when the state space \(\mathcal{W}\) is continuous, building the entire product space is infeasible even with discretization. Therefore, in [18], the authors proposed a (continuous-space) sampling-based algorithm, called TL-RRT*, that incrementally builds trees on-the-fly without constructing the entire product space a priori. Our work builds upon the sampling-based approach; therefore, we review it briefly in this section. The readers are referred to [18] for more details on this method.

### _Main Sampling-Based Algorithm_

The TL-RRT* algorithm is essentially a random search over the product space \(\mathcal{Q}_{P}:=\mathcal{W}_{\text{free}}\times\mathcal{Q}_{B}\). It consists of two parts, the prefix part and the suffix part, and for each part, a similar random tree construction is employed in order to search for the optimal path. We briefly sketch the procedures.

#### III-A1 Prefix Search

Starting from the initial state \(q_{0}=(\mathbf{x}_{0},q_{B}^{0})\), one first builds a prefix tree \(\mathcal{T}=(\mathcal{V}_{\mathcal{T}},\mathcal{E}_{\mathcal{T}},\operatorname{Cost})\) by incrementally adding vertices, where \(\mathcal{V}_{\mathcal{T}}\) and \(\mathcal{E}_{\mathcal{T}}\) are the sets of vertices and edges in \(\mathcal{T}\), respectively. The tree contains some _goal states_ defined by \(\mathcal{Q}_{\text{goal}}:=\mathcal{W}_{\text{free}}\times\mathcal{Q}_{B}^{F}\).
By projecting paths in the tree onto \(\mathcal{W}\), the path from the initial state \(q_{0}\) to each goal state \(q_{\text{goal}}\in\mathcal{V}_{\mathcal{T}}\cap\mathcal{Q}_{\text{goal}}\) forms a prefix path \(\tau^{\operatorname{pre}}\), whose cost is stored by the function \(\text{Cost}:\mathcal{V}_{\mathcal{T}}\rightarrow\mathbb{R}^{+}\).

#### III-A2 Suffix Search

The suffix part is very similar to the prefix part; the main difference is that one needs to construct a set of random trees rooted at the goal states \(\mathcal{P}:=\mathcal{V}_{\mathcal{T}}\cap\mathcal{Q}_{\text{goal}}\) of the prefix tree such that they can return back to the goal states. This gives us a suffix plan \(\tau^{\operatorname{suf}}\) from each goal state, and the prefix-suffix plan with minimum cost among all goal states is the final optimal plan.

#### III-A3 Constructing Random Trees

In the above two parts, the key is the construction of the random tree \(\mathcal{T}=(\mathcal{V}_{\mathcal{T}},\mathcal{E}_{\mathcal{T}},\text{Cost})\) on-the-fly. The main construction steps are as follows.

1. Sample a state \(\mathbf{x}^{\text{rand}}\in\mathcal{W}_{\text{free}}\) "randomly";
2. Determine a new state \(\mathbf{x}^{\text{new}}\) to be added to the tree based on some distance criteria on \(\mathbf{x}^{\text{rand}}\) and \(\mathcal{T}\);
3. Determine a set \(\mathcal{Q}_{P}^{\text{near}}\subseteq\mathcal{V}_{\mathcal{T}}\) based on some distance criteria from which the tree will be extended to \(\mathbf{x}^{\text{new}}\);
4. For each state \(q_{P}^{\text{near}}=(\mathbf{x}^{\text{near}},q_{B}^{\text{near}})\in\mathcal{Q}_{P}^{\text{near}}\), consider a potential edge from \(q_{P}^{\text{near}}\) to \(q_{P}^{\text{new}}=(\mathbf{x}^{\text{new}},q_{B})\) if \((\mathbf{x}^{\text{near}},\mathbf{x}^{\text{new}})\in\rightarrow_{T}\) and \((q_{B}^{\text{near}},L(\mathbf{x}^{\text{near}}),q_{B})\in\rightarrow_{B}\);
5. Add \(q_{P}^{\text{new}}\) to tree \(\mathcal{T}\) from the state \(q_{P}^{\text{near}}\in\mathcal{Q}_{P}^{\text{near}}\) that minimizes \(\text{Cost}(q_{P}^{\text{new}})=\text{Cost}(q_{P}^{\text{near}})+C(\mathbf{x}^{\text{near}},\mathbf{x}^{\text{new}})\);
6. After adding \(q_{P}^{\text{new}}\), the tree edges and costs are reconfigured so that each vertex is reached by a path with minimum cost from the root.

### _Sampling Strategies_

In the construction of random trees, it remains to specify how the state \(\mathbf{x}^{\text{rand}}\) is sampled "randomly". In [18], the authors proposed two different sampling strategies for \(\mathbf{x}^{\text{rand}}\):

* _Uniform Sampling:_ \(\mathbf{x}^{\text{rand}}\) is selected according to a uniform distribution on \(\mathcal{W}_{\text{free}}\) (in fact, any distribution works as long as all states have non-zero probability);
* _Biased Sampling:_ \(\mathbf{x}^{\text{rand}}\) is selected according to a biased distribution on \(\mathcal{W}_{\text{free}}\) such that one has more chance to move towards an accepting state in \(B\).

Since our NN-guided approach is related to the biased sampling, we briefly review this method. First, one needs to pre-process the NBA \(B\) so that infeasible transitions are removed. Then, based on the simplified NBA, one defines a distance function \(\rho:\mathcal{Q}_{B}\times\mathcal{Q}_{B}\rightarrow\mathbb{N}\) to capture the length of the shortest path between each pair of states. Then the selection of \(\mathbf{x}^{\text{rand}}\) is determined as follows:
1. Select a _feasible accepting state_ \(q_{B}^{F,\text{feas}}\in\mathcal{Q}_{B}^{F}\) such that \(\rho(q_{B}^{0},q_{B}^{F,\text{feas}})\neq\infty\) and \(\rho(q_{B}^{F,\text{feas}},q_{B}^{F,\text{feas}})\neq\infty\);
2. Define the set of vertices \(\mathcal{D}_{\min}\subseteq\mathcal{V}_{\mathcal{T}}\) that are closest to \(q_{B}^{F,\text{feas}}\) according to the distance function \(\rho\). Then select a vertex \(q_{P}^{\text{closest}}=\left(\mathbf{x}^{\text{closest}},q_{B}^{\text{closest}}\right)\in\mathcal{V}_{\mathcal{T}}\) from the tree according to a specific distribution [18] such that states in \(\mathcal{D}_{\min}\) have more chance to be selected;
3. Select two successive states \(q_{B}^{\text{succ},1},q_{B}^{\text{succ},2}\in\mathcal{Q}_{B}\) such that \(q_{B}^{\text{closest}}\xrightarrow{L(\mathbf{x}^{\text{closest}})}_{B}q_{B}^{\text{succ},1}\rightarrow_{B}q_{B}^{\text{succ},2}\) and \(\rho(q_{B}^{\text{succ},2},q_{B}^{F,\text{feas}})\leq\rho(q_{B}^{\text{succ},1},q_{B}^{F,\text{feas}})\leq\rho(q_{B}^{\text{closest}},q_{B}^{F,\text{feas}})\);
4. Select a state \(\mathbf{x}^{\mathcal{L}}\) such that the transition \(q_{B}^{\text{succ},1}\xrightarrow{L(\mathbf{x}^{\mathcal{L}})}_{B}q_{B}^{\text{succ},2}\) is feasible;
5. Compute the shortest path from \(\mathbf{x}^{\text{closest}}\) to \(\mathbf{x}^{\mathcal{L}}\) within \(\mathcal{W}_{\text{free}}\), and pick the second point in this shortest path, denoted by \(\mathbf{x}^{\text{target}}\), as a heuristic direction for sampling;
6. Finally, select \(\mathbf{x}^{\text{rand}}\) according to a distribution that has more chance to sample towards the direction of \(\mathbf{x}^{\text{target}}\).

## IV Neural Network Guided Sampling

Although biased sampling provides a more efficient approach compared with uniform sampling, it relies solely on the distance information in the Buchi automaton. For example, when \(q_{B}^{F,\text{feas}}\) as well as the intermediate states \(q_{B}^{\text{succ},1}\) and \(q_{B}^{\text{succ},2}\) are selected, one only considers whether or not the task progress can be pushed forward in the NBA. However, the information from the workspace, i.e., whether the task progress is physically feasible, is neglected. To further improve the sample efficiency of the random tree construction, one essentially needs to _jointly_ consider the information from the workspace and the NBA in order to have more chance to sample states that can move towards accepting states _by paths that are feasible in the workspace_. However, obtaining such a good heuristic is very challenging since the workspace is continuous. In this section, we present a new neural network guided (NN-guided) sampling strategy, where a "sample net" is used that effectively fuses the information of the workspace and the NBA. All implementation details as well as source code are available at https://github.com/LRJ-2000/NNgTL

### _Overview of NN-Guided Approach_

#### IV-A1 Purposes of the Neural Networks

The main architecture of our sample net is shown in Figure 1, consisting of two sub-networks: the Path Prediction Network (PathNet) and the State Prediction Network (StateNet). The inputs of the PathNet and the StateNet are the same: both take the workspace map as well as the NBA for the LTL task. However, their purposes and outputs are different. Specifically,

* The output of the StateNet is a vector \(\mathbf{p}\) with a length of \(|\mathcal{Q}_{B}|\).
In this vector, each entry \(p_{i}\) denotes the probability that state \(i\) is involved in the optimal path. The prediction vector \(\mathbf{p}\) is employed to guide the choices of \(q_{B}^{F,\text{feas}},q_{B}^{\text{succ},1},q_{B}^{\text{succ},2}\in\mathcal{Q}_{B}\) in the biased sampling.
* The output of the PathNet is a \(200\times 200\) matrix \(\mathcal{P}\) such that the value of each entry represents the likelihood that the entry is on the optimal path. Therefore, this weight matrix \(\mathcal{P}\) is used as a more reasonable metric for sampling \(\mathbf{x}^{\text{rand}}\), since it takes the workspace information into account without performing a shortest path search.

#### IV-A2 NN-Guided Sampling Strategy

Now, let us discuss how the proposed neural networks are used to guide the sampling process. It still follows the basic idea of the biased sampling method as detailed in steps S1-S6. However, we further leverage the outputs of the two sub-networks to improve and simplify the sampling process. Specifically, at each instant, suppose that the outputs of the StateNet and the PathNet are \(\mathbf{p}\) and \(\mathcal{P}\), respectively. We set \(\alpha\in(0,1)\) as a parameter that specifies the probability of using the predicted information of the StateNet to guide the sampling. Then, to determine the sample point \(\mathbf{x}^{\text{rand}}\), our approach makes the following changes C-1 and C-2 to steps S-1 to S-6. C-1: When selecting \(q_{B}^{F,\text{feas}}\), \(q_{B}^{\text{closest}}\), \(q_{B}^{\text{succ.1}}\) as in steps S1-S3 in the biased sampling, there is a \(1-\alpha\) probability that we still follow exactly the same strategy in S-1 to S-3. However, there is an \(\alpha\) probability that all these states are selected according to the probability vector \(\mathbf{p}\). In other words, with probability \(\alpha\) we activate the prediction result of the StateNet. C-2: After obtaining state \(\mathbf{x}^{\mathcal{C}}\) in step S-4, to sample state \(\mathbf{x}^{\text{rand}}\), we simplify steps S-5 and S-6 into a single step. Particularly, instead of first computing the shortest path and then using the second point in the path to generate a sample distribution, here we directly use the prediction result \(\mathcal{P}\) of the PathNet to sample \(\mathbf{x}^{\text{rand}}\). More specifically, let \((x_{1},y_{1})\) and \((x_{2},y_{2})\) be the coordinates of \(\mathbf{x}^{\mathcal{C}}\) and \(\mathbf{x}^{\text{closest}}\), respectively. We consider the rectangle region \(\textsc{Rec}=[x_{1}:x_{2},y_{1}:y_{2}]\). We define a discrete distribution over all grids in Rec according to the normalized value of their weights in \(\mathcal{P}\), i.e., grids with larger values have a higher chance to be selected. Then \(\mathbf{x}^{\text{rand}}\) is sampled randomly from the rectangle region according to this distribution; a minimal sketch of this step is given below. Here, we discuss the main features of the proposed NN-guided sampling strategy. First, our NN-guided approach subsumes all properties, such as probabilistic completeness and asymptotic optimality, of the TL-RRT* algorithm in [18] since we still follow the main structure of TL-RRT* and, at each step, our algorithm has a non-zero probability to switch to the original sampling strategy. However, compared with the biased sampling strategy adopted in [18], our NN-guided sampling strategy further jointly considers both the workspace information and the NBA, which provides a better heuristic for "good" samples.
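To make step C-2 concrete, the following is a minimal Python sketch of the rectangle sampling (our own illustration rather than the authors' released code; the function name, the rectangle clipping, and the uniform fallback for an all-zero weight patch are our assumptions):

```python
import numpy as np

def sample_x_rand(P, x_C, x_closest, rng=np.random.default_rng()):
    """Sample x_rand inside the rectangle Rec spanned by x_C and x_closest,
    weighting every grid cell by the PathNet prediction P (200 x 200)."""
    (x1, y1), (x2, y2) = x_C, x_closest
    xs = slice(min(x1, x2), max(x1, x2) + 1)
    ys = slice(min(y1, y2), max(y1, y2) + 1)
    rec = P[xs, ys]                       # weights over Rec
    w = rec.flatten()
    if w.sum() == 0:                      # degenerate case: fall back to uniform
        w = np.ones_like(w)
    idx = rng.choice(w.size, p=w / w.sum())
    i, j = np.unravel_index(idx, rec.shape)
    return (xs.start + i, ys.start + j)   # grid coordinates of x_rand
```

In the actual planner, the returned grid cell would then be mapped back to a point of the continuous workspace \(\mathcal{W}_{\text{free}}\).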
Furthermore, since our strategy uses the predicted distribution directly without involving a shortest path search at each step, its online execution is also much faster than the biased sampling strategy.

### _Inputs Encodings_

Recall that, for both the PathNet and the StateNet, the inputs are the workspace map and the NBA of the LTL formula. To leverage neural networks for processing the workspace, which is a continuous space, and the NBA, which has a graph structure, appropriate encoding techniques are needed.

#### IV-B1 Encoding for Workspace

First, we consider the continuous workspace as a \(200\times 200\) pixel image or a grid map. Each grid in the image corresponds to a specific point in the workspace, which is labeled, an obstacle, or free. Then the image is transformed into a tensor of dimensions \((m+1)\times 200\times 200\), with \(m\) being the number of different labels, i.e., each grid in the workspace is encoded as a vector \(\mathbf{a}=[a_{0},a_{1},...,a_{m}]\). Specifically, the first entry \(a_{0}\) represents the grid's status, where \(-1\) stands for the initial location, \(0\) for free space, and \(1\) for an obstacle. For \(i=1,\ldots,m\), we have \(a_{i}=1\) if the grid belongs to \(\mathcal{R}_{i}\), and \(a_{i}=0\) otherwise.

#### IV-B2 Encoding for Büchi Automata

First, we convert the NBA to a directed graph, where nodes and edges correspond to states and transitions, respectively, each carrying its own features. Specifically, the feature for a node is a vector \(\mathbf{v}=[v_{1},v_{2},v_{3}]\), where \(v_{1}\in\{0,1\}\) represents whether or not it is an initial state, \(v_{2}\in\{0,1\}\) represents whether or not it is a feasible accepting state, and \(v_{3}\) is the (normalized) distance to the closest feasible accepting state. For each edge, the feature is a vector \(\mathbf{e}=[e_{1},e_{2},...,e_{m}]\in\{-1,0,1\}^{m}\) specifying the atomic propositions that need to be true for the underlying transition in the NBA. Since later on we use graph neural networks (GNN) to process features of the NBA, we further transform the directed graph into a heterogeneous graph. This transformation involves adding a new node within each original edge so that the features of the edges are inherited by the added nodes. Additionally, to augment feature aggregation and spread, a self-loop is added to each node, and a pooling node is added so that each node has a directed edge leading to this pooling node.

### _Implementation Details of Neural Networks_

#### IV-C1 PathNet

It has the following four building blocks.

Fig. 1: Overview of the sampling network.

**Map Encoder:** The purpose is to extract features from the grid map by five convolutional blocks. Each block houses a \(4\times 4\) convolutional layer with a stride of \(2\) and padding of \(1\), complemented by a batch-norm layer, a dropout layer with probability \(0.5\), and a LeakyReLU activation with a negative slope of \(0.2\). Note that the size of the feature map evolves from \((m+1)\times 200\times 200\) to \(1024\times 6\times 6\) when passing through these blocks. **NBA Encoder:** The purpose is to extract global features from the NBA. Comprising five layers, the encoder utilizes Graph Attention (GAT) convolutions to address distinct edge types in the heterogeneous graph [29]. The node features are processed by a dropout layer and a ReLU activation after the convolutions. Finally, global mean pooling operations are used to accumulate a feature representation of the entire graph.
**Fusion Network:** The purpose is to amalgamate features from both the workspace Map Encoder and the NBA Encoder. This is done by the following two steps. First, the NBA features are transformed via a linear layer to attain compatibility with the map features. Then vector concatenation is used to fuse these harmonized features. **Path Predictor:** The purpose is to output the weight matrix \(\mathcal{P}\in\mathbb{R}^{200\times 200}\) as the prediction for optimal paths. To this end, we use five up-convolution blocks to upscale and to refine the amalgamated features. Each block is structured with a transposed convolutional layer, a batch-norm layer, a dropout layer (with probability \(0.5\)), and a ReLU activation. Drawing from the "U-Net" architecture [30], our design integrates skip connections, merging features from the third convolutional modules of both the Map and NBA Encoders. This approach capitalizes on the synergy of both encoders, enhancing the merged feature representation. Subsequent to this fusion, features are relayed to the Path Predictor via concatenation. Leveraging these connections ensures the retention of intricate spatial details alongside the depth of hierarchical features. Finally, we use a \(1\times 1\) convolution to reshape the features to \(1\times 200\times 200\) dimensions, and a sigmoid activation is used to refine the output.

#### IV-C2 StateNet

This net is essentially a classifier for the NBA states. It has a simpler structure consisting of the following two components. **Map Encoder:** The main purpose is to encode the map into a \(256\)-length feature vector, which serves as a precursor to the Node Predictor. Specifically, the map is processed by a \(1\times 1\) convolution, transitioning the \(8\)-channel input to \(3\) channels, thus aligning with conventional image processing frameworks. The features then pass through the pre-trained "ResNet-50" model [31]. The terminal fully connected layer of the ResNet is omitted and replaced by our bespoke fully connected layer, rendering the output as a \(256\)-length feature vector. **Node Predictor:** Tailored for node classification, it starts with a GAT convolution layer, stretching the graph's node features to a 256-length dimension, in line with the map features. Each node's features are subsequently concatenated with the Map Encoder-generated map feature vector, amalgamating spatial and structural data at every node. A sequence of information dissemination follows via five sequential GAT modules, each housing a GAT convolution, a ReLU activation for non-linearity, and a dropout (with probability \(0.5\)) for regularization. The classification culminates with two sequential fully connected layers and a Softmax layer, analyzing the consolidated node features to yield the classification probability.

### _Training Neural Networks_

#### IV-D1 Data Set Preparations

Initially, we randomly generate \(15400\) pairs of workspace and LTL formulae. Specifically, in each \(200\times 200\) grid map workspace, we randomly place obstacles as well as seven distinct labeled regions. The initial location of the robot is also randomized. Note that the grid map is only used for map generation; the workspace itself is still treated as continuous. In order to obtain an expert path for each pair of workspace and LTL task, we use the existing biased-sampling approach with \(10,000\) iterations. The obtained expert path is then encoded into a \(200\times 200\) binary matrix.
Specifically, we mark those grids crossed by the expert path, as well as their immediate neighbors, as \(1\). Then we label the NBA states visited by the expert path as \(1\). Finally, through data augmentations, this data set is expanded to \(107800\) cases.

#### IV-D2 Training Procedures

The training process started with the StateNet. Specifically, we first fixed the parameters of the ResNet portion and only updated the parameters of the other parts. When the loss became stable after 100 epochs, we unfroze the parameters of the ResNet and continued training for another 20 epochs. The trainings were performed by the Adam optimizer with an initial learning rate of \(0.001\) and a batch size of \(128\). We used the cross entropy loss as the loss function. After training the StateNet, we trained the PathNet. Specifically, we first initialized the GAT layers in the NBA Encoder using the parameters from the GAT layers in the StateNet. After training for 150 epochs, the loss stabilized. We still used the Adam optimizer with an initial learning rate of \(0.0001\) and a batch size of \(128\). We employed the binary cross entropy loss as the loss function.

## V Simulations & Numerical Experiments

In this section, we provide simulation results for the proposed method. First, a case study is provided to illustrate our approach. Then we perform a set of numerical experiments to evaluate the efficiency of our approach compared with existing sampling strategies. All algorithms were implemented using Python on a Windows 11 computer with an Intel Core i7-13700K 5.40GHz processor.

### _Case Study_

We consider a robot moving in a workspace shown in Figure 2 with the initial position of the robot, obstacles, and labeled regions as depicted in the figure. We consider the following LTL task for the robot \[\phi=\Box\Diamond l_{1}\wedge(\neg l_{1}\,\mathcal{U}\,l_{2})\wedge\Diamond l_{3},\] i.e., the robot needs to (i) visit \(l_{2}\) at least once without visiting \(l_{1}\); (ii) visit \(l_{3}\) at least once; (iii) visit \(l_{1}\) infinitely often. The NBA of \(\phi\) is shown in Figure 2(b), where state \(4\) is the unique feasible accepting state. The optimal plan found by our algorithm is shown as the red lines in Figure 2. The robot first goes to \(l_{2}\), then goes to \(l_{3}\), and finally stays at \(l_{1}\). That is, only the prefix part contributes to the overall cost. To better illustrate our NN-guided sampling strategy, we explain how the random tree \(\mathcal{T}\) for the prefix path is expanded initially from the root \(q_{P}^{0}=(\mathbf{x}_{0},q_{B}^{\mathrm{init}})\). Since the NBA only has one feasible accepting state, the choice for \(q_{B}^{F,\mathrm{feas}}\) is unique, and we have \(q_{P}^{\mathrm{closest}}=q_{P}^{0}\) since the tree only has a root so far. Then the StateNet predicts \(\mathbf{p}=[0.999,0.486,0.902,0.994,0.997]\). Since \(0\xrightarrow{L(\mathbf{x}_{0})}_{B}0\), we have \(q_{B}^{\text{succ.1}}=0\). From the feasible successor states of \(q_{B}^{\text{succ.1}}\), we choose \(q_{B}^{\text{succ.2}}=2\) since it has a higher probability (0.902) than state \(1\) (0.486). Then, according to the transition condition \(0\xrightarrow{l_{2}}_{B}2\) in the NBA, we select a point in the labeled region \(l_{2}\) as \(\mathbf{x}^{\mathcal{C}}\), which is shown as the yellow point in Figure 3.
Then we construct the rectangle region between \(\mathbf{x}^{\mathcal{C}}\) and \(\mathbf{x}^{\mathrm{closest}}\), and state \(\mathbf{x}^{\mathrm{rand}}\) is sampled randomly according to the distribution determined by the PathNet prediction \(\mathcal{P}\), as shown in Figure 2(a). Note that, in this example, there are actually two feasible prefix paths: \(l_{2}\to l_{3}\to l_{1}\) and \(l_{3}\to l_{2}\to l_{1}\). Clearly, the former is optimal with less cost. This information is captured by the prediction result of the StateNet, which prefers state \(2\in\mathcal{Q}_{B}\) for the optimal path. Furthermore, the prediction result of the PathNet can effectively avoid obstacles and has more chance to sample near the optimal path. Therefore, these prediction results by the neural networks guide our search process to converge to a desired solution more quickly.

### _Comparison with Existing Methods_

#### V-B1 Experiment Settings

We conduct a set of experiments to illustrate the efficiency of our NN-guided sampling strategy compared with the existing uniform and biased sampling strategies. Specifically, independent from the training set, we generate another \(240\) pairs of workspace map and LTL task. For each instance, we run our method with \(\alpha=0.8\) as well as the two existing methods to find a feasible plan. Note that, since all these RRT-based approaches are probabilistically optimal, we focus on comparing _how fast_ these strategies can find _the first feasible solution_, and the performance of the first feasible solution in terms of its length. Formally, we consider the following metrics when the first feasible solution is found: the execution time \(T\) taken, the number of iterations \(n\) required, the number of nodes \(m\) in the random tree, and the length of the first feasible solution \(len\).

#### V-B2 Statistic Results

The numerical experiment results are shown in Table I. Specifically, based on the execution time \(T\) of the uniform sampling approach, we divide the tasks into simple tasks with \(T\leq 180\,\mathrm{s}\) and complex tasks with \(T>180\,\mathrm{s}\). Then Tables Ia and Ib show the average value of each metric for each algorithm within these two task categories, respectively. Note that, for many complex tasks, the uniform sampling strategy fails to find a feasible path within \(2000\,\mathrm{s}\). For such cases, we terminate the search by only recording \(T\), \(n\) and \(m\) without considering the length \(len\). The statistic results show that, for both simple and complex tasks, our NN-guided sampling strategy can significantly enhance the efficiency of the RRT-based algorithm. In particular, the time required to obtain a feasible solution by our NN-guided sampling strategy is less than 15% of that of the biased sampling strategy, which is already much more efficient than the uniform sampling strategy. Furthermore, the length of the first feasible solution found by our strategy is similar to those of the other two strategies. We would like to remark that the metric \(len\) is not essential compared with the other metrics since it only measures the performance of the first feasible solution. The entire algorithm will converge to the optimal solution as the number of iterations increases.

## VI Conclusion

In this paper, building upon the current state-of-the-art sampling-based LTL planning algorithms, we propose a novel sampling strategy based on multi-modal neural networks to guide the sampling process.
Our approach, on the one hand, leverages the feature extraction power of neural networks in an end-to-end fashion, and on the other hand, still enjoys all the good properties of sampling-based methods such as probabilistic optimality/completeness. Experimental results show that our proposed sampling strategy can significantly enhance the planning efficiency of the algorithm. In future research, we will further improve the feature fusion methods for multi-robot planning problems.

TABLE I: Experiment Results

Fig. 2: Workspace of the robot, where the gray regions denote obstacles, the green regions denote labeled regions, and the red lines denote the optimal path synthesized.

Fig. 3: Prediction results of the neural networks.
2309.06454
Photonsphere, shadow, quasinormal modes, and greybody bounds of non-rotating Simpson-Visser black hole
In this manuscript, we study the photonsphere, shadow, quasinormal modes, Hawking temperature, and greybody bounds of a non-rotating Simpson-Visser black hole, which is a regular black hole. We observe that though the radius of the photonsphere does depend on the Simpson-Visser parameter $\alpha$, the shadow radius is independent of it. The shadow radius is found to be equal to that for the Schwarzschild black hole. We then study quasinormal frequencies of the Simpson-Visser black hole for scalar and electromagnetic perturbations with the help of the $6$th order WKB method. We tabulate values of quasinormal frequencies for various values of $\alpha$, angular momentum $\ell$, and overtone number $n$. We also graphically show the dependence of the real and imaginary parts of the quasinormal frequency on $\alpha$ and $\ell$. Additionally, we study the convergence of the WKB method for various values of the pair $(n,\ell)$. Finally, we shed light on the dependence of the Hawking temperature on the parameter $\alpha$ and the dependence of greybody bounds on $\alpha$ and $\ell$.
Sohan Kumar Jha
2023-09-12T11:41:24Z
http://arxiv.org/abs/2309.06454v1
Photonsphere, shadow, quasinormal modes, and greybody bounds of non-rotating Simpson-Visser black hole ###### Abstract In this manuscript, we study the photonsphere, shadow, quasinormal modes, Hawking temperature, and greybody bounds of a non-rotating Simpson-Visser black hole, which is a regular black hole. We observe that though the radius of the photonsphere does depend on the Simpson-Visser parameter \(\alpha\), the shadow radius is independent of it. The shadow radius is found to be equal to that for the Schwarzschild black hole. We then study quasinormal frequencies of the Simpson-Visser black hole for scalar and electromagnetic perturbations with the help of the 6th order WKB method. We tabulate values of quasinormal frequencies for various values of \(\alpha\), angular momentum \(\ell\), and overtone number \(n\). We also graphically show the dependence of the real and imaginary parts of the quasinormal frequency on \(\alpha\) and \(\ell\). Additionally, we study the convergence of the WKB method for various values of the pair \((n,\ell)\). Finally, we shed light on the dependence of the Hawking temperature on the parameter \(\alpha\) and the dependence of greybody bounds on \(\alpha\) and \(\ell\).

## I Introduction

Black holes (BHs) are one of the most fascinating objects in the Universe. A great deal of research has gone into studying various aspects of BHs. It was the General theory of relativity (GTR) proposed by Einstein that gave rise to the very idea of BHs [1]. BHs that are derived from GTR, such as Schwarzschild black holes or Reissner-Nordström (RN) black holes, have two singularities: one is a coordinate singularity called the event horizon and the other is an essential singularity at \(r=0\). It is because of the presence of the essential singularity that curvature invariants diverge and geodesics become incomplete. We can avoid this singularity in GTR if, in the vicinity of BHs, the strong energy condition is broken. BHs having an event horizon but no essential singularity are called regular black holes (RBHs). For RBHs, curvature invariants are finite everywhere. Two different approaches can be used to generate RBH solutions. One approach is to consider a special source, e.g. spatially distributed matter, and then solve Einstein's field equation [2-8]. Another method is to introduce quantum corrections to classical BHs [6-17]. A BH solution with no essential singularity but with an event horizon [19] was first given by Bardeen [20]. The Bardeen BH was later interpreted by Ayón-Beato and García using field theory. We have observed significant progress in the study of non-rotating RBHs [22-24] as well as rotating RBHs [24-25]. BHs have a strong gravitational field. It is because of this strong field that light rays close to a BH are deflected and, as a result, BH shadows are formed. The first image of a supermassive black hole, M87\({}^{*}\), was unveiled by the EHT [26, 27]. Even before the first image given by the EHT, many attempts were made to study the observable appearance of a BH shadow. The shadow of a Kerr BH was studied by Bardeen et al. [28], whereas the shadow of a Schwarzschild black hole was examined by Synge [29]. Luminet studied a BH surrounded by a bright accretion disk [30]. The shadow radius is determined by the photon ring, which is a characteristic of the underlying spacetime [31]. Photon rings and the shadow of the RN black hole have been studied in [32]. Black hole shadows bear imprints of the underlying spacetime. Several studies have been conducted to detect dark matter using BH shadows [33-42].
Quasinormal modes are oscillations of a black hole that die down over time because of dissipative effects, e.g., the emission of gravitational waves [43-45]. These are called quasinormal because of their transient nature. They are complex numbers where the real part signifies the frequency of the emitted gravitational wave and the imaginary part represents the decay rate or damping rate. Inspiral, merger, and ringdown are the phases that merging BHs experience. For the remnant BH, quasinormal modes correspond to the ringdown phase. Quasinormal modes depend on the parameters of the BH. Thus, to study the underlying geometry it is important to investigate quasinormal modes. A significant number of studies have been conducted in this field [46-77]. Hawking, by considering quantum effects, showed that BHs emit radiation [78]. This radiation is known as Hawking radiation. When pair production happens near the event horizon, one particle enters the BH while the other moves away from it. This second particle constitutes the Hawking radiation [79-81]. The Hawking temperature can be obtained through various methods [82-84]. The greybody factor is important in studying Hawking radiation. It can be calculated using the matching approach [85-87] or the WKB approximation [88, 89]. An alternative to these methods was given by Visser [90] by finding rigorous bounds. The greybody factor was studied using this method in [91, 92]. This manuscript is organized as follows. In section II, we introduce the non-rotating Simpson-Visser black hole and study the photonsphere and shadow of the black hole. Section III is devoted to studying quasinormal modes for scalar and electromagnetic perturbations and analyzing their graphical behavior. In section IV, we obtain expressions of the Hawking temperature and greybody bounds and study their variations. We conclude our article in section V.

## II Non-rotating Simpson-Visser black holes

The concept of regular black holes was first proposed by Bardeen in his article [20]. Since then it has become a topic of great interest. One such regular black hole metric has been given by Simpson and Visser in their article [93]. The metric represents a non-rotating, static, and spherically symmetric black hole. The metric is defined by \[ds^{2}=-(1-\frac{2M}{\sqrt{r^{2}+\alpha^{2}}})dt^{2}+(1-\frac{2M}{\sqrt{r^{2}+\alpha^{2}}})^{-1}dr^{2}+(r^{2}+\alpha^{2})(d\theta^{2}+\sin^{2}\theta d\phi^{2}). \tag{1}\] Here, M is the ADM mass, and \(\alpha\) is a parameter having the dimension of length. This solution encompasses both regular black holes as well as wormholes depending on the value of the parameter \(\alpha\). We have a two-way, traversable wormhole when \(\alpha>2M\) and a one-way wormhole with a null throat when \(\alpha=2M\). The metric (1) gives a regular black hole when \(\alpha<2M\). In this case, the singularity at \(r=0\) is replaced by a bounce to a different universe. The bounce takes place through a spacelike throat shielded by an event horizon, and it is christened a black-bounce in [93]. The metric (1) reduces to that for the Schwarzschild black hole when we put \(\alpha=0\). In [94], the authors have shown that we can obtain the metric (1) as an exact solution to Einstein's field equations minimally coupled with a self-interacting phantom scalar field \(\varphi\) combined with a nonlinear electrodynamics field represented by the tensor \(F_{\mu\nu}\).
The action is given by [94] \[S=\int\sqrt{-g}d^{4}r\Big{(}R+2\epsilon g^{\mu\nu}\partial_{\mu}\varphi\partial_{\nu}\varphi-2V(\varphi)-\mathcal{L}(F)\Big{)}, \tag{2}\] where \(\epsilon=\pm 1\). Here, \(\mathcal{L}(F)\) is a gauge-invariant Lagrangian density and \(F=F_{\mu\nu}F^{\mu\nu}\). We obtain the following Einstein equation from the action (2) \[G^{\nu}_{\mu}=-T^{\nu}_{\mu}[\varphi]-T^{\nu}_{\mu}[F], \tag{3}\] where the stress-energy tensors, \(T^{\nu}_{\mu}[\varphi]\) and \(T^{\nu}_{\mu}[F]\), of the scalar and electromagnetic fields are given by \[T^{\nu}_{\mu}[\varphi] = 2\epsilon\partial_{\mu}\varphi\partial^{\nu}\varphi-\delta^{\nu}_{\mu}\left(\epsilon g^{\rho\sigma}\partial_{\rho}\varphi\partial_{\sigma}\varphi-V(\varphi)\right),\] \[T^{\nu}_{\mu}[F] = -2\frac{d\mathcal{L}}{dF}F_{\mu\sigma}F^{\nu\sigma}+\frac{1}{2}\delta^{\nu}_{\mu}\mathcal{L}(F).\] The expressions of \(\varphi\), \(V(\varphi)\), and \(\mathcal{L}(F)\) are subsequently found and given in [94]. This clearly shows that the metric (1) is an exact solution of the Einstein equations with the action given by (2). For the metric (1), the lapse function is given by \[f(r)=1-\frac{2M}{\sqrt{r^{2}+\alpha^{2}}}. \tag{4}\] The nature of the lapse function is very important in calculating various observables related to the underlying spacetime. We graphically represent the variation of the lapse function with respect to r for various values of \(\alpha\). From the plot, we observe that as we increase the value of \(\alpha\), the position where the function crosses the r-axis shifts towards the left. It signifies a decrease in the position of the event horizon with an increase in the parameter value \(\alpha\). Solving \(f(r)=0\) gives the event horizon radius \(r_{h}=\sqrt{4M^{2}-\alpha^{2}}\), which reinforces the finding from the above figure. Next, we move on to study null geodesics in the background of the spacetime given by ansatz (1). Since the spacetime we are considering is a spherically symmetric one, we can, without loss of generality, confine our study only to the equatorial plane. With this, the ansatz (1) reduces to \[ds^{2}=-f(r)dt^{2}+\frac{dr^{2}}{f(r)}+h(r)d\phi^{2}, \tag{5}\] where \(h(r)=r^{2}+\alpha^{2}\). The static and spherically symmetric nature of the spacetime ensures that the energy given by \(\mathcal{E}=-p_{\mu}\xi^{\mu}_{(t)}\) and the angular momentum given by \(\mathcal{L}=p_{\mu}\xi^{\mu}_{(\phi)}\) remain conserved along the geodesics. Here, \(\xi^{\mu}_{(t)}\) and \(\xi^{\mu}_{(\phi)}\) are the Killing vectors due to time-translational and rotational invariance, respectively [45]. Thus, the energy of a photon is \(\mathcal{E}=-p_{t}\), and the angular momentum is \(\mathcal{L}=p_{\phi}\). To obtain the expressions of \(p_{t}\) and \(p_{\phi}\), we first write down the Lagrangian that corresponds to motion in the background of metric (5). The Lagrangian is given by \[\mathscr{L}=-f(r)\dot{t}^{2}+\frac{\dot{r}^{2}}{f(r)}+h(r)\dot{\phi}^{2}. \tag{6}\] With the help of the definition \(p_{q}=\frac{\partial\mathscr{L}}{\partial\dot{q}}\), we obtain \[p_{t} = \frac{\partial\mathscr{L}}{\partial\dot{t}}=-f(r)\dot{t},\] \[p_{r} = \frac{\partial\mathscr{L}}{\partial\dot{r}}=\frac{\dot{r}}{f(r)},\] \[p_{\phi} = \frac{\partial\mathscr{L}}{\partial\dot{\phi}}=h(r)\dot{\phi}. \tag{7}\] Here, the dot denotes differentiation with respect to an affine parameter \(\tau\). Thus, in terms of energy and angular momentum, we get two very important differential equations given by \[\frac{dt}{d\tau}=\frac{\mathcal{E}}{f(r)}\quad\text{and}\quad\frac{d\phi}{d\tau}=\frac{\mathcal{L}}{h(r)}. \tag{8}\]
Combining Eq.(8) and Eq.(5), the equation for the null geodesics is obtained as \[\left(\frac{dr}{d\tau}\right)^{2}\equiv\dot{r}^{2}=\mathcal{E}^{2}-V(r), \tag{9}\] where \(V(r)\) is the effective potential given by \[V(r)=\frac{\mathcal{L}^{2}f(r)}{h(r)}. \tag{10}\] The effective potential given above determines the motion of any particle in the underlying spacetime. We graphically show the variation of this potential.

Figure 1: Variation of the lapse function with respect to r for various values of \(\alpha\).

The above plot indicates that the peak of the potential shifts towards the left as we increase the value of \(\alpha\). This means the radius of the photonsphere decreases as we increase the value of \(\alpha\). For circular photon orbits of radius \(r_{p}\), we must have \[\frac{dV}{dr}|_{r=r_{p}}=0\Rightarrow\frac{f^{\prime}(r_{p})}{f(r_{p})}=\frac{h^{\prime}(r_{p})}{h(r_{p})}. \tag{11}\] Simple algebra produces \(r_{p}=\sqrt{9M^{2}-\alpha^{2}}\). This analytical expression confirms the inference we have drawn from Fig.(2). The corresponding impact parameter is \[b_{p}=\frac{\mathcal{L}}{\mathcal{E}}=\sqrt{\frac{h(r_{p})}{f(r_{p})}}\Rightarrow b_{p}=3\sqrt{3}M, \tag{12}\] which is the same value as that for the Schwarzschild black hole. Thus, we see that even though the photon radius does depend on the parameter \(\alpha\), the critical impact parameter does not depend on \(\alpha\). Since, for a distant observer, the shadow radius is equal to the critical impact parameter, it is evident that the size of the shadow does not depend on the parameter \(\alpha\). The region within the red dotted circle is the black hole shadow. The shadow cast by a black hole is larger than the actual size because of gravitational lensing.

Figure 2: Variation of the potential with respect to r for various values of \(\alpha\). Here, we have taken \(\mathcal{L}=1\).

Figure 3: Shadow of a non-rotating Simpson-Visser black hole.

## III Quasinormal modes of non-rotating Simpson-Visser black hole

In this section, we study quasinormal modes for scalar and electromagnetic perturbations of the non-rotating Simpson-Visser black hole. Here, it is assumed that the impact of the scalar or the electromagnetic field on the background spacetime is negligible. To study quasinormal modes, we first consider the equation for the relevant field and then reduce it to a Schrödinger-like equation. For the scalar field, we will have the Klein-Gordon equation, and for the electromagnetic field, we will consider Maxwell's equations. For the massless scalar field, we have \[\frac{1}{\sqrt{-g}}\partial_{\mu}(\sqrt{-g}g^{\mu\nu}\partial_{\nu}\chi)=0, \tag{13}\] and for the electromagnetic field, we have \[\frac{1}{\sqrt{-g}}\partial_{\nu}(F_{\rho\sigma}g^{\rho\mu}g^{\sigma\nu}\sqrt{-g})=0, \tag{14}\] where \(F_{\rho\sigma}=\partial_{\rho}A_{\sigma}-\partial_{\sigma}A_{\rho}\), \(A_{\nu}\) being the electromagnetic four-potential. We now introduce the tortoise coordinate given by \[\mathrm{d}r_{*}=\frac{\mathrm{d}r}{f(r)}. \tag{15}\] We have \(r_{*}\rightarrow-\infty\) as \(r\to r_{h}\) and \(r_{*}\rightarrow\infty\) as \(r\rightarrow\infty\).
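As a quick numerical illustration (our own sketch with arbitrary parameter values, not taken from the paper), the tortoise coordinate of Eq. (15) can be evaluated by direct quadrature, and one can verify that \(r_{*}\) grows ever more negative as \(r\to r_{h}\):

```python
import numpy as np
from scipy.integrate import quad

M, alpha = 1.0, 0.5                          # illustrative values
r_h = np.sqrt(4 * M**2 - alpha**2)           # event horizon, f(r_h) = 0

def f(r):
    return 1.0 - 2.0 * M / np.sqrt(r**2 + alpha**2)

def tortoise(r, r_ref=10.0 * M):
    """r_*(r), fixing the integration constant by r_*(r_ref) = 0."""
    val, _ = quad(lambda x: 1.0 / f(x), r_ref, r, limit=200)
    return val

for r in [5.0, 3.0, 2.2, r_h + 1e-3]:
    print(r, tortoise(r))                    # diverges towards -infinity
```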
With the help of the tortoise coordinate, Eqs.(13) and (14) reduce to the Schrödinger-like form given by \[-\frac{\mathrm{d}^{2}\phi}{\mathrm{d}r_{*}^{2}}+V_{\mathrm{eff}}(r)\phi=\omega^{2}\phi, \tag{16}\] where the effective potential is given by \[V_{\mathrm{eff}}(r) = \frac{(1-s^{2})f(r)}{r}\frac{\mathrm{d}f(r)}{\mathrm{d}r}+\frac{f(r)\ell(\ell+1)}{r^{2}}\] \[= \left(1-\frac{2M}{\sqrt{\alpha^{2}+r^{2}}}\right)\left(\frac{\ell(\ell+1)}{r^{2}}+\frac{2M\left(1-s^{2}\right)}{\left(\alpha^{2}+r^{2}\right)^{3/2}}\right), \tag{17}\] where \(\ell\) is the angular momentum and \(s\) is the spin. For \(s=0\), we obtain the effective potential for the scalar perturbation, and for \(s=1\), we obtain the effective potential for the electromagnetic perturbation. Since the effective potential influences the quasinormal modes, we briefly study its variation for various scenarios. From the above plots, we observe that the peak of the effective potential increases with an increase in \(\ell\) or \(\alpha\). But the position of the peak shifts towards the right as we increase the angular momentum, whereas, for an increase in \(\alpha\), the position shifts towards the left. Schutz and Will in their article [97] first developed the WKB method. Others extended the method to higher orders [98; 99; 100]. For the 6th order WKB method, we have the expression of quasinormal frequencies as \[\frac{\mathrm{i}(\omega^{2}-V_{0})}{\sqrt{-2V_{0}^{{}^{\prime\prime}}}}-\sum_{\mathrm{i}=2}^{6}\Omega_{\mathrm{i}}=n+\frac{1}{2}, \tag{18}\] where \(V_{0}\) is the maximum value of the effective potential at the tortoise coordinate \(r_{0}\), \(V_{0}^{{}^{\prime\prime}}\) is the value of the second order derivative of the effective potential with respect to the tortoise coordinate evaluated at \(r_{0}\), and \(\Omega_{\mathrm{i}}\) are the correction terms given in [97; 98; 99; 100]. Now, to improve the accuracy of the WKB method, we employ Padé approximants [101], where in powers of the _order parameter_ \(\epsilon\) we define a polynomial \(P_{k}(\epsilon)\) as \[P_{k}(\epsilon)=V_{0}+\Omega_{2}\epsilon^{2}+\Omega_{4}\epsilon^{4}+\Omega_{6}\epsilon^{6}+\ldots-i(n+\frac{1}{2})\sqrt{-2V_{0}^{{}^{\prime\prime}}}\left(\epsilon+\Omega_{3}\epsilon^{3}+\Omega_{5}\epsilon^{5}\ldots\right) \tag{19}\] Here, \(k\) is the polynomial order, the same as that for the WKB formula. We can obtain the squared frequency by putting \(\epsilon=1\) via \(\omega^{2}=P_{\tilde{n}/\tilde{m}}(1)\), where the Padé approximants, \(P_{\tilde{n}/\tilde{m}}(\epsilon)\), for the polynomial \(P_{k}(\epsilon)\) are given by [101; 102] \[P_{\tilde{n}/\tilde{m}}(\epsilon)=\frac{\mathcal{Q}_{0}+\mathcal{Q}_{1}\epsilon+\ldots+\mathcal{Q}_{\tilde{n}}\epsilon^{\tilde{n}}}{\mathcal{R}_{0}+\mathcal{R}_{1}\epsilon+\ldots+\mathcal{R}_{\tilde{m}}\epsilon^{\tilde{m}}}, \tag{20}\] with \(\tilde{n}+\tilde{m}=k\). We use the 6th order Padé averaged WKB approach to estimate the QNMs and tabulate some of the values of quasinormal frequencies of scalar and electromagnetic perturbations for various values of the angular momentum \(\ell\) and the parameter \(\alpha\). Here, we take \(M=1\) for all calculations. In Table 1, we show quasinormal modes of the scalar perturbation for different values of angular momentum \(\ell\) and parameter \(\alpha\) keeping overtone number \(n=0\). In Table 2, we show those for \(n=1\).
In Table 3, we show quasinormal modes of electromagnetic perturbation for different values of angular momentum and parameter \(\alpha\) keeping overtone number \(n=0\) and in Table 4, we show those for \(n=1\). Figure 4: Variation of effective potential with respect to normal coordinate r. The upper ones are for various values of \(\alpha\) with \(\ell=1\) and the lower ones are for various values of angular momentum with \(\alpha=0.4M\). The left ones are for scalar perturbations and the right ones are for electromagnetic perturbations. From above tables, we can infer that the real part of quasinormal frequencies increases with an increase in parameter value \(\alpha\) for a particular value of \(\ell\). Additionally, it is observed for both perturbations that the real part of quasinormal modes increases as we increase the angular momentum \(\ell\). We can observe from the Table (1) and Table (2) that the decay rate or damping rate increases as we decrease the value of parameter \(\alpha\) or the angular momentum for scalar perturbation. From Tables (3) and (4) we can infer that the damping rate or decay rate increases as we decrease the value of the parameter \(\alpha\) or increase the angular momentum for electromagnetic perturbation. If we compare values of quasinormal modes for different overtone numbers, then we can see that the real part of quasinormal modes decreases with the overtone number but the decay or damping rate increases with the overtone number. Next, to understand the variation of error associated with an order WKB method, we tabulate quasinormal frequency for various orders of the WKB method and corresponding error associated with the method. The error is measured by the formula \(\mbox{error}=\frac{|\omega_{k+1}-\omega_{k-1}|}{2}\). Here, we have considered scalar perturbation with \(n=0\), \(\alpha=0.2M\), and \(\ell=1\). We can observe from the above table that the error decreases as we increase the order of WKB approximation upto sixth order and then, it starts increasing. Thus, we can infer that sixth order WKB approximation gives the best value of quasinormal frequency. Now, we graphically show the variation of quasinormal frequency for various aspects. \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\alpha/M\) & \(\ell\)=1 & \(\ell\)=2 & \(\ell\)=3 \\ \hline 0. & 0.248251 -0.0924847 i & 0.457595 -0.0950048 i & 0.656899 -0.0956163 i \\ \hline 0.2 & 0.249003 -0.0923694 i & 0.458724 -0.0948656 i & 0.65844 -0.0954741 i \\ \hline 0.4 & 0.251304 -0.0920112 i & 0.462177 -0.0944359 i & 0.663153 -0.095035 i \\ \hline 0.6 & 0.255271 -0.0913617 i & 0.468163 -0.0936765 i & 0.671328 -0.0942589 i \\ \hline 0.8 & 0.261148 -0.0903477 i & 0.477077 -0.0925117 i & 0.683516 -0.0930677 i \\ \hline 1. & 0.269347 -0.0888389 i & 0.489602 -0.0908062 i & 0.700663 -0.0913222 i \\ \hline \end{tabular} \end{table} Table 4: Quasinormal frequencies for electromagnetic field with \(n=1\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\alpha/M\) & \(\ell\)=1 & \(\ell\)=2 & \(\ell\)=3 \\ \hline 0. & 0.292931 -0.097661 i & 0.483644 -0.0967591 i & 0.675366 -0.0964997 i \\ \hline 0.2 & 0.293625 -0.0975045 i & 0.484747 -0.0966098 i & 0.676891 -0.0963529 i \\ \hline 0.4 & 0.295746 -0.0970231 i & 0.488122 -0.0961494 i & 0.681554 -0.0959 i \\ \hline 0.6 & 0.299405 -0.0961771 i & 0.493966 -0.0953383 i & 0.689639 -0.0951003 i \\ \hline 0.8 & 0.304815 -0.0948936 i & 0.502661 -0.0940996 i & 0.701683 -0.0938749 i \\ \hline 1. 
& 0.312351 -0.0930422 i & 0.514855 -0.0922961 i & 0.71861 -0.0920829 i \\ \hline \end{tabular} \end{table} Table 1: Quasinormal frequencies for scalar field with \(n=0\). \begin{table} \begin{tabular}{|c|c|c|c|} \hline \(\alpha/M\) & \(\ell\)=1 & \(\ell\)=2 & \(\ell\)=3 \\ \hline 0. & 0.248251 -0.0924847 i & 0.457595 -0.0950048 i & 0.656899 -0.0956163 i \\ \hline 0.2 & 0.249003 -0.0923694 i & 0.458724 -0.0948656 i & 0.65844 -0.0954741 i \\ \hline 0.4 & 0.251304 -0.0920112 i & 0.462177 -0.0944359 i & 0.663153 -0.095035 i \\ \hline 0.6 & 0.255271 -0.0913617 i & 0.468163 -0.0936765 i & 0.671328 -0.0942589 i \\ \hline 0.8 & 0.261148 -0.0903477 i & 0.477077 -0.0925117 i & 0.683516 -0.0930677 i \\ \hline 1. & 0.269347 -0.0888389 i & 0.489602 -0.0908062 i & 0.700663 -0.091322 i \\ \hline \end{tabular} \end{table} Table 3: Quasinormal frequencies for electromagnetic field with \(n=0\). \begin{table} \begin{tabular}{|c|c|c|} \hline WKB Order & Quasinormal frequency & Error \\ \hline 8 & 0.294678 -0.0971196 i & 0.030877 \\ \hline 7 & 0.293563 -0.0974886 i & 0.000590874 \\ \hline 6 & 0.293603 -0.097611 i & 0.000108075 \\ \hline 5 & 0.293768 -0.0975564 i & 0.000199927 \\ \hline 4 & 0.293655 -0.0972145 i & 0.000987336 \\ \hline 3 & 0.291812 -0.0978283 i & 0.00516503 \\ \hline \end{tabular} \end{table} Table 5: Quasinormal frequency and error for various orders of WKB approximation. Figure 6: It gives the variation of the real part of the quasinormal frequency with respect to \(\alpha\) for various values of \(\ell\). The left one is for the scalar field and the right one is for the electromagnetic field. Figure 5: It gives the variation of the imaginary part of the quasinormal frequency with respect to \(\alpha\) for various values of \(\ell\). The left one is for the scalar field and the right one is for the electromagnetic field. Fig.(5) and Fig.(6) reinforce findings we have drwan from Tabs.(I, II, III, IV). We can also observe that the real part of quasinormal modes is larger for scalar perturbation, whereas, the imaginary part is larger for electromagnetic perturbation. It implies that the damping rate or decay rate is larger for scalar perturbation. We next study the convergence of the WKB method for various values of \((n,\ell)\) pair. From the above figure we observe that quasinormal frequencies fluctuate even for higher order when we consider the pair \((2,0)\). This confirms the finding in the article [103] where it is observed that WKB approximation is reliable when the angular momentum is high and the overtone number is low. ## IV Hawking temperature and bounds of the greybody factor In this section, we intend to calculate the Hawking temperature and greybody bounds for the black hole under consideration. Hawking in his article [78] showed that black holes emit radiation. That radiation is known as Hawking Figure 8: Variation of the real and imaginary part of quasinormal frequencies with respect to WKB order for various values of \((n,\ell)\) pair. The upper left one is for \((1,0)\) pair, the upper right one is for \((2,0)\) pair, the lower left one is for \((2,2)\) pair and the lower right one is for \((2,4)\) pair. The upper line in each plot is for the real part of quasinormal modes and the lower line is for the imaginary part of quasinormal modes. Figure 7: Left one gives the variation of the imaginary part of the quasinormal frequency with respect to \(\alpha\) for scalar and electromagnetic fields and the right one gives that for the real part. Here, we have taken \(\ell=1\). 
radiation. Bekenstein in his article [106] and Kiefer in his article [107] showed that it was necessary to associate a temperature with the horizon for consistency with thermodynamics. The Hawking temperature is given by \[T_{H}=\frac{1}{4\pi\sqrt{-g_{tt}g_{rr}}}\frac{dg_{tt}}{dr}|_{r=r_{h}}. \tag{21}\] For the metric under consideration, we have \(g_{tt}=-f(r)\) and \(g_{rr}=\frac{1}{f(r)}\). Putting these values in the above equation, we get \[T_{H}=\frac{\sqrt{4M^{2}-\alpha^{2}}}{16\pi M^{2}}. \tag{22}\] The dependence of the Hawking temperature on the parameter \(\alpha\) is evident from the above equation. We recover the value of the Hawking temperature for the Schwarzschild black hole if we put \(\alpha=0\) in the above equation. To show the dependence graphically, we plot the Hawking temperature against \(\alpha\). We can observe that the Hawking temperature decreases as we increase the value of the parameter \(\alpha\). The Hawking radiation observed by an asymptotic observer is different from the original radiation near the horizon of the black hole due to the redshift factor. The greybody distribution describes the Hawking radiation that is observed by an asymptotic observer. Here, we try to obtain the lower bound of the greybody factor for a non-rotating Simpson-Visser black hole. A lot of research has been dedicated to bounding the greybody factor. Visser and Boonserm in their articles [90; 91; 105] gave an elegant way to lower bound the greybody factor. A rigorous bound of the transmission probability, which is the same as the greybody factor, is given by \[T\geq sech^{2}(\frac{1}{2\omega}\int_{-\infty}^{\infty}|V_{\rm eff}(r_{*})|dr_{*}), \tag{23}\] where \(r_{*}\) is the tortoise coordinate defined in Eq.(15) and \(V_{\rm eff}(r_{*})\) is the potential given in Eq.(17). In terms of the normal coordinate r, the above equation becomes \[T\geq sech^{2}(\frac{1}{2\omega}\int_{r_{h}}^{\infty}|V_{\rm eff}(r)|\frac{dr}{f(r)}). \tag{24}\] If we use Eq.(17), then, with \(M=1\), the above equation reduces to \[T\geq sech^{2}\left(\frac{\frac{\ell(\ell+1)}{\sqrt{4-\alpha^{2}}}+\frac{1-s^{2}}{\sqrt{4-\alpha^{2}}+2}}{2\omega}\right). \tag{25}\] The above equation shows the explicit dependence of the greybody factor bounds on the value of the parameter \(\alpha\). We next plot the bounds of the greybody factor against \(\omega\) for both scalar and electromagnetic perturbations by taking \(s=0\) and \(s=1\), respectively. We have plotted T for various values of \(\alpha\) as well as the angular momentum \(\ell\); a short numerical sketch of how such curves can be generated is given below.

Figure 9: Variation of Hawking temperature with respect to \(\alpha\).
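A minimal Python sketch (our own code, not the authors'; it uses \(M=1\) as in the text) that evaluates the bound of Eq. (25):

```python
import numpy as np

def greybody_bound(omega, ell, s, alpha):
    """Lower bound on the greybody factor, Eq. (25), in units M = 1."""
    root = np.sqrt(4.0 - alpha**2)
    A = ell * (ell + 1) / root + (1.0 - s**2) / (root + 2.0)
    return 1.0 / np.cosh(A / (2.0 * omega))**2     # sech^2(x) = 1/cosh^2(x)

omega = np.linspace(0.05, 2.0, 200)
T_scalar = greybody_bound(omega, ell=1, s=0, alpha=0.6)   # scalar case
T_em = greybody_bound(omega, ell=1, s=1, alpha=0.6)       # electromagnetic case
# both bounds approach 1 as omega grows; larger ell or alpha lowers them
```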
Then, with the help of Killing vectors and the Lagrangian, we obtain the differential equation of motion for photons and get the expression of the effective potential. Imposing conditions on the effective potential and its derivative for circular photon orbits, we obtain the expression of the radius of photonsphere \(r_{p}\) and the corresponding impact parameter. For an observer located at asymptotic infinity, the shadow radius is equal to the impact parameter. We observe that the radius of the photonsphere decreases with an increase in the parameter \(\alpha\), whereas, the shadow radius remains constant to the value \(3\sqrt{3}\) which is the shadow radius for Schwarzschild black hole. Next, we study quasinormal modes for two types of perturbations: scalar and electromagnetic. We tabulate quasinormal frequencies for various values of overtone number n, angular momentum \(\ell\), and parameter \(\alpha\) in Tables (1, II, III, IV, V). Our findings indicate that with the increase in the value of \(\alpha\), the real part of quasinormal frequency increases. We observe that for scalar and electromagnetic perturbations, the decay rate or damping rate increases as we decrease the value of parameter \(\alpha\). But, the decay rate or the damping rate increases for scalar perturbation and Figure 11: Bounds of greybody factor for various values of \(\alpha\). Left one is for scalar perturbation and the right one is for electromagnetic perturbation. Here we have taken \(\ell=1\). Figure 10: Bounds of greybody factor for various values of \(\ell\). Left one is for scalar perturbation and the right one is for electromagnetic perturbation. Here we have taken \(\alpha=0.6\). decreases for electromagnetic perturbation as we decrease the angular momentum. Comparing values of quasinormal modes for different overtone numbers, we find that the real part of quasinormal modes decreases with the overtone number but the decay or damping rate increases with the overtone number. These findings are reinforced in various plots presented. We also observe that the error associated with a WKB order decreases as we increase the order of WKB approximation upto the sixth order and then it starts increasing. Moreover, the real part of quasinormal modes as well as the decay rate or damping rate for scalar perturbation is greater than that of electromagnetic perturbation. Our findings also confirm the result in the article [103] where it was observed that WKB approximation is reliable when the angular momentum is high and the overtone number is low. Finally, we have obtained the expression of Hawking temperature and bounds of the greybody factor. Our findings indicate that the Hawking temperature decreases with an increase in the parameter \(\alpha\). On the other hand, the bounds of the greybody factor for both type of perturbations decreases with an increase in angular momentum \(\ell\) or parameter \(\alpha\). We can obtain further insights into the behavior of various perturbations with the help of time-domain calculations and innovative approaches. We also hope that results from future experiments will guide us to have a complete theory of quantum gravity. **Data Availability Statement**: We do not have any additional data to present
2309.09767
Non-Singular Gravitational Collapse through Modified Heisenberg Algebra
We study the effects of cut-off physics, in the form of a modified algebra inspired by Polymer Quantum Mechanics and by the Generalized Uncertainty Principle representation, on the collapse of a spherical dust cloud. We analyze both the Newtonian formulation, originally developed by Hunter, and the general relativistic formulation, that is the Oppenheimer-Snyder model; in both frameworks we find that the collapse is stabilized to an asymptotically static state above the horizon, and the singularity is removed. In the Newtonian case, by requiring the Newtonian approximation to be valid, we find lower bounds of the order of unity (in Planck units) for the deformation parameter of the modified algebra. We then study the behaviour of small perturbations on the non-singular collapsing backgrounds, and find that for a certain range of the parameters (the polytropic index for the Newtonian case and the sound velocity in the relativistic setting) the collapse is stable to perturbations of all scales, and the non-singular super-Schwarzschild configurations have physical meaning.
Gabriele Barca, Giovanni Montani
2023-09-18T13:48:28Z
http://arxiv.org/abs/2309.09767v2
# Non-Singular Gravitational Collapse through Modified Heisenberg Algebra ###### Abstract We study the effects of cut-off physics, in the form of a modified algebra inspired by Polymer Quantum Mechanics and by the Generalized Uncertainty Principle representation, on the collapse of a spherical dust cloud. We analyze both the Newtonian formulation, originally developed by Hunter, and the general relativistic formulation, that is the Oppenheimer-Snyder model; in both frameworks we find that the collapse is stabilized to an asymptotically static state above the horizon, and the singularity is removed. In the Newtonian case, by requiring the Newtonian approximation to be valid, we find lower bounds of the order of unity (in Planck units) for the deformation parameter of the modified algebra. We then study the behaviour of small perturbations on the non-singular collapsing backgrounds, and find that for a certain range of the parameters (the polytropic index for the Newtonian case and the sound velocity in the relativistic setting) the collapse is stable to perturbations of all scales, and the non-singular super-Schwarzschild configurations have physical meaning.

## I Introduction

The problem of understanding the final fate of the gravitational collapse of an astrophysical object is a long-standing question in the literature [1; 2; 3]. In particular, the existence of upper limits for the mass of compact stars [4; 5; 6; 7], above which the gravitational collapse is no longer counteracted by the matter pressure, with the consequent formation of a black hole, constitutes one of the most outstanding and still debated results [8; 9; 10]. Indeed the observation of neutron stars with mass potentially greater than two Solar masses [11; 12; 13] opened the way to a series of conjectures concerning the possible physical explanation for this unexpected evidence, including scenarios with new physics for the gravitational field (see for instance the so-called scalarization phenomenon in modified gravity [14; 15; 16]). Here we consider the collapse of a spherical dust cloud, infalling under the effect of its self-gravity, both in the Newtonian and in the fully relativistic limit. The peculiarity of our study is that we introduce features of cut-off physics in the Hamiltonian formulation of the dynamics through modified Poisson brackets. This approach is inspired by the so-called Polymer Quantum Mechanics [17], when expanded in the free cut-off parameter [18; 19]. This way, we are including in the gravitational collapse the ingredients for a repulsive-like gravity, similarly to what happens in cosmology when the emergence of a Big Bounce is recovered (as for example in the frameworks of Loop Quantum Cosmology [20; 21; 22; 23], of Group Field Theories [24; 25], of Polymer Cosmology [26; 27; 28; 29; 30; 31], or of other modified approaches to gravity [32; 33; 34; 35]). In the Newtonian limit, we adopt the representation of the spherical collapse proposed in [36], which consists of a Lagrangian description for the dynamics of the background configuration and of an Euler formulation of the behavior characterizing small perturbations. While the background dynamics is characterized by a pressureless free fall, when studying the perturbations we adopt a polytropic equation of state and the pressure contribution is relevant for the system stability.
In the general relativistic case, we adopt the Oppenheimer-Snyder model [37] in which the region external to the cloud is, according to the Birkhoff theorem [38; 39], a Schwarzschild spacetime, while the interior of the collapsing object is associated with a Robertson-Walker geometry with positive curvature. The two spacetime regions are then suitably matched on the boundary of the collapsing cloud. The stability of this collapse is then studied by considering the dynamics of the interior as the background, in agreement with the Lifshitz formulation of the cosmological perturbations [40; 41]. The equation of state for these perturbations has been taken in the isothermal form and the constant sound velocity is a free parameter, replacing the polytropic index of the Newtonian formulation. We stress that the assumption of a free falling background configuration, made both in the non-relativistic and relativistic cases, has been chosen in order to emphasize the effects of the repulsive gravity induced by cut-off physics, simply because they are not hidden here by the presence of a matter pressure contribution. The present analysis is characterized by two main relevant results. First, it is always possible to obtain an asymptotically static configuration of the background collapse in correspondence to a radius greater than the Schwarzschild value; second, for a suitable range of the free parameters of the perturbation dynamics, the background configuration results to be stable to small perturbations. Furthermore, it is worth stressing that these two outputs of our analysis remain valid in the limit of a very small (even sub-Planckian) value for the cut-off parameter that characterizes the modification of the Poisson brackets. This fact suggests that the presence of cut-off physics in the gravitational collapse constitutes an intrinsic modification of the gravitational force with respect to standard Newtonian or Einsteinian gravity and that the singular collapse is never recovered in the modified dynamics. In other words, the present analysis states that, if we include gravity modifications in the description of a spherical dust collapse, as expected in an effective quantum gravity scenario, the resulting dynamics is always associated with the existence of a physical (super-Schwarzschild) static and, for a given range of the free parameters of the model, stable configuration, i.e. what we could call a stable "dust star". These results, and in particular the capability of cut-off physics effects to determine a macroscopic modification, i.e. the stabilization of the dust collapse above the event horizon, open a new perspective in understanding the basic ingredients that fix the morphology and the final fate of astrophysical bodies. More specifically, once a real equation of state is considered and the star's radial inhomogeneity is properly accounted for, it could be possible to give constraints on the value of the cut-off parameter that could accommodate the observed violation of the Chandrasekhar or Tolman-Oppenheimer-Volkoff limits. The paper is organized as follows. In Section II we introduce the modified algebra as a deformation of the canonical commutation relations, which in the (semi)classical limit becomes a deformation of the Poisson brackets. In Section III we present the Hamiltonian formulation for the classical and modified Hunter model, i.e. the Newtonian model for dust collapse, and in Section IV we introduce perturbations on this background.
In Section V we present the Oppenheimer-Snyder model in its Hamiltonian formulation, and the modified dynamics obtained with the deformed algebra, while in Section VI we introduce perturbations in this relativistic framework. Section VII concludes the paper with a summary and some remarks. ## II Modified Heisenberg algebra In this section we introduce the modified Heisenberg algebra that we use to implement critical points in the classical evolution, thus removing the gravitational singularities. It is inspired by quantum gravity and quantum cosmological theories such as Polymer Quantum Mechanics (PQM) [17; 42] and the Generalised Uncertainty Principle (GUP) representation [43; 44; 45]. The algebra takes the form \[[\hat{q},\hat{p}]=i\,\left(1-\frac{\mu^{2}\ell_{P}^{2}\hat{p}^{2}}{\hbar^{2}} \right), \tag{1}\] where \(\hat{q}\) and \(\hat{p}\) are two generic conjugate operators and \(\mu\) is a real positive deformation parameter descending from the lattice spacing of PQM. In this commutator the necessary fundamental constants appear in order to have the deformation parameter \(\mu\) dimensionless, as is sometimes done in the GUP literature [46; 47]; for example, in this case we assumed that \(q\) and \(p\) are the standard position and momentum. Since the modified commutator depends on \(p\), this kind of algebra is usually studied in the momentum polarization, i.e. a representation where wavefunctions \(\Psi=\Psi(p)\) are functions of the momentum and the corresponding operator acts multiplicatively on them as \(\hat{p}\,\Psi(p)=f(p)\,\Psi(p)\). Through a simple procedure introduced in [48], by asking that the operator \(\hat{q}\) act simply differentially as in Standard Quantum Mechanics (SQM), we can find the modified action of the momentum operator as \[\frac{\mathrm{d}f}{\mathrm{d}p}=1-\frac{\mu^{2}\,\ell_{P}^{2}\,f^{2}}{\hbar^{2 }},\quad\frac{\hbar}{\ell_{P}}\,\frac{\mathrm{arctanh}(\frac{\mu\,\ell_{P}\,f }{\hbar})}{\mu}=p; \tag{2}\] therefore the action of the two fundamental operators is \[\hat{p}\,\Psi(p)=\frac{\hbar}{\ell_{P}}\,\frac{\tanh\left(\frac{\mu\,\ell_{P}\,p}{\hbar}\right)}{\mu}\,\Psi(p), \tag{3a}\] \[\hat{q}\,\Psi(p)=i\,\hbar\,\frac{\mathrm{d}}{\mathrm{d}p}\Psi(p). \tag{3b}\] It is trivial to verify that in the limit \(\mu\to 0\) these revert to the operators of SQM in the standard momentum polarization; the corrections that this algebra introduces are usually relevant at high energies, i.e. when the \(p^{2}\) term approaches unity. It is possible to implement these corrections also on a (semi)classical level through an effective theory; in this case, the modified algebra (1) becomes a rule for Poisson brackets: \[\{q,p\}=1-\frac{\mu^{2}\ell_{P}^{2}p^{2}}{\hbar^{2}}; \tag{4}\] \[\dot{q}=\{q,\mathcal{H}\}=\frac{\partial\mathcal{H}}{\partial p}\,\left(1-\frac{\mu^{2}\ell_{P}^{2}p^{2}}{\hbar^{2}}\right), \tag{5a}\] \[\dot{p}=\{p,\mathcal{H}\}=-\frac{\partial\mathcal{H}}{\partial q}\,\left(1-\frac{\mu^{2}\ell_{P}^{2}p^{2}}{\hbar^{2}}\right), \tag{5b}\] where \(\mathcal{H}(q,p)\) is a Hamiltonian function. As mentioned earlier, equations of motion of this kind usually have an additional critical point at \(p=\hbar/\mu\ell_{P}\) and are therefore used to avoid and remove singularities in cosmological models [19; 49]. We will use this semiclassical formulation to study the collapse of a dust cloud, both in a Newtonian and in a relativistic setting. 
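To make the effect of the deformed bracket (4) concrete, the following minimal numerical sketch (ours, not from the paper) integrates the modified Hamilton equations (5) for a toy Hamiltonian \(\mathcal{H}=p^{2}/2+q\) in Planck units (\(\hbar=\ell_{P}=1\)); the momentum saturates at the critical value instead of growing without bound:

```python
# Minimal sketch (not from the paper): effective dynamics generated by the
# deformed bracket {q, p} = 1 - mu^2 p^2 in Planck units (hbar = l_P = 1),
# for the toy Hamiltonian H = p^2/2 + q, so that
#   qdot = p (1 - mu^2 p^2),   pdot = -(1 - mu^2 p^2).
mu = 0.5

def deformation(p):
    return 1.0 - mu**2 * p**2

q, p, dt = 0.0, 0.0, 1e-4
for _ in range(200_000):                  # evolve to t = 20
    q += p * deformation(p) * dt
    p += -1.0 * deformation(p) * dt       # -dH/dq = -1

print(f"p(t=20) = {p:.6f}, critical point -1/mu = {-1 / mu}")
```

With standard brackets the momentum would decrease linearly forever; here it asymptotically freezes at \(p=-1/\mu\), which is the mechanism exploited below to stop the collapse.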
## III Newtonian gravitational collapse In this section we introduce the Newtonian description for the collapse of a dust cloud, first developed by Hunter [36], in its Hamiltonian formulation. The Hunter model consists of a homogeneous and isotropic sphere of dust, initially at rest, collapsing under the action of its own gravity; therefore the density \(\rho\) is a function of time only and the pressure gradients are identically zero (this will no longer hold once we introduce perturbations). Then we implement the modified algebra (4) to show how the singularity is removed, and we also derive some bounds on the deformation parameter \(\mu\) by requiring that the non-relativistic assumption holds. ### Hamiltonian Formulation of the Hunter Model Given spherical symmetry, it is enough to study the evolution of the radius \(r\) of the sphere; using the Newtonian gravitational potential, the Hamiltonian (actually the Hamiltonian per unit mass) turns out to be \[{\cal H}=\frac{p^{2}}{2}-\frac{GM}{r}, \tag{6}\] where \(p\) is the momentum conjugate to \(r\), \(G\) is Newton's gravitational constant, and \(M\) is the total mass of the cloud. The Hamilton equations are \[\dot{r}=\frac{\partial{\cal H}}{\partial p}=p,\qquad\dot{p}=-\frac{\partial{ \cal H}}{\partial r}=-\frac{GM}{r^{2}}; \tag{7}\] dividing the second equation by the first we obtain a differential equation for \(p(r)\) that can be integrated with the initial conditions \(r(t=0)=r_{0}\) and \(\dot{r}(t=0)=p(t=0)=0\), where \(r_{0}\) is the initial radius of the cloud: \[\frac{\partial p}{\partial r}=\frac{\dot{p}}{\dot{r}}=-\frac{GM}{r^{2}p}, \qquad p(r)=\pm\sqrt{2GM\left(\frac{1}{r}-\frac{1}{r_{0}}\right)}. \tag{8}\] Then, substituting this in the equation for \(\dot{r}\) with the minus sign (since in a collapse \(\dot{r}<0\)) and defining \(a=r/r_{0}\), we can obtain a solution for \(a(t)\) in implicit form: \[\dot{a}=-\sqrt{\frac{2GM}{r_{0}^{3}}\,\frac{1-a}{a}}\,, \tag{9}\] \[\sqrt{\frac{2GM}{r_{0}^{3}}}\,\,t=\sqrt{a(1-a)}\,+\mbox{acos}\,\sqrt{a}. \tag{10}\] By setting \(a(t_{0})=0\), we can find the time of collapse \(t_{0}\) to be \[t_{0}=\frac{\pi}{2}\,\sqrt{\frac{r_{0}^{3}}{2GM}}. \tag{11}\] The solution is shown in Figure 1 compared with the modified non-singular solution that we will now derive. ### Modified Non-Singular Collapse To obtain the modified evolution, we start from the same Hamiltonian (6) but derive the equations of motion through the modified Poisson brackets (4): \[\dot{r}=p\left(1-\frac{\mu^{2}p^{2}}{c^{2}}\right),\qquad\dot{p}=-\frac{GM}{r^{2}}\left(1- \frac{\mu^{2}p^{2}}{c^{2}}\right), \tag{12}\] where \(p\) has the dimensions of a velocity and therefore we inserted the speed of light \(c\) to keep \(\mu\) dimensionless. Now, dividing the second equation by the first we obtain the same relation (8), and substituting we get a differential equation for \(a(t)\) of the form \[\dot{a}=-\sqrt{\frac{2GM}{r_{0}^{3}}\,\frac{1-a}{a}}\,\,\left(1-\frac{2GM\mu^ {2}}{r_{0}c^{2}}\,\frac{1-a}{a}\right). \tag{13}\] We see that the modified algebra has introduced a critical point: we find the value \(a_{\infty}<1\) such that \(\dot{a}=0\) as \[1-c_{\mu}\,\frac{1-a_{\infty}}{a_{\infty}}=0,\qquad a_{\infty}=\frac{c_{\mu}}{ 1+c_{\mu}}, \tag{14}\] where we defined \(c_{\mu}=2GM\mu^{2}/r_{0}c^{2}\) to shorten the notation. 
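Before writing the implicit solution, the plateau at \(a_{\infty}\) can already be seen numerically; the following sketch (ours, in units \(2GM/r_{0}^{3}=1\)) integrates equation (13) and compares the late-time radius with (14):

```python
import math

# Sketch (units 2GM/r0^3 = 1): forward-Euler integration of the modified
# equation (13),  adot = -sqrt((1-a)/a) * (1 - c_mu (1-a)/a),
# compared with the critical point a_inf = c_mu/(1 + c_mu), eq. (14).
c_mu = 0.25
a_inf = c_mu / (1 + c_mu)

a, dt = 1.0 - 1e-9, 1e-4
for _ in range(400_000):                  # evolve to t = 40
    a -= math.sqrt((1 - a) / a) * (1 - c_mu * (1 - a) / a) * dt

print(f"a(t=40) = {a:.6f}, predicted a_inf = {a_inf:.6f}")
```

The derivative vanishes smoothly as \(a\to a_{\infty}\), so the numerical trajectory freezes just above the predicted value, in agreement with the exact solution given next.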
The solution for \(a(t)\) can again be found only in implicit form: \[\sqrt{\frac{2GM}{r_{0}^{3}}}\,\,t=\frac{\sqrt{a(1-a)}}{1+c_{\mu}}+\frac{1+3c_ {\mu}}{(1+c_{\mu})^{2}}\,\mbox{acos}\,\sqrt{a}\,+\] \[+\frac{2c_{\mu}^{\frac{3}{2}}}{(1+c_{\mu})^{\frac{3}{2}}}\sqrt{1+2b_{-}}\,(1+ b_{+})\mbox{atanh}\left(\frac{\sqrt{a}-1}{\sqrt{(1-a)(1+2b_{-})}}\right)+\] \[-\frac{2c_{\mu}^{\frac{3}{2}}}{(1+c_{\mu})^{\frac{3}{2}}}\sqrt{1+2b_{+}}\,(1+b_{-}) \mbox{atanh}\left(\frac{\sqrt{a}-1}{\sqrt{(1-a)(1+2b_{+})}}\right), \tag{15}\] where \(b_{\pm}=c_{\mu}\pm\sqrt{c_{\mu}(1+c_{\mu})}\). It is trivial to see that in the limit \(\mu\to 0\) we have \(c_{\mu},b_{\pm}\to 0\) and the standard solution (10) is recovered; it is also easy to verify that, when \(a=a_{\infty}\), the arguments of both inverse hyperbolic tangents become 1 and the right-hand side diverges, meaning that the inverse function \(a(t)\) has a horizontal asymptote such that \(a\to a_{\infty}\) when \(t\rightarrow\infty\). In Figure 1 the classical and the modified solutions are compared. At this point we can find some constraints on the deformation parameter \(\mu\) by requiring that the Newtonian description be valid. In particular, we impose that the minimum radius be much greater than the Schwarzschild radius \(r_{S}=2GM/c^{2}\): the condition \(a_{\infty}\gg a_{S}\) implies \[\frac{c_{\mu}}{1+c_{\mu}}\gg a_{S}=\frac{r_{S}}{r_{0}},\quad\mu\gg\sqrt{\frac {1}{1-a_{S}}} \tag{16}\] (note that \(c_{\mu}=a_{S}\mu^{2}\)). Therefore we find that for a cloud with initial mass and radius equal to those of our Sun we have \[\mu\gg 1, \tag{17}\] which was expected since for the Sun \(a_{S}\sim 10^{-6}\) and the square root is basically 1; this relation may of course vary for different values of initial radius and mass, but even for more compact objects with \(a_{S}\sim 2/5\) we would have \(\mu\gg 1.3\). As a secondary check, we require that the maximum speed reached during the collapse be non-relativistic. The maximum speed is found by setting \(\ddot{r}=0\) and substituting in \(\dot{r}\): we find \[\mu\gg\frac{2}{3\sqrt{3}}\sim 0.4, \tag{18}\] which is still slightly smaller than 1. Note how this constraint, differently from the previous one, does not depend on any parameter. Therefore, we can conclude that, by taking the deformation parameter just one or two orders of magnitude greater than 1, we are assured that the Newtonian dynamics is still a good description for this model and that the collapse stops before the formation of a horizon. ## IV Non-relativistic perturbations Let us now study the behaviour of perturbations in the Newtonian description. We will see that, for the non-singular case, a Jeans-like length naturally emerges. Note that, while the background configuration is determined by including cut-off physics effects, the evolution of the perturbations follows standard dynamics; this choice is justified by the observation that, while the background evolution is non-perturbatively sensitive to the cut-off physics, the smallness of the perturbations ensures that their dynamics can be satisfactorily described via standard gravity effects. Still following Hunter [36], for the description of perturbations it is better to use an Eulerian representation. 
The system is then described by the following quantities: \[\mathbf{v}=(r_{0}\dot{a},0,0), \tag{19a}\] \[\rho=\rho_{0}\,a^{-3}, \tag{19b}\] \[\Phi=-2G\pi\rho r_{0}^{2}(1-\frac{a^{2}}{3}), \tag{19c}\] where \(\mathbf{v}\) is the velocity vector, \(\rho\) and \(\rho_{0}\) are the density of the cloud and its initial value, and \(\Phi\) is the gravitational potential. These quantities are linked by the continuity, Euler and Poisson equations [40]: \[\dot{\rho}+\mathbf{\nabla}\mathbf{\cdot}(\rho\mathbf{v})=0, \tag{20a}\] \[\dot{\mathbf{v}}+\left(\mathbf{v}\mathbf{\cdot}\mathbf{\nabla}\right)\mathbf{v}=-\mathbf{\nabla}\Phi-\frac{\mathbf{\nabla}P}{\rho}, \tag{20b}\] \[\nabla^{2}\Phi=4\pi G\rho, \tag{20c}\] where \(P=P(\rho)\) is the pressure, which depends only on the density due to the barotropic assumption. Now we perturb the quantities (19) to first order (higher-order corrections were investigated by Hunter later in [50; 51]) as \(\mathbf{v}=\overline{\mathbf{v}}+\delta\mathbf{v}\), \(\rho=\overline{\rho}+\delta\rho\), \(\Phi=\overline{\Phi}+\delta\Phi\), where the unperturbed quantities (those with the overline) already satisfy equations (20). By substituting into the Euler equation (20b) and taking the curl, we obtain an equation for the vorticity \(\delta\mathbf{w}=\mathbf{\nabla}\mathbf{\times}\delta\mathbf{v}\): \[\dot{\delta\mathbf{w}}=-\mathbf{\nabla}\mathbf{\times}(\delta\mathbf{w}\mathbf{\times} \mathbf{v}), \tag{21}\] with solution \[\delta\mathbf{w}=\left(\frac{w_{r}}{a^{2}}+W\quad,\quad\frac{w_{\theta}}{a^{2} }\quad,\quad\frac{w_{\varphi}}{a^{2}}\right), \tag{22}\] where \(w_{r}\), \(w_{\theta}\), \(w_{\varphi}\) and \(W\) are arbitrary functions in spherical coordinates which must satisfy \(\mathbf{\nabla}\mathbf{\cdot}\delta\mathbf{w}=0\) (the divergence of a curl is identically zero in any system of coordinates); note that \(W\) can be ignored since it represents a static distribution. Substituting this result back in equations (20), eliminating \(\delta\Phi\) and using the polytropic relation \(P=\kappa\rho^{\gamma}\), we obtain a term involving the Laplacian of \(\delta\rho\) (for more details see [36; 52]); in order to get rid of it, we can separate the variables as \[\delta\rho(t,r_{0}a,\theta,\varphi)=\delta\varrho(t)\,\psi(r_{0}a,\theta,\varphi), \tag{23}\] and then exploit the spherical symmetry of the problem by choosing \(\psi\) to be an eigenfunction of the Laplacian operator: \[\psi_{klm}(r_{0}a,\theta,\varphi)=\Big{(}A_{lm}\,j_{l}(kr_{0}a)+B_{lm}\,y_{l }(kr_{0}a)\Big{)}Y_{lm}(\theta,\varphi), \tag{24}\] where \(j_{l}\) and \(y_{l}\) are spherical Bessel functions of the first and second kind, \(A_{lm}\) and \(B_{lm}\) are constant coefficients, and \(Y_{lm}\) are spherical harmonics; this way we can write \(\nabla^{2}\delta\rho=-k^{2}\delta\rho\), simplify the spatial part \(\psi\) and obtain a differential equation just for the time-dependent part of the density perturbation \(\delta\varrho(t)\): \[a^{3}\ddot{\delta\varrho}+8a^{2}\dot{a}\dot{\delta\varrho}+\left(-4\pi\rho_{0} G+k^{2}v_{s}^{2}a+12a\dot{a}^{2}+3a^{2}\ddot{a}\right)\delta\varrho=0, \tag{25}\] where \(v_{s}^{2}=\partial P/\partial\rho=\kappa\gamma\rho^{\gamma-1}\). Until now, no reference to any solution was made.
Figure 1: Comparison between the classical collapse (dashed black line) and the modified non-singular evolution (red continuous line) for generic values of the parameters; the collapse time \(t_{0}\) and the minimum value \(a_{\infty}\) are highlighted by faded grey lines. 
At this point we can insert the different expressions for \(a(t)\) and its derivatives to obtain the solution \(\delta\varrho(t)\) for the two cases. ### The Singular Classical Case In the standard case we have \[\dot{a}=-\sqrt{\frac{2GM}{r_{0}^{3}}\frac{1-a}{a}}\,,\quad\ddot{a}=-\frac{GM}{r_{0 }^{3}a^{2}}, \tag{26}\] so the perturbation equation (25) becomes \[a^{3}\ddot{\delta\varrho}+8a^{2}\dot{a}\dot{\delta\varrho}+\left(\frac{6GM}{r _{0}^{3}}(3-4a)+k^{2}v_{0}^{2}a^{4-3\gamma}\right)\delta\varrho=0, \tag{27}\] where \(v_{0}^{2}=\kappa\gamma\rho_{0}^{\gamma-1}\). Now, since we are interested in the asymptotic behaviour near the singularity, we can take the solution (10) and perform an asymptotic expansion for \(t\to t_{0}\), \(a\to 0\), where \(t_{0}\) is the collapse time given by equation (11), obtaining the explicit expression \[a^{\rm asymp}(t)=\left(\frac{3}{4}\pi\right)^{\frac{2}{3}}\left(1-\frac{t}{t_{ 0}}\right)^{\frac{2}{3}}. \tag{28}\] After some manipulation, substituting this in equation (27) yields \[y^{2}\frac{{\rm d}^{2}\delta\varrho}{{\rm d}y^{2}}+\frac{16}{3}y\frac{{\rm d} \delta\varrho}{{\rm d}y}+\left(4+\frac{3^{\frac{2}{3}-2\gamma}\pi^{\frac{8}{3} -2\gamma}}{2^{\frac{13}{3}-4\gamma}}\frac{k^{2}v_{0}^{2}r_{0}^{3}}{GM}y^{ \frac{8}{3}-2\gamma}\right)\delta\varrho=0, \tag{29}\] where \(y=1-t/t_{0}\), so that the limit \(t\to t_{0}\) corresponds to \(y\to 0\); the general solution is \[\delta\varrho(y)=A_{+}\,f_{+}(y)+A_{-}\,f_{-}(y), \tag{30}\] \[f_{\pm}(y)=\frac{J_{\frac{\pm 5}{8-6\gamma}}\left((\alpha y)^{\frac{4}{3}- \gamma}\right)}{y^{\frac{13}{6}}}, \tag{31}\] where \(\alpha\) is a dimensionless constant containing the parameters \(v_{0}\), \(k\), \(r_{0}\) and \(M\), and \(J_{n}\) is the Bessel function of the first kind. Studying the asymptotic behaviour, for \(1\leq\gamma<4/3\) we have \[f_{+}\sim y^{-3},\quad f_{-}\sim y^{-\frac{4}{3}}, \tag{32}\] while for \(4/3<\gamma\leq 5/3\) we have \[f_{\pm}\sim\frac{\cos\Bigl{(}(\alpha y)^{\frac{4}{3}-\gamma}\Bigr{)}}{y^{ \frac{17}{6}-\frac{\gamma}{2}}}; \tag{33}\] remembering that in the asymptotic regime we have \(\overline{\rho}\propto a^{-3}\propto y^{-2}\), the density contrast \(\delta\varrho/\overline{\rho}\) will behave in the following ways: \[\frac{\delta\varrho}{\overline{\rho}}\propto\frac{1}{y}\quad\mbox{for}\quad 1 \leq\gamma<\frac{4}{3}, \tag{34a}\] \[\frac{\delta\varrho}{\overline{\rho}}\propto\frac{1}{y^{\frac{13}{6}}}\quad \mbox{for}\quad\gamma=\frac{4}{3}, \tag{34b}\] \[\frac{\delta\varrho}{\overline{\rho}}\propto\frac{\cos\Bigl{(}(\alpha y)^{\frac{4 }{3}-\gamma}\Bigr{)}}{y^{\frac{5}{6}-\frac{\gamma}{2}}}\quad\mbox{for}\quad \frac{4}{3}<\gamma<\frac{5}{3}, \tag{34c}\] \[\frac{\delta\varrho}{\overline{\rho}}\propto\cos\Bigl{(}(\alpha y)^{-\frac{1}{3}}\Bigr{)}\quad\mbox{for}\quad\gamma=\frac{5}{3}. \tag{34d}\] Therefore we conclude that, except for the last case, where the frequency of oscillations increases but the amplitude remains constant, the perturbations collapse faster than the background and a fragmentation process is favoured [53]. The behaviour of \(\delta\varrho/\overline{\rho}\) for different values of \(\gamma\) is depicted in Figure 2. 
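Incidentally, the expansion (28) used above can be checked directly against the implicit solution (10); the sketch below (ours, again in units \(2GM/r_{0}^{3}=1\), so \(t_{0}=\pi/2\)) inverts (10) by bisection near \(t_{0}\) and compares with the power law:

```python
import math

# Sketch (units 2GM/r0^3 = 1, t0 = pi/2): invert the implicit solution (10)
# by bisection near the singularity and compare with the asymptotic
# expansion (28), a ~ (3*pi/4)^(2/3) * y^(2/3), with y = 1 - t/t0.
def f(a):                                  # right-hand side of (10)
    return math.sqrt(a * (1 - a)) + math.acos(math.sqrt(a))

def a_exact(y):
    t, lo, hi = (math.pi / 2) * (1 - y), 1e-15, 1.0 - 1e-15
    for _ in range(100):                   # f is decreasing in a
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > t else (lo, mid)
    return 0.5 * (lo + hi)

for y in (1e-2, 1e-4, 1e-6):
    print(y, a_exact(y) / ((3 * math.pi / 4)**(2 / 3) * y**(2 / 3)))  # -> 1
```

The printed ratios tend to 1 as \(y\to 0\), confirming the leading behaviour used in (29).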
Figure 2: The asymptotic behaviour of the density contrast \(\delta\varrho/\overline{\rho}\) for different values of the polytropic index in the standard singular case: \(1\leq\gamma<\frac{4}{3}\) (blue dot-dashed line), \(\gamma=\frac{4}{3}\) (purple dashed line), \(\frac{4}{3}<\gamma<\frac{5}{3}\) (black dotted line), \(\gamma=\frac{5}{3}\) (red continuous line). The parameters have been chosen to have \(\delta\varrho/\overline{\rho}|_{t=0}\approx 10^{-1}\).
### The Modified Non-Singular Case In the modified model, the expressions for the derivatives of \(a\) are different: \[\dot{a}=-\sqrt{\frac{2GM}{r_{0}^{3}}\,\frac{1-a}{a}}\ \left(1-c_{\mu}\,\frac{1-a}{a} \right), \tag{35a}\] \[\ddot{a}=-\frac{GM}{r_{0}^{3}a^{2}}\left(1-c_{\mu}\,\frac{1-a}{a}\right)\left(1-3 \,c_{\mu}\,\frac{1-a}{a}\right), \tag{35b}\] so the differential equation for the amplitude \(\delta\varrho\) of the perturbations is more complicated; however, we are aided by the fact that the asymptotic behaviour at lowest order is just \(a(t)\to a_{\infty}\), \(\dot{a},\ddot{a}\to 0\). Therefore the perturbation equation (25) becomes simply \[a_{\infty}^{3}\ddot{\delta\varrho}+\left(k^{2}v_{0}^{2}a_{\infty}^{4-3\gamma} -\frac{3c^{2}a_{\infty}}{2(1-a_{\infty})r_{0}^{2}\mu^{2}}\right)\delta\varrho=0, \tag{36}\] and the solution is a simple sum of two exponential functions: \[\delta\varrho(t)=B_{+}e^{+\lambda t}+B_{-}e^{-\lambda t}, \tag{37}\] \[\lambda=\sqrt{\frac{3c^{2}}{2(1-a_{\infty})a_{\infty}^{2}r_{0}^{2}\mu^{2}}-k^{ 2}v_{0}^{2}a_{\infty}^{1-3\gamma}}; \tag{38}\] therefore the behaviour of perturbations depends entirely on the sign of the quantity in the square root. First of all, we can compute the value of \(v_{0}\) (corresponding to the speed of sound at the start of the collapse) using a quasi-static approximation: \[-\frac{GM^{2}}{r_{0}^{2}}+4\pi r_{0}^{2}P_{0}=0, \tag{39}\] where \(P_{0}\) is the pressure at the start of the collapse, thus obtaining \[v_{0}^{2}=\kappa\gamma\rho_{0}^{\gamma-1}=\gamma\frac{P_{0}}{\rho_{0}}=\frac{ \gamma}{3}\,\frac{GM}{r_{0}}; \tag{40}\] then, from the expression of \(a_{\infty}\) we can rewrite the value of \(\lambda\) as \[\lambda=\sqrt{\frac{3GM}{r_{0}^{3}a_{\infty}^{3}}\left(1-\frac{\gamma}{9}\,k^ {2}\,r_{0}^{2}\,a_{\infty}^{4-3\gamma}\right)}\,. \tag{41}\] Now, when \(\lambda=0\), we obtain a pivot scale \(k_{0}\) of the form \[k_{0}=\frac{3}{r_{0}}\,\sqrt{\frac{a_{\infty}^{3\gamma-4}}{\gamma}} \tag{42}\] such that we can rewrite \(\lambda\) as \[\lambda=\sqrt{\frac{3GM}{r_{0}^{3}a_{\infty}^{3}}\left(1-\frac{k^{2}}{k_{0}^{ 2}}\right)}\,. \tag{43}\] Therefore, for \(k<k_{0}\), we have \(\lambda^{2}>0\), so \(\delta\varrho\) and \(\delta\varrho/\overline{\rho}\) diverge, while, for \(k>k_{0}\), \(\lambda^{2}<0\), so \(\delta\varrho\) oscillates with constant amplitude and the density contrast \(\delta\varrho/\overline{\rho}\) is ultimately damped to zero. This translates to a Jeans-like length scale \(\ell_{0}=2\pi/k_{0}\) of the form \[\ell_{0}=\frac{2\pi r_{0}}{3}\sqrt{\frac{\gamma}{a_{\infty}^{3\gamma-4}}}, \tag{44}\] above which a perturbation diverges and the fragmentation process is initiated, while below it the perturbation is damped and erased. Note that, since \(0<a_{\infty}<1\) and it is constant, for each value of the polytropic parameter \(\gamma\) we can find both behaviours depending only on the initial scale of the perturbation. 
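A quick numerical illustration of this interplay (our own sketch, using equations (14) and (44) with illustrative values \(a_{S}=10^{-4}\) and \(\mu=50\)):

```python
import math

# Sketch: asymptotic radius (14) and Jeans-like scale (44),
# with c_mu = a_S * mu^2; illustrative values a_S = 1e-4, mu = 50.
def ell0_over_r0(a_S, mu, gamma):
    c_mu = a_S * mu**2
    a_inf = c_mu / (1 + c_mu)
    return a_inf, (2 * math.pi / 3) * math.sqrt(gamma * a_inf**(4 - 3 * gamma))

for gamma in (1.0, 4 / 3, 5 / 3):
    a_inf, ratio = ell0_over_r0(1e-4, 50.0, gamma)
    print(f"gamma = {gamma:.3f}: a_inf = {a_inf:.2f}, ell0/r0 = {ratio:.2f},"
          f" fragmentation possible: {ratio < 1}")
```

With these values only \(\gamma=1\) gives \(\ell_{0}<r_{0}\), anticipating the discussion of Figure 3 below.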
However, for some values of \(\gamma\) it turns out that the scale \(\ell_{0}\) is bigger than the initial radius of the cloud, so all perturbations will disappear. Figure 3 shows the length scale \(\ell_{0}\) as a function of the deformation parameter \(\mu\) for different values of \(\gamma\) (the other parameters are again those of our Sun): we see that in order to have \(\ell_{0}<r_{0}\) and allow the fragmentation process, we must first of all have \(\gamma<4/3\) since, for \(\gamma=4/3\), \(\ell_{0}\) does not depend on \(a_{\infty}\) (and therefore on \(\mu\)) and is a constant already greater than \(r_{0}\); this upper limit is further reduced by the condition \(r_{\infty}\gg r_{S}\), and therefore we must have \(1\leq\gamma<\gamma_{1}<4/3\), where \(\gamma_{1}\) is such that \(\ell_{0}=r_{0}\) at the value of \(\mu\) for which \(r_{\infty}=r_{S}\).
Figure 3: The Jeans-like length scale \(\ell_{0}\) as a function of the deformation parameter \(\mu\) for different values of \(\gamma\) in the modified non-singular model. From top to bottom: \(\gamma=\frac{5}{3}\) (thick purple dashed line), \(\gamma=\frac{4}{3}\) (continuous blue line) for which \(\ell_{0}\) is constant, \(\gamma=\gamma_{1}\) (thin red dashed line) that crosses the point \((1,1)\), and \(\gamma=1\) (black dot-dashed line); the faded gray lines correspond to \(\ell_{0}=r_{0}\) and \(\mu=1\).
## V Relativistic gravitational collapse In this section we study the collapse from a general relativistic point of view. Therefore the starting point will be the Oppenheimer-Snyder collapse model [37; 54], for which we will present the Hamiltonian formulation and then implement on it the modified algebra (4). ### The Oppenheimer-Snyder Model and its Hamiltonian formulation The Oppenheimer-Snyder (OS) model is the simplest and most widely known model of gravitational collapse. Its importance lies in highlighting the need to consider two different observers, one stationary outside the collapsing matter and one comoving with it. The original paper [37] starts from the exterior Schwarzschild metric in the standard form \[ds^{2}=(1-\frac{r_{S}}{R})c^{2}dT^{2}-\frac{dR^{2}}{1-\frac{r_{S}}{R}}-R^{2}d \theta^{2}-R^{2}\sin^{2}(\theta)d\varphi^{2}, \tag{45}\] where \(T=T(t,r)\) and \(R=R(t,r)\) are the external variables as functions of the internal \(t\), \(r\); then, by requiring spherical symmetry and homogeneity and implementing matching conditions on the surface \(r_{0}\), it finds the equations that the internal metric must satisfy and computes the collapse time as seen by a comoving observer. The internal metric turns out to be that of a closed FLRW model [39]: \[ds^{2}=c^{2}dt^{2}-a^{2}(t)\left(\frac{dr^{2}}{1-Kr^{2}}+r^{2}d\theta^{2}+r^{2 }\sin^{2}(\theta)d\varphi^{2}\right), \tag{46}\] where \(a(t)\) is the scale factor and \(K>0\) is the positive spatial curvature; while in the actual FLRW model the latter can always be set to \(\pm 1\) by rescaling the variables, here it can be linked to the initial parameters of the cloud both through physical arguments [55] and through a comparison of the solutions, as we will see shortly. 
To obtain the Hamiltonian formulation for the OS model one starts with the ADM-reduced action \(S\) for spherically symmetric spacetimes [56; 57; 58] filled with Brown-Kuchar dust [59]; then, by implementing matching conditions between the Schwarzschild (45) and the FLRW (46) metrics and performing a partial symmetry reduction, the Hamiltonian gets split in three different contributions: \[\begin{split} S&=\int dt\,(p_{a}\dot{a}+P_{\tau}\dot {\tau}-N\mathcal{H}-M_{+}\dot{T}_{+})+\\ &+\int dt\int_{r_{0}}^{\infty}\,dr(P_{R}\dot{R}+P_{L}\dot{L}-N^{ 0}\mathcal{H}_{0}-N^{r}\mathcal{H}_{r}),\end{split} \tag{47}\] \[\mathcal{H}_{0}=\frac{P_{L}^{2}L}{2R^{2}}-\frac{P_{R}P_{L}}{R}+\frac{R^{ \prime 2}+2RR^{\prime\prime}}{2L}-\frac{L^{\prime}RR^{\prime}}{L^{2}}-\frac{L}{2}, \tag{48a}\] \[\mathcal{H}_{r}=P_{R}R^{\prime}-P_{L}^{\prime}L,\] (48b) \[\mathcal{H}=-\frac{\chi}{6V_{S}}\,\frac{p_{a}^{2}}{a}-\frac{3V_{S}}{2\chi}\,Kc^ {2}a+P_{\tau}, \tag{48c}\] where \(\tau\) is dust proper time, \(a\) is the scale factor for the internal metric, \(L\) and \(R\) are the functions appearing in the spherically symmetric external metric (and can be found by comparison with (45)), \(N\), \(N^{0}\) and \(N^{r}\) are Lagrange multipliers, the \(P_{i}\) are the momenta conjugate to their respective variables, \(M_{+}\dot{T}_{+}\) is a boundary term containing the ADM mass and the Schwarzschild-Killing time at asymptotic infinity, \(\mathcal{H}_{0}\) and \(\mathcal{H}_{r}\) are the super-Hamiltonian and super-momentum for the exterior of the dust cloud, and \(\mathcal{H}\) is the Hamiltonian for the interior (for more information on the derivation of the action and the Hamiltonians see [60; 61; 62; 63]). We are interested of course in the internal Hamiltonian (48c): it contains the scale factor \(a\), its conjugate momentum \(p_{a}\), the spatial curvature \(K\), the momentum conjugate to dust proper time \(P_{\tau}\) which contains the energy density of the cloud, Einstein's constant \(\chi=8\pi G/c^{2}\) and the internal volume \(V_{S}\) of the sphere given by \[\begin{split} V_{S}=&\int_{0}^{r_{0}}dr\,\frac{4 \pi\,r^{2}}{\sqrt{1-Kr^{2}}}=\\ =&\frac{4\pi}{2K}\left(\frac{\text{asin}\Big{(}r_{0} \sqrt{K}\Big{)}}{\sqrt{K}}-r_{0}\sqrt{1-Kr_{0}^{2}}\right).\end{split} \tag{49}\] Had we started directly from the FLRW model we would have obtained a very similar Hamiltonian, with a (constant in the case of pressureless dust) energy density term instead of the (still constant) \(P_{\tau}\). For the classical evolution, the equations of motion are \[\dot{a}=-\frac{\chi}{3V_{S}}\,\frac{p_{a}}{a}, \tag{50a}\] \[\dot{p_{a}}=-\frac{\chi}{6V_{S}}\,\frac{p_{a}^{2}}{a^{2}}+\frac{3Kc^{2}V_{S}}{ 2\chi}; \tag{50b}\] as in the non-relativistic case, it is useful to compute \(\dot{p_{a}}/\dot{a}\) in order to have an easily solvable differential equation for \(p_{a}(a)\): \[\frac{\partial p_{a}}{\partial a}=\frac{p_{a}}{2a}-\frac{9}{2}\,\frac{Kc^{2}V_ {S}^{2}}{\chi^{2}}\,\frac{a}{p_{a}},\quad p_{a}(a)=\frac{3cV_{S}}{\chi}\sqrt{a(1 -a)K}\,, \tag{51}\] where we used the standard initial conditions \(a(t=0)=1\) and \(p_{a}(t=0)=0\). 
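The closed form for \(p_{a}(a)\) in (51) can be verified symbolically; a small sketch (ours):

```python
import sympy as sp

# Sketch: symbolic check that p_a(a) = (3 c V_S / chi) sqrt(a (1 - a) K)
# solves dp_a/da = p_a/(2a) - (9/2) (K c^2 V_S^2 / chi^2) a / p_a, eq. (51).
a = sp.symbols('a', positive=True)
K, c, V_S, chi = sp.symbols('K c V_S chi', positive=True)
p_a = 3 * c * V_S / chi * sp.sqrt(a * (1 - a) * K)

residual = sp.diff(p_a, a) - (p_a / (2 * a)
           - sp.Rational(9, 2) * K * c**2 * V_S**2 / chi**2 * a / p_a)
print(sp.simplify(residual))   # expected output: 0
```

The two terms on the right combine to \((1-2a)\) over the common square root, matching the derivative of \(p_{a}\), which is what the symbolic simplification confirms.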
Substituting this in equation (50a), we obtain the same differential equation (9) as in the standard case, but with a different constant: \[\dot{a}=-\sqrt{K\,c^{2}\,\frac{1-a}{a}}\,; \tag{52}\] therefore we already know the solution, and furthermore we can identify the curvature as a function of the initial parameters of the cloud: \[\sqrt{a(1-a)}\,\,+\text{acos}\,\sqrt{a}=\sqrt{K}\,\,c\,t, \tag{53}\] \[K=\frac{2GM}{c^{2}r_{0}^{3}}=\frac{r_{S}}{r_{0}^{3}}; \tag{54}\] this is the same identification found from physical arguments in [55], where the authors find a link between the Schwarzschild and the FLRW metrics. With this identification we can also rewrite the expression of \(V_{S}\) as \[V_{S}=\frac{2\pi\,r_{0}^{3}}{\sqrt{a_{S}^{3}}}\left(\text{asin}\,\sqrt{a_{S}}- \sqrt{a_{S}(1-a_{S})}\right), \tag{55}\] where we have again defined \(a_{S}=r_{S}/r_{0}\). The solution is shown later in Figure 4, compared with the modified solution which we will now derive. ### Non-Singular Relativistic Collapse To find the modified dynamics, we again start from the same Hamiltonian (48c) but use the modified algebra (4). The new equations of motion then are \[\dot{a}=-\frac{\chi}{3V_{S}}\,\frac{p_{a}}{a}\left(1-\frac{\mu^{2}p_{a}^{2}}{\hbar^{ 2}}\right), \tag{56a}\] \[\dot{p_{a}}=\left(-\frac{\chi}{6V_{S}}\,\frac{p_{a}^{2}}{a^{2}}+\frac{3Kc^{2}V _{S}}{2\chi}\right)\left(1-\frac{\mu^{2}p_{a}^{2}}{\hbar^{2}}\right), \tag{56b}\] where \(p_{a}\) has the dimensions of an action, so we introduced the Planck constant to still have \(\mu\) dimensionless; dividing the second equation by the first we obtain the same relation (51), so the final differential equation for \(a(t)\) becomes \[\dot{a}=-\sqrt{K\,c^{2}\,\frac{1-a}{a}}\,\Big{(}1-g_{\mu}(1-a)a\Big{)}, \tag{57}\] where we defined \(g_{\mu}=K(3\mu cV_{S}/\hbar\chi)^{2}\). Already from here we see that there is still a critical point, but its expression is different from the Newtonian case: \[1-g_{\mu}(1-a_{\infty})a_{\infty}=0,\quad a_{\infty}=\frac{1}{2}\pm\frac{1}{2 }\sqrt{1-\frac{4}{g_{\mu}}}. \tag{58}\] First of all we see that, in order for \(a_{\infty}\) to be real, we must have \(g_{\mu}\geq 4\), otherwise we will still have the collapse \(a\to 0\), and this will imply a lower limit on the deformation parameter \(\mu\) as we will see later; secondly, when that condition is satisfied, we will always have \(a_{\infty}\geq 1/2\) (the solution with the minus sign will never be reached in this model, but only by starting below it with a positive derivative). Now, imposing the condition \(a_{\infty}\gg a_{S}\), we obtain the following lower limits for \(\mu\): \[\mu\gg\frac{8\pi\hbar\sqrt{r_{S}r_{0}^{3}}}{3cMV_{S}}=\frac{2\hbar\sqrt{K}}{c \rho_{0}V_{S}}\quad\text{for}\quad r_{S}<\frac{r_{0}}{2}, \tag{59a}\] \[\mu\gg\frac{4\pi\hbar r_{0}^{\frac{5}{2}}}{3cMV_{S}\sqrt{r_{0}-r_{S}}}=\frac{ \hbar\sqrt{K}}{c\rho_{0}V_{S}\sqrt{a_{S}(1-a_{S})}}\quad\text{for}\quad r_{S} >\frac{r_{0}}{2}; \tag{59b}\] note that when \(r_{S}<r_{0}/2\), we already have \(a_{\infty}>a_{S}\) by construction; indeed, the constraint (59a) actually corresponds to the reality condition \(g_{\mu}>4\). When we insert the parameters of our Sun the two conditions yield the following lower limits for the parameter \(\mu\): \[\mu\gg 10^{-84}\quad\text{for}\quad r_{S}<\frac{r_{0}}{2}, \tag{60a}\] \[\mu\gg 2.5\times 10^{-82}\quad\text{for}\quad r_{S}>\frac{r_{0}}{2}. \tag{60b}\] These small values are due to \(g_{\mu}\) being a very large number. 
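Both the size of \(g_{\mu}\) and the approach to \(a_{\infty}\) are easy to check numerically; the sketch below (ours) evaluates the reality bound \(g_{\mu}\geq 4\) for solar parameters and then integrates equation (57) in units \(Kc^{2}=1\):

```python
import math

# Sketch, part 1: reality bound g_mu >= 4 for a solar-mass, solar-radius
# cloud, using K = r_S/r0^3 (54), chi = 8 pi G/c^2 and V_S from (55).
G, c, hbar = 6.674e-11, 2.998e8, 1.055e-34
M, r0 = 1.989e30, 6.957e8
r_S = 2 * G * M / c**2
a_S, K, chi = r_S / r0, r_S / r0**3, 8 * math.pi * G / c**2
V_S = (2 * math.pi * r0**3 / a_S**1.5) * (math.asin(math.sqrt(a_S))
                                          - math.sqrt(a_S * (1 - a_S)))
mu_min = 2 * hbar * chi / (3 * c * V_S * math.sqrt(K))   # g_mu = 4
print(f"reality bound: mu >> {mu_min:.1e}")              # ~1e-84

# Sketch, part 2: integrate eq. (57) in units K c^2 = 1 and compare the
# late-time radius with the critical point (58).
g_mu = 10.0
a_inf = 0.5 + 0.5 * math.sqrt(1 - 4 / g_mu)
a, dt = 1.0 - 1e-9, 1e-4
for _ in range(400_000):                                 # evolve to t = 40
    a -= math.sqrt((1 - a) / a) * (1 - g_mu * (1 - a) * a) * dt
print(f"a(t=40) = {a:.6f}, predicted a_inf = {a_inf:.6f}")
```

Part 1 reproduces the order of magnitude of (60a), while part 2 shows the collapse freezing at the super-Schwarzschild radius \(a_{\infty}>1/2\).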
We can safely assume that the asymptotic radius of the cloud is always greater than its Schwarzschild radius as long as \(\mu\neq 0\). Now, the non-singular solution can again be expressed only in implicit form: \[\frac{\sqrt{8}\ \text{atan}\Big{(}\sqrt{\frac{2}{d_{-}}\frac{1-a}{a}}\Big{)}}{ \sqrt{g_{\mu}(g_{\mu}-4)d_{-}}}-\frac{\sqrt{8}\ \text{atan}\Big{(}\sqrt{\frac{2}{d_{+}}\frac{1-a}{a}} \Big{)}}{\sqrt{g_{\mu}(g_{\mu}-4)d_{+}}}=\sqrt{K}\ c\,t, \tag{61}\] where we defined \(d_{\pm}=2-g_{\mu}\pm\sqrt{g_{\mu}(g_{\mu}-4)}\) to shorten the notation. The solution is presented in Figure 4, compared with the unmodified relativistic evolution (53).
Figure 4: Comparison between the classical relativistic collapse (dashed black line) and the modified non-singular evolution (red continuous line) for generic values of the parameters; the collapse time \(t_{0}\) and the minimum value \(a_{\infty}\) are highlighted by faded grey lines; it is evident how \(a_{\infty}>1/2\).
## VI Relativistic Perturbations We will now study the behaviour of density perturbations in the relativistic setting. We will mainly follow [41], meaning that we will study linear perturbations of the Einstein equations. First of all it is convenient to rewrite the FLRW metric in an easier form, introducing conformal time \(\eta\) and the new variable \(X\) defined as \[d\eta=\frac{dt}{a}, \tag{62}\] \[dX^{2}=\frac{dr^{2}}{1-Kr^{2}},\quad X=\frac{\text{asin}\Big{(}\sqrt{K}\ r\Big{)}}{ \sqrt{K}}, \tag{63}\] where the expression of \(X\) as a function of \(r\) is valid for \(K>0\); this way, the FLRW metric inside the cloud can be rewritten as \[ds^{2}=a^{2}(\eta)\Big{(}c^{2}d\eta^{2}-dX^{2}-F(X)^{2}\,d\theta^{2}-F(X)^{2}\, \sin^{2}(\theta)\,d\varphi^{2}\Big{)}, \tag{64}\] \[F(X)=r=\frac{\text{sin}\Big{(}\sqrt{K}\ X\Big{)}}{\sqrt{K}}. \tag{65}\] Small perturbations are described by changes in the metric tensor, in the four-velocity and in the scalar density, parametrized as \(g_{jk}=\overline{g}_{jk}+\delta g_{jk}\), \(u^{j}=\overline{u}^{j}+\delta u^{j}\) and \(\rho=\overline{\rho}+\delta\rho\). Without loss of generality we can impose the synchronous gauge, thus setting \(\delta g_{00}=\delta g_{0\alpha}=0\) (Latin indices go from \(0\) to \(3\), while Greek indices refer to the spatial part and therefore go from \(1\) to \(3\)). If the unperturbed system is comoving, we can set \(u^{\alpha}=0\) and \(u^{0}=1/a\), and then from the normalization of the four-velocity we obtain \(\delta u^{0}=0\). 
Perturbations of the metric tensor imply perturbations of the Ricci tensor \(R_{j}^{\ k}\) and of the Ricci scalar \(R\) of the form \[\delta R_{\alpha}^{\ \beta}=\frac{1}{2a^{2}}\left(\delta g_{ \alpha}^{\ \gamma;\beta}{}_{;\gamma}+\delta g_{\gamma;\alpha}^{\ \beta}{}^{;\gamma}-\delta g_{\alpha}^{\ \beta;\gamma}{}_{;\gamma}-\delta g_{;\alpha}{}^{;\beta}\right)+ \tag{66a}\] \[+\frac{1}{c^{2}a^{2}}\left(\frac{1}{2}\delta g_{\alpha}^{\ \beta\,\prime\prime}+\frac{a^{\prime}}{a}\delta g_{ \alpha}^{\ \beta\,\prime}+\frac{a^{\prime}}{2a}\delta g^{\prime}\,\delta_{\alpha}^{\ \beta}-2Kc^{2}\delta g_{\alpha}^{\ \beta}\right), \tag{66b}\] \[\delta R_{0}^{0}=\frac{1}{2a^{2}}\left(\delta g_{\alpha}^{\ \prime}{}_{;\alpha}-\delta g_{\alpha}^{\ \beta}{}_{;\beta}\right), \tag{66c}\] \[\delta R=\frac{1}{a^{2}}\left(\delta g_{\alpha}^{\ \gamma;\alpha}{}_{;\gamma}-\delta g_{;\alpha}{}^{;\alpha}\right)+\frac{1}{c^{2}a^{2}}\left(\delta g^{\prime\prime}+3\frac{a^{\prime}}{a}\delta g^{\prime}-\frac{2Kc^{2} }{a}\,\delta g\right), \tag{66d}\] where \(\delta g\) is the trace of the metric perturbations, \(\delta_{\alpha}^{\ \beta}\) is Kronecker's delta, a semicolon indicates a covariant derivative and a prime a derivative with respect to \(\eta\). On the other hand, we can write the perturbed components of the Energy-Momentum tensor \(T_{j}^{\ k}\) as \[\delta T_{j}^{\ k}=(P+c^{2}\rho)(u_{j}\delta u^{k}+u^{k}\delta u_{j})+(\delta P +c^{2}\delta\rho)u_{j}u^{k}+\delta_{j}^{\ k}\delta P, \tag{67a}\] \[\delta T_{\alpha}^{\ \beta}=-\delta_{\alpha}^{\ \beta}\,\frac{\text{d}P}{\text{d}\rho}\,\frac{ \delta T_{0}^{\ 0}}{c^{2}}, \tag{67b}\] \[\delta T_{0}^{\ \alpha}=-a(P+\rho c^{2})\delta u^{\alpha}, \tag{67c}\] \[\delta T_{0}^{\ 0}=-c^{2}\delta\rho, \tag{67d}\] where we have made use of the relation \(\delta P=(dP/d\rho)\delta\rho\). In the linear approximation, the perturbations satisfy the equation \[\delta R_{j}^{\ k}-\frac{1}{2}\,\delta_{j}^{\ k}\,\delta R=\frac{\chi}{c^{2}} \,\delta T_{j}^{\ k}, \tag{68}\] which yields the following equations for the perturbations of the metric: \[\begin{split}\big{(}\delta g_{\alpha}^{\ \gamma;\beta}{}_{;\gamma}+\delta g_{ \gamma;\alpha}^{\ \beta}{}^{;\gamma}-\delta g_{\alpha}^{\ \beta;\gamma}{}_{;\gamma}-\delta g_{;\alpha}{}^{;\beta} \big{)}+\\ +\frac{1}{c^{2}}\left(\delta g_{\alpha}^{\ \beta\,\prime\prime}+2\frac{a^{ \prime}}{a}\delta g_{\alpha}^{\ \beta\,\prime}-K\,c^{2}\delta g_{\alpha}^{\ \beta}\right)= 0,\end{split} \tag{69a}\] \[\begin{split}\frac{1}{2}\big{(}\delta g_{;\gamma}{}^{;\gamma}-\delta g_{ \gamma}^{\ \delta;\gamma}{}_{;\delta}\big{)}-\frac{1}{c^{2}}\left(\delta g ^{\prime\prime}+2\frac{a^{\prime}}{a}\delta g^{\prime}-K\,c^{2}\delta g\right) =\\ =\frac{3}{c^{4}}\frac{\text{d}P}{\text{d}\rho}\left(\frac{1}{2} \big{(}\delta g_{\gamma}^{\ \delta;\gamma}{}_{;\delta}-\delta g_{;\gamma}{}^{;\gamma}\big{)}+\frac{a^{\prime}}{a}\delta g^{\prime}-K\,c^{2}\delta g \right).\end{split} \tag{69b}\] Putting everything together, the final equation for the density perturbations turns out to be \[\chi\,c^{2}\delta\rho=\frac{1}{2a^{2}}\left(\delta g_{\alpha}^{\ \beta;\alpha}{}_{;\beta}-\delta g_{;\alpha}^{\ ;\alpha}+\frac{2a^{\prime}}{c^{2}a}\delta g^{\prime}-2K\,\delta g\right). \tag{70}\] For more details on the derivation of these expressions, see [41]. 
Now, any perturbation in a hyperspherical geometry such as the positively-curved FLRW model can be expanded in four-dimensional spherical harmonics (similarly to the expansion in three-dimensional spherical harmonics performed in the non relativistic case in Section IV). The scalar hyperspherical harmonics \(Q^{n}\) can be expressed as [64]; \[Q^{n}=\sum_{l=0}^{n-1}\ \sum_{m=-l}^{l}A_{lm}^{n}\,Y_{lm}(\theta,\varphi)\,\Pi_{ nl}(X), \tag{71}\] where \(A_{lm}^{n}\) are constant coefficients and \(Y_{lm}\) are the standard three-dimensional spherical harmonics. As an example, the most symmetric hyperspherical harmonics with \(l=0\) take the form \[Q^{n}=\frac{\text{sin}\Big{(}n\,\sqrt{K}\ X\Big{)}}{\text{sin}\Big{(}\sqrt{K} \ X\Big{)}}. \tag{73}\] From here on we will drop the superscript \(n\) to avoid cluttering the notation. All hyperspherical harmonics are scalar eigenfunctions of the Laplacian operator on the surface of a hypersphere with unit radius, and therefore they satisfy the relation \[Q_{;\alpha}^{\ ;\alpha}=-(n^{2}-1)Q. \tag{74}\] Here the order \(n\) of the harmonics is an integer, and will play a similar role to the wave number \(k\) of the non-relativistic perturbations; it can be roughly interpreted as "the number of wavelengths that fit inside the radius of the sphere" i.e. as the ratio of the radius of the sphere to the length scale of a given perturbation. Note that there exist also vector and tensor hyperspherical harmonics, but they are not needed to study density perturbations. Now, from the scalar harmonics \(Q\) it is possible to construct the following tensors and vectors with the following symmetries: \[Q_{\alpha}^{\ \beta}=\frac{\delta_{\alpha}^{\ \beta}}{3}\,Q,\quad Q_{\alpha}^{\ \alpha}=Q, \tag{75a}\] \[Z_{\alpha}=\frac{Q_{;\alpha}}{n^{2}-1},\quad Z_{\alpha}^{\ ;\alpha}=-Q,\] (75b) \[Z_{\alpha}^{\ \beta}=\frac{Q_{;\alpha}^{\ \beta}}{n^{2}-1}+Q_{\alpha}^{\ \beta},\quad Z_{ \alpha}^{\ \alpha}=0. \tag{75c}\] Then we can define \[\delta g_{\alpha}^{\ \beta}=\Lambda(\eta)\,Z_{\alpha}^{\ \beta}+\Omega(\eta)\,Q_{ \alpha}^{\ \beta},\quad\delta g=\Omega\,Q, \tag{76}\] so that the whole spatial evolution is contained within the two tensors \(Q_{\alpha}^{\ \beta}\) and \(Z_{\alpha}^{\ \beta}\) while the time evolution i.e. the amplitude is just given by the two functions \(\Lambda\) and \(\Omega\); now the equation for the density perturbations becomes \[\chi\,c^{2}\delta\rho=\frac{Q}{3a^{2}}\left(K\,c^{2}(n^{2}-4)(\Lambda+\Omega) +3\frac{a^{\prime}}{a}\Omega^{\prime}\right). \tag{77}\] Inserting expression (76) into equations (69), we obtain two differential equations for the two functions \(\Lambda\) and \(\Omega\): \[\Lambda^{\prime\prime}+2\frac{a^{\prime}}{a}\Lambda^{\prime}-\frac{K\,c^{2}} {3}(n^{2}-1)(\Lambda+\Omega)=0, \tag{78a}\] \[\Omega^{\prime\prime}+ \left(2+\frac{3}{c^{2}}\frac{\mathrm{d}P}{\mathrm{d}\rho}\right) \frac{a^{\prime}}{a}\Omega^{\prime}+ \frac{K\,c^{2}}{3}(n^{2}-4)(\Lambda+\Omega)\left(1+\frac{3}{c^{2}}\frac{ \mathrm{d}P}{\mathrm{d}\rho}\right)=0. \tag{78b}\] Note that in the relativistic context we cannot use the polytropic relation because it is not a solution of the relativistic continuity equation; in this case we will make an isothermal assumption and leave the speed of sound \(v_{s}^{2}=dP/d\rho\) as a free constant parameter. It is important to consider that only harmonics with \(n>2\) correspond to physical perturbations. 
For \(n=1,2\) the tensor \(Z_{\alpha}^{\ \beta}\) cannot be constructed, and therefore it is necessary to put \(\Lambda=0\); then we are left with just a second-order equation for \(\Omega\). When \(n=2\) both solutions can be ruled out by a transformation of the coordinates. When \(n=1\) only one of the two solutions can be ruled out by such a transformation; the second solution corresponds to a perturbation in the entire mass of the cloud, but space remains fully uniform and isotropic. Thus only \(n>2\) correspond to real physical perturbations of the metric. For more details, see [41]. Now we only need to insert the solutions for the scale factor \(a\) in the two cases; however we first have to express them in terms of the new time variable \(\eta\). ### Classical Oppenheimer-Snyder Perturbations In order to find the expression for \(a(\eta)\), we go back to equation (52) and substitute \(dt=a\,d\eta\), thus obtaining a differential equation in \(\eta\) that is easily solved: \[a^{\prime}=-\sqrt{K\,c^{2}\,(1-a)\,a}\,\quad a(\eta)=\cos^{2}\!\left(\frac{ \sqrt{K}\,c\,\eta}{2}\right)\!. \tag{79}\] Now, equations (78) have two particular integrals that correspond to those fictitious change in the metric that can be ruled out by a transformation of the reference system; nevertheless, they are useful to lower the order of the two equations. The particular integrals are \[\Lambda_{1}=-\Omega_{1}=\mathrm{const.}\quad, \tag{80a}\] \[\Lambda_{2}=-\sqrt{K}\,c\,(n^{2}-1)\int\frac{d\eta}{a},\] (80b) \[\Omega_{2}=\sqrt{K}\,c\,(n^{2}-1)\int\frac{d\eta}{a}-\frac{3a^{\prime}}{\sqrt {K}\,c\,a^{2}}. \tag{80c}\] At this point we can perform the following change of variables: \[\Lambda+\Omega=(\Lambda_{2}+\Omega_{2})\sqrt{K}\,c\int\xi\,d\eta=-\frac{3a^{ \prime}}{a^{2}}\,\int\xi\,d\eta, \tag{81a}\] \[\Lambda^{\prime}-\Omega^{\prime}=\sqrt{K}\,c\,(\Lambda_{2}^{ \prime}-\Omega_{2}^{\prime})\int\xi\,d\eta+\sqrt{K}\,c\,\frac{\zeta}{a}=\] \[= \left(3\!\left(\frac{a^{\prime\prime}}{a^{2}}-2\frac{a^{\prime\ 2}}{ a^{3}}\right)-\frac{2K\,c^{2}(n^{2}-1)}{a}\right)\int\xi\,d\eta+\sqrt{K}\,c \,\frac{\zeta}{a}; \tag{81b}\] this way we obtain two coupled first-order differential equations for the new unknown functions \(\xi\) and \(\zeta\): \[\xi^{\prime}+\xi\left(\frac{2a^{\prime\prime}}{a^{\prime}}+\frac{a^{\prime}}{ a}\Big{(}\frac{3}{2c^{2}}\,\frac{\mathrm{d}P}{\mathrm{d}\rho}-2\Big{)}\right)+ \frac{\sqrt{K}}{2c}\,\frac{\mathrm{d}P}{\mathrm{d}\rho}\,\zeta=0, \tag{82a}\] \[\zeta^{\prime}+\Big{(}1+\frac{3}{2c^{2}}\,\frac{\mathrm{d}P}{ \mathrm{d}\rho}\Big{)}\frac{a^{\prime}}{a}\,\zeta+\xi\!\left(-2\sqrt{K}\,c\,(n ^{2}-1)+\right.\] \[\left.\qquad+\frac{3}{\sqrt{K}\,c}\,\Big{(}\frac{a^{\prime\prime} }{a}-\frac{2a^{\prime\ 2}}{a^{2}}+\frac{3}{2c^{2}}\,\frac{a^{\prime\ 2}}{a^{2}}\,\frac{ \mathrm{d}P}{\mathrm{d}\rho}\Big{)}\right)=0. \tag{82b}\] Now we have to perform asymptotic expansions; in particular, close to the singularity, the scale factor (79) behaves as \[a^{\rm asymp}(\eta)=\left(\frac{\pi}{2}\right)^{2}\left(1-\frac{\eta}{\eta_{0}} \right)^{2},\quad\eta_{0}=\frac{\pi}{\sqrt{K}\;c}, \tag{83}\] where we have defined the time of singularity \(\eta_{0}\). 
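As a quick check of (79) and of the expansion (83), the following numerical sketch (ours, in units \(\sqrt{K}\,c=1\), so \(\eta_{0}=\pi\)) verifies the conformal-time solution and its power-law behaviour near the singularity:

```python
import math

# Sketch (units sqrt(K) c = 1): check that a(eta) = cos^2(eta/2) solves
# a' = -sqrt((1 - a) a), eq. (79), and approaches (83) near eta0 = pi.
for eta in (0.5, 1.5, 2.5):
    a = math.cos(eta / 2)**2
    lhs = -math.sin(eta / 2) * math.cos(eta / 2)     # da/deta
    rhs = -math.sqrt((1 - a) * a)
    print(f"eta = {eta}: a' = {lhs:.6f}, -sqrt((1-a)a) = {rhs:.6f}")

x = 1e-3                                             # x = 1 - eta/eta0
print(math.cos(math.pi * (1 - x) / 2)**2 / ((math.pi / 2)**2 * x**2))  # -> 1
```

The two columns coincide on \(0<\eta<\pi\), and the final ratio tends to 1, confirming \(a^{\rm asymp}\propto x^{2}\).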
Then, inserting everything in equations (82) and introducing the velocity parameter \(\beta=v_{s}/c<1\), we obtain the following differential equations for \(\xi\) and \(\zeta\): \[\frac{\mathrm{d}\xi}{\mathrm{d}x}+\frac{2+3\beta^{2}}{x}\,\xi-\sqrt{K}\;c\, \beta^{2}\eta_{0}\,\zeta=0, \tag{84a}\] \[\frac{\mathrm{d}\zeta}{\mathrm{d}x}+\frac{2+3\beta^{2}}{x}\,\zeta+\frac{18(1- \beta^{2})}{\sqrt{K}\;c\,\eta_{0}^{2}x^{2}}\,\xi=0, \tag{84b}\] where we defined \(x=1-\eta/\eta_{0}\); the solutions are \[\xi(x)=\frac{D_{-}\,x^{-\frac{\sigma}{2}}+D_{+}\,x^{+\frac{\sigma}{2}}}{x^{\frac{3}{2}+3\beta^{2}}}, \tag{85a}\] \[\zeta(x)=\frac{\sqrt{K}\;c\,\eta_{0}\Big{(}D_{-}(\sigma+1)x^{-\frac{\sigma}{2}}-D _{+}(\sigma-1)x^{+\frac{\sigma}{2}}\Big{)}}{36(1-\beta^{2})x^{\frac{3}{2}+3\beta^ {2}}}, \tag{85b}\] \[\sigma=\sqrt{72\beta^{4}-72\beta^{2}+1}\,, \tag{85c}\] where \(D_{\pm}\) are integration constants. From these, we can obtain the expressions for \(\Lambda\) and \(\Omega\) and therefore for \(\delta\rho\); remembering that \(\overline{\rho}\propto a^{-3}\propto x^{-6}\), in the asymptotic limit \(\eta\to\eta_{0}\) corresponding to \(x\to 0\) we find the leading-term behaviour of the perturbations as \[\frac{\delta\rho}{\overline{\rho}}\propto x^{-(\frac{6}{2}+3\beta^{2}+\frac{ \sigma}{2})}. \tag{86}\] Now, when \(\sigma^{2}>0\) this quantity diverges; on the other hand, when \(\sigma^{2}<0\) the exponent is complex, but it still has a (negative) real part so that, even if there are some oscillations, the amplitude is still divergent close to the singularity. For completeness, note that \(\sigma^{2}>0\) for \(\beta<\sqrt{(6-\sqrt{34})/3}\;/2\approx 0.119\) and \(\beta>\sqrt{(6+\sqrt{34})/3}\;/2\approx 0.992\) (remember that by definition \(0<\beta<1\)). To conclude, we have shown that in the relativistic case all perturbations can diverge and initiate the fragmentation process, differently from the Newtonian case where for \(\gamma=5/3\) the amplitude remained constant. ### Non-Singular Relativistic Perturbations In the modified non-singular case, finding the expression for \(a(\eta)\) is not necessary (although possible) because asymptotically the leading term is simply \(a=a_{\infty}\) as in the Newtonian case. Therefore the density perturbations (77) will depend only on the sum \(\Lambda+\Omega\), which can be found by summing equations (78) and solving them: \[(\Lambda+\Omega)^{\prime\prime}-K\,c^{2}\Big{(}1-(n^{2}-4)\beta^{2}\Big{)}( \Lambda+\Omega)=0, \tag{87}\] \[\Lambda(\eta)+\Omega(\eta)=E_{+}e^{+\nu\sqrt{K}\;c\,\eta}+E_{-}e^{-\nu\sqrt{K }\;c\,\eta}, \tag{88}\] \[\nu=\sqrt{1-(n^{2}-4)\beta^{2}}, \tag{89}\] where \(E_{\pm}\) are constants of integration. Similarly to the Newtonian case, given that when \(a^{\prime}=0\) the perturbations depend directly on the sum \(\Lambda+\Omega\), the fate of the perturbations depends entirely on the nature of this parameter \(\nu\), i.e. on the sign of the term inside the square root. Therefore, since only \(n>2\) are relevant for physical perturbations, for each value of \(n\) there exists a critical value of \(\beta\) such that above it all perturbations oscillate and are ultimately damped, while below it all perturbations diverge. Conversely, for each value of \(\beta\), there exists a value of \(n\) large enough such that above it all perturbations oscillate and are damped, while below it they all diverge. Given that a higher value of \(n\) corresponds to a perturbation with a shorter scale, we have again found a Jeans-like length. 
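Both thresholds are elementary to evaluate; a short sketch (ours) computes the roots of \(\sigma^{2}=72\beta^{4}-72\beta^{2}+1\) for the classical case, and the critical \(\beta=1/\sqrt{n^{2}-4}\) at which \(\nu\) in (89) vanishes for the non-singular case:

```python
import math

# Sketch, classical case: roots of 72 x^2 - 72 x + 1 = 0 with x = beta^2;
# sigma^2 > 0 outside these values of beta.
x_minus, x_plus = (6 - math.sqrt(34)) / 12, (6 + math.sqrt(34)) / 12
print(f"sigma^2 > 0 for beta < {math.sqrt(x_minus):.3f}"
      f" or beta > {math.sqrt(x_plus):.3f}")

# Sketch, modified case: nu in (89) vanishes at beta = 1/sqrt(n^2 - 4);
# above this value, perturbations of order n oscillate and are damped.
for n in (3, 4, 5):
    print(f"n = {n}: critical beta = {1 / math.sqrt(n**2 - 4):.3f}")
```

The \(n=3\) value, \(1/\sqrt{5}\approx 0.447\), is the binding one, leading directly to the stability bound quoted next.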
If we want the collapse to be stable to all kinds of perturbations, we require \(n\) to be the smallest possible, i.e. \(n=3\), thus finding a lower limit on \(\beta\): \[\beta>\beta_{0}=\frac{1}{\sqrt{5}}\approx 0.45. \tag{90}\] ## VII Concluding Remarks We analyzed the gravitational collapse of a spherical dust configuration, both in the Newtonian limit and in the fully general relativistic case, by including cut-off physics effects in the dynamics. The particular modification we introduced consists of a generalized Heisenberg algebra inspired by Polymer Quantum Mechanics. We studied the collapse dynamics by replacing the standard Poisson brackets with those coming from the considered generalized approach, borrowed, to some extent, from Loop Quantum Gravity [65; 66; 67]. The most remarkable result we have obtained, in both of the considered regimes, is the existence of a stable and asymptotically static configuration of the collapse, established at a radius greater than the Schwarzschild one. Furthermore, this feature persists even for a very small value of the parameter accounting for the new cut-off physics. In other words, it is always possible to accommodate the stabilization of the gravitational collapse at super-Schwarzschild scales even when the deformation parameter is defined as a Planckian quantity, i.e. as regularizing physics only at very high energy scales. We also obtained some specific constraints on the free equation-of-state parameters by requiring that the asymptotic configuration be stable; in particular, for the relativistic isothermal case we arrived at the requirement that the sound velocity \(\beta\) be greater than \(1/\sqrt{5}\). This suggests that, even if the repulsive character of the modified gravitational dynamics creates a static macroscopic configuration also when matter pressure is negligible, the requirement that this configuration also be stable under small perturbations still demands that the elementary constituents of the collapsing gas have a significant free-streaming effect. The present analysis must be regarded as the starting point for subsequent investigations in which the gravitational collapse is modelled in a realistic astrophysical context, in order to better understand the implications that the repulsive gravitational dynamics can have on the formation of compact objects. In particular, the impact of the repulsive effects on the equilibrium of a real relativistic star [6] is of interest in order to determine possible corrections to the mass limits in the proposed scenario. ## Acknowledgements G. B. thanks the TAsP Iniziativa Specifica of INFN for their support.
2309.16647
A generalization of immanants based on partition algebra characters
We introduce a generalization of immanants of matrices, using partition algebra characters in place of symmetric group characters. We prove that our immanant-like function on square matrices, which we refer to as the recombinant, agrees with the usual definition for immanants for the special case whereby the vacillating tableaux associated with the irreducible characters correspond, according to the Bratteli diagram for partition algebra representations, to the integer partition shapes for symmetric group characters. In contrast to previously studied variants and generalizations of immanants, as in Temperley-Lieb immanants and $f$-immanants, the sum that we use to define recombinants is indexed by a full set of partition diagrams, as opposed to permutations.
John M. Campbell
2023-09-28T17:52:02Z
http://arxiv.org/abs/2309.16647v1
# A generalization of immanants based on partition algebra characters ###### Abstract We introduce a generalization of immanants of matrices, using partition algebra characters in place of symmetric group characters. We prove that our immanant-like function on square matrices, which we refer to as the _recombinant_, agrees with the usual definition for immanants for the special case whereby the vacillating tableaux associated with the irreducible characters correspond, according to the Bratteli diagram for partition algebra representations, to the integer partition shapes for symmetric group characters. In contrast to previously studied variants and generalizations of immanants, as in Temperley-Lieb immanants and \(f\)-immanants, the sum that we use to define recombinants is indexed by a full set of partition diagrams, as opposed to permutations. 2020 Mathematics Subject Classification: 05E10, 15A15 immanant, partition algebra, character, irreducible representation ## 1 Introduction The concept of the _immanant_ of a matrix was introduced in a seminal 1934 article by Littlewood and Richardson [19]. As suggested by Littlewood and Richardson [19], generalizing determinants and permanents of matrices using symmetric group characters provides a way of unifying disparate areas of combinatorial analysis, linear algebra, and representation theory. Since partition algebras are such natural extensions of symmetric group algebras [11], this leads us to consider how immanants of matrices may be generalized using partition algebra characters. This forms the main purpose of our article, in which we introduce the concept of the _recombinant_ of a matrix. This gives us a generalization of immanants that is separate from the concept of an \(f\)-immanant. Given an \(n\times n\) matrix \[A=\left(a_{i,j}\right)_{n\times n}=\begin{pmatrix}a_{1,1}&a_{1,2}&\cdots&a_{1,n}\\ a_{2,1}&a_{2,2}&\cdots&a_{2,n}\\ \vdots&\vdots&\ddots&\vdots\\ a_{n,1}&a_{n,2}&\cdots&a_{n,n}\end{pmatrix}, \tag{1}\] the Leibniz identity for determinants is as below: \[\det(A)=\sum_{\sigma\in S_{n}}\left(\operatorname{sgn}(\sigma)\prod_{i=1}^{n}a_{i,\sigma_{i}}\right), \tag{2}\] letting \(S_{n}\) denote the group of all permutations of \(\{1,2,\ldots,n\}\). The _permanent_ of (1) is defined by replacing the sign function in (2) as below: \[\operatorname{perm}(A)=\sum_{\sigma\in S_{n}}\prod_{i=1}^{n}a_{i,\sigma_{i}}. \tag{3}\] The matrix functions in (2) and (3) are special cases of the immanant function defined in [19] and as below. An integer partition is a finite tuple \(\lambda\) of non-increasing natural numbers. If the sum of all of the entries of \(\lambda\) is a natural number \(n\), then \(\lambda\) is said to be a partition of \(n\), and this is denoted as \(\lambda\vdash n\). For \(\lambda\vdash n\), we may let \(\chi^{\lambda}_{S_{n}}\) denote the irreducible character of the symmetric group \(S_{n}\) corresponding to \(\lambda\). The _immanant_ \(\operatorname{Imm}^{\lambda}\) of (1) may be defined so that: \[\operatorname{Imm}^{\lambda}(A)=\sum_{\sigma\in S_{n}}\chi^{\lambda}_{S_{n}}( \sigma)\prod_{i=1}^{n}a_{i,\sigma_{i}}. \tag{4}\] We find that the \(\lambda=(1^{n})\) case of (4) agrees with (2) and the \(\lambda=(n)\) case of (4) agrees with (3). The purpose of this article is to generalize (2), (3), and (4) using partition algebra characters, as opposed to symmetric group characters. 
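As an illustration of (2)–(4), the following sketch (ours, with the standard character table of \(S_{3}\) hard-coded by cycle type) shows how the immanant interpolates between the determinant and the permanent:

```python
from itertools import permutations
from math import prod

# Sketch of eq. (4) for n = 3. The irreducible characters of S_3 are
# hard-coded, indexed by cycle type: (1,1,1) identity, (2,1) transposition,
# (3,) three-cycle.
CHI = {
    (3,):      {(1, 1, 1): 1, (2, 1): 1,  (3,): 1},    # trivial -> permanent
    (2, 1):    {(1, 1, 1): 2, (2, 1): 0,  (3,): -1},
    (1, 1, 1): {(1, 1, 1): 1, (2, 1): -1, (3,): 1},    # sign -> determinant
}

def cycle_type(sigma):
    seen, lengths = set(), []
    for i in range(len(sigma)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = sigma[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def immanant(A, lam):
    n = len(A)
    return sum(CHI[lam][cycle_type(sigma)]
               * prod(A[i][sigma[i]] for i in range(n))
               for sigma in permutations(range(n)))

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
print(immanant(A, (1, 1, 1)))   # determinant: -3
print(immanant(A, (3,)))        # permanent: 463
print(immanant(A, (2, 1)))      # the remaining S_3 immanant
```

The recombinant introduced below replaces the index set \(S_{n}\) of this sum by the full diagram basis of a partition algebra.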
Immanants are of interest within many different areas of advanced linear algebra; see [2, 4, 5, 8, 12, 13, 15, 18, 26, 32], for example, and many related references. The definition of immanants in terms of the irreducible characters of the symmetric group naturally lends itself to applications related to many different areas of algebraic combinatorics; for example, see [1, 6, 7, 9, 17, 31] and many similar references. The foregoing considerations reflect the interdisciplinary nature of immanants and motivate our generalization of immanants. Let \(V\) denote an \(r\)-dimensional vector space. Let the general linear group \(\operatorname{GL}_{r}(\mathbb{C})\) act on the tensor space \(V^{\otimes n}\) diagonally. By taking \(S_{r}\) as a subgroup of \(\operatorname{GL}_{r}(\mathbb{C})\) and restricting the action of \(\operatorname{GL}_{r}(\mathbb{C})\) to permutation matrices, partition algebras may be defined via the centralizer algebra \[P_{n}(r)\cong\operatorname{End}_{S_{r}}\left(V^{\otimes n}\right), \tag{5}\] and the study of partition algebras arose within the field of statistical mechanics via the centralizer algebra in (5), with reference to the work of Jones [16] and Martin [21, 22, 23, 24]. This again speaks to the interdisciplinary interest surrounding our generalization of immanants via partition algebra characters. ### Preliminaries Our notation concerning partition algebras is mainly borrowed from Halverson's article on the character theory for partition algebras [10]. For the sake of brevity, we assume familiarity with partition diagrams and the multiplication of partition diagrams, referring to [10] for details. We let \(P_{n}(r)\) denote the \(\mathbb{C}\)-span of all order-\(n\) partition diagrams, and we endow this space with the multiplicative operation specified in [10]. Structures of this form are referred to as _partition algebras_. We find that the symmetric group algebra of order \(n\) over \(\mathbb{C}\) is naturally a subalgebra, obtained by taking the span of partition diagrams of order \(n\) with \(n\) components, each with exactly one vertex in the upper row and exactly one vertex in the lower row. For integer partitions \(\lambda\) and \(\mu\), if \(\mu_{i}\leq\lambda_{i}\) for all \(i\), then \(\lambda/\mu\) denotes the skew shape obtained by removing \(\mu\) from \(\lambda\). We adopt the convention whereby the upper nodes of a partition diagram of order \(n\) are labeled with 1, 2, \(\ldots\), \(n\) and whereby the lower nodes of this diagram are labeled with \(1^{\prime}\), \(2^{\prime}\), \(\ldots\), \(n^{\prime}\). We then let \(P_{n-1}(x)\) be embedded in \(P_{n}(x)\) by adding vertices labeled with \(n\) and \(n^{\prime}\) and by letting these vertices be adjacent. From the branching rules subject to the restriction from \(P_{n}(r)\) to \(P_{n-1}(r)\), and with the use of double centralizer theory via (5), it can be shown that the irreducible representations of \(P_{n}(r)\) are in bijection with \[\widehat{P_{n}(r)}=\{\lambda\vdash r\ :\ |\lambda^{*}|\leq n\}, \tag{6}\] where \(\lambda^{*}=\lambda/(\lambda_{1})\). We let \(M^{\lambda}\) denote the irreducible representation of \(P_{n}(r)\) indexed by \(\lambda\in\widehat{P_{n}(r)}\). Following [10], we establish a bijection between (6) and the set \(\widehat{P_{n}}\) consisting of all expressions of the form \(\lambda^{*}\) in (6), i.e., by mapping \(\lambda\) to \(\lambda^{*}\) and, conversely, by adding a row to \(\lambda^{*}\) appropriately. For \(\lambda\in\widehat{P_{n}(r)}\), we may let \(\chi^{\lambda}_{P_{n}(x)}\) denote the irreducible character of \(P_{n}(x)\) corresponding to \(M^{\lambda}\). A basic result in the representation theory of groups is that characters are constant on conjugacy classes. Halverson [10] introduced a procedure for collecting partition diagrams so as to form analogues of conjugacy classes; we refer to [10] for details. For a diagram \(d\), we let \(d_{\mu}\) denote the conjugacy class representative such that \(\chi(d)=\chi(d_{\mu})\) for a given partition algebra character \(\chi\). ## 2 A generalization of immanants For a permutation \(p\) of order \(n\) that we denote as a function \[p\colon\{1,2,\ldots,n\}\to\{1,2,\ldots,n\}, \tag{7}\] we identify this permutation with the partition diagram corresponding to \(\{\{1,(p(1))^{\prime}\}\), \(\{2,(p(2))^{\prime}\}\), \(\ldots\), \(\{n,(p(n))^{\prime}\}\}\). We then consider this partition diagram as being associated with the product \[\prod_{i=1}^{n}a_{i,p(i)}, \tag{8}\] for the matrix \(A\) in (1), and with regard to the summand in (4). So, this raises the question as to what would be appropriate as an analogue of the product in (8), for an _arbitrary_ partition diagram. This leads us toward the following. 
For \(\mu\in\widehat{P_{n}}\), we may let \(\chi^{\lambda}_{P_{n}(x)}\) denote the irreducible character of \(P_{n}(x)\) corresponding to \(M^{\lambda}\). A basic result in the representation theory of groups is given by how characters are constant on conjugacy classes. Halverson [10] introduced a procedure for collecting partition diagrams so as to form analogues of conjugacy classes, referring to [10] for details. For a diagram \(d\), we let \(d_{\mu}\) denote the conjugacy class representative such that \(\chi(d)=\chi(d_{\mu})\) for a given partition algebra character \(\chi\). ## 2 A generalization of immanants For a permutation \(p\) of order \(n\) that we denote as a function \[p\colon\{1,2,\ldots,n\}\to\{1,2,\ldots,n\}, \tag{7}\] we identify this permutation with the partition diagram corresponding to \(\{\{1,(p(1))^{\prime}\}\), \(\{2,(p(2))^{\prime}\}\), \(\ldots\), \(\{n,(p(n))^{\prime}\}\}\). We then consider this partition diagram as being associated with the product \[\prod_{i=1}^{n}a_{i,p(i)}, \tag{8}\] for the matrix \(A\) in (1), and with regard to the summand in (4). So, this raises the question as to what would be appropriate as an analogue of the product in (8), for an _arbitrary_ partition diagram. This leads us toward the following. **Definition 1**.: For the \(n\times n\) matrix in (1), we let the product \(\prod_{d}a_{i,j}\) or \(\prod_{d}A\) be defined in the following manner. If \(d\) is of propagation number \(0\), then we let the expression \(\prod_{d}a_{i,j}\) vanish. If \(d\) is of a positive propagation number, let \(B\) be a component of \(d\) that is propagating. We then form the product of all expressions of the form \(a_{i,j}\) such that \(i\) is in \(B\) and \(j^{\prime}\) is in \(B\). Let \(\Pi_{B}\) denote this product we have defined using the component \(B\). We then define \(\prod_{d}a_{i,j}\) as the product of all expressions of the form \(\Pi_{B}\) for all propagating components of \(d\). **Example 1**.: For the partition diagram \[d=\raisebox{-14.226378pt}{\includegraphics[width=14.226378pt]{Fig1.eps}}\] and for the \(5\times 5\) case of (1), we find that \[\prod_{d}a_{i,j}=\prod_{d}A=(a_{2,1}a_{2,2}a_{2,3})\,(a_{3,4}a_{3,5}a_{5,4}a_ {5,5})\,.\] Definition 1 puts us in a position to offer a full definition for the concept of the recombinant of a matrix, as below. **Definition 2**.: We define the _recombinant_ of the square matrix in (1) so that \[\mathrm{Rec}^{\lambda}(A)=\sum_{d\in P_{n}(r)}\chi_{P_{n}(r)}^{\lambda}(d)\prod_ {d}a_{i,j}. \tag{9}\] For example, an explicit evaluation for the recombinant, for non-propagating submodules of partition algebras, of any \(2\times 2\) matrix is given Section 2.1 Since our article is based on generalizing immanants using partition algebra characters, it would be appropriate to prove, as below, that Definition 2 does indeed generalize (4). In our below proof, we are to make use of the property described by Halverson [10] whereby character tables for partition algebras satisfy a recursion of the form \[\Xi_{P_{n}(x)}=\begin{bmatrix}x\Xi_{P_{n-1}(x)}&\vdots&*\\ \cdots&&\cdots\\ 0&\vdots&\Xi_{S_{n}}\end{bmatrix}, \tag{10}\] where \(\Xi_{S_{n}}\) denotes the character table of \(S_{n}\). By direct analogy with how Young tableaux are formed from paths in Young's lattice, _vacillating tableaux_ are formed from paths in the Bratteli diagram \(\hat{A}\) described in [11]. 
By direct analogy with how Young tableaux are formed from paths in Young's lattice, _vacillating tableaux_ are formed from paths in the Bratteli diagram \(\hat{A}\) described in [11]. For the case whereby such a path ends on an integer partition of order \(n\) at level \(n\) in \(\hat{A}\), this corresponds to an embedding of an irreducible representation of \(\mathbb{C}S_{n}\) [11]. For a vacillating tableau \(T\) of this form, Theorem 1 below gives us that the recombinant associated with the partition algebra representation \(\rho\) corresponding to \(T\) is the same as the immanant associated with the symmetric group algebra representation corresponding to \(\rho\).

**Theorem 1**.: _For an \(n\times n\) matrix \(A\), if \(|\lambda^{*}|=n\), then \(\mathrm{Rec}^{\lambda}(A)=\mathrm{Imm}^{\lambda^{*}}(A)\)._

Proof.: First, let us write \(\lambda\vdash r\) with \(|\lambda^{*}|\leq n\), and let us suppose that \(\mu\) is a weak composition such that \(0\leq|\mu|\leq n\). By Corollary 4.2.3 from [10], we have that

\[\chi_{P_{n}(r)}^{\lambda}\left(d_{\mu}\right)=0\quad\text{if }|\mu|<|\lambda^{*}|, \tag{11}\]

and that the equality \(|\mu|=|\lambda^{*}|=n\) implies that

\[\chi_{P_{n}(r)}^{\lambda}\left(d_{\mu}\right)=\chi_{S_{n}}^{\lambda^{*}}\left(\gamma_{\mu}\right). \tag{12}\]

For a permuting diagram \(d\), Halverson's procedure for conjugacy class analogues [10] gives us that \(\gamma_{\mu}\) is the cycle type of the permutation corresponding to \(d\), with \(d=d_{\mu}\) written as a product of disjoint, cyclic permutation diagrams. So, for an \(n\times n\) matrix \(A\) and for \(|\lambda^{*}|=n\), we find, from (11), that \(\chi_{P_{n}(r)}^{\lambda}(d)\) vanishes for every partition diagram \(d\) of propagation number strictly less than \(n\), as in the lower left block of the character table in (10), so that we may rewrite (9) so that

\[\operatorname{Rec}^{\lambda}(A)=\sum_{\operatorname{prop}(d)=n}\chi_{P_{n}(r)}^{\lambda}(d)\prod_{d}a_{i,j}, \tag{13}\]

where the character \(\chi_{P_{n}(r)}^{\lambda}(d)\) reduces, in the manner specified in (12), to the corresponding character of \(S_{n}\) evaluated at the permutation corresponding to the permuting diagram \(d\). By Definition 1, the product \(\prod_{d}a_{i,j}\) in (13) is equal to \(a_{1,d(1)}a_{2,d(2)}\cdots a_{n,d(n)}\), writing the permuting diagram \(d\) as a permutation as in (7). Thus (13) coincides with the defining sum (4) for \(\mathrm{Imm}^{\lambda^{*}}(A)\), as desired.

_Remark 1_.: Let us write \(E_{\ell}\) to denote \(\frac{1}{r}\) times the partition diagram corresponding to

\[\{\{1,1^{\prime}\},\{2,2^{\prime}\},\ldots,\{\ell-1,(\ell-1)^{\prime}\},\{\ell,\ell+1,\ldots,n\},\{\ell^{\prime},(\ell+1)^{\prime},\ldots,n^{\prime}\}\}.\]

We find that \(P_{n}(r)E_{\ell}P_{n}(r)\) is a two-sided ideal and consists of all linear combinations of partition diagrams with propagation number strictly less than \(\ell\). Fundamental results in the representation theory of partition algebras are such that

\[\mathbb{C}S_{n}\cong P_{n}(r)/\left(P_{n}(r)E_{n}P_{n}(r)\right) \tag{14}\]

and such that any irreducible representation of \(P_{n}(r)\) is either an irreducible representation of \(E_{n}P_{n}(r)E_{n}\) or an irreducible representation of the right-hand side of (14); see [20, §4], for example, and references therein. These properties can be used to formulate an alternative proof of Theorem 1.
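As an immediate illustration of Theorem 1 in the smallest case: for a \(2\times 2\) matrix \(A\) and \(\lambda\vdash r\) with \(|\lambda^{*}|=2\), only the two permuting diagrams contribute to (9), and Theorem 1 recovers the classical evaluations

\[\operatorname{Rec}^{\lambda}(A)=\operatorname{Imm}^{\lambda^{*}}(A)=\begin{cases}a_{1,1}a_{2,2}+a_{1,2}a_{2,1}=\operatorname{per}(A)&\text{if }\lambda^{*}=(2),\\ a_{1,1}a_{2,2}-a_{1,2}a_{2,1}=\det(A)&\text{if }\lambda^{*}=(1,1).\end{cases}\]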
Our generalization of immanants, as above, is fundamentally different from previously considered generalizations or variants of the immanant function. Notably, Definition 2 is distinct from the notion of an _\(f\)-immanant_. Following [28], an \(f\)-immanant, by analogy with (4), is of the form

\[\operatorname{Imm}^{f}(A)=\sum_{\sigma\in S_{n}}f(\sigma)\prod_{i=1}^{n}a_{i,\sigma_{i}} \tag{15}\]

for an arbitrary function \(f\colon S_{n}\to\mathbb{C}\). A notable instance of an \(f\)-immanant that is not of the form indicated in (4) is the Kazhdan-Lusztig immanant, where the \(f\)-function in (15) is given by Kazhdan-Lusztig polynomials associated to certain permutations. In contrast to generalizations of immanants of the form shown in (15), our lifting of the definition in (4) is based on a sum indexed by the diagram basis of \(P_{n}(r)\), rather than by the index set \(S_{n}\) for the sum in (15). Whereas immanants of \(n\times n\) matrices are in correspondence with integer partitions of \(n\), and \(f\)-immanants of \(n\times n\) matrices are in correspondence with class functions on \(S_{n}\), recombinants of \(n\times n\) matrices are in correspondence with the family of integer partitions in (6).

### An explicit evaluation

We find it convenient to denote partition algebra characters by writing \(\chi^{\lambda^{*}}(d)\) in place of \(\chi^{\lambda}_{P_{n}(r)}(d)\). Correspondingly, we may denote the recombinant associated with the character \(\chi^{\lambda^{*}}\) as \(\operatorname{Rec}^{\lambda^{*}}\). As below, we let diagram basis elements be ordered according to the SageMath convention for ordering such basis elements. According to this convention, the diagram basis of the order-2 partition algebra is ordered in the manner indicated in Table 1, with partition diagrams denoted by set partitions.

\begin{table}
\begin{tabular}{|c|c|c|}
\hline \(i\) & \(d_{i}\) & \(\chi^{\varnothing}(d_{i})\) \\
\hline 1 & \(\{\{2^{\prime},\,1^{\prime},\,1,\,2\}\}\) & 1 \\
\hline 2 & \(\{\{2^{\prime},\,1,\,2\},\,\{1^{\prime}\}\}\) & 1 \\
\hline 3 & \(\{\{2^{\prime}\},\,\{1^{\prime},\,1,\,2\}\}\) & 1 \\
\hline 4 & \(\{\{2^{\prime},\,1^{\prime}\},\,\{1,\,2\}\}\) & \(r\) \\
\hline 5 & \(\{\{2^{\prime}\},\,\{1^{\prime}\},\,\{1,\,2\}\}\) & \(r\) \\
\hline 6 & \(\{\{2^{\prime},\,1^{\prime},\,1\},\,\{2\}\}\) & 1 \\
\hline 7 & \(\{\{2^{\prime},\,1\},\,\{1^{\prime},\,2\}\}\) & 2 \\
\hline 8 & \(\{\{2^{\prime},\,1\},\,\{1^{\prime}\},\,\{2\}\}\) & \(r\) \\
\hline 9 & \(\{\{2^{\prime},\,2\},\,\{1^{\prime},\,1\}\}\) & 2 \\
\hline 10 & \(\{\{2^{\prime},\,1^{\prime},\,2\},\,\{1\}\}\) & 1 \\
\hline 11 & \(\{\{2^{\prime},\,2\},\,\{1^{\prime}\},\,\{1\}\}\) & \(r\) \\
\hline 12 & \(\{\{2^{\prime}\},\,\{1^{\prime},\,1\},\,\{2\}\}\) & \(r\) \\
\hline 13 & \(\{\{2^{\prime}\},\,\{1^{\prime},\,2\},\,\{1\}\}\) & \(r\) \\
\hline 14 & \(\{\{2^{\prime},\,1^{\prime}\},\,\{1\},\,\{2\}\}\) & \(r\) \\
\hline 15 & \(\{\{2^{\prime}\},\,\{1^{\prime}\},\,\{1\},\,\{2\}\}\) & \(r^{2}\) \\
\hline
\end{tabular}
\end{table} Table 1: The SageMath ordering for partition diagrams of order 2, along with the values of the irreducible character \(\chi^{\varnothing}\) corresponding to the non-propagating representation.

**Example 2**.: According to Definition 2, by writing

\[\operatorname{Rec}^{\varnothing}\begin{pmatrix}a_{1,1}&a_{1,2}\\ a_{2,1}&a_{2,2}\end{pmatrix}=\sum_{d\in P_{2}(r)}\chi^{\varnothing}(d)\prod_{d}a_{i,j}=\chi^{\varnothing}(d_{1})\prod_{d_{1}}a_{i,j}+\chi^{\varnothing}(d_{2})\prod_{d_{2}}a_{i,j}+\cdots+\chi^{\varnothing}(d_{15})\prod_{d_{15}}a_{i,j},\]

we may evaluate the recombinant \(\operatorname{Rec}^{\varnothing}\) according to the character values shown in Table 1, so as to obtain that

\[\operatorname{Rec}^{\varnothing}\begin{pmatrix}a_{1,1}&a_{1,2}\\ a_{2,1}&a_{2,2}\end{pmatrix}=a_{1,1}a_{1,2}a_{2,1}a_{2,2}+a_{1,1}a_{2,1}+a_{1,1}a_{1,2}+a_{1,2}a_{2,2}+a_{2,1}a_{2,2}+2\left(a_{1,1}a_{2,2}+a_{1,2}a_{2,1}\right)+r\left(a_{1,1}+a_{1,2}+a_{2,1}+a_{2,2}\right).\]

For example, we may verify the above evaluation by computing the traces associated with the linear transformations given by the action of left-multiplication by diagram basis elements on the irreducible \(P_{2}(r)\)-module \(\mathscr{L}\{d_{4},d_{14}\}\). We may obtain a similar evaluation, relative to Example 2, for the recombinant corresponding to the 3-dimensional representation of \(P_{2}(r)\).
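The evaluation in Example 2 is also easy to check mechanically. The following short Python sketch — our own illustration, with SymPy assumed for the symbolic parameter \(r\), and with each primed vertex \(j^{\prime}\) encoded as the negative integer \(-j\) — implements Definitions 1 and 2 directly from the data in Table 1:

```python
from sympy import symbols, expand

r = symbols('r')
a = {(i, j): symbols(f'a{i}{j}') for i in (1, 2) for j in (1, 2)}

# The 15 order-2 partition diagrams in the SageMath ordering of Table 1,
# paired with the character values chi^{emptyset}(d_i); vertex j' is -j.
diagrams = [
    ([{-2, -1, 1, 2}], 1),
    ([{-2, 1, 2}, {-1}], 1),
    ([{-2}, {-1, 1, 2}], 1),
    ([{-2, -1}, {1, 2}], r),
    ([{-2}, {-1}, {1, 2}], r),
    ([{-2, -1, 1}, {2}], 1),
    ([{-2, 1}, {-1, 2}], 2),
    ([{-2, 1}, {-1}, {2}], r),
    ([{-2, 2}, {-1, 1}], 2),
    ([{-2, -1, 2}, {1}], 1),
    ([{-2, 2}, {-1}, {1}], r),
    ([{-2}, {-1, 1}, {2}], r),
    ([{-2}, {-1, 2}, {1}], r),
    ([{-2, -1}, {1}, {2}], r),
    ([{-2}, {-1}, {1}, {2}], r**2),
]

def diagram_product(blocks):
    """The product prod_d a_{i,j} of Definition 1 (zero when d is non-propagating)."""
    propagating = [B for B in blocks
                   if any(v > 0 for v in B) and any(v < 0 for v in B)]
    if not propagating:
        return 0
    result = 1
    for B in propagating:
        for i in (v for v in B if v > 0):
            for j in (-v for v in B if v < 0):
                result *= a[(i, j)]
    return result

rec = expand(sum(chi * diagram_product(blocks) for blocks, chi in diagrams))
print(rec)  # reproduces the expansion of Rec^{emptyset} displayed in Example 2
```

Swapping in the character values of another irreducible character of \(P_{2}(r)\) computes the corresponding recombinant in the same way.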
## 3 Conclusion

We conclude with some areas for future research concerning the matrix function introduced in this paper.

A fundamental formula in algebraic combinatorics is Frobenius' formula for irreducible characters of the symmetric group, which, following [10], was later shown by Schur to be a consequence of what is now known as _Schur-Weyl duality_ between symmetric groups and general linear groups. The _irreducible character basis_ introduced in [25] may be defined via a lifting of the consequence

\[p_{\mu}=\sum_{\lambda\vdash n}\chi_{S_{n}}^{\lambda}(\mu)s_{\lambda} \tag{16}\]

of Schur-Weyl duality, with partition algebra characters used in place of symmetric group characters in an analogue of (16). The SageMath implementation of the \(\tilde{s}\)-basis from [25] provides a convenient way of computing partition algebra characters, and could thus be used to compute recombinants; we encourage applications of this kind.

Temperley-Lieb algebras form an important family of subalgebras of partition algebras. The _Temperley-Lieb immanants_ introduced by Rhoades and Skandera [30] are \(f\)-immanants defined in a way related to Temperley-Lieb algebras; we refer to [30] for details. It seems that past research influenced by [30], including relevant research on immanants or immanant-type functions as in [3, 27, 28, 29], has not involved any generalizations of immanants using partition algebra characters. It may be worthwhile to explore relationships among recombinants and Temperley-Lieb immanants, or to explore generalizations or variants of recombinants related to the way Temperley-Lieb immanants are defined.

The concept of a _twisted immanant_ was introduced in [14] and was based on how the irreducible character \(\chi^{\lambda}\), if restricted to an alternating subgroup, splits as a sum of two irreducible characters, writing \(\chi^{\lambda}=\chi^{\lambda_{+}}+\chi^{\lambda_{-}}\). What would be an appropriate notion of a _twisted recombinant_, and how could this be applied in a similar way, relative to [14]?

Immanants are often applied in the field of algebraic graph theory, via immanants of Laplacian matrices and the like. How could recombinants be applied similarly? Immanants of Toeplitz matrices are often studied due to recursive properties of such immanants. What is the recombinant of a given Toeplitz matrix?

### Acknowledgements

The author was supported through a Killam Postdoctoral Fellowship from the Killam Trusts, and the author wants to thank Karl Dilcher for many useful discussions.
The author is thankful to Mike Zabrocki for useful comments concerning the irreducible character basis and for many useful discussions concerning partition algebras.
2302.14444
Learning to Estimate Two Dense Depths from LiDAR and Event Data
Event cameras do not produce images, but rather a continuous flow of events, which encode changes of illumination for each pixel independently and asynchronously. While they output temporally rich information, they lack any depth information which could facilitate their use with other sensors. LiDARs can provide this depth information, but are by nature very sparse, which makes the depth-to-event association more complex. Furthermore, as events represent changes of illumination, they might also represent changes of depth; associating them with a single depth is therefore inadequate. In this work, we propose to address these issues by fusing information from an event camera and a LiDAR using a learning-based approach to estimate accurate dense depth maps. To solve the "potential change of depth" problem, we propose here to estimate two depth maps at each step: one "before" the events happen, and one "after" the events happen. We further propose to use this pair of depths to compute a depth difference for each event, to give them more context. We train and evaluate our network, ALED, on both synthetic and real driving sequences, and show that it is able to predict dense depths with an error reduction of up to 61% compared to the current state of the art. We also demonstrate the quality of our 2-depths-to-event association, and the usefulness of the depth difference information. Finally, we release SLED, a novel synthetic dataset comprising events, LiDAR point clouds, RGB images, and dense depth maps.
Vincent Brebion, Julien Moreau, Franck Davoine
2023-02-28T09:42:39Z
http://arxiv.org/abs/2302.14444v1
# Learning to Estimate Two Dense Depths from LiDAR and Event Data

###### Abstract

Event cameras do not produce images, but rather a continuous flow of events, which encode changes of illumination for each pixel independently and asynchronously. While they output temporally rich information, they lack any depth information which could facilitate their use with other sensors. LiDARs can provide this depth information, but are by nature very sparse, which makes the depth-to-event association more complex. Furthermore, as events represent changes of illumination, they might also represent changes of depth; associating them with a single depth is therefore inadequate. In this work, we propose to address these issues by fusing information from an event camera and a LiDAR using a learning-based approach to estimate accurate dense depth maps. To solve the "potential change of depth" problem, we propose here to estimate two depth maps at each step: one "before" the events happen, and one "after" the events happen. We further propose to use this pair of depths to compute a depth difference for each event, to give them more context. We train and evaluate our network, ALED, on both synthetic and real driving sequences, and show that it is able to predict dense depths with an error reduction of up to 61% compared to the current state of the art. We also demonstrate the quality of our 2-depths-to-event association, and the usefulness of the depth difference information. Finally, we release SLED, a novel synthetic dataset comprising events, LiDAR point clouds, RGB images, and dense depth maps.

Keywords: Sensor fusion · Machine learning · Dense depth estimation

## 1 Introduction

Rather than accumulating light to create images, event cameras perceive changes of illumination for each pixel independently and asynchronously. Thanks to their high temporal resolution (in the order of the microsecond) and high dynamic range, event cameras are a sensor of choice for dynamic applications in complex environments (fast motions, extreme illumination conditions), where traditional cameras reach their limits.

LiDAR sensors offer accurate but sparse 3D information of their surrounding environment. They are a key component for autonomous navigation, helping to solve multiple problems, e.g., obstacle detection and tracking, SLAM, etc. Yet, their sparsity often constitutes a limiting factor. While 64- or 128-channel LiDARs are starting to be commercialized, they come at a significantly high cost, and are still not as dense as cameras.

In this work, we focus on the fusion of LiDAR and event data, which we consider as a dual problem: (1) LiDAR depths densification and (2) events-depths association. Regarding problem (1), we are interested in densifying the LiDAR data using the events as a guide. As a result, dense depth maps are obtained, which allow for a dense 3D perception of the observed scene. As for problem (2), we are interested in associating a depth to each event. By doing so, each event can be projected in 3D, and then even be backprojected in 2D in another vision sensor. For a fully calibrated and synchronized setup, this process would allow for the superimposition of events and RGB images, a task which is only possible at the moment through the use of specific low-resolution frame+events cameras like the DAVIS240C [2].

Estimating dense depth maps from sparse LiDAR data is a well-studied problem, as it solves the sparsity drawback of the LiDAR while keeping its metric scale.
However, using events to densify depth maps (i.e., problem (1)) might be inaccurately seen as a task that inherently includes problem (2), as corresponding depths for the events could be taken from the dense depth map. We argue in this work that, as each event represents a change in illumination, it might also represent a change in depth. As such, two depths should be associated to each event, and we will therefore compute two depth maps: one before the events happen, and one after they happen.

As an answer to these issues, we propose in this work a learning-based fusion method for estimating pairs of dense depth maps from events and sparse LiDAR data. Our main contributions are as follows:

* We propose to revise the principle of associating a single depth to each event to rather estimate two depths: one "before" the event happens, and one "after". Following this, we introduce the notion of "depth change map", to give more context to each event.
* We propose a novel convolutional network, the ALED (Asynchronous LiDAR and Events Depths densification) network, able to fuse asynchronous events and LiDAR data, and to estimate the two dense depth maps from them, while surpassing state-of-the-art accuracy.
* We finally build and share a high-definition simulated dataset, the SLED (Synthetic LiDAR Events Depths) dataset, used as part of the training of the network and its evaluation.

If the reader is interested, supplementary material, the SLED dataset, source codes, as well as videos showcasing results on both simulated and real data are all available at [https://vbrebion.github.io/ALED](https://vbrebion.github.io/ALED).

## 2 Related Work

### LiDAR Densification

LiDAR sensors only produce sparse point clouds, which is challenging for numerous applications (3D reconstruction, object detection, SLAM, etc.). As a consequence, LiDAR depth completion is a subject that has been widely studied in the literature. Some authors try to obtain dense depth maps while only relying on the sparse data from the LiDAR. These methods either use machine learning [4, 14, 39] or traditional image processing operations [19]. The most successful approaches use a secondary modality as a guide for the densification process. While most of these approaches employ a RGB camera as the secondary sensor [7, 14, 15, 42], other authors have proposed using alternative modalities, such as stereo cameras [23] or, more recently, event cameras [5].

### Fusion of Events and Other Modalities

Due to their relative youth, the literature on the fusion of data from event cameras with other sensors is quite sparse. Most of the investigations have focused on the fusion of events and frames, thanks to sensors offering both modalities like the DAVIS camera [2]. These works include frame interpolation and deblurring [26, 27, 31], feature tracking [8, 20], object detection [3, 16, 38], or even steering prediction [13]. In the past few years, a few authors have started investigating the fusion of events and LiDAR data. Explored issues include calibration [35, 36], and very recently, point cloud enhancement with events [21] and LiDAR densification [5].

### Depth Estimations with Events

Several approaches have been proposed in order to estimate sparse or dense depth maps by using a single event camera. Kim et al. [17] used probabilistic filters to simultaneously estimate the motion of the camera, reconstruct a log intensity image of the observed scene, and construct a sparse inverse depth map. Zhu et al.
[44] used a convolutional network to jointly predict depth and egomotion, by trying to minimize the amount of motion blur in the accumulated events. Hidalgo-Carrio et al. [12] were the first to estimate dense depth maps from a monocular event camera, through the use of a recurrent convolutional network.

In parallel, other authors have advocated for the use of a secondary sensor to help the depth estimation. Schraml et al. [32, 33] and Nam et al. [25] used two event cameras, and estimated depths by creating images of accumulated events for each camera and applying stereo matching. While [32, 33] used traditional model-based approaches, [25] entirely relied on learning-based networks: an attention-based network to construct detailed events representations, then a convolutional network for depth map inference. Other authors have also combined the event camera with a RGB sensor; Gehrig et al. [9] for instance designed a recurrent network to fuse asynchronous data and estimate dense depths from them. Finally, some authors have also used depth sensors in direct combination with event cameras. Weikersdorfer et al. [41] used an RGB-D camera to obtain dense depths, and used the depth-augmented events to perform SLAM. Li et al. [21] used a LiDAR sensor to associate a depth to each event through the use of a Voronoi diagram and a set of heuristic rules. Cui et al. [5] also employed a LiDAR, to derive dense depth maps by using 3D geometric information.

## 3 Depth Change Map: Two Depths per Event

The fusion of depths and events is a problem that can have two different goals: (1) obtaining dense depth maps from events and sparse LiDAR data, or (2) determining a depth for each event. While problem (1) can be interpreted as a LiDAR densification method guided by the events, we argue here for problem (2) that associating a single depth to an event is inadequate. By definition, an event represents a significant change in illumination observed by a given pixel. Under motion, observed events can either originate from (a) texture changes inside an object; or from (b) the contour of an object. In case (a), associating a single depth to these events can be coherent, as depth inside an object should be subject to little variation. However, doing so in case (b) is erroneous, as the events are likely to also denote a depth change.

Instead, we propose to estimate two dense depth maps: one before the events happen, which we will denote \(D_{\text{bf}}\) in the rest of this article, and one after the events happen, which we will denote \(D_{\text{af}}\). We can then formulate the depth change map as \(D_{\text{af}}-D_{\text{bf}}\) and compare the two depths \(d_{\text{bf}}\) and \(d_{\text{af}}\) for each pixel. Three meaningful cases can be distinguished:

1. \(d_{\text{af}}-d_{\text{bf}}\approx 0\): the pixel is located in an area where depths do not vary much, i.e., inside an object;
2. \(d_{\text{af}}-d_{\text{bf}}\gg 0\): the pixel was at the edge of an object, and is now on an object further away;
3. \(d_{\text{af}}-d_{\text{bf}}\ll 0\): the pixel was located on a far object, and is now at the edge of a closer object.

The depth difference information given by the depth change map can especially help event processing to differentiate real objects from artifacts such as shadows and even noise. An illustration of some possibilities offered by the depth change map on events is given in Fig. 1.
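As a concrete sketch of this three-way classification (our own illustration, not the paper's reference code; the 1 m threshold matches the one used in the evaluation of Section 6.3):

```python
import torch

def classify_events(d_bf, d_af, events_xy, threshold=1.0):
    """Classify events using the depth change map D_af - D_bf (cases 1-3 above).

    d_bf, d_af: (H, W) dense depth maps "before" and "after" the events;
    events_xy: (N, 2) integer pixel coordinates (x, y) of the events.
    Returns one label per event: 0 (inside an object), +1 (now on a farther
    object), or -1 (now at the edge of a closer object).
    """
    x, y = events_xy[:, 0].long(), events_xy[:, 1].long()
    diff = (d_af - d_bf)[y, x]                 # depth change at each event
    labels = torch.zeros_like(diff, dtype=torch.long)
    labels[diff > threshold] = 1               # case 2: d_af - d_bf >> 0
    labels[diff < -threshold] = -1             # case 3: d_af - d_bf << 0
    return labels
```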
Other applications could also take advantage of the pair of depth maps \(D_{\text{bf}}\) and \(D_{\text{af}}\): ego-motion and speed estimation, objects clustering, scene flow, etc.

Figure 1: Example of the importance of the depth change map for each event on the "Town01_00" sequence from our SLED dataset. Notice how simple thresholds on this depth difference help distinguish the events linked to the contour of real objects from the events corresponding to the texture of the road, the halo from the street lamp, or even the noisy events in the sky.

## 4 Method

### The ALED Network

Inspired by the Recurrent Asynchronous Multimodal Network (RAMNet) architecture of Gehrig et al. [9] for RGB and events fusion, we propose here a fully convolutional recurrent network to estimate dense depths from asynchronous LiDAR and events data. We call it the ALED Net, for Asynchronous LiDAR and Events Depths densification network. Our network can be decomposed in two main parts: an encoder, tasked with fusing asynchronous events and LiDAR features at different scales, and a decoder, tasked with interpreting the fused features for estimating dense depths.

In its encoder part, illustrated in Fig. 2, the LiDAR and events inputs are fed independently. Both of them go through an encoding head, computing a first feature map of 32 channels while keeping their original height and width. Convolutional encoders (in the form of ResNet Basic Blocks [11]) are then used to compute feature maps at scales 1/2, 1/4, and 1/8, doubling the number of channels every time. Each of these feature maps is then used as the input of a convolutional gated recurrent unit (convGRU) block [34], updating its corresponding state. Since these states are shared between the LiDAR and events encoders, both parts of the network can update them asynchronously.

Figure 2: The encoder part of the network.

In its decoder part, illustrated in Fig. 3, the convGRU state at the lowest scale first goes through two residual blocks. Then, for each following scale, the decoded feature map from the previous scale is upscaled by using convex upsampling [37]. While a simple bilinear upsampling was used in RAMNet [9], convex upsampling allows our network to learn how to upscale features from a lower scale, using information from a higher scale. We propose to design the convGRU such that the first half of its state (in purple in Fig. 2 and 3) guides the convex upsampling. Fusion of the upsampled decoded features and the state from the current scale is then performed by concatenating the output of the convex upsampling block with the remaining half of the convGRU state (in green in Fig. 2 and 3), and by applying a convolution to reduce the number of channels. After the last scale, a prediction head is used to obtain the two final depth maps, \(D_{\text{bf}}\) and \(D_{\text{af}}\), in the form of a two-channel tensor, at the same full resolution as the events input.

Figure 3: The decoder part of the network.

Regarding the implementation, both encoding heads use a kernel size of 5. LiDAR and events encoders use a kernel size of 5, with stride 2. Both the convGRU and residual blocks use a kernel size of 3. The convolutions in the convex upsampling blocks use a kernel size of 5, while the convolutions following the concatenations use a kernel size of 1. Finally, the prediction layer also uses a kernel size of 1. Convolutions are followed by a PReLU activation function [10], and instance normalization is used in the ResNet encoders as proposed by Pan et al. [28].
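For concreteness, here is a minimal PyTorch sketch of a convGRU cell of the kind used throughout the encoder (our own illustrative implementation, using the kernel size of 3 stated above; the actual ALED code may differ in details such as gate initialization):

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: a GRU whose gates are 2D convolutions."""

    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        # Update (z) and reset (r) gates, computed jointly.
        self.conv_zr = nn.Conv2d(in_channels + hidden_channels,
                                 2 * hidden_channels, kernel_size, padding=padding)
        # Candidate state.
        self.conv_h = nn.Conv2d(in_channels + hidden_channels,
                                hidden_channels, kernel_size, padding=padding)

    def forward(self, x, h):
        zr = torch.sigmoid(self.conv_zr(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)
        h_tilde = torch.tanh(self.conv_h(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde  # updated state, same (B, C, H, W) shape
```

Because the state \(h\) is shared, the same cell can be updated by the LiDAR branch or by the events branch whenever new data arrives, which is what enables the asynchronous fusion described above.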
In total, the network contains 26 million trainable parameters.

### Data Representation

#### 4.2.1 Events

We use Discretized Event Volumes [44] as the input representation for the events. We follow the formulation of Perot et al. [29], where the Discretized Event Volume \(V\) for input events \(\{e_{i}=(x_{i},y_{i},p_{i},t_{i})\}_{i=1}^{N}\) is described as:

\[V_{t,p,y,x}=\sum_{e_{i},x_{i}=x,y_{i}=y,p_{i}=p}\max(0,1-|t-t_{i}^{*}|) \tag{1}\]

\[t_{i}^{*}=(B-1)\frac{t_{i}-t_{0}}{t_{N}-t_{0}} \tag{2}\]

where \((x,y)\) is the position of the event, \(t\) its timestamp, and \(p\) its polarity. In our experiments, we set \(B=5\) bins, and concatenate the negative and positive polarity bins along the first dimension, resulting in a tensor of shape \((10,H,W)\).

#### 4.2.2 LiDAR and Depths

LiDAR data is fed to the network as a 1-channel depth image. To do so, each LiDAR point cloud is projected onto the image plane of the event camera. Pixels onto which one or more LiDAR points project are assigned the smallest of the corresponding depths. Pixels without any LiDAR point are given a value of 0. To ease learning, the LiDAR projection and ground truth images are normalized between 0 and 1 based on the maximum LiDAR range (200m in the case of our synthetic SLED dataset, 100m in the case of the MVSEC dataset [43]).

### Loss functions

To train our network, we combine the use of two losses: a pixel-wise \(\ell_{1}\) loss \(\mathcal{L}_{\text{pw}}\), and a multiscale gradient matching loss \(\mathcal{L}_{\text{msg}}\). The pixel-wise \(\ell_{1}\) loss operates as the main supervision loss, applied on both the "before" and the "after" depth maps, and is defined as follows:

\[\mathcal{L}_{\text{pw}}=\sum_{x,y}\left\|D(x,y)-\hat{D}(x,y)\right\|_{1} \tag{3}\]

where \(D\) and \(\hat{D}\) are, respectively, the estimated and ground truth depth maps. However, when supervised by the \(\ell_{1}\) loss alone, the network tends to produce blurry and non-smooth depth images. To solve this issue, we use here a multiscale gradient matching loss inspired by [40], also applied on both the "before" and the "after" depth maps, and defined as

\[\mathcal{L}_{\text{msg}}=\sum_{h\in\{1,2,4,8,16\}}\sum_{x,y}\left\|\mathbf{g}[D](x,y,h)-\mathbf{g}[\hat{D}](x,y,h)\right\|_{2} \tag{4}\]

with the discrete gradient function \(\mathbf{g}\) of an image \(f\) defined as

\[\mathbf{g}[f](x,y,h)=\left(f(x+h,y)-f(x,y);f(x,y+h)-f(x,y)\right)^{T} \tag{5}\]

This loss helps regularize the depth estimates, by making depth discontinuities more prominent, and by smoothing homogeneous regions. Our total loss \(\mathcal{L}\) for a sequence of length \(T\) is finally defined as

\[\mathcal{L}=\sum_{t=0}^{T}\sum_{\text{bf,af}}(\mathcal{L}_{\text{pw}}^{t}+\alpha\mathcal{L}_{\text{msg}}^{t}) \tag{6}\]

where \(\alpha\) is a weight parameter for the multiscale gradient matching loss. In our experiments, we observed that giving too much importance to the multiscale gradient matching loss early in the training makes the network unable to derive correct depth estimates. Therefore, we always set \(\alpha=0.1\) during the first epoch of training, to force the network to use mainly the \(\ell_{1}\) loss and converge towards good initial depth estimates. For the remaining epochs, we set \(\alpha=1\).
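As a sketch, the event representation of Eqs. (1)-(2) can be built in a few lines of PyTorch (our own illustration; the stacking of the two polarities along the channel axis follows Section 4.2.1, while the use of the first and last timestamps as \(t_{0}\) and \(t_{N}\) is an assumption):

```python
import torch

def event_volume(events, B=5, H=720, W=1280):
    """Discretized Event Volume of Eqs. (1)-(2).

    events: (N, 4) float tensor of rows (x, y, p, t), with polarity p in {0, 1}.
    Returns a (2*B, H, W) tensor, negative then positive polarity bins.
    """
    x, y, p = events[:, 0].long(), events[:, 1].long(), events[:, 2].long()
    t = events[:, 3]
    t_star = (B - 1) * (t - t[0]) / (t[-1] - t[0])       # Eq. (2)
    volume = torch.zeros(2 * B, H, W)
    for b in range(B):
        weight = (1 - (b - t_star).abs()).clamp(min=0)   # max(0, 1 - |t - t_i*|)
        volume.index_put_((p * B + b, y, x), weight, accumulate=True)
    return volume
```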
## 5 The SLED Dataset

In order to train and evaluate the proposed network, we require a dataset containing events, LiDAR point clouds, as well as a dense ground truth on depths. While we can use the MVSEC dataset [43] for low-resolution cameras, its ground truth is constructed by accumulating point clouds from a LiDAR sensor, a solution which introduces errors in case of moving objects. A similar dataset does not exist for sensors of higher resolution. For these reasons, we use the CARLA simulator [6] (version 0.9.14) to generate a dataset with perfect synchronization and calibration of the sensors, and perfect ground truth depth. We call it SLED, for Synthetic LiDAR Events Depths dataset. It is composed of 160 sequences of 10 seconds each, for a total of more than 20 minutes of data. These sequences are recorded on the _Town01_ to _Town07_ and _Town10HD_ maps (20 sequences for each map), each sequence starting from a different geographic location. By doing so, a wide range of environments is represented within the dataset, as detailed in Table 1.

\begin{table}
\begin{tabular}{c c c c c c}
\hline \hline
Map & Set & Environment & Features & Night seq. & Day seq. \\
\hline
Town01 & Test & Town & Small buildings, bridges, distant forests and mountains, palm trees & 4 & 16 \\
Town02 & Train & Town & Small buildings, plains, forest road & 4 & 16 \\
Town03 & Test & City & Tall and small buildings, roundabouts, tunnel, aerial railway & 4 & 16 \\
Town04 & Val. & Town & Small buildings, highway, parking, lake, forests and mountains & 4 & 16 \\
Town05 & Train & City & Tall buildings, parking, aerial beltway and railway & 4 & 16 \\
Town06 & Train & Suburban & Small buildings, U-turns, distant hills & 4 & 16 \\
Town07 & Train & Countryside & Barns, grain silos, fields, mountain road & 4 & 16 \\
Town10HD & Train & City & Buildings, monuments and sculptures, playgrounds, seaside & 4 & 16 \\
\hline \hline
\end{tabular}
\end{table} Table 1: Detailed content of our SLED dataset containing 160 sequences of 10 seconds each.

Each sequence contains a 1280\(\times\)720 event camera, a 40-channel LiDAR, and a 1280\(\times\)720 depth camera which is perfectly aligned with the event camera. Both the event data and the depth images are recorded at 200Hz, while the LiDAR is configured to run at 10Hz. RGB images (1280\(\times\)720) are also provided at a 30Hz rate, aligned with the event-based sensor. The LiDAR sensor is configured with a maximum range of 200 meters.

For realism and diversity purposes, AI-controlled vehicles and pedestrians are added to the simulation. Sun altitude also varies, resulting for each map in 4 night recordings, and the other 16 recordings ranging from early morning (where the sun can be directly in front of the camera) to midday (where the sun is at its apogee). Varying cloudiness conditions are also used, adding more or less texture to the sky, and making shadows more diverse. We also configure the event camera in CARLA to use a linear intensity scale rather than the default logarithmic one, making the events produced by the simulator more realistic. More details on that topic, as well as an overview of the data contained in the dataset, are given in the Supplementary Material.

## 6 Evaluation

### Dense Depths

#### 6.1.1 On the SLED Dataset

For training on the SLED dataset, we use the Adam optimizer [18] with a learning rate of \(10^{-4}\) and a batch size of 4, and train for a total of 50 epochs. To augment input data, we randomly crop it to \(608\times 608\), and apply random horizontal flipping.

Numerical results on the testing set are presented in the "Dense depths errors" column of Table 2. Evaluations are conducted on the _Town01_ and _Town03_ maps, which contain challenging environments with many unique features (bridges, tunnels,...) that are not present in the training maps. For the maximum range of 200 meters, ALED estimates depth maps with an average absolute error slightly over 4.5 meters for _Town01_, and around 5 meters for _Town03_. The respective average relative error is around 18% for _Town01_, and around 22% for _Town03_.

A first element that can explain these errors is that the LiDAR has a small vertical coverage of the image: close ground objects or the top of close buildings are not reached by the LiDAR, meaning that accurate depth estimations for these objects are complex to achieve. In contrast, sky pixels (for which ALED produces good results) are only accounted for at the full 200m cutoff. These observations correlate with the larger relative errors observed for close cutoff distances compared to the full 200m cutoff. It can also be observed that errors on the "after" depth maps are slightly higher. This is to be expected, as the network has to make use of the events to estimate the movement and propagate the depths accordingly.

\begin{table}
\begin{tabular}{c c c c c c c c c c c c}
\hline \hline
 & & \multicolumn{4}{c}{Dense depths errors} & \multicolumn{4}{c}{Sparse depths errors} & \multicolumn{2}{c}{Depth change map errors} \\
**Map** & **Cutoff** & \multicolumn{2}{c}{On \(D_{\text{bf}}\)} & \multicolumn{2}{c}{On \(D_{\text{af}}\)} & \multicolumn{2}{c}{On \(D_{\text{bf}}\)} & \multicolumn{2}{c}{On \(D_{\text{af}}\)} & Absolute error & Correctly classified events \\
 & & Raw & Rel. & Raw & Rel. & NN & ALED & NN & ALED & & (with a threshold of 1m) \\
\hline
Town01 & 10m & 1.24m & 20.99\% & 1.37m & 28.60\% & 1.32m & 1.46m & 2.24m & 1.79m & 2.11m & 90.27\% \\
 & 20m & 2.08m & 2.30\% & 2.27m & 28.48\% & 1.51m & 1.84m & 2.53m & 2.15m & 3.18m & 85.07\% \\
 & 30m & 2.72m & 23.76\% & 2.92m & 26.03\% & 1.71m & 2.37m & 2.83m & 2.67m & 3.88m & 81.68\% \\
 & 100m & 4.25m & 24.01\% & 5.11m & 26.07\% & 2.40m & 3.48m & 3.91m & 3.95m & 5.12m & 77.48\% \\
 & 200m & 4.53m & 17.20\% & 4.81m & 18.66\% & 7.86m & 5.44m & 9.76m & 6.23m & 7.36m & 75.54\% \\
\hline
Town03 & 10m & 2.00m & 28.91\% & 2.00m & 30.11\% & 0.47m & 0.65m & 0.67m & 0.66m & 1.14m & 93.70\% \\
 & 20m & 2.25m & 29.91\% & 2.97m & 31.15\% & 0.64m & 0.75m & 1.12m & 0.87m & 2.54m & 87.16\% \\
 & 30m & 3.33m & 29.10\% & 3.45m & 30.24\% & 0.92m & 1.11m & 1.61m & 1.26m & 3.23m & 83.71\% \\
 & 100m & 4.60m & 27.37\% & 4.77m & 28.42\% & 1.88m & 2.55m & 3.17m & 2.88m & 4.47m & 78.80\% \\
 & 200m & 4.86m & 21.50\% & 5.03m & 22.33\% & 4.43m & 3.60m & 5.93m & 4.10m & 6.20m & 77.23\% \\
\hline \hline
\end{tabular}
\end{table} Table 2: Errors on the testing set of the SLED dataset for various cutoff depth distances. From left to right: average absolute and relative depth errors on both the "before" \(D_{\text{bf}}\) and "after" \(D_{\text{af}}\) depth maps; average absolute depth errors when associating a depth to each event; average absolute depth difference errors and percentage of correctly classified events based on this depth difference.

Figure 4: Two qualitative results on our synthetic SLED dataset, on _Town01_ (left) and _Town03_ (right). From top to bottom: events, LiDAR, predicted depth map, ground truth, color scale.
If the reader is interested, results for each sequence of the testing set are given in the Supplementary Material.

Qualitative results are given in Fig. 4. They showcase the ability of the network to estimate accurate depths for the whole image, by using events as a guide for the areas the LiDAR sensor cannot reach. This is particularly visible for the trees and the light pole in the left column, or the ceiling of the tunnel in the right column. If the reader is interested, more visual results are given in the Supplementary Material and in the videos linked in the introduction.

#### 6.1.2 On the MVSEC Dataset

In order to be able to compare our results with the other approaches in the literature, we also train and evaluate ALED on the MVSEC dataset [43]. We conduct our evaluation under three different sets of weights from the different training setups described below:

* \(\mathrm{ALED_{S}}\): the network is only trained in simulation, on the proposed SLED dataset;
* \(\mathrm{ALED_{R}}\): the network is only trained on real data, on the MVSEC dataset;
* \(\mathrm{ALED_{S\to R}}\): the network is first trained on the SLED dataset, then fine-tuned on the MVSEC dataset.

We use a batch size of 4 and a learning rate of \(10^{-4}\) (\(10^{-5}\) when fine-tuning). 50 epochs are used when training the network from scratch, 5 when fine-tuning it. We also augment input data, by randomly cropping it to \(256\times 256\), and by applying random horizontal flipping.

Numerical results are given in Table 3. Comparing our three sets of trained weights, it appears clearly that training on synthetic data before fine-tuning the network on the MVSEC dataset (\(S\to R\)) produces the best results. Training from scratch on the MVSEC dataset (\(R\)) is not as good as the \(S\to R\) variant, due to the limited data available for training. Finally, training solely on the SLED dataset (\(S\)) produces the worst results, due to the large differences in terms of resolution and LiDAR models between the two datasets, and as simulation is not a perfect reproduction of real data.

The proposed \(\mathrm{ALED_{S\to R}}\) network greatly outperforms all the other approaches of the state of the art. The most impressive results are obtained with distant cutoff depths, where fewer LiDAR points are available: our network is still able to infer accurate depths, while reference methods show large errors. Compared to the frames+events EvT\({}^{+}\) method of Sabater et al. [30], we improve the error by \(\{-2.7\%,13.4\%,31.3\%\}\) at minimum and by \(\{60.6\%,58.8\%,57.0\%\}\) at maximum for each of the \(\{10\mathrm{m},20\mathrm{m},30\mathrm{m}\}\) cutoff distances respectively. Compared to the LiDAR+events method of Cui et al. [5], this improvement is of \(\{32.7\%,17.4\%,56.7\%\}\) at minimum and of \(\{59.7\%,39.9\%,79.1\%\}\) at maximum.
\begin{table}
\begin{tabular}{c c c c c c c c c c}
\hline \hline
\multirow{2}{*}{**Recording**} & \multirow{2}{*}{**Cutoff**} & \multicolumn{3}{c}{**Event-based**} & **Event- and frame-based** & \multicolumn{4}{c}{**LiDAR- and event-based**} \\
 & & Zhu et al. [44] & E2Depth [12] & RAMNet [9] & EvT\({}^{+}\) [30] & Cui et al. [5] & **ALED\({}_{\mathrm{S}}\)** & **ALED\({}_{\mathrm{R}}\)** & **ALED\({}_{\mathrm{S\to R}}\)** \\
\hline
\multirow{5}{*}{Outdoor day 1} & 10m & 2.72 & 1.85 & 1.39 & 1.27 & 1.24 & 1.54 & 0.91 & **0.50** \\
 & 20m & 3.84 & 2.64 & 2.17 & 1.94 & 1.28 & 2.55 & 1.22 & **0.80** \\
 & 30m & 4.40 & 3.13 & 2.76 & 2.37 & 4.87 & 3.18 & 1.43 & **1.02** \\
 & 50m & - & - & - & - & - & 3.79 & 1.67 & **1.31** \\
 & 100m & - & - & - & - & - & 4.08 & 1.96 & **1.60** \\
\hline
\multirow{5}{*}{Outdoor night 1} & 10m & 3.13 & 3.38 & 2.50 & **1.48** & 2.26 & 2.24 & 1.75 & 1.52 \\
 & 20m & 4.02 & 3.82 & 3.19 & 2.09 & 2.19 & 3.32 & 2.10 & **1.81** \\
 & 30m & 4.89 & 4.46 & 3.82 & 2.84 & 4.50 & 3.82 & 2.25 & **1.95** \\
 & 50m & - & - & - & - & - & 4.31 & 2.44 & **2.20** \\
 & 100m & - & - & - & - & - & 4.62 & 2.73 & **2.54** \\
\hline
\multirow{5}{*}{Outdoor night 2} & 10m & 2.19 & 1.67 & 1.21 & 1.48 & 1.88 & 1.94 & 1.19 & **1.09** \\
 & 20m & 3.15 & 2.63 & 2.31 & 2.13 & 2.14 & 2.82 & 1.65 & **1.49** \\
 & 30m & 3.92 & 3.58 & 3.28 & 2.92 & 4.67 & 3.22 & 1.81 & **1.64** \\
 & 50m & - & - & - & - & - & 3.58 & 1.85 & **1.80** \\
 & 100m & - & - & - & - & - & 3.78 & 2.11 & **1.97** \\
\hline
\multirow{5}{*}{Outdoor night 3} & 10m & 2.86 & 1.42 & 1.01 & 1.40 & 1.78 & 1.76 & 0.85 & **0.81** \\
 & 20m & 4.46 & 2.33 & 2.34 & 2.05 & 1.93 & 2.43 & 1.25 & **1.16** \\
 & 30m & 5.05 & 3.18 & 3.43 & 2.79 & 4.55 & 2.78 & 1.42 & **1.33** \\
 & 50m & - & - & - & - & - & 3.12 & 1.57 & **1.51** \\
 & 100m & - & - & - & - & - & 3.31 & 1.73 & **1.66** \\
\hline \hline
\end{tabular}
\end{table} Table 3: Average absolute depth errors (in meters) on the MVSEC dataset for various cutoff depth distances. This evaluation is performed on the "before" depth map \(D_{\mathrm{bf}}\), to be consistent with the methods we compare ourselves to.

Figure 5: Qualitative results on "Outdoor day 1" from the MVSEC dataset with real data.

Qualitative results are presented in Fig. 5. All three ALED variants produce results which are visually close to the ground truth for ground objects. Since the MVSEC dataset lacks ground truth depth for the sky, and since the rare elements to have a ground truth for this part of the image are close buildings, close trees, or power lines, the network cannot learn to derive correct depth estimations for the corresponding pixels, leading to the purple blobs in the upper parts of Fig. 5e and 5f. Only the \(S\) variant is able to predict accurate values for sky areas (as our SLED dataset contains valid ground truth depths for all pixels), but has more difficulties for ground objects due to the lack of fine-tuning. Between the \(R\) and \(S\to R\) variants, improvement can still be seen, for instance for the edges of the objects of Fig. 5e and 5f, which are less uneven in the \(S\to R\) variant. Finally, when comparing our results to RAMNet, we can clearly observe that our method provides in all cases more accurate depth maps, where object boundaries are more prominent, and where estimated depths are closer to the ground truth. This observation further demonstrates that the use of a LiDAR input -- even if very sparse -- is of great help for obtaining accurate dense depth maps.

### Associating a Depth to Each Event

As stated in Sections 1 and 3, our goal is not only to estimate dense depth maps, but also to associate two depths to each event, allowing for their 3D reprojection and depth difference analysis.
Evaluation of the depth association to each event on the proposed SLED dataset is given in the "Sparse depths errors" column of Table 2. The sparse event-LiDAR fusion literature is limited to the method of Li et al. [21]. However, their approach only considers one depth per event, and is intended for a Road Side Unit (RSU) application (i.e., their event camera is fixed and evaluation is conducted on a specific dataset). Therefore, we decided to compare ourselves to a more naive (and faster) baseline: the Nearest Neighbor (NN) approach, where each event is given the depth of its closest LiDAR point. As the Nearest Neighbor approach cannot infer correct depths for events which are too far from a LiDAR scan, and so as to provide a fair comparison, we only consider the events between the bottom and top LiDAR scans.

As displayed in Table 2, in the "before" \(D_{\text{bf}}\) case, depending on the map and the cutoff distance, best results are shared between the NN approach and our \(\text{ALED}_{\text{S}}\) network. These results can be explained by the fact that our network is more likely to commit large errors for events at the boundary of close objects, as it might estimate that they should be given the depth of the more distant background. On the contrary, the NN approach will always attribute the depth of the closest LiDAR point, and will therefore commit more frequent but smaller errors. In the "after" \(D_{\text{af}}\) case, despite this potential source of error, our network \(\text{ALED}_{\text{S}}\) nearly always obtains the best results, as it has correctly learned the temporal propagation of the depths, a task which cannot be completed natively with the NN approach. We also remind here that these results are given for the parts of the image where LiDAR data is available: the NN method would not be able to derive correct estimations for the other parts of the image. Numerical results on each sequence of the dataset are given in the Supplementary Material.
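A minimal sketch of this NN baseline (our own illustration, assuming SciPy; the LiDAR points are given as projected pixel coordinates with their depths):

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_depth_baseline(lidar_uv, lidar_depths, events_xy):
    """Nearest Neighbor baseline: each event takes the depth of the closest
    projected LiDAR point, with "closest" measured in image space.

    lidar_uv: (M, 2) array of pixel coordinates of the projected LiDAR points;
    lidar_depths: (M,) array of the corresponding depths;
    events_xy: (N, 2) array of pixel coordinates of the events.
    Returns an (N,) array with one depth per event.
    """
    tree = cKDTree(lidar_uv)
    _, nearest = tree.query(events_xy)  # index of the closest LiDAR point
    return lidar_depths[nearest]
```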
Multiple evaluations on our synthetic and a real driving datasets have been performed to show the relevance of our contributions. In particular, on the MVSEC dataset, an improvement of up to 79.1% compared to the current LiDAR-and-events state of the art has been achieved, on complex daytime and nighttime recordings. In hindsight, further improvements could be brought to the method. Attention-based networks provide state-of-the-art results in numerous vision-based applications, and could improve the fusion of the event and LiDAR modalities. Making the network predict directly sparse depths for each event could also potentially provide better results for the depth to event inference. This could be achieved by using sparse convolutional networks [24] for instance, and could be subject to future work. The use of the Event Volume with a fixed time window as the input representation for the events in our network could also be revised, as it can become ill-suited under large motions. A solution could be to use an alternative representation, such as TORE Volumes [1], or to use adaptive accumulation times using methods such as the one proposed by Liu and Delbruck [22]. Finally, the recording of a real dataset with a high resolution event camera could also be considered, to complete the possibilities offered by the low-resolution MVSEC dataset.
2309.15428
Quasi-pure resolutions and some lower bounds of Hilbert coefficients of Cohen-Macaulay modules
Let $(A,\mathfrak{m})$ be a Gorenstein local ring and let $M$ be a finitely generated Cohen-Macaulay $A$-module. Let $G(A)=\bigoplus_{n\geq 0}\mathfrak{m}^n/\mathfrak{m}^{n+1}$ be the associated graded ring of $A$ and $G(M)=\bigoplus_{n\geq 0}\mathfrak{m}^nM/\mathfrak{m}^{n+1}M$ be the associated graded module of $M$. If $A$ is regular and if $G(M)$ has a quasi-pure resolution then we show that $G(M)$ is Cohen-Macaulay. If $G(A)$ is Cohen-Macaulay and if $M$ has finite projective dimension then we give lower bounds on $e_0(M)$ and $e_1(M)$. Finally let $A = Q/(f_1, \ldots, f_c)$ be a strict complete intersection with $\text{ord}(f_i) = s$ for all $i$. Let $M$ be a Cohen-Macaulay module with $\text{cx}_A(M) = r < c$. We give lower bounds on $e_0(M)$ and $e_1(M)$.
Tony J. Puthenpurakal, Samarendra Sahoo
2023-09-27T06:36:32Z
http://arxiv.org/abs/2309.15428v1
# Quasi-pure resolutions and some lower bounds of Hilbert coefficients of Cohen-Macaulay modules

###### Abstract

Let \((A,\mathfrak{m})\) be a Gorenstein local ring and let \(M\) be a finitely generated Cohen-Macaulay \(A\)-module. Let \(G(A)=\bigoplus_{n\geq 0}\mathfrak{m}^{n}/\mathfrak{m}^{n+1}\) be the associated graded ring of \(A\) and \(G(M)=\bigoplus_{n\geq 0}\mathfrak{m}^{n}M/\mathfrak{m}^{n+1}M\) be the associated graded module of \(M\). If \(A\) is regular and if \(G(M)\) has a quasi-pure resolution then we show that \(G(M)\) is Cohen-Macaulay. If \(G(A)\) is Cohen-Macaulay and if \(M\) has finite projective dimension then we give lower bounds on \(e_{0}(M)\) and \(e_{1}(M)\). Finally let \(A=Q/(f_{1},\ldots,f_{c})\) be a strict complete intersection with \(\operatorname{ord}(f_{i})=s\) for all \(i\). Let \(M\) be a Cohen-Macaulay module with \(\operatorname{cx}_{A}(M)=r<c\). We give lower bounds on \(e_{0}(M)\) and \(e_{1}(M)\).

Key words and phrases: Associated graded rings and modules, strict complete intersections, Gorenstein rings, graded resolutions

2020 Mathematics Subject Classification: Primary 13A30, 13C14; Secondary 13D40, 13D07

## 1. Introduction

Let \((A,\mathfrak{m})\) be a Cohen-Macaulay local ring of dimension \(d\) with residue field \(k=A/\mathfrak{m}\) and let \(M\) be a finitely generated \(A\)-module. Let \(G(A)=\bigoplus_{n\geq 0}\mathfrak{m}^{n}/\mathfrak{m}^{n+1}\) be the associated graded ring of \(A\) and \(G(M)=\bigoplus_{n\geq 0}\mathfrak{m}^{n}M/\mathfrak{m}^{n+1}M\) be the associated graded module of \(M\), considered as a graded \(G(A)\)-module. Let \(\lambda(E)\) denote the length of an \(A\)-module \(E\). Let \(M\) be an \(A\)-module of dimension \(r\). The Hilbert series of \(M\) is

\[H_{M}(z)=\sum_{i\geq 0}\lambda(\mathfrak{m}^{i}M/\mathfrak{m}^{i+1}M)z^{i}=\frac{h_{M}(z)}{(1-z)^{r}},\]

where \(h_{M}(z)=h_{0}+h_{1}z+\ldots+h_{t}z^{t}\) is called the \(h\)-polynomial of \(M\). The integer \(e_{i}(M)=h_{M}^{(i)}(1)/i!\) is called the \(i\)-th Hilbert coefficient of \(M\), where \(h_{M}^{(i)}(z)\) is the \(i\)-th derivative of \(h_{M}(z)\).
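For instance — a toy illustration of these definitions — take \(A=k[[x,y]]/(xy)\) and \(M=A\). Since \(\mathfrak{m}^{n}/\mathfrak{m}^{n+1}\) is spanned by the classes of \(x^{n}\) and \(y^{n}\) for \(n\geq 1\), we get

\[H_{M}(z)=1+2z+2z^{2}+\cdots=\frac{1+z}{1-z},\]

so that \(h_{M}(z)=1+z\), \(e_{0}(M)=h_{M}(1)=2\) and \(e_{1}(M)=h_{M}^{(1)}(1)=1\).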
**I:** _The case when \(A\) is regular local._ Let \(A=K[[x_{1},\ldots,x_{n}]]\) be a power series ring over the field \(K\) and let \(M\) be a Cohen-Macaulay \(A\)-module. In [11] the author proved that if \(G(M)\) has a pure resolution then \(G(M)\) is Cohen-Macaulay. When \(A\) is not equicharacteristic this result is proved in [1]. Let \(R\) be a standard graded polynomial ring over a field and let \(E\) be a finitely generated graded \(R\)-module. Define

\[\alpha_{i}=\max\{n\,|\,\operatorname{Tor}_{i}(k,E)_{n}\neq 0\}\]

and

\[\gamma_{i}=\min\{n\,|\,\operatorname{Tor}_{i}(k,E)_{n}\neq 0\}.\]

We say \(E\) has a _pure resolution_ if \(\alpha_{i}=\gamma_{i}\) for all \(i\geq 0\), and a _quasi-pure resolution_ if \(\gamma_{i}\geq\alpha_{i-1}\) for all \(i\geq 1.\) For examples of pure and quasi-pure resolutions, see Example 3.5. Our first result is:

**Theorem 1.1**.: _Let \((A,\mathfrak{m})\) be a regular local ring and \(M\) be a Cohen-Macaulay \(A\)-module. Set \(R=G(A)=k[X_{1},\ldots,X_{n}]\) and \(p=\operatorname{projdim}(G(M))\). If \(G(M)\) is not Cohen-Macaulay then \(\alpha_{p}(M)<\alpha_{p-1}(M).\)_

An easy application of this theorem is:

**Corollary 1.2**.: _Let \((A,\mathfrak{m})\) be a regular local ring and \(M\) be a finitely generated Cohen-Macaulay \(A\)-module. If \(G(M)\) has a quasi-pure resolution then \(G(M)\) is Cohen-Macaulay._

**II:** _Cohen-Macaulay modules with finite projective dimension._ If \(A\) is not regular then note that even if \(\operatorname{projdim}_{A}M\) is finite it might very well happen that \(\operatorname{projdim}_{G(A)}G(M)\) is infinite. For example, if \(A\) is Cohen-Macaulay with \(\operatorname{depth}G(A)=0\) then \(\operatorname{projdim}_{G(A)}G(M)\) is infinite whenever \(\dim M<\dim A\). However, if \(A\) is Gorenstein and \(G(A)\) is Cohen-Macaulay, then for a Cohen-Macaulay module \(M\) with finite projective dimension we can give lower bounds on the multiplicity \(e_{0}(M)\) and on the first Hilbert coefficient \(e_{1}(M)\) of \(M\). Let \(\operatorname{reg}(G(A))\) denote the regularity of \(G(A)\). We prove

**Lemma 1.3**.: _Let \((A,\mathfrak{m})\) be a Gorenstein local ring. Assume \(G(A)\) is Cohen-Macaulay. Let \(M\) be a Cohen-Macaulay module of dimension \(r\) with finite projective dimension. Then \(e_{0}(M)\geq\mu(M)+c\) and \(e_{1}(M)\geq\binom{c+1}{2},\) where \(c=\operatorname{reg}(G(A)).\)_

We also show that if the lower bound for \(e_{1}(M)\) is attained then necessarily \(G(M)\) is Cohen-Macaulay, see Theorem 4.3.

**III:** _Cohen-Macaulay modules over strict complete intersections._ Next, we consider the following situation. Let \((Q,\mathfrak{n})\) be a regular local ring with infinite residue field, and let \(f_{1},\ldots,f_{c}\in\mathfrak{n}^{2}\) be of order \(s\) such that \(f_{1}^{*},\ldots,f_{c}^{*}\) is a \(G(Q)\)-regular sequence. Let \(A=Q/(f_{1},\ldots,f_{c})\) and let \(M\) be a Cohen-Macaulay \(A\)-module with \(\operatorname{cx}_{A}(M)=r<c.\)

**Theorem 1.4**.: _(with hypotheses as above) Then \(e_{0}(M)\geq\mu(M)+\alpha\) and \(e_{1}(M)\geq\binom{\alpha+1}{2}\), where \(\alpha=(c-r)(s-1).\) Moreover if \(e_{1}(M)=\binom{\alpha+1}{2}\) then \(G(M)\) is Cohen-Macaulay._

We now describe in brief the contents of this paper. In Section 2 we describe some preliminary results that we need. In Section 3 we sketch a proof of Theorem 1.1 and Corollary 1.2. We also give some examples showing that our hypothesis about \(M\) is optimal. In Section 4 we give a proof of Lemma 1.3. Finally, in Section 5 we prove Theorem 1.4.

## 2. Preliminaries

Throughout this paper all rings considered are Noetherian and all modules considered (unless stated otherwise) are finitely generated.

**2.1**.: Let \((A,\mathfrak{m})\) be a local ring and let \(M\) be an \(A\)-module. Define \(L(M)=\bigoplus_{n\geq 0}M/\mathfrak{m}^{n+1}M\). Let \(\mathcal{R}=A[\mathfrak{m}t]\) be the Rees ring and \(\mathcal{R}(M)=\bigoplus_{n\geq 0}\mathfrak{m}^{n}Mt^{n}\) be the Rees module of \(M\). The Rees ring \(\mathcal{R}\) is a subring of \(A[t]\), so \(A[t]\) is an \(\mathcal{R}\)-module. Therefore \(M[t]=M\otimes_{A}A[t]\) is an \(\mathcal{R}\)-module. The exact sequence

\[0\to\mathcal{R}(M)\to M[t]\to L(M)(-1)\to 0\]

defines an \(\mathcal{R}\)-module structure on \(L(M)(-1)\), and hence on \(L(M)\) (for more details see [9, Definition 4.2]). Note that \(L(M)\) is not a finitely generated \(\mathcal{R}\)-module.
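We record for later use (it is the key input in the proof of Theorem 3.3) the natural exact sequence of \(\mathcal{R}\)-modules

\[0\to G(M)\to L(M)\to L(M)(-1)\to 0,\]

whose degree-\(n\) component is simply

\[0\to\mathfrak{m}^{n}M/\mathfrak{m}^{n+1}M\to M/\mathfrak{m}^{n+1}M\to M/\mathfrak{m}^{n}M\to 0.\]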
**2.2**.: An element \(x\in\mathfrak{m}\) is called \(M\)-superficial with respect to \(\mathfrak{m}\) if there exists \(c\in\mathbb{N}\) such that for all \(n\geq c\), \((\mathfrak{m}^{n+1}M:_{M}x)\cap\mathfrak{m}^{c}M=\mathfrak{m}^{n}M\). If \(\operatorname{depth}M>0\) then one can show that an \(M\)-superficial element is \(M\)-regular; furthermore \((\mathfrak{m}^{n+1}M:_{M}x)=\mathfrak{m}^{n}M\) for \(n\gg 0.\) Superficial elements exist if the residue field is infinite. A sequence \(x_{1},\ldots,x_{r}\) in \((A,\mathfrak{m})\) is said to be an \(M\)-superficial sequence if \(x_{1}\) is \(M\)-superficial and \(x_{i}\) is \(M/(x_{1},\ldots,x_{i-1})M\)-superficial for \(2\leq i\leq r.\)

**2.3**.: Let \(\underline{x}=x_{1},\ldots,x_{n}\) be a sequence in \(A\). For an \(A\)-module \(M\), we denote by \(K_{\bullet}(\underline{x},M)\) the Koszul complex of \(\underline{x}\) with respect to \(M\) and by \(H_{i}(K_{\bullet}(\underline{x},M))\) its \(i\)-th homology module. Comparing the homology modules \(H_{i}(K_{\bullet}(\underline{x},M))\) and \(H_{i}(K_{\bullet}(\underline{x}^{\prime},M))\) (where \(\underline{x}^{\prime}=x_{1},\ldots,x_{n-1}\)), we get the following short exact sequence for all \(i\geq 0\):

\[0\to H_{0}(x_{n},H_{i}(K_{\bullet}(\underline{x}^{\prime},M)))\to H_{i}(K_{\bullet}(\underline{x},M))\to H_{1}(x_{n},H_{i-1}(K_{\bullet}(\underline{x}^{\prime},M)))\to 0.\]

**2.4**.: _(Sally descent)_ Let \((A,\mathfrak{m})\) be a local ring and \(M\) an \(A\)-module of dimension \(d\). Let \(x_{1},\ldots,x_{r}\) be an \(M\)-superficial sequence with \(r<d\). Set \(B=A/(x_{1},\ldots,x_{r})\) and \(N=M/(x_{1},\ldots,x_{r})M\). Then

\[\operatorname{depth}_{G(B)}G(N)\geq 1\quad\text{if and only if}\quad\operatorname{depth}_{G(A)}G(M)\geq r+1.\]

## 3. Length of \(i\)-th Koszul homology of associated graded modules

In this section \(\mathcal{R}\) denotes the Rees ring \(A[\mathfrak{m}t].\)

**Theorem 3.1**.: _Let \((A,\mathfrak{m})\) be a Cohen-Macaulay local ring with an infinite residue field and let \(M\) be a finitely generated Cohen-Macaulay \(A\)-module of dimension \(r\geq 1\). Let \(Xt=x_{1}t,\ldots,x_{r}t\in\mathcal{R}_{1},\) where \(\underline{x}=x_{1},\ldots,x_{r}\) is an \(M\)-superficial sequence. Then \(\lambda(H_{i}(Xt,L(M)))<\infty\) for all \(i\geq 1\)._

Proof.: We prove this by induction on \(r\). Set \(L=L(M)\). Let \(r=1\). We have an exact sequence

\[(\star)\qquad 0\rightarrow\mathcal{B}(x_{1},M)\to L(M)(-1)\xrightarrow{\phi}L(M)\to L(N)\to 0,\]

where \(\phi(a+\mathfrak{m}^{n}M)=x_{1}a+\mathfrak{m}^{n+1}M,\)\(\mathcal{B}(x_{1},M)=\bigoplus_{n\geq 0}(\mathfrak{m}^{n+1}M:x_{1})/\mathfrak{m}^{n}M\) and \(N=M/x_{1}M\). From the above exact sequence, we get \(H_{1}(x_{1}t,L(M))=\bigoplus_{n\geq 0}(\mathfrak{m}^{n+1}M:x_{1})/\mathfrak{m}^{n}M.\) Since \(x_{1}\) is \(M\)-superficial (and \(M\)-regular), \((\mathfrak{m}^{n+1}M:x_{1})/\mathfrak{m}^{n}M=0\) for \(n\gg 0.\) Hence \(\lambda(H_{1}(x_{1}t,L(M)))<\infty.\)

Now assume the result holds for \(r=s\). We have

\[0\to H_{0}(x_{s+1}t,H_{i}(X^{\prime}t,L))\to H_{i}(Xt,L)\to H_{1}(x_{s+1}t,H_{i-1}(X^{\prime}t,L))\to 0,\]

where \(X^{\prime}=x_{1},\ldots,x_{s}\). By induction, both outer terms have finite length for \(i\geq 2\), so \(\lambda(H_{i}(Xt,L(M)))<\infty\) for all \(i\geq 2\). Since \(H_{0}(X^{\prime}t,L(M))=L(M/X^{\prime}M)\) and \(x_{s+1}\) is \(M/X^{\prime}M\)-superficial, by (\(\star\)) we get that \(H_{1}(x_{s+1}t,H_{0}(X^{\prime}t,L(M)))\) has finite length. Hence \(\lambda(H_{i}(Xt,L(M)))<\infty\) for all \(i\geq 1\).

Next we prove

**Lemma 3.2**.: _Let \((A,\mathfrak{m})\) be a regular local ring with an infinite residue field and let \(M\) be a Cohen-Macaulay \(A\)-module of dimension \(r\). Let \(x_{1},\ldots,x_{r}\) be an \(M\oplus A\)-superficial sequence._
Then \(H_{i}(Xt,Yt,L(M))\) has finite length for all \(i>s\), where \(X=x_{1},\ldots,x_{r}\) is an \(M\)-superficial sequence and \(\mathfrak{m}=\langle x_{1},\ldots,x_{r},y_{1},\ldots,y_{s}\rangle\) (a minimal set of generators)._

Proof.: Set \(L=L(M)\). We prove, by induction on \(l\), that \(H_{i}(Xt,y_{1}t,\ldots,y_{l}t,L)\) has finite length for \(i>l\). Let \(l=1\). We have the following short exact sequence \[0\to H_{0}(y_{1}t,H_{i}(Xt,L))\to H_{i}(Xt,y_{1}t,L)\to H_{1}(y_{1}t,H_{i-1}(Xt,L))\to 0.\] By Theorem 3.1, we get that \(H_{i}(Xt,y_{1}t,L)\) has finite length for all \(i\geq 2\). Now assume the result holds for \(l=c.\) Similarly as above, we have the following short exact sequence \[0\to H_{0}(y_{c+1}t,H_{i}(Xt,Y^{\prime}t,L))\to H_{i}(Xt,Yt,L)\to H_{1}(y_{c+1}t,H_{i-1}(Xt,Y^{\prime}t,L))\to 0,\] where \(Y^{\prime}=y_{1},\ldots,y_{c}.\) Since \(H_{i}(Xt,y_{1}t,\ldots,y_{c}t,L)\) has finite length for \(i>c\), it follows that \(H_{i}(Xt,y_{1}t,\ldots,y_{c+1}t,L)\) has finite length for \(i>c+1\).

The main result of this section is:

**Theorem 3.3**.: _Let \((A,\mathfrak{m})\) be a regular local ring of dimension \(d\) and let \(M\) be a Cohen-Macaulay \(A\)-module. Set \(R=G(A)=k[X_{1},\ldots,X_{d}]\) and \(p=\operatorname{projdim}(G(M))\). If \(G(M)\) is not Cohen-Macaulay then \(\alpha_{p}(M)<\alpha_{p-1}(M).\)_

Proof.: We may assume the residue field of \(A\) is infinite. Let the dimension of \(M\) be \(r\) and let \(x_{1},\ldots,x_{r}\) be an \(M\oplus A\)-superficial sequence with respect to \(\mathfrak{m}.\) Since \(A\) is regular with infinite residue field, \(x_{1},\ldots,x_{r}\) is part of a regular system of parameters of \(A\). Let \(\mathfrak{m}=\langle x_{1},\ldots,x_{r},y_{1},\ldots,y_{l}\rangle\) be a minimal set of generators. Here \(1\leq r\leq d-1\), so \(l\geq 1\). Let \(\operatorname{depth}G(M)=c<r\) (\(0\leq c\leq r\)). If \(c\geq 1\) then \(x_{1}^{*},\ldots,x_{c}^{*}\) is \(G(M)\)-regular (see [8], Theorem 8) and \[G(M)/(x_{1}^{*},\ldots,x_{c}^{*})G(M)\cong G(M/(x_{1},\ldots,x_{c})M).\] Set \(N=M/(x_{1},\ldots,x_{c})M\) and \(S=G(A/(x_{1},\ldots,x_{c}))=R/(x_{1}^{*},\ldots,x_{c}^{*}).\) We also have \(\operatorname{Tor}_{i}^{R}(k,G(M))\cong\operatorname{Tor}_{i}^{S}(k,G(N))\) for all \(i\) (see [7], p. 140). So we may now assume \(\operatorname{depth}G(M)=0\), \(\dim M=\dim G(M)=r\geq 1\) and \(\dim A=d=r+l\geq r+1.\) We have a short exact sequence \[0\to G(M)\to L(M)\to L(M)(-1)\to 0.\] This induces the following exact sequence \[0\to H_{d}(Xt,Yt,G(M))\to H_{d}(Xt,Yt,L(M))\to H_{d}(Xt,Yt,L(M)(-1))\to\ldots.\] Set \(V_{i}=H_{i}(Xt,Yt,G(M))\) and \(W=H_{d}(Xt,Yt,L(M))\). From the above exact sequence we get \[0\to V_{d,n}\to W_{n}\to W_{n-1}\to V_{d-1,n}\to\ldots.\] We know \(V_{d-1,n}=0\) for all \(n\geq\alpha_{p-1}(M)+1.\) So for all \(n\geq\alpha_{p-1}(M)+1\) we get surjections \[\cdots\twoheadrightarrow W_{n+2}\twoheadrightarrow W_{n+1}\twoheadrightarrow W_{n}\twoheadrightarrow W_{n-1}.\] Since \(r\geq 1\) and \(d=r+l>l\), the length of \(W=H_{d}(Xt,Yt,L(M))\) is finite by Lemma 3.2. So \(W_{n+j}=0\) for all \(j\gg 0,\) and thus \(W_{j}=0\) for \(j\geq\alpha_{p-1}.\) So \(V_{d,j}=0\) for all \(j\geq\alpha_{p-1}.\) Hence \(\alpha_{p}(M)<\alpha_{p-1}(M).\)

As a consequence we get:

**Corollary 3.4**.: _(With hypotheses as in Theorem 3.3) If \(G(M)\) has a quasi-pure resolution then \(G(M)\) is Cohen-Macaulay._

Proof.: Suppose \(G(M)\) is not Cohen-Macaulay. Set \(p=\operatorname{projdim}(G(M))\). By the above theorem, \(\alpha_{p}<\alpha_{p-1}.\) But \(G(M)\) has a quasi-pure resolution, so \(\alpha_{p}\geq\gamma_{p}\geq\alpha_{p-1}.\) This is a contradiction.
We give some examples which show that our assumptions are optimal. We used Singular [5] to check the following examples.

**Example 3.5**.:
1. Let \(A=k[[x,y]]\), \(I=(x^{2},xy)\) and \(M=A/I\). It is clear that \(M\) is not Cohen-Macaulay and is of dimension \(1\). Note \(G(M)=R/(X^{2},XY),\) where \(R=k[X,Y]\). We have the following pure graded resolution of \(G(M)\): \[0\to R(-3)\to R^{2}(-2)\to R\to 0.\] Here \(\alpha_{2}=3\) and \(\alpha_{1}=2\), i.e. \(\alpha_{2}>\alpha_{1}.\)
2. Let \(A=k[[x,y]]\) and \(M=k.\) Then \(G(M)=k\) is Cohen-Macaulay. From the Koszul complex of \(G(M)\) we get \(\alpha_{i}=i\) for all \(i\), \(1\leq i\leq d.\) This implies \(\alpha_{d}>\alpha_{d-1}.\) So Theorem 3.3 is false if \(G(M)\) is Cohen-Macaulay.
3. Let \(A=k[[x,y]]\), \(I=(x^{3},x^{2}y,y^{4})\) and \(M=A/I\). So \(R=G(A)=k[X,Y]\) and \(G(M)=R/(X^{3},X^{2}Y,Y^{4}).\) Clearly \(M\) and \(G(M)\) are Cohen-Macaulay because they are of dimension zero. The graded resolution of \(G(M)\) is \[0\to R(-4)\oplus R(-6)\to R^{2}(-3)\oplus R(-4)\to R\to 0\] and it is a quasi-pure resolution. So there exist Cohen-Macaulay \(A\)-modules with quasi-pure resolution.
4. Let \(A=k[[x,y]]\), \(I=(x^{2},xy,y^{5})\) and \(M=A/I\). So \(R=G(A)=k[X,Y]\) and \(G(M)=R/(X^{2},XY,Y^{5}).\) Clearly \(M\) and \(G(M)\) are Cohen-Macaulay because they are of dimension zero. The graded resolution of \(G(M)\) is \[0\to R(-3)\oplus R(-6)\to R^{2}(-2)\oplus R(-5)\to R\to 0\] and it is not a quasi-pure resolution. This shows that there exist Cohen-Macaulay modules \(M\) with \(G(M)\) not having a quasi-pure resolution.
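To spell out the quasi-purity checks in items (3) and (4), the following computation reads off the extremal twists. We assume here the convention from the earlier sections of the paper that \(\gamma_{i}\) and \(\alpha_{i}\) denote the minimal and maximal twists in homological degree \(i\), and that quasi-purity requires \(\gamma_{i}\geq\alpha_{i-1}\) for all \(i\) (cf. the inequality \(\alpha_{p}\geq\gamma_{p}\geq\alpha_{p-1}\) used in the proof of Corollary 3.4). For the resolution in (3), \[\gamma_{1}=3,\quad\alpha_{1}=4,\quad\gamma_{2}=4,\quad\alpha_{2}=6,\qquad\text{so}\qquad\gamma_{2}=4\geq\alpha_{1}=4,\] and the resolution is quasi-pure. For the resolution in (4), \[\gamma_{1}=2,\quad\alpha_{1}=5,\quad\gamma_{2}=3,\quad\alpha_{2}=6,\qquad\text{so}\qquad\gamma_{2}=3<\alpha_{1}=5,\] and quasi-purity fails.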
## 4. Hilbert coefficients of modules of finite projective dimension

In [3, Theorem 1.1] it is shown that if \((A,\mathfrak{m})\) is a non-regular Gorenstein local ring and \(G(A)\) is Cohen-Macaulay, then for all finite length \(A\)-modules \(E\) of finite projective dimension, \(\ell\ell(E)\geq\operatorname{reg}(G(A))+1\) (here \(\ell\ell(E)\) denotes the Loewy length of \(E\)). Set \(c=\operatorname{reg}(G(A))\). In this section we first prove that if \(M\) is Cohen-Macaulay of positive dimension with finite projective dimension then \(e_{0}(M)\geq\mu(M)+\operatorname{reg}(G(A))\) and \(e_{1}(M)\geq\binom{c+1}{2}\). Then we show that \(G(M)\) is Cohen-Macaulay if equality holds for \(e_{1}(M)\) (see Theorem 4.3).

**4.1**.: Recall that the Loewy length of a finite length \(A\)-module \(E\) is defined to be the number \[\ell\ell(E)=\min\{i\,|\,\mathfrak{m}^{i}E=0\}.\] Let \(G(A)_{+}\) be the irrelevant maximal ideal of \(G(A)\), and let \(H^{i}(G(A))\) be the \(i\)-th local cohomology module of \(G(A)\) with respect to \(G(A)_{+}\). The Castelnuovo-Mumford regularity of \(G(A)\) is \[\operatorname{reg}(G(A))=\max\{i+j\,|\,H^{i}(G(A))_{j}\neq 0\}.\] One can easily check that if \(x^{*}\) is a \(G(A)\)-regular element of degree \(1\) then \(\operatorname{reg}(G(A))=\operatorname{reg}(G(A/xA)).\)

**Lemma 4.2**.: _Let \((A,\mathfrak{m})\) be a Gorenstein local ring. Assume \(G(A)\) is Cohen-Macaulay. Let \(M\) be a Cohen-Macaulay module of dimension \(r\) with finite projective dimension. Then \(e_{0}(M)\geq\mu(M)+c\) and \(e_{1}(M)\geq{c+1\choose 2},\) where \(c=\operatorname{reg}(G(A)).\)_

Proof.: We may assume that the residue field \(k\) is infinite. Let \(x_{1},\ldots,x_{r}\) be an \(A\oplus M\)-superficial sequence. Set \(B=A/(x_{1},\ldots,x_{r})\) and \(N=M/(x_{1},\ldots,x_{r})M\) (so \(\lambda(N)<\infty\)). Let \(\ell\ell(N)=t\), i.e. \(G(N)=\bigoplus_{i=0}^{t-1}\mathfrak{m}^{i}N/\mathfrak{m}^{i+1}N.\) Set \(a_{i}=\lambda(\mathfrak{m}^{i}N/\mathfrak{m}^{i+1}N).\) The Hilbert series of \(N\) is \(H_{N}(z)=\mu(N)+a_{1}z+\ldots+a_{t-1}z^{t-1}\). So \[e_{0}(N)=\mu(N)+a_{1}+\ldots+a_{t-1}\geq\mu(N)+t-1\geq\mu(N)+\operatorname{reg}(G(A)).\] We also have \[e_{1}(N)=a_{1}+2a_{2}+\ldots+(t-1)a_{t-1}\geq 1+2+\ldots+(t-1)\geq 1+2+\ldots+c={c+1\choose 2}.\] By ([8], Corollary 10), \(\mu(M)=\mu(N)\), \(e_{0}(M)=e_{0}(N)\) and \(e_{1}(M)\geq e_{1}(N).\) Hence \(e_{0}(M)\geq\mu(M)+\operatorname{reg}(G(A))\) and \(e_{1}(M)\geq{c+1\choose 2}.\)

The next theorem describes what happens if \(e_{1}(M)={c+1\choose 2}.\)

**Theorem 4.3**.: _Let \((A,\mathfrak{m})\) be a Gorenstein local ring with \(G(A)\) Cohen-Macaulay. Let \(M\) be a Cohen-Macaulay module of dimension \(r\) having finite projective dimension. If \(e_{1}(M)={c+1\choose 2}\), then \(G(M)\) is Cohen-Macaulay._

Proof.: Let \(x_{1},\ldots,x_{r}\) be an \(A\oplus M\)-superficial sequence. Set \(N=M/(x_{1},\ldots,x_{r-1})M.\) By ([8], Corollary 10), \(e_{1}(M)=e_{1}(N)\) and \(e_{1}(N)\geq e_{1}(N/x_{r}N)\). So by the hypothesis, \({c+1\choose 2}=e_{1}(N)\geq e_{1}(N/x_{r}N)\geq{c+1\choose 2},\) whence \(e_{1}(N)=e_{1}(N/x_{r}N)\). By ([8], Corollary 10), \(G(N)\) is Cohen-Macaulay. So by Sally descent, \(G(M)\) is Cohen-Macaulay.

## 5. Cohen-Macaulay modules over complete intersection rings

In this section we prove an interesting result on Cohen-Macaulay modules over complete intersection rings (see Theorem 5.7), and we also give an application of Lemma 4.2 and Theorem 4.3. For this we need to recall the following facts.

**5.1**.: Let \(A\) be a local ring and \(M\) an \(A\)-module. Let \(\beta_{i}^{A}(M)=\lambda(\operatorname{Tor}_{i}(M,k))\) be the \(i\)-th Betti number of \(M\) over \(A.\) The complexity of \(M\) over \(A\) is defined as \[\operatorname{cx}_{A}(M)=\inf\left\{b\in\mathbb{N}\;\middle|\;\limsup_{n\to\infty}\frac{\beta_{n}^{A}(M)}{n^{b-1}}<\infty\right\}.\] Note that \(M\) has bounded Betti numbers iff \(\operatorname{cx}_{A}(M)\leq 1.\)

**5.2**.: Let \(Q\) be a local ring and \(\mathbf{f}=f_{1},\ldots,f_{c}\) a \(Q\)-regular sequence. Set \(A=Q/(\mathbf{f}).\) The Eisenbud operators (see [4]) are constructed as follows. Let \(\mathbb{F}:\cdots\to F_{i+2}\xrightarrow{\partial}F_{i+1}\xrightarrow{\partial}F_{i}\to\cdots\) be a complex of free \(A\)-modules. (a) Choose a sequence of free \(Q\)-modules \(\widetilde{F}_{i}\) and maps \(\widetilde{\partial}\) between them, \[\widetilde{\mathbb{F}}:\cdots\to\widetilde{F}_{i+2}\xrightarrow{\widetilde{\partial}}\widetilde{F}_{i+1}\xrightarrow{\widetilde{\partial}}\widetilde{F}_{i}\to\cdots,\] such that \(\mathbb{F}=\widetilde{\mathbb{F}}\otimes A.\) (b) Since \(\widetilde{\partial}^{2}\equiv 0\) modulo \((f_{1},\ldots,f_{c})\), we may write \(\widetilde{\partial}^{2}=\sum_{j=1}^{c}f_{j}\widetilde{t}_{j}\), where \(\widetilde{t}_{j}:\widetilde{F}_{i}\to\widetilde{F}_{i-2}\) are linear maps for all \(i.\) (c) Define, for \(j=1,\ldots,c\), the map \(t_{j}=t_{j}(Q,\mathbf{f},\mathbb{F})\) by \(t_{j}=\widetilde{t}_{j}\otimes A.\) The operators \(t_{1},\ldots,t_{c}\) are called the Eisenbud operators associated to \(\mathbf{f}.\) It can be shown that: 1. the \(t_{i}\) are uniquely determined up to homotopy; 2. \(t_{i}\) and \(t_{j}\) commute up to homotopy.
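As a small illustration of this construction (our own example, not taken from the cited sources), consider the hypersurface case \(c=1\) with \(Q=k[[x]]\), \(f_{1}=x^{2}\) and \(A=Q/(x^{2})\). The minimal free resolution of \(k=A/(x)\) is the periodic complex \(\mathbb{F}:\cdots\to A\xrightarrow{x}A\xrightarrow{x}A\), and lifting it to \(Q\) gives \(\widetilde{\partial}=x\) in every degree. Then \[\widetilde{\partial}^{2}=x^{2}=f_{1}\cdot\widetilde{t}_{1}\qquad\text{with}\qquad\widetilde{t}_{1}=\operatorname{id},\] so the Eisenbud operator \(t_{1}:F_{i}\to F_{i-2}\) is the identity; it realizes the \(2\)-periodicity of the resolution and induces the degree-\(2\) action of \(t_{1}\) on \(\operatorname{Ext}_{A}^{*}(k,k)\) described in 5.4 below.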
**5.3**.: Let \(\mathbf{g}=g_{1},\ldots,g_{c}\) be another \(Q\)-regular sequence such that \((\mathbf{g})=(\mathbf{f})\). The Eisenbud operators associated to \(\mathbf{g}\) can be constructed from the Eisenbud operators associated to \(\mathbf{f}\) as follows: let \(f_{i}=\sum_{j=1}^{c}a_{ij}g_{j}\) for \(i=1,\ldots,c\). Then we can choose \(t_{i}^{\prime}=\sum_{j=1}^{c}a_{ji}t_{j}\) for \(i=1,\ldots,c\) as Eisenbud operators for \(g_{1},\ldots,g_{c}.\) Note that in [4] there is an indexing error.

**5.4**.: Let \(R=A[t_{1},\ldots,t_{c}]\) be a polynomial ring over \(A\) with variables \(t_{1},\ldots,t_{c}\) of degree \(2\). Let \(M\) be an \(A\)-module and let \(\mathbb{F}\) be a free resolution of \(M\). We then get well-defined maps \[t_{j}:\operatorname{Ext}_{A}^{n}(M,k)\to\operatorname{Ext}_{A}^{n+2}(M,k)\quad\text{for all }1\leq j\leq c\text{ and all }n.\] This turns \(\operatorname{Ext}_{A}^{*}(M,k)=\bigoplus_{i\geq 0}\operatorname{Ext}_{A}^{i}(M,k)\) into an \(R\)-module, where \(k\) is the residue field of \(A\). Since \(\mathfrak{m}\subseteq\operatorname{ann}(\operatorname{Ext}_{A}^{i}(M,k))\) for all \(i\geq 0\), we get that \(\operatorname{Ext}_{A}^{*}(M,k)\) is an \(S=R/\mathfrak{m}R=k[t_{1},\ldots,t_{c}]\)-module.

**Remark 5.5**.: The subring \(k[t_{1},\ldots,t_{r}]\) (\(r<c\)) of \(S\) can be identified with the ring \(S^{\prime}\) of cohomological operators of a presentation \(A=P/(f_{1},\ldots,f_{r}),\) where \(P=Q/(f_{r+1},\ldots,f_{c}).\)

**5.6**.: In ([6], 3.1) Gulliksen proved that if \(\operatorname{projdim}_{Q}(M)\) is finite then \(\operatorname{Ext}_{A}^{*}(M,N)\) is a finitely generated \(R\)-module for all \(A\)-modules \(N\). In ([2], 3.10) Avramov proved a converse for \(N=k.\) Note that if \(\operatorname{projdim}_{Q}(M)\) is finite then \(\operatorname{Ext}_{A}^{*}(M,k)\) is a finitely generated graded \(S\)-module of Krull dimension \(\operatorname{cx}_{A}(M)\). Let \((A,\mathfrak{m})\) be a local ring. For \(x\in\mathfrak{m}^{i}\setminus\mathfrak{m}^{i+1}\) we set \(\operatorname{ord}(x)=i\), the order of \(x.\)

**Theorem 5.7**.: _Let \((Q,\mathfrak{n})\) be a regular local ring with infinite residue field, \(f_{1},\ldots,f_{c}\in\mathfrak{n}^{2}\) of order \(s\) such that \(f_{1}^{*},\ldots,f_{c}^{*}\) is a \(G(Q)\)-regular sequence. Let \(A=Q/(f_{1},\ldots,f_{c})\) and let \(M\) be a Cohen-Macaulay \(A\)-module with \(\operatorname{cx}_{A}(M)=r<c\). Then \(M\) has finite projective dimension as a \(Q/(g_{r+1},\ldots,g_{c})\)-module, for some \(g_{r+1},\ldots,g_{c}\). Here \(g_{r+1}^{*},\ldots,g_{c}^{*}\) is a \(G(Q)\)-regular sequence and \(\operatorname{ord}(g_{i})=s.\)_

Proof.: By 5.6, \(\operatorname{Ext}_{A}^{*}(M,k)\) is a finitely generated graded \(S\)-module of Krull dimension \(\operatorname{cx}_{A}(M).\) Set \(E=\operatorname{Ext}_{A}^{*}(M,k)\). Let \(\xi_{1},\ldots,\xi_{r}\in S_{2}\) be a system of parameters of \(E\) and write \(\xi_{i}=\sum_{j=1}^{c}\overline{\beta_{ij}}t_{j}.\) Set \(\beta=(\overline{\beta_{ij}})_{r\times c}\). Since \(\xi_{1},\ldots,\xi_{r}\in S_{2}\) is a system of parameters, the rank of the matrix \(\beta\) equals \(r.\) Note that the first \(r\) columns of \(\beta\) may not be linearly independent, but we can rearrange the columns of \(\beta\) so that the first \(r\) columns are linearly independent. Without loss of generality, we may thus reorder \(f_{1},\ldots,f_{c}\) so that the first \(r\) columns of \(\beta\) are linearly independent (see 5.3).
Now set \(\alpha=(a_{ij})_{c\times c},\) where \[a_{ij}=\left\{\begin{array}{ll}\beta_{ij}&1\leq i\leq r,\,1\leq j\leq c\\ 1&\text{if }i=j\text{ and }i>r\\ 0&\text{otherwise}\end{array}\right.\] (here \(\beta_{ij}\in A\) denotes a lift of \(\overline{\beta_{ij}}\), i.e. the image of \(\beta_{ij}\) in \(k\) is \(\overline{\beta_{ij}}\)). Clearly the matrix \(\alpha\) is nonsingular (as \(\overline{\alpha}\) is nonsingular). Set \(\xi_{j}=t_{j}\) for \(j>r.\) Then \(S=k[\xi_{1},\ldots,\xi_{c}].\) By ([10], 1.9) there is a regular sequence \(\mathbf{g}=g_{1},\ldots,g_{c}\) such that \((\mathbf{g})=(\mathbf{f})\) and, if \(t_{i}^{\prime}\) are the Eisenbud operators associated to \(\mathbf{g}\), then the action of \(t_{j}^{\prime}\) on \(E\) is the same as that of \(\xi_{j}\) for \(j=1,\ldots,c.\) By ([10], 1.9), \([\mathbf{g}]=(\alpha^{tr})^{-1}[\mathbf{f}],\) where \(\alpha^{tr}\) is the transpose of \(\alpha\). We note that \(\operatorname{ord}(g_{i})\geq s\) for all \(i\). As \((\mathbf{g})=(\mathbf{f})\) we get \(A=Q/(g_{1},\ldots,g_{c})\). We have \(e(A)=s^{c}\), so by [12, 1.8] it follows that \(\operatorname{ord}(g_{i})=s\) for all \(i\) and that \(g_{1}^{*},\ldots,g_{c}^{*}\) is a \(G(Q)\)-regular sequence. We also have \(G(A)=G(Q)/(g_{1}^{*},\ldots,g_{c}^{*}).\) The subring \(k[\xi_{1},\ldots,\xi_{r}]\) of \(S\) can be identified with the ring \(S^{\prime}\) of cohomological operators of a presentation \(A=P/(g_{1},\ldots,g_{r}),\) where \(P=Q/(g_{r+1},\ldots,g_{c})\). Since \(\operatorname{Ext}_{A}^{*}(M,k)\) is a finitely generated graded \(S^{\prime}\)-module, we obtain by 5.6 that \(\operatorname{projdim}_{P}(M)\) is finite. This proves our result.

As a consequence, we obtain:

**Theorem 5.8**.: _Let \((Q,\mathfrak{n})\) be a regular local ring with infinite residue field, \(f_{1},\ldots,f_{c}\in\mathfrak{n}^{2}\) of order \(s\) such that \(f_{1}^{*},\ldots,f_{c}^{*}\) is a \(G(Q)\)-regular sequence. Let \(A=Q/(f_{1},\ldots,f_{c})\) and let \(M\) be a Cohen-Macaulay \(A\)-module with \(\operatorname{cx}_{A}(M)=r<c\). Then \(e_{0}(M)\geq\mu(M)+\alpha\) and \(e_{1}(M)\geq\binom{\alpha+1}{2}\), where \(\alpha=(c-r)(s-1).\) Moreover, if \(e_{1}(M)=\binom{\alpha+1}{2}\) then \(G(M)\) is Cohen-Macaulay._

Proof.: We may assume that \(A\) has an infinite residue field (see [10], 1.1). By Theorem 5.7, we get that \(A=B/(g_{1},\ldots,g_{r})\) and that \(M\) has finite projective dimension as a \(B\)-module, where \(B=Q/(g_{r+1},\ldots,g_{c})\) and \(g_{1}^{*},\ldots,g_{c}^{*}\) is a \(G(Q)\)-regular sequence. Since \(\operatorname{ord}(g_{i})=s\) for all \(1\leq i\leq c\), we obtain \(\operatorname{reg}(G(B))=(c-r)(s-1)\). Set \(\alpha=(c-r)(s-1).\) By Lemma 4.2, \(e_{0}(M)\geq\mu(M)+\alpha\) and \(e_{1}(M)\geq\binom{\alpha+1}{2}.\) By Theorem 4.3, if \(e_{1}(M)=\binom{\alpha+1}{2}\) then \(G(M)\) is a Cohen-Macaulay \(G(B)\)-module (and so a Cohen-Macaulay \(G(A)\)-module).
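For a quick numerical illustration of Theorem 5.8 (our own arithmetic example, not part of the theorem): take \(c=3\), \(r=1\) and \(s=2\). Then \(\alpha=(3-1)(2-1)=2\), so the theorem yields \[e_{0}(M)\geq\mu(M)+2\qquad\text{and}\qquad e_{1}(M)\geq\binom{3}{2}=3,\] and \(G(M)\) is Cohen-Macaulay whenever \(e_{1}(M)=3\).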
2303.18157
MAGNNETO: A Graph Neural Network-based Multi-Agent system for Traffic Engineering
Current trends in networking propose the use of Machine Learning (ML) for a wide variety of network optimization tasks. As such, many efforts have been made to produce ML-based solutions for Traffic Engineering (TE), which is a fundamental problem in ISP networks. Nowadays, state-of-the-art TE optimizers rely on traditional optimization techniques, such as Local search, Constraint Programming, or Linear programming. In this paper, we present MAGNNETO, a distributed ML-based framework that leverages Multi-Agent Reinforcement Learning and Graph Neural Networks for distributed TE optimization. MAGNNETO deploys a set of agents across the network that learn and communicate in a distributed fashion via message exchanges between neighboring agents. Particularly, we apply this framework to optimize link weights in OSPF, with the goal of minimizing network congestion. In our evaluation, we compare MAGNNETO against several state-of-the-art TE optimizers in more than 75 topologies (up to 153 nodes and 354 links), including realistic traffic loads. Our experimental results show that, thanks to its distributed nature, MAGNNETO achieves comparable performance to state-of-the-art TE optimizers with significantly lower execution times. Moreover, our ML-based solution demonstrates a strong generalization capability to successfully operate in new networks unseen during training.
Guillermo Bernárdez, José Suárez-Varela, Albert López, Xiang Shi, Shihan Xiao, Xiangle Cheng, Pere Barlet-Ros, Albert Cabellos-Aparicio
2023-03-31T15:47:49Z
http://arxiv.org/abs/2303.18157v1
# MAGNNETO: A Graph Neural Network-based Multi-Agent system for Traffic Engineering

###### Abstract

Current trends in networking propose the use of Machine Learning (ML) for a wide variety of network optimization tasks. As such, many efforts have been made to produce ML-based solutions for Traffic Engineering (TE), which is a fundamental problem in ISP networks. Nowadays, state-of-the-art TE optimizers rely on traditional optimization techniques, such as Local search, Constraint Programming, or Linear programming. In this paper, we present MAGNNETO, a distributed ML-based framework that leverages Multi-Agent Reinforcement Learning and Graph Neural Networks for distributed TE optimization. MAGNNETO deploys a set of agents across the network that learn and communicate in a distributed fashion via message exchanges between neighboring agents. Particularly, we apply this framework to optimize link weights in OSPF, with the goal of minimizing network congestion. In our evaluation, we compare MAGNNETO against several state-of-the-art TE optimizers in more than 75 topologies (up to 153 nodes and 354 links), including realistic traffic loads. Our experimental results show that, thanks to its distributed nature, MAGNNETO achieves comparable performance to state-of-the-art TE optimizers with significantly lower execution times. Moreover, our ML-based solution demonstrates a strong generalization capability to successfully operate in new networks unseen during training.

Traffic Engineering, Routing Optimization, Multi-Agent Reinforcement Learning, Graph Neural Networks

## I Introduction

During the last decade, the networking community has devoted significant efforts to build efficient solutions for automated network control, pursuing the ultimate goal of achieving the long-desired _self-driving networks_ [1, 2]. In this vein, Machine Learning (ML) is considered a promising technique for producing efficient tools for autonomous networking [3, 4]. In this paper, we revisit a fundamental networking problem: Traffic Engineering (TE) optimization [5, 6]. TE is among the most common operation tasks in today's ISP networks. Here, the classical optimization goal is to minimize network congestion, which is typically achieved by minimizing the maximum link utilization in the network [7, 8, 9, 10, 11]. Given the relevance of this problem, we have witnessed a plethora of proposals approaching it from different angles, such as optimizing the configuration of widely deployed link-state protocols (e.g., OSPF [12]), making fine-grained flow-based routing, or re-routing traffic across overlay networks [13, 14]. Likewise, over the last years the networking community has focused on developing effective ML-based solutions for TE. In particular, many works propose the use of Reinforcement Learning (RL) for efficient TE optimization (e.g., [15, 16, 17, 18]). However, at the time of this writing, no ML-based proposal has succeeded in replacing long-established TE solutions; indeed, the best performing TE optimizers to date are based on traditional optimization algorithms, such as Constraint Programming [10], Local Search [9], or Linear Programming [11, 19]. In this paper, we present MAGNNETO, a novel ML framework for distributed TE optimization leveraging Graph Neural Networks (GNN) [20] and Multi-Agent Reinforcement Learning (MARL) [21] at its core\({}^{1}\). In the proposed algorithm, an RL-based agent is deployed on each router.
Similarly to standard intradomain routing protocols (e.g., OSPF), MAGNNETO's agents exchange information with their neighbors in a distributed manner. In particular, agents communicate via a neural network-driven message passing mechanism, and learn how to cooperate to pursue a common optimization goal. As a result, the proposed framework is fully distributed, and agents learn how to effectively communicate to perform intradomain TE optimization, i.e. to minimize the maximum link utilization in the network.

Footnote 1: MAGNNETO stands for Multi-Agent Graph Neural Network Optimization. The code of this framework and all the data needed to reproduce our experiments are publicly available at: [https://github.com/BNN-UPC/Papers/wiki/MAGNNETO-TE](https://github.com/BNN-UPC/Papers/wiki/MAGNNETO-TE).

More in detail, MAGNNETO presents the following contributions:

**Top performance with very low execution times:** We compare MAGNNETO against a curated set of well-established TE solutions: SRLS [9], DEFO [10] and TabulGPWO [11]. These solutions implement mature optimization techniques on top of expert knowledge, and as a result they are able to achieve close-to-optimal performance in large-scale networks within minutes [22]. Our results show that MAGNNETO achieves comparable performance to these state-of-the-art TE solutions, while being significantly faster. In fact, when enabling several simultaneous actions in our framework, it runs up to three orders of magnitude faster than the baseline optimizers (sub-second vs. minutes) in networks with 100+ nodes. The reason for this is the fully decentralized architecture of MAGNNETO, which naturally distributes and parallelizes the execution across the network.

**Generalization over unseen networks:** A common downside of current ML-based solutions applied to networking is their limited performance when operating in networks different from those seen during training, which is commonly referred to as lack of _generalization_ [23]. Without generalization, training _must_ be done in the same network where the ML-based solution is expected to operate. Hence, from a practical standpoint generalization is a crucial aspect, as training directly in networks in production is typically unfeasible. MAGNNETO internally implements a GNN, which introduces proper learning biases to generalize across networks of different sizes and structures [23]. In our evaluation, we train MAGNNETO in two different networks, and test its performance and speed on 75 real-world topologies from the Internet Topology Zoo [24] not seen before. Our results show that in such scenarios, MAGNNETO still achieves comparable performance to state-of-the-art TE optimizers, while being significantly faster.

**No need for overlay technologies:** Recent TE optimizers rely on novel overlay technologies to achieve their optimization goals [9, 10]. By leveraging Segment Routing [25], these solutions are able to use arbitrary overlay paths that are not routed via the standard OSPF weights. This extends the routing space to a source-destination granularity and, as shown in the literature, renders outstanding results. However, in this paper we show that comparable performance is achievable by using only standard destination-based OSPF routing. Indeed, MAGNNETO is fully compliant with current OSPF-based networks, and does not require the use of any overlay technology.

MAGNNETO is partially based on an earlier version presented at [26].
In that work, we raised an open question: _Is ML ready for Traffic Engineering optimization?_ Our goal was to discuss whether state-of-the-art ML techniques are mature enough to outperform traditional TE solutions; to this end, we presented an ML framework for TE optimization and made an exploratory evaluation of it. This paper dives deeper into this question by formulating an enhanced ML framework -MAGNNETO- and performing a much more comprehensive evaluation. We summarize below the main novelties of this work with respect to [26]:

* MAGNNETO formulates the TE problem as a Decentralized Partially-Observable Markov Decision Process (Dec-POMDP), which enables a more functional MARL setting. Instead, the previous solution [26] operated over a classical MDP, where agents must take actions sequentially in a synchronized manner.
* MAGNNETO supports simultaneous actions at each RL optimization step. This dramatically reduces the execution time (up to 10x in our experiments) with respect to the previous framework, which was limited by design to one action per step.
* We present in this paper an extensive evaluation including 75+ real-world topologies, large-scale scenarios (up to 153 nodes), and a benchmark consisting of a representative collection of advanced TE optimizers. In contrast, the evaluation of [26] only considered 3 different topologies of limited size (up to 24 nodes), and the results were compared against a single TE solver.

The remainder of this paper is organized as follows. Section II describes the TE scenario where we deploy the proposed ML-based system. Section III formalizes MAGNNETO as a general framework for networked environments. Afterwards, Section IV describes how we adapt this framework to perform intradomain TE optimization. In Section V, we make an extensive evaluation of the proposed framework against state-of-the-art TE proposals. Section VI summarizes the main existing works related to this paper, and lastly Section VII concludes the paper.

## II Network Scenario

This section describes the intradomain TE scenario where MAGNNETO operates. In this paper, we consider the intradomain TE problem, where network traffic is measured and routed to minimize network congestion. Typically, IP networks run link-state Interior Gateway Protocols (IGPs), such as Open Shortest Path First (OSPF) [12], that choose paths using Dijkstra's algorithm over some pre-defined link weights. There exists a wide range of architectures and algorithms for TE in the literature [27]. Network operators commonly use commercial tools [28, 29] to fine-tune link weights. However, other mechanisms propose to add extra routing entries [30] or end-to-end tunnels (e.g., RSVP-TE [31]) to perform source-destination routing, thus expanding the solution space. MAGNNETO is a fully distributed framework that interfaces with standard OSPF by optimizing the link weights used by the protocol. As a result, it does not require any changes to OSPF, and it can be implemented with a software update on the routers where it is deployed. In this context, relying on well-known link-state routing protocols, such as OSPF, offers the advantage that the network is easier to manage compared to finer-grained alternatives, such as flow-based routing [32]. Figure 1 illustrates the general operational workflow of MAGNNETO:

**1) Traffic Measurement:** First, a traffic measurement platform deployed over the network identifies a new Traffic Matrix (TM). This new TM is communicated to all participating routers (Fig. 1, step 1), which upon reception will start the next step and optimize the routing for this TM.
Figure 1: Intradomain traffic engineering optimization with MAGNNETO.

We leave the details of this process out of the scope of this paper, as TM estimation is an extensive research field with many established proposals. For instance, this process can be done periodically (e.g., every 5-10 minutes as in [11]), where the TM is first estimated and then optimized. Some proposals trigger the optimization process when a relevant change is detected in the TM [33], while others use prediction techniques to optimize it in advance [34]. Also, some real-world operators make estimates considering their customers' subscriptions and operate based on a static TM. Our proposal is flexible and can operate with any of these approaches.

**2) MAGNNETO TE optimization:** Once routers receive the new TM, the distributed RL-based agents of MAGNNETO start the TE optimization process, which eventually computes the per-link weights that optimize OSPF routing in the subsequent step (Fig. 1, step 2). Particularly, we set the goal to minimize the maximum link load (_MinMaxLoad_), which is a classic TE goal in carrier-grade networks [7, 8, 10]. This problem is known to be NP-hard, and even good settings of the weights can deviate significantly from the optimal configuration [8, 32]. Our MARL optimization system is built using a distributed Graph Neural Network (GNN) that exchanges messages over the physical network topology. Messages are sent between routers and their directly attached neighbors. The contents of such messages are _hidden states_ that are produced and consumed by artificial neural networks, and do not have a human-understandable _meaning_. The GNN makes several message iterations and, during this phase, the local configuration of the routers remains unchanged, thus having no impact on the current traffic. More details about the inner workings, performance, communication overhead, and computational cost can be found in Sections III-V.

**3) OSPF convergence:** Lastly, the standard OSPF convergence process is executed taking into account the new per-link weights computed by MAGNNETO. Specifically, each agent has computed the optimal weights for its locally attached links. For OSPF to recompute the new forwarding tables, it needs to broadcast the new link weights; this is done using the standard OSPF Link-State Advertisements (LSAs) [12]. Once the routers have an identical view of the network, they compute locally their new forwarding tables (Fig. 1, step 3), and traffic is routed following the optimization goal. Convergence time of OSPF is a well-studied subject. For instance, routing tables can converge in the order of a few seconds in networks with thousands of links [35].
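To make the optimization target of step 2 concrete, below is a minimal sketch (our own illustration, not MAGNNETO's actual implementation) of how a given OSPF weight setting can be scored: the weights induce shortest paths via Dijkstra, the TM is routed over them, and the quality of the configuration is the maximum link utilization (_MinMaxLoad_). For simplicity, each demand is routed over a single shortest path, ignoring ECMP splitting.

```python
import networkx as nx

def min_max_load(graph: nx.DiGraph, weights: dict, traffic_matrix: dict) -> float:
    """Score an OSPF weight setting by the maximum link utilization it induces.

    graph:          directed graph whose edges carry a 'capacity' attribute.
    weights:        {(u, v): ospf_weight} for every directed link.
    traffic_matrix: {(src, dst): demand} estimated traffic demands.
    Note: routes each demand over one shortest path (no ECMP); illustration only.
    """
    nx.set_edge_attributes(graph, weights, name="weight")
    load = {e: 0.0 for e in graph.edges}
    for (src, dst), demand in traffic_matrix.items():
        path = nx.dijkstra_path(graph, src, dst, weight="weight")
        for u, v in zip(path, path[1:]):          # accumulate demand on every hop
            load[(u, v)] += demand
    return max(load[e] / graph.edges[e]["capacity"] for e in graph.edges)
```

Any TE optimizer over OSPF weights, MAGNNETO included, can then be viewed as searching the integer weight space for a configuration that minimizes this score.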
## III MAGNNETO

This section provides a detailed description of how MAGNNETO operates. To do so, we first briefly introduce the main ML methodologies it implements. Note that MAGNNETO is conceived as a general framework to optimize networked environments in a distributed fashion; details on how it is particularly adapted to the intradomain TE problem are provided in Section IV.

### _Related ML-based Technologies_

MAGNNETO incorporates two well-known ML-based mechanisms: Multi-Agent Reinforcement Learning and Graph Neural Networks. Let us provide some background on these technologies:

#### III-A1 Reinforcement Learning (RL)

According to the regular setting of RL [36], an agent interacts with the environment in the following way: at each step \(t\), the agent selects an action \(a_{t}\) based on its current state \(s_{t}\), to which the environment responds with a reward \(r_{t}\) and then moves to the next state \(s_{t+1}\). This interaction is modeled as an episodic, time-homogeneous Markov Decision Process (MDP) \((\mathcal{S},\mathcal{A},r,P,\gamma)\), where \(\mathcal{S}\) and \(\mathcal{A}\) are respectively the state and action spaces; \(P\) is the transition kernel, \(s_{t+1}\sim P(\cdot|s_{t},a_{t})\); \(r_{t}\) represents the immediate reward given by the environment after taking action \(a_{t}\) from state \(s_{t}\); and \(\gamma\in(0,1]\) is the discount factor used to compute the return \(G_{t}\), defined as the \(\gamma\)-discounted cumulative reward from a certain time-step \(t\) to the end of the episode \(T\): \(G_{t}=\sum_{k=t}^{T}\gamma^{k-t}r_{k}\). The behavior of the agent is described by a policy \(\pi:\mathcal{S}\rightarrow\mathcal{A}\), which maps each state to a probability distribution over the action space, and the goal of an RL agent is to find the optimal policy in the sense that, given any considered state \(s\in\mathcal{S}\), it always selects an action that maximizes the expected return. There are two main model-free approaches to this end [37]:

* Action-value methods, typically referred to as Q-learning; the policy \(\pi\) is indirectly defined from the learned estimates of the action value function \(Q^{\pi}(s,a)=\mathbb{E}_{\pi}\left[G_{t}|s_{0}=s,a_{0}=a\right]\).
* Policy Gradient (PG) methods, which directly attempt to learn a parameterized policy representation \(\pi_{\theta}\). The Actor-Critic family of PG algorithms also involves learning a function approximator \(V_{\phi}(s)\) of the state value function \(V^{\pi_{\theta}}(s)=\mathbb{E}_{\pi_{\theta}}\left[G_{t}|s_{t}=s\right]\). In this case, actions are exclusively selected from the function \(\pi_{\theta}\), which estimates the policy (i.e., the actor), but the training of such policy is guided by the estimated value function \(V_{\phi}(s)\), which assesses the consequences of the actions taken (i.e., the critic).
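As a minimal illustration of the return \(G_{t}\) defined above (a generic snippet of ours, not tied to MAGNNETO's implementation), the discounted return can be computed backwards over an episode's rewards:

```python
def discounted_returns(rewards: list, gamma: float) -> list:
    """G_t = r_t + gamma * G_{t+1}, computed right-to-left over an episode."""
    returns, g = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

# Example: rewards [1, 0, 2] with gamma = 0.9 give
# G_2 = 2, G_1 = 0 + 0.9*2 = 1.8, G_0 = 1 + 0.9*1.8 = 2.62
```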
#### III-A2 Multi-Agent Reinforcement Learning (MARL)

In a MARL framework there is a set of agents \(\mathcal{V}\) interacting with a common environment that have to learn how to cooperate to pursue a common goal. Such a setting is generally formulated as a Decentralized Partially Observable MDP (Dec-POMDP) [21] where, besides the global state space \(\mathcal{S}\) and action space \(\mathcal{A}\), local state and action spaces are distinguished for every agent -i.e., \(\mathcal{S}_{v}\) and \(\mathcal{A}_{v}\) for \(v\in\mathcal{V}\). At each time-step \(t\) of an episode, each agent may choose an action \(a_{t}^{v}\in\mathcal{A}_{v}\) based on local observations of the environment encoded in its current state \(s_{t}^{v}\in\mathcal{S}_{v}\). Then, the environment produces individual rewards \(r_{t}^{v}\) (and/or a global reward \(r_{t}\)), and it evolves to the next global state \(s_{t+1}\in\mathcal{S}\) -i.e., each agent \(v\) transitions to the following state \(s_{t+1}^{v}\in\mathcal{S}_{v}\). Typically, a MARL system seeks the optimal global policy by learning a set of local policies \(\{\pi_{\theta_{v}}\}_{v\in\mathcal{V}}\). For doing so, most state-of-the-art MARL solutions implement traditional (single-agent) RL algorithms on each distributed agent, while incorporating some kind of cooperation mechanism between them [21]. The standard approach for obtaining a robust decentralized execution, however, is based on centralized training, where extra information can be used to guide the agents' learning [38].

#### III-A3 Graph Neural Networks (GNN)

These models are a recent family of neural networks specifically conceived to operate over graph-structured data [20, 23]. Among the numerous GNN variants developed to date [39], we focus on Message Passing Neural Networks (MPNN) [40], a well-known type of GNN whose operation is based on an iterative message-passing algorithm that propagates information between elements in a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E})\). Focusing on the set of nodes, the process is as follows: first, each node \(v\in\mathcal{N}\) initializes its hidden state \(h_{v}^{0}\) using some initial features already included in the input graph. At every message-passing step \(k\), each node \(v\) receives via messages the current hidden state of all the nodes in its neighborhood \(\mathcal{B}(v)=\{u\in\mathcal{N}\,|\,\exists e\in\mathcal{E},e=(u,v)\lor e=(v,u)\}\), and processes them individually by applying a message function \(m(\cdot)\) together with its own internal state \(h_{v}^{k}\). Then, the processed messages are combined by an aggregation function \(a(\cdot)\): \[M_{v}^{k}=a(\{m(h_{v}^{k},h_{u}^{k})\}_{u\in\mathcal{B}(v)}) \tag{1}\] Finally, an update function \(u(\cdot)\) is applied to each node \(v\); taking as input the aggregated messages \(M_{v}^{k}\) and its current hidden state \(h_{v}^{k}\), it outputs a new hidden state for the next step (\(k+1\)): \[h_{v}^{k+1}=u(h_{v}^{k},M_{v}^{k}). \tag{2}\] After a certain number of message-passing steps \(K\), a readout function \(r(\cdot)\) takes as input the final node states \(h_{v}^{K}\) to produce the final output of the GNN model. This readout function can predict either features of individual elements (e.g., a node's class) or global properties of the graph. Note that an MPNN model generates _a single set of message, aggregation, update, and readout functions that are replicated at each selected graph element_.
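The following sketch (ours; simplified, framework-agnostic, and with illustrative parameter names) shows one message-passing step as in Eqs. (1)-(2), with one-layer MLPs playing the roles of \(m(\cdot)\) and \(u(\cdot)\) and an element-wise sum acting as \(a(\cdot)\):

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16  # dimension of the node hidden states h_v

# Illustrative stand-ins for the learned functions m(.) and u(.):
W_msg = rng.normal(size=(2 * DIM, DIM)) / np.sqrt(2 * DIM)
W_upd = rng.normal(size=(2 * DIM, DIM)) / np.sqrt(2 * DIM)

def relu(x):
    return np.maximum(x, 0.0)

def message_passing_step(h: dict, neighbors: dict) -> dict:
    """One step of Eqs. (1)-(2): each node v aggregates the messages from B(v)
    and updates its hidden state. Assumes every node has >= 1 neighbor."""
    new_h = {}
    for v, h_v in h.items():
        msgs = [relu(np.concatenate([h_v, h[u]]) @ W_msg) for u in neighbors[v]]
        m_v = np.sum(msgs, axis=0)                            # a(.): aggregation
        new_h[v] = relu(np.concatenate([h_v, m_v]) @ W_upd)   # u(.): update
    return new_h
```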
### _Execution Framework_

MAGNNETO internally models a networked environment as a graph \(\mathcal{G}=(\mathcal{N},\mathcal{E},\mathcal{V})\), with \(\mathcal{N}\) and \(\mathcal{E}\) representing the set of nodes and edges, respectively, and \(\mathcal{V}\) acting as a set of agents that control some of the graph entities (nodes or edges). Let \(\mathcal{S}\) and \(\mathcal{A}\) represent the global state and action spaces, respectively, defined as the joint and union of the respective agents' local spaces, \(\mathcal{S}=\prod_{v\in\mathcal{V}}\mathcal{S}_{v}\) and \(\mathcal{A}=\bigcup_{v\in\mathcal{V}}\mathcal{A}_{v}\). The theoretical framework of MAGNNETO allows implementing both Q-learning and PG methods, so for the sake of generality let \(f_{\theta}\) represent the global RL-based function to be learned -i.e., the global state-action value function \(Q_{\theta}\) for the former, or the global policy \(\pi_{\theta}\) for the latter. A main contribution of MAGNNETO is that it makes all agents \(v\in\mathcal{V}\) learn the global RL-based function approximator in a fully distributed fashion -i.e., all agents end up constructing and having access to the very same representation \(f_{\theta}\). In particular, from a theoretical RL standpoint, this allows formulating the problem within two different paradigms depending on the number of actions allowed at each time-step of the RL episode. On the one hand, imposing a single action per time-step enables devising the problem as a time-homogeneous MDP of single-agent RL [37]. On the other hand, it requires the more challenging Dec-POMDP formalization of standard MARL [21] when letting several agents act simultaneously. Note, however, that in practice the execution pipeline of MAGNNETO is exactly the same in both cases. Another relevant feature of our design is that all agents \(v\in\mathcal{V}\) are able to internally construct such a global representation \(f_{\theta}\) mainly through message communications with their direct neighboring agents \(\mathcal{B}(v)\) and their local computations, no longer needing a centralized entity responsible for collecting and processing all the global information together. Such a decentralized, message-based generation of the global function is achieved by modeling the global function \(f_{\theta}\) with an MPNN (see Sec. III-A3), so that all agents \(v\in\mathcal{V}\) deployed in the network are actually _replicas_ of the MPNN modules (message, aggregation, update and readout functions) that perform regular message exchanges with their neighbors \(\mathcal{B}(v)\) following the message-passing procedure of MPNNs; in particular, note that such _parameter sharing_ implies that all agents also share the same local state and action spaces. This reinterpretation of an MPNN as a set of copies of its internal modules is especially important because in our approach we directly map the graph \(\mathcal{G}\) to a real networked scenario, deploying copies of the MPNN modules along hardware devices in the network (e.g., routers) and making all the message communications involved actually go through the real network infrastructure. Hence, our proposed architecture naturally distributes the execution of the MPNN, and consequently is able to fully decentralize the execution of single-agent RL algorithms. Algorithm 1 summarizes the resulting distributed pipeline. At each time-step \(t\) of an episode of length \(T\), the MPNN-driven process of approximating the function \(f_{\theta}(s_{t},a_{t})\) -where \(s_{t}\in\mathcal{S}\) and \(a_{t}\in\mathcal{A}\) refer to the global state and action at \(t\)- first constructs a meaningful hidden state \(h_{v}\) for each agent \(v\in\mathcal{V}\). Each hidden state \(h_{v}\) basically depends on the hidden representations of the neighboring agents \(\mathcal{B}(v)\), and its initialization \(h_{v}^{0}\) is a function of the current agent state \(s_{v}^{t}\in\mathcal{S}_{v}\), which is in turn based on some pre-defined internal agent features \(x_{v}^{t}\). Those representations are shaped during \(K\) message-passing steps, where hidden states are iteratively propagated through the graph via messages between direct neighbors. In particular, successive hidden states \(h_{v}^{k}\), where \(k\) denotes the message-passing step, are computed by the message, aggregation and update functions of the MPNN, as previously described in Section III-A3. Once agents generate their final hidden representation, a readout function -following the MPNN nomenclature- is applied to each agent to finally obtain the global function \(f_{\theta}\).
Specifically, in our system the readout consists of two steps: first, each agent \(v\in\mathcal{V}\) implements a local readout that takes as input the final representation \(h_{v}^{K}\) and outputs the final value -or a representation- of the global function \(f_{\theta}\) over every possible action in the agent's space \(\mathcal{A}_{v}\); for instance, this output could be the unnormalized log-probability (i.e., logit) of the agent's actions in the case of PG methods, or directly the q-value associated to each action when considering Q-learning algorithms. The second and last step involves a communication layer that propagates such individual outputs to the rest of the agents, so that all of them can internally construct the global representation of \(f_{\theta}\) for the overall network state \(s_{t}=\prod_{v\in\mathcal{V}}s_{v}^{t}\) and all possible actions \(\bigcup_{v\in\mathcal{V}}\{a_{v,0},a_{v,1},\ldots,a_{v,i}\}\), with \(i\in\mathbb{N}\setminus\{0\}\) the number of actions of the local agent spaces \(\mathcal{A}_{v}\). Finally, to ensure that all distributed agents sample the same actions when \(f_{\theta}\) encodes a distribution, they are provided with the same probabilistic seed before initiating the process. Consequently, only agents whose action has been selected execute an action at each time-step \(t\). Note that actions are not actually applied over the network configuration until the whole optimization process finishes.

```
Require: a graph G = (N, E) with a set of agents V; trained MPNN
         parameters theta = {theta_m, theta_a, theta_u, theta_r}
Input:   initial graph configuration X_G^0; episode length T;
         number of message-passing steps K

Agents initialize their states s_v^0 based on X_G^0
for t = 0 to T do
    Agents initialize their hidden states h_v^0 <- (s_v^t, 0, ..., 0)
    for k = 0 to K do
        Agents send their current hidden state h_v^k to neighboring agents B(v)
        Agents process the received messages:
            M_v^k <- a_theta({ m_theta(h_v^k, h_u^k) : u in B(v) })
        Agents update their hidden state: h_v^{k+1} <- u_theta(h_v^k, M_v^k)
    end for
    Agents partially evaluate the RL function over their own actions:
        { f_theta(s_t, a) : a in A_v } <- r_theta(h_v^K)
    Agents receive the partial evaluations of the other agents and build
        the global representation f_theta <- { f_theta(s_t, a) : a in A }
    Agents select the same set of actions A_t according to f_theta
        (using the shared probabilistic seed)
    Agents whose action was selected execute it; the environment updates
        the graph configuration to X_G^{t+1}
    Agents update their states s_v^{t+1} based on X_G^{t+1}
end for
Output:  new graph configuration X_G^* that optimizes the pre-defined objective
```

**Algorithm 1** MAGNNETO's execution pipeline.

## IV MAGNNETO for Traffic Engineering

In this section we describe the particular adaptations of the general MAGNNETO framework when applying it to the intradomain TE scenario described in Section II. Moreover, we provide some details about the training pipeline of our models.
### _General Setting_

A straightforward approach to map the graph \(\mathcal{G}\) of the described MAGNNETO framework to a computer network infrastructure is to associate the nodes \(\mathcal{N}\) to hardware devices (e.g., routers, switches) and the edges \(\mathcal{E}\) to the physical links of the network. Regarding the set of agents \(\mathcal{V}\), they can be identified either with the set of nodes, so that they individually control a hardware device, or with the set of edges, by controlling some configuration parameters of a link connecting two devices. In the intradomain TE problem, the goal is to learn the set of link weights \(\mathcal{W}=\{w_{e}\}_{e\in\mathcal{E}}\) that minimizes the maximum link utilization for a certain traffic matrix \(TM\). Hence, we adapt MAGNNETO so that each agent controls a link (i.e., \(\mathcal{V}\equiv\mathcal{E}\)) and can modify its weight \(w_{e}\); in fact, in order to make the notation simpler, from now on we will refer to each agent \(v\in\mathcal{V}\) as the edge \(e\in\mathcal{E}\) it represents. We also note that:

* computer networks are commonly represented as directed graphs with links in both directions, so for each directed link \(e=(n^{src}_{e},n^{dst}_{e})\in\mathcal{E}\), with \(n^{src}_{e},n^{dst}_{e}\in\mathcal{N}\), we define its neighborhood as the set \(\mathcal{B}(e)\) of edges whose source node coincides with the destination node of \(e\), i.e. \(\mathcal{B}(e)=\{e^{\prime}\in\mathcal{E}\,|\,n^{src}_{e^{\prime}}=n^{dst}_{e}\}\). In other words, edges in \(\mathcal{B}(e)\) are those links that can potentially receive traffic from link \(e\).
* in practice, link-based agents \(e\in\mathcal{E}\) would be deployed and executed in their adjacent source (\(n^{src}_{e}\)) or destination (\(n^{dst}_{e}\)) hardware device.

Furthermore, we implement a well-known Actor-Critic method named Proximal Policy Optimization (PPO) [41], which offers a favorable balance between reliability, sample complexity, and simplicity. Consequently, in this case the global function \(f_{\theta}\) of the framework (see Sec. III-B) is the global policy \(\pi_{\theta}\) of the actor. Regarding the critic's design, more information can be found in Section IV-C.

### _Adapting MAGNNETO to TE_

With the general configuration of our MAGNNETO implementation in place, we now further describe its operation for the intradomain TE objective. To do so, let us reinterpret each of the fundamental elements introduced earlier from a TE perspective:

#### IV-B1 Environment

We consider episodes of a fixed number of time-steps \(T\). At the beginning of each episode, the environment provides a set of traffic demands between all source-destination pairs (i.e., an estimated traffic matrix [11]). Each link \(e\in\mathcal{E}\) has an associated capacity \(c_{e}\), and it is initialized with a certain link weight \(w^{0}_{e}\). These link weights are in turn used to compute the routers' forwarding tables, using standard Dijkstra's algorithm. Each agent \(v_{e}\in\mathcal{V}\) has access to its associated link features, which in our case are the current weight, its capacity, the estimated traffic matrix, and the weights of the other links. This can be achieved with standard procedures in OSPF-based environments (see Sec. II).
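Before moving to the per-step pipeline, note that the link neighborhood \(\mathcal{B}(e)\) defined above is straightforward to materialize; a minimal sketch of ours (illustrative function name, not from the paper's code):

```python
def link_neighborhoods(links: list) -> dict:
    """B(e) = directed links whose source node is e's destination node,
    i.e. the links that can receive traffic from e. `links`: (src, dst) pairs."""
    return {(src, dst): [l for l in links if l[0] == dst] for (src, dst) in links}

# e.g. links = [(1, 2), (2, 3), (2, 1)] gives
# B((1, 2)) = [(2, 3), (2, 1)]  and  B((2, 3)) = []
```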
#### IV-B2 State Space and Message Passing

At each time-step \(t\) of an episode, each link-based agent \(v_{e}\in\mathcal{V}\) feeds its MPNN module with its input features \(x^{t}_{e}\) to generate its respective initial hidden state \(h^{0}_{e}\) (Figure 2.a). In particular, agents consider as input features the current weight \(w_{e}^{t}\) and the utilization \(u_{e}^{t}\in[0,1]\) of the link, and construct their initial link hidden representations \(h_{e}^{0}\) as a fixed-size vector whose first two components are these input features, with the rest zero-padded. Note that the link utilization can be easily computed by the agent with the information of the estimated traffic matrix and the global link weights locally maintained. Then, the algorithm performs \(K\) message-passing steps (Figures 2.b and 2.c). At each step \(k\), the algorithm is executed in a distributed fashion over all the links of the network. Particularly, each link-based agent \(e\in\mathcal{E}\) receives the hidden states of its neighboring agents \(\mathcal{B}(e)\), and combines them individually with its own state \(h_{e}^{k}\) through the \(message\) function (a fully-connected NN). Then, all these outputs are gathered according to the \(aggregation\) function -in our case, element-wise min and max operations- producing the combination \(M_{e}^{k}\). Afterwards, another fully-connected NN is used as the \(update\) function, which combines the link's hidden state \(h_{e}^{k}\) with the newly aggregated information \(M_{e}^{k}\), and produces a new hidden state representation for that link (\(h_{e}^{k+1}\)). As mentioned above, this process is repeated \(K\) times, leading to final link hidden state representations \(h_{e}^{K}\).

#### IV-B3 Action Space

In our implementation, each agent \(e\in\mathcal{E}\) can only take a single action: increasing its link weight \(w_{e}\) by one unit. In particular, the agent's action selection (Figure 2.d) is done as follows: first, every agent applies a local readout function -implemented with a fully-connected NN- to its final hidden state \(h_{e}^{K}\), from which it obtains the global logit estimate of choosing its action (i.e., increasing its link weight) over the actions of the other agents. Then, as previously described in Section III-B, these logits are shared among agents in the network, so that each of them can construct the global policy distribution \(\pi_{\theta}\). By sharing the same probabilistic seed, all agents sample locally the same set of actions \(A_{t}\). Finally, agents whose action has been selected increase by one unit the weight of their associated link in their internal global state copy, which is then used to compute the new link utilization \(u_{e}^{t+1}\) under the new weight setting, as well as to initialize the hidden state representation at the next time-step \(t+1\).

#### IV-B4 Reward Function

During training, a reward function is computed at each step \(t\) of the optimization episode. In our case, given our optimization goal, we directly define the reward \(r_{t}\) as the difference of the global maximum link utilization between steps \(t\) and \(t+1\). Note that this reward can be computed locally at each agent from its global state copy, which is incrementally updated with the new actions applied at each time-step.
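The shared-seed mechanism described in IV-B3 is what keeps the distributed agents consistent without a central coordinator. Below is a minimal sketch of ours (illustrative, assuming each agent has already gathered all agents' logits through the communication layer):

```python
import numpy as np

def sample_action_set(all_logits: dict, seed: int, n: int) -> list:
    """Runs identically on every agent: same logits + same seed => same A_t,
    with no coordinator. `all_logits` maps agent id -> its shared logit;
    `n` is the number of simultaneous actions allowed per time-step."""
    agents = sorted(all_logits)                  # identical ordering everywhere
    logits = np.array([all_logits[a] for a in agents], dtype=float)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                         # softmax over all agents' actions
    rng = np.random.default_rng(seed)            # the shared probabilistic seed
    idx = rng.choice(len(agents), size=n, replace=False, p=probs)
    return [agents[i] for i in idx]
```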
### _Training Details_

The training procedure highly depends on the type of RL algorithm chosen. In our particular implementation, given that we consider an Actor-Critic method (PPO), the objective at training is to optimize the parameters \(\{\theta,\phi\}\) so that:

* the previously described GNN-based actor \(\pi_{\theta}\) becomes a good estimator of the optimal global policy;
* the critic \(V_{\phi}\) learns to approximate the state value function of any global state.

As commented in Section III-A1, the goal of the critic is to guide the learning process of the actor; it is no longer needed at execution time. Therefore, giving \(V_{\phi}\) a centralized design has no impact on the distributed nature of MAGNNETO. In fact, following the standard approach of MARL systems [38], the training of MAGNNETO is performed in a centralized fashion, and such centrality precisely comes from the critic's model. In particular, we have implemented \(V_{\phi}\) as another link-based MPNN, similar to the actor but with a centralized readout that takes as input all link hidden states and outputs the value function estimate. We considered an MPNN-based critic to exploit the relational reasoning provided by GNNs; however, note that other alternative designs might be valid as well. At a high level, the training pipeline is as follows. First, an episode of length \(T\) is generated by following the current policy \(\pi_{\theta}\), while at the same time the critic's value function \(V_{\phi}\) evaluates each visited global state; this defines a trajectory \(\{s_{t},a_{t},r_{t},p_{t},V_{t},s_{t+1}\}_{t=0}^{T-1}\), where \(p_{t}=\pi_{\theta}(a_{t}|s_{t})\) and \(V_{t}:=V_{\phi}(s_{t})\). When the episode ends, this trajectory is used to update the model parameters -through several epochs of minibatch Stochastic Gradient Descent- by maximizing the global PPO objective \(L^{PPO}(\theta,\phi)\) described in [41]. The same process of generating episodes and updating the model is repeated for a fixed number of iterations to guarantee convergence.
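For reference, a compact sketch (ours, not the authors' code) of the clipped PPO policy term maximized above, following [41]:

```python
import numpy as np

def ppo_clip_objective(logp_new, logp_old, advantages, eps=0.2):
    """Clipped surrogate of [41]:
    L = mean( min(rho_t * A_t, clip(rho_t, 1 - eps, 1 + eps) * A_t) ),
    with rho_t = pi_new(a_t|s_t) / pi_old(a_t|s_t)."""
    rho = np.exp(np.asarray(logp_new) - np.asarray(logp_old))
    adv = np.asarray(advantages)
    return float(np.mean(np.minimum(rho * adv,
                                    np.clip(rho, 1 - eps, 1 + eps) * adv)))
```

In MAGNNETO, this policy term is combined with the critic (value) loss and an entropy bonus, with the weighting factors reported in the Experimental Setup below.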
## V Evaluation

In this section we extensively evaluate MAGNNETO in an intradomain TE scenario: we benchmark it against a curated set of advanced TE optimizers in more than 75 different real-world topologies, using realistic traffic loads. As shown in our experimental results, MAGNNETO achieves similar performance to state-of-the-art TE optimizers with a significantly lower execution time. We begin by describing the considered baselines as well as the setup used in our evaluations; the rest of the section is devoted to analyzing the results.

Figure 2: Description of the message passing and action selection process of MAGNNETO at a certain time-step \(t\) of an episode. For simplicity, visual representations of steps (c) and (d) are focused on a single agent (\(A_{9}\)); however, note that the same procedure is executed in parallel in all link-based agents.

### _Baselines_

In this section we describe the set of baselines we use to benchmark MAGNNETO in our evaluation. We particularly consider a well-established standard TE mechanism and three advanced TE optimizers. The first baseline is labeled as "Default OSPF", a simple heuristic widely used in today's ISP networks. In Default OSPF, link weights are inversely proportional to their capacities and traffic is split over multiple paths using Equal-Cost Multi-Path (ECMP). In our experiments, all performance results are expressed in terms of their improvement with respect to Default OSPF. As state-of-the-art TE benchmarks, we consider the following set of centralized algorithms provided by REPETITA [22]:

* TabulGPWO (IGP Weight Optimizer, based on [11]): This algorithm runs a Local Search to find the OSPF weights that minimize the load of the maximally-utilized link. TabulGPWO requires more execution time than the rest of the baselines, but represents a classical TE optimizer that operates in the same optimization space as MAGNNETO (i.e., OSPF link weight configuration).
* DEFO (Declarative and Expressive Forwarding Optimizer) [10]: It uses Constraint Programming and Segment Routing (SR) [25] to optimize routing configurations in the order of minutes. To this end, DEFO reroutes traffic paths through a sequence of middlepoints, spreading their traffic over multiple ECMP paths.
* SRLS (Segment Routing and Local Search) [9]: By leveraging Local Search and SR, SRLS achieves similar -or even better- performance than DEFO at a lower execution time. It also implements ECMP, and reroutes traffic paths through a sequence of middlepoints.

Particularly, SRLS and DEFO represent state-of-the-art TE optimizers obtaining close-to-optimal performance on several network optimization goals -one of them being our intradomain TE goal of minimizing the load of the most utilized link. To this end, both optimizers leverage SR, which enables the definition of overlay paths at a source-destination granularity. In contrast, MAGNNETO and TabulGPWO operate directly over standard OSPF-based networks with destination-based routing.

### _Experimental Setup_

We compare MAGNNETO against the previously defined TE baselines in all our experimental settings, which involve 82 different real-world topologies: NSFNet, GBN, and GEANT2 from [42], and 79 networks from the Internet Topology Zoo dataset [24]. In this section we provide the low-level technical details of MAGNNETO's configuration required to reproduce the results. The length \(T\) of the training and evaluation RL episodes varies depending on the network topology size and the number of simultaneous actions allowed (more details below in Sec. V-C). At the beginning of each episode, the link weights are randomly initialized as integers in the range \([1,4]\), so our system is evaluated over a wide variety of scenarios with random routing initializations. From that point on, at each step of an episode one or several agents can modify their weight by increasing it by one unit. Taking [43] as a reference for defining the hyperparameter values of the solution, we ran several grid searches to appropriately fine-tune the model. The implemented optimizer is Adam with a learning rate of 3\(\cdot\)10\({}^{-4}\), \(\beta\)=0.9, and \(\epsilon\)=0.01. Regarding the PPO setting, the number of epochs for each training episode is set to 3 with batches of 25 samples, the discount factor \(\gamma\) is set to 0.97, and the clipping parameter is 0.2. We implement the Generalized Advantage Estimate (GAE) to estimate the advantage function, with \(\lambda\)=0.9. In addition, we multiply the critic loss by a factor of 0.5, and we implement an entropy loss weighted by a factor of 0.001. Finally, links' hidden states \(h_{e}\) are encoded as 16-element vectors, and in each MPNN forward propagation \(K\)=4 message-passing steps are executed.
For each experiment, we generate two sets of simulated traffic matrices: one with a uniform distribution across source-destination traffic demands, and one with traffic distributions following a gravity model [44], which produces realistic Internet traffic patterns. The training process of MAGNNETO highly depends on the topology size; in a machine with a single CPU of 2.20 GHz, it can take from a few hours (\(\approx\)20 nodes) to a few days (100+ nodes).

### _Multiple Actions and Episode Length_

As previously mentioned in Section III, there is a relevant hyperparameter that needs to be further addressed: the episode length \(T\) of RL-based episodes, which represents the maximum number of optimization steps that MAGNNETO needs to execute before producing a good set of link weights. In this section we provide more details about its definition in terms of the topology size and the number of simultaneous actions.

Figure 3: Evaluation of MAGNNETO for different numbers of simultaneous actions \(n\in\{1,2,5,10\}\), each of them considering an episode length of \(T=150/n\). The training only considers samples of the NSFNet and GEANT2 topologies, and the evaluation is performed over 100 unseen TMs on the GBN topology. Each MAGNNETO model and baseline optimizer is trained and/or evaluated twice for uniform and gravity-based traffic profiles; markers represent the mean of these results, and we also include the corresponding boxplots.

Let \(n\) be the maximum number of simultaneous actions allowed at each time-step \(t\) of the episode. When imposing \(n\)=1, i.e., only one link weight changes per time-step, we have empirically found that MAGNNETO requires an episode length of \(\approx\)2-3 times the number of links in the network to reach its best performance. This is in line with what we already observed in our preliminary work [26]. However, whereas [26] was subject to \(n\)=1 by design, MAGNNETO allows taking \(n\)>1 actions at each time-step, which can potentially reduce the number of required optimization steps (i.e., speed up the optimization process). Figure 3 shows that the episode length \(T\), which directly relates to the execution time, can be reduced proportionally to \(n\) without a noticeable performance loss. In particular, the model with \(n\)=10 reduces the execution time of the 1-action model by one order of magnitude, but still achieves comparable performance to the state-of-the-art optimizers of our benchmark, for both traffic profiles and when evaluating on a topology not previously seen in training.

Given the good trade-off provided by allowing more than one action at each time-step, for the rest of our experiments we fine-tuned the number of actions \(n\) and the episode length \(T\) to balance a competitive performance with the minimum possible execution time; a simple helper implementing this rule of thumb is sketched below. Later in Section V-F we will analyze in detail the execution cost of MAGNNETO.
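The empirical rule just described translates into a one-line helper. The function name and the default factor of 2.5 (any value in the reported 2-3x range would do) are our own choices:

```python
def episode_length(num_links: int, n_actions: int = 1, factor: float = 2.5) -> int:
    """Episode length T: ~2-3x the number of links when n_actions=1,
    reduced proportionally when several simultaneous actions are allowed."""
    return max(1, round(factor * num_links / n_actions))

# Example: GBN (52 links) with n=10 simultaneous actions -> T = 13 steps.
print(episode_length(52, n_actions=10))
```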
### _Generalization over Unseen Topologies_

In Section I we argued the importance of generalization in ML-based solutions, which refers to the capability of a solution to operate successfully in networks where it was not trained. In this section, we bring MAGNNETO under an intensive evaluation in this regard. In our experiments, MAGNNETO only observes NSFNet (14 nodes, 42 links) and GEANT2 (24 nodes, 74 links) samples during training [42], whereas the evaluation is performed over a subset of 75 networks from the Topology Zoo dataset [24], including topologies ranging from 11 to 30 nodes, and from 30 to 90 links. More in detail:

* We train two MAGNNETO models, one for each traffic profile (uniform and gravity).
* Each model is trained observing 50 different TMs (either uniform or gravity-based, depending on the model), alternating between the NSFNet and GEANT2 topologies.
* Each of these two trained models is evaluated over 100 different TMs (again, either uniform or gravity-based) on each of the 75 topologies from Topology Zoo.

Overall, this experimental setup comprises \(7,500\) evaluation runs for each traffic profile, which we summarize in Figures 4a and 4b, respectively for uniform and gravity-based loads. In particular, we first compute the mean _MinMaxLoad_ improvement of MAGNNETO, and of the baselines, over the 100 TMs of each evaluation network, obtaining a single value for each of the 75 topologies. Thus, in these figures we represent the corresponding CDF and boxplot of the 75-sized vector of mean improvement values for each TE optimizer.

In both traffic scenarios MAGNNETO achieves comparable performance to the corresponding best performing benchmark: DEFO when considering uniform traffic and SRLS for gravity. In fact, MAGNNETO outperforms TabuIGPWO, improves DEFO with gravity-based traffic, and lies within a 2% average improvement difference with respect to SRLS in both cases. We reckon that these represent remarkable results on generalization; as far as we know, this is the first time that a ML-based model consistently obtains close performance to state-of-the-art TE optimizers on such a large and diverse set of topologies not previously seen in training.

### _Traffic Changes in Large Topologies_

After evaluating the generalization capabilities of MAGNNETO, we aim to test the performance of our method over traffic changes in large networks, where the combinatorics of the optimization process might dramatically increase. Having considered networks of up to 30 nodes and 90 links so far, for this set of experiments we arbitrarily select four large real-world topologies from Topology Zoo [24]: Interroute (110 nodes, 294 links), Colt (153 nodes, 354 links), DialtelecomCz (138 nodes, 302 links) and VtlWavenet2011 (92 nodes, 192 links). Figures 5.I-IV depict these topologies.

Figure 4: Evaluation of MAGNNETO's generalization capability for (a) uniform and (b) gravity traffic. Each point of the CDF corresponds to the mean _MinMaxLoad_ improvement over 100 TMs for one of the 75 evaluation topologies from Topology Zoo [24], and boxplots are computed based on these mean improvement values as well. Both the uniform (a) and gravity (b) MAGNNETO models evaluated were trained exclusively on samples from the NSFNet and GEANT2 topologies [42].

In these experiments, for each traffic profile (uniform or gravity) we train a MAGNNETO model on each network. Then, we evaluate the models on the same topology where they were trained, over 100 different TMs not previously seen in training. Figures 5.a-d and 5.e-h show the corresponding CDFs of all these evaluations, considering uniform and gravity traffic loads respectively. As we can see, with uniform traffic SRLS is clearly the best performing baseline, achieving a remarkable overall improvement gap with respect to the other two benchmarked optimizers. However, in this scenario MAGNNETO is able to obtain similar improvements to SRLS, slightly outperforming it in VtlWavenet2011.
On the other hand, the results with gravity-based traffic suggest that Default OSPF already provides low-congestion routing configurations in scale-free networks when considering more realistic traffic. Despite this fact, MAGNNETO turns out to be the overall winner of the comparison with gravity loads, consistently achieving lower congestion ratios for a large number of TMs in all four topologies. In short, in all scenarios MAGNNETO attained equivalent, or even better, performance than the advanced TE optimizers benchmarked. These results evince its potential to successfully operate in large computer networks.

Figure 5: Evaluation of MAGNNETO on traffic changes in four large real-world topologies (I-IV) from the Topology Zoo dataset [24], both for uniform ((a)-(d)) and gravity-based ((e)-(h)) traffic loads. A MAGNNETO model is trained for each network and traffic profile, and then evaluated on the same topology over 100 unseen TMs. CDFs represent the _MinMaxLoad_ improvement results of each optimizer for those 100 evaluation TMs.

### _Execution Cost_

Lastly, in this section we evaluate the execution cost of MAGNNETO. In particular, we measure the impact of the message communications involved when running our distributed solution, and we compare its execution time against the considered set of state-of-the-art TE baselines; Table I gathers these results for several variable-sized networks used in the previous evaluations. Taking into account the recommendations of REPETITA [9], as well as analyzing the results provided in the original works [9, 10, 11], we defined the following running times for each of our benchmarks: 10 minutes for TabuIGPWO, 3 minutes for DEFO, and 1 minute for SRLS.

At first glance, the execution time of MAGNNETO immediately becomes its most remarkable feature. Particularly, it is able to obtain subsecond times even for the largest network of our evaluation (Colt). Indeed, as previously discussed in Section V-C, these times could be further reduced by allowing multiple simultaneous actions. For instance, by considering up to 10 simultaneous actions, MAGNNETO can run 3 orders of magnitude faster than the most rapid state-of-the-art TE optimizer. This relevant difference can be explained by the fact that MAGNNETO's distributed execution naturally parallelizes the global optimization process across all network devices (i.e., routers); in contrast, typical TE optimizers rely on centralized frameworks that cannot benefit from this.

Such decentralization comes at the expense of the extra message overhead generated by the MPNN. In this context, Table I shows that the link overhead produced by MAGNNETO (a few MB/s) can reasonably be expected to have a negligible impact in today's real-world networks with 10G/40G (or even faster) interfaces. Moreover, note that this cost is quite similar in all topologies; this is as expected, given that the messaging overhead of the GNN-based communications is directly proportional to the average node degree of the network, and computations are distributed among all nodes.

To sum up, our results show that MAGNNETO is able to attain equivalent performance to state-of-the-art centralized TE optimizers, even in topologies not previously seen in training, with significantly lower execution time and an affordable message communication overhead.

## VI Related Work

Recently, numerous solutions based on Deep Reinforcement Learning (DRL) have been proposed to solve complex networking problems, especially in the context of routing optimization and TE [15, 17, 45].
However, current state-of-the-art RL-based TE solutions fail to generalize to unseen scenarios (e.g., different network topologies), as the traditional neural networks they implement (e.g., fully connected, convolutional) are not well-suited to learning and generalizing over data that is inherently structured as graphs. In [16], the authors design a DRL-based architecture that obtains better results than Shortest Path and Load Balancing routing. Regarding MARL-based solutions [46, 47], most of them suffer from the same lack of topology generalization. An exception is the work of [18], an interesting MARL approach for multi-region TE that consistently outperforms ECMP in several scenarios, although it is not benchmarked against state-of-the-art TE optimizers.

GNNs [20, 48], and in particular Message Passing Neural Networks (MPNN) [40], emerged precisely as specialized methods for dealing with graph-structured data; for the first time, there was an AI-based technology able to provide topology-aware systems. In fact, GNNs have recently attracted a large interest in the field of computer networks for addressing the aforementioned generalization limitations. The work from [42] proposes to use a GNN to predict network metrics, combined with a traditional optimizer that finds the routing minimizing some of these metrics (e.g., average delay). The authors of [49] propose a novel architecture for routing optimization in Optical Transport Networks that embeds a GNN into a centralized, single-agent RL setting, which is compared against Load Balancing routing. Narrowing down the use case to intradomain TE, we highlight the work of [50], whose premise is similar to ours: the generation of easily-scalable, automated distributed protocols. For doing so, the authors also use a GNN, but in contrast to our approach they focus on learning routing strategies that directly imitate already existing ones (shortest path and min-max routing) and compare their solution against these. This is the reason why they did not implement an RL-based approach, but instead a semi-supervised learning algorithm, therefore guiding the learning process with explicit labeled data. In fact, so far the very few works that combine GNNs with a MARL framework [51, 52] are theoretical papers from the ML community, and none of them applies to the field of networking.

## VII Conclusions

Intradomain Traffic Engineering (TE) is nowadays among the most common network operation tasks, and it has a major impact on the performance of today's ISP networks. As such, it has been largely studied, and there are already some well-established TE optimizers that deliver near-optimal performance in large-scale networks. During the last few years, state-of-the-art TE solutions have systematically competed for reducing execution times (e.g., DEFO [10], SRLS [9]), thus scaling better to carrier-grade networks and achieving faster reaction to traffic changes. In this context, ML has attracted interest as a suitable technology for achieving faster execution of TE tasks and, as a result, during recent years the networking community has devoted large efforts to developing effective ML-based TE solutions [15, 16, 17, 18]. However, at the time of this writing no ML-based solution had shown to outperform state-of-the-art TE optimizers.

In this paper we have presented MAGNNETO, a novel ML-based framework for intradomain TE optimization. Our system implements a novel distributed architecture based on Multi-Agent Reinforcement Learning and Graph Neural Networks.
In our evaluation, we have compared MAGNNETO with a set of non-ML-based TE optimizers that represent the state of the art in this domain. After applying our system to 75+ real-world topologies, we have observed that it achieves comparable performance to the reference TE benchmarks. However, MAGNNETO offers considerably faster operation than these state-of-the-art TE solutions, reducing execution times from several minutes to sub-second timescales in networks of 100+ nodes. In this context, MAGNNETO was especially designed to perform several actions at each RL optimization step, which enables it to considerably accelerate the optimization process. Particularly, we have seen that our system was able to perform up to 10 actions in parallel with no noticeable decrease in performance. These results lay the foundations for a new generation of ML-based systems that can offer the near-optimal performance of traditional TE techniques while reacting much faster to traffic changes.

Table I: Cost of MAGNNETO: average link overhead and execution time (in terms of the maximum number \(n\) of simultaneous actions allowed) for variable-sized network topologies.

| | **NSFNet** | **GBN** | **GEANT2** | **VtlWavenet2011** | **Interroute** | **DialtelecomCz** | **Colt** |
| --- | --- | --- | --- | --- | --- | --- | --- |
| (#nodes, #links) | (14, 42) | (17, 52) | (24, 74) | (92, 192) | (110, 294) | (138, 302) | (153, 354) |
| MAGNNETO link overhead* (MB/s) | 1.20 | 1.32 | 1.20 | 0.83 | 1.28 | 0.91 | 1.01 |
| TabuIGPWO [11] execution time (s) | 600 | 600 | 600 | 600 | 600 | 600 | 600 |
| DEFO [10] execution time (s) | 180 | 180 | 180 | 180 | 180 | 180 | 180 |
| SRLS [9] execution time (s) | 60 | 60 | 60 | 60 | 60 | 60 | 60 |
| MAGNNETO [\(n\) actions] execution time (s) | \(0.08/n\) | \(0.12/n\) | \(0.16/n\) | \(0.42/n\) | \(0.64/n\) | \(0.66/n\) | \(0.78/n\) |

*Average value, with an extra 20% message size for headers and metadata.

Last but not least, we have shown that the proposed system offers strong generalization power over networks unseen during the training phase, which is an important characteristic from the perspective of deployability and commercialization. Particularly, generalization enables training ML-based products in controlled testbeds and then deploying them in different real-world networks in production. However, this property has been barely addressed by prior ML-based TE solutions. In contrast, MAGNNETO has demonstrated that it generalizes successfully over a wide and varied set of 75 real-world topologies unseen during training. The main reason behind this generalization capability is that the proposed system internally implements a GNN that structures and processes network information as graphs, and computes the information on distributed agents that communicate with their neighbors according to the underlying graph structure (i.e., the network topology).

## Acknowledgment

This publication is part of the Spanish I+D+i project TRAINER-A (ref. PID2020-118011GB-C21), funded by MCIN/AEI/10.13039/501100011033. This work is also partially funded by the Catalan Institution for Research and Advanced Studies (ICREA), the Secretariat for Universities and Research of the Ministry of Business and Knowledge of the Government of Catalonia, and the European Social Fund.
2301.00161
Active RISs: Signal Modeling, Asymptotic Analysis, and Beamforming Design
Reconfigurable intelligent surfaces (RISs) have emerged as a candidate technology for future 6G networks. However, due to the "multiplicative fading" effect, the existing passive RISs only achieve a negligible capacity gain in environments with strong direct links. In this paper, the concept of active RISs is studied to overcome this fundamental limitation. Unlike the existing passive RISs that reflect signals without amplification, active RISs can amplify the reflected signals via amplifiers integrated into their elements. To characterize the signal amplification and incorporate the noise introduced by the active components, we verify the signal model of active RISs through the experimental measurements on a fabricated active RIS element. Based on the verified signal model, we formulate the sum-rate maximization problem for an active RIS aided multi-user multiple-input single-output (MU-MISO) system and a joint transmit precoding and reflect beamforming algorithm is proposed to solve this problem. Simulation results show that, in a typical wireless system, the existing passive RISs can realize only a negligible sum-rate gain of 3%, while the active RISs can achieve a significant sum-rate gain of 62%, thus overcoming the "multiplicative fading" effect. Finally, we develop a 64-element active RIS aided wireless communication prototype, and the significant gain of active RISs is validated by field test.
Zijian Zhang, Linglong Dai, Xibi Chen, Changhao Liu, Fan Yang, Robert Schober, H. Vincent Poor
2022-12-31T09:14:54Z
http://arxiv.org/abs/2301.00161v1
# Active RISs: Signal Modeling, Asymptotic Analysis, and Beamforming Design

###### Abstract

Reconfigurable intelligent surfaces (RISs) have emerged as a candidate technology for future 6G networks. However, due to the "multiplicative fading" effect, the existing passive RISs only achieve a negligible capacity gain in environments with strong direct links. In this paper, the concept of active RISs is studied to overcome this fundamental limitation. Unlike the existing passive RISs that reflect signals without amplification, active RISs can amplify the reflected signals via amplifiers integrated into their elements. To characterize the signal amplification and incorporate the noise introduced by the active components, we verify the signal model of active RISs through the experimental measurements on a fabricated active RIS element. Based on the verified signal model, we formulate the sum-rate maximization problem for an active RIS aided multi-user multiple-input single-output (MU-MISO) system and a joint transmit precoding and reflect beamforming algorithm is proposed to solve this problem. Simulation results show that, in a typical wireless system, the existing passive RISs can realize only a negligible sum-rate gain of 3%, while the active RISs can achieve a significant sum-rate gain of 62%, thus overcoming the "multiplicative fading" effect. Finally, we develop a 64-element active RIS aided wireless communication prototype, and the significant gain of active RISs is validated by field test.

## I Introduction

From the first generation (1G) to 5G wireless communications, the wireless channel has been considered to be uncontrollable. Recently, due to advances in meta-materials, reconfigurable intelligent surfaces (RISs) have been proposed [1] for the purpose of intelligently controlling wireless channels to achieve improved communication performance. Specifically, an RIS is an array composed of a very large number of passive elements that reflects electromagnetic signals in a desired manner so as to reconfigure the propagation properties of the wireless environment. An important advantage of RISs is that their passive elements introduce only negligible noise, which enables a high array gain. Benefiting from this advantage, RISs are expected to introduce significant capacity gains in wireless systems [2].

However, in practice, the expected capacity gains are typically only observed in communication environments where the direct link between transmitter and receiver is completely blocked or very weak. By contrast, in many scenarios where the direct link is not weak, conventional RISs can only achieve negligible capacity gains [3]. The reason behind this phenomenon is the "multiplicative fading" effect introduced by RISs, i.e., the equivalent path loss of the transmitter-RIS-receiver link is the product (instead of the sum) of the path losses of the transmitter-RIS and RIS-receiver links, which is usually thousands of times larger than that of the direct link [1, 2, 3]. As a result, the "multiplicative fading" effect makes it almost impossible for passive RISs to achieve noticeable capacity gains in many wireless environments.
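The severity of this effect is easy to quantify with simple path-loss numbers. The back-of-envelope sketch below uses the free-space model and arbitrary distances and frequency, all of which are our own illustrative assumptions:

```python
import numpy as np

def fspl_db(d_m, f_hz=3.5e9):
    """Free-space path loss in dB at distance d_m (meters)."""
    c = 3e8
    return 20 * np.log10(4 * np.pi * d_m * f_hz / c)

# Direct link of 100 m versus a reflected link via an RIS placed
# 50 m from the transmitter and 50 m from the receiver.
direct = fspl_db(100)
cascaded = fspl_db(50) + fspl_db(50)  # path losses multiply (add in dB)
print(direct, cascaded, cascaded - direct)  # reflected link is far weaker
```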
Therefore, to advance the practicability of RISs in future 6G wireless networks, a critical issue for RISs to be addressed is: _How to break the fundamental performance bottleneck caused by the "multiplicative fading" effect?_

To overcome this fundamental physical limitation of conventional passive RISs, in this paper, we investigate the concept of _active_ RISs. Different from the existing _passive_ RISs that passively reflect signals without amplification, the key feature of active RISs is their ability to actively reflect signals with amplification, at the expense of additional power consumption. Firstly, through experimental measurements on a fabricated active RIS element, we verify the signal model of active RISs, which characterizes the amplification of the incident signal and accounts for the non-negligible thermal noise introduced by the active elements. Based on the verified signal model, we further analyze the asymptotic performance of active RISs and formulate a sum-rate maximization problem for an active RIS aided multi-user multiple-input single-output (MU-MISO) system. Then, a joint transmit precoding and reflect beamforming algorithm is proposed to solve this problem. Simulation results show that, in a typical wireless system, the existing passive RISs achieve only a negligible sum-rate gain of 3%, while the active RISs are able to achieve a substantial sum-rate gain of 62%. Finally, we develop a 64-element active RIS aided wireless communication prototype, and field tests are conducted to validate the significant gain of active RISs.

The rest of this paper is organized as follows. In Section II, the concept of RISs and their signal models are introduced. In Section III, the asymptotic performance of active RISs is analyzed. In Section IV, a sum-rate maximization problem is formulated, and a joint precoding and beamforming design is proposed to solve the problem. In Section V, simulation results and experimental measurements are presented to validate the signal model and evaluate the performance of active RISs. Finally, conclusions are drawn in Section VI.

## II Passive RISs and Active RISs

### _Conventional Passive RISs_

The RISs widely studied in most existing works are passive [1, 2, 3]. In general, each passive RIS element consists of a reflective patch terminated with an impedance-adjustable circuit for phase shifting. Thanks to the passive mode of operation, the thermal noise at passive RISs is usually negligible [2]. Thereby, the signal model of an \(N\)-element passive RIS widely used in the literature is given as follows:

\[\mathbf{y}=\mathbf{\Theta}\mathbf{x}, \tag{1}\]

where \(\mathbf{x}\in\mathbb{C}^{N}\) denotes the incident signal, \(\mathbf{y}\in\mathbb{C}^{N}\) denotes the signal reflected by the RIS, and \(\mathbf{\Theta}:=\mathrm{diag}\left(e^{j\theta_{1}},\cdots,e^{j\theta_{N}}\right)\in\mathbb{C}^{N\times N}\) denotes the phase shift matrix of the RIS, with \(\mathrm{diag}(\cdot)\) being the diagonalization operation. By properly adjusting \(\mathbf{\Theta}\) so that the \(N\) signals reflected by the \(N\) RIS elements add coherently with the same phase at the receiver, a high array gain proportional to \(N^{2}\) can be achieved, which is expected to significantly increase the signal-to-noise ratio (SNR) at the receiver [1].
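The \(N^{2}\) scaling can be checked numerically: choosing each \(\theta_{n}\) to cancel the phase of the cascaded channel coefficient makes all \(N\) reflected components add in phase. A minimal sketch, in which the Rayleigh channel draws and the overall scale are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
g = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # Tx-RIS
f = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # RIS-Rx

theta = -np.angle(np.conj(f) * g)              # align all cascaded paths in phase
h_eff = np.conj(f) @ (np.exp(1j * theta) * g)  # scalar end-to-end channel

# |h_eff|^2 grows like N^2 under coherent combining (~ N^2 * pi^2 / 16 here),
# whereas random phases would only give a gain on the order of N.
print(abs(h_eff) ** 2, N ** 2 * np.pi ** 2 / 16)
```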
Unfortunately, this expected high capacity gain often cannot be realized in practice, especially in communication scenarios where the direct link between the transmitter and the receiver is strong. The reason for this negative result is the "multiplicative fading" effect introduced by passive RISs. Specifically, the equivalent path loss of the transmitter-RIS-receiver reflected link is the product (instead of the sum) of the path losses of the transmitter-RIS and RIS-receiver links, and therefore, it is thousands of times larger than that of the unobstructed direct link. Thereby, for an RIS to realize a noticeable capacity gain, thousands (or even millions) of RIS elements are required to compensate for this extremely large path loss [3]. The resulting high signaling overhead for channel estimation and the high complexity of real-time beamforming make the application of such a large number of passive RIS elements in practical wireless networks very challenging.

### _Concept of Active RISs_

To overcome the fundamental performance bottleneck caused by the "multiplicative fading" effect of RISs, we study the concept of active RISs as a promising solution1. As shown in Fig. 1, similar to the existing passive RISs, active RISs can also reflect the incident signals with reconfigurable phase shifts. Different from passive RISs that just reflect the incident signals without amplification, active RISs can further amplify the reflected signals. To achieve this goal, the key component of an active RIS element is the additionally integrated active reflection-type amplifier, which can be realized by different existing active components, such as current-inverting converters or some integrated circuits [5].

Footnote 1: Note that active RISs are fundamentally different from relay-type RISs equipped with RF components and relays. Due to space constraints, we refer to the journal version of this paper for a more detailed discussion [4, Remark 1].

With reflection-type amplifiers supported by a power supply, the reflected and amplified signal of an \(N\)-element active RIS can be modeled as follows:

\[\mathbf{y}=\underbrace{\mathbf{P}\mathbf{\Theta}\mathbf{x}}_{\text{Desired signal}}+\underbrace{\mathbf{P}\mathbf{\Theta}\mathbf{v}}_{\text{Dynamic noise}}+\underbrace{\mathbf{n}_{\mathrm{s}}}_{\text{Static noise}}, \tag{2}\]

where \(\mathbf{P}:=\mathrm{diag}\left(p_{1},\cdots,p_{N}\right)\in\mathbb{R}^{N\times N}\) denotes the amplification factor matrix of the active RIS, wherein each element \(p_{n}\) can be larger than one thanks to the integrated reflection-type amplifier. Due to the use of active components, active RISs consume additional power for amplifying the reflected signals, and the thermal noise introduced by the active RIS elements cannot be neglected as is done for passive RISs. Particularly, as shown in (2), the introduced noise can be classified into dynamic noise \(\mathbf{P}\mathbf{\Theta}\mathbf{v}\) and static noise \(\mathbf{n}_{\mathrm{s}}\) [5]. Specifically, \(\mathbf{v}\) is related to the input noise and the inherent device noise of the active RIS elements [5], while the static noise \(\mathbf{n}_{\mathrm{s}}\) is unrelated to \(\mathbf{P}\mathbf{\Theta}\mathbf{v}\) and is usually negligible compared to the dynamic noise \(\mathbf{P}\mathbf{\Theta}\mathbf{v}\), as will be verified by experimental results in Section V-A.
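Signal model (2) is straightforward to simulate. The sketch below uses a common amplification factor for all elements and, in line with the discussion above, keeps only the dynamic noise term; all numeric values are placeholders:

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, sigma_v = 64, 10.0, 1e-5  # placeholders: elements, amplification, noise std

x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)  # incident
v = sigma_v * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

P = p * np.eye(N)                                           # amplification matrix
Theta = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, N)))  # phase-shift matrix

y = P @ Theta @ x + P @ Theta @ v  # model (2) with static noise neglected
```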
Thus, here we neglect \(\mathbf{n}_{\mathrm{s}}\) and model \(\mathbf{v}\) as \(\mathbf{v}\sim\mathcal{CN}\left(\mathbf{0}_{N},\sigma_{v}^{2}\mathbf{I}_{N}\right)\), where \(\mathcal{CN}(\boldsymbol{\mu},\boldsymbol{\Sigma})\) denotes the complex multivariate Gaussian distribution with mean \(\boldsymbol{\mu}\) and covariance matrix \(\boldsymbol{\Sigma}\), \(\mathbf{I}_{L}\) is an \(L\times L\) identity matrix, and \(\mathbf{0}_{L}\) is an \(L\times 1\) zero vector.

### _Active RIS Aided MU-MISO System_

Consider an active RIS aided downlink MU-MISO system as shown in Fig. 1, where an \(M\)-antenna base station (BS) serves \(K\) single-antenna users simultaneously with the aid of an \(N\)-element active RIS. Let \(\mathbf{s}:=\left[s_{1},\cdots,s_{K}\right]^{\mathrm{T}}\in\mathbb{C}^{K}\) denote the transmitted symbol vector for the \(K\) users and let \(\mathbf{w}_{k}\in\mathbb{C}^{M\times 1}\) denote the BS precoding vector for symbol \(s_{k}\). According to (2), the signal \(r_{k}\in\mathbb{C}\) received at user \(k\) can be modeled as follows:

\[r_{k}=\Big(\underbrace{\mathbf{h}_{k}^{\mathrm{H}}}_{\text{Direct link}}+\underbrace{\mathbf{f}_{k}^{\mathrm{H}}\mathbf{P}\mathbf{\Theta}\mathbf{G}}_{\text{Reflected link}}\Big)\sum\nolimits_{j=1}^{K}\mathbf{w}_{j}s_{j}+\underbrace{\mathbf{f}_{k}^{\mathrm{H}}\mathbf{P}\mathbf{\Theta}\mathbf{v}}_{\text{Noise introduced by the active RIS}}+\underbrace{z_{k}}_{\text{Noise introduced at user }k}, \tag{3}\]

where \([\cdot]^{\mathrm{H}}\) denotes the conjugate-transpose operation; \(\mathbf{G}\in\mathbb{C}^{N\times M}\), \(\mathbf{h}_{k}^{\mathrm{H}}\in\mathbb{C}^{1\times M}\), and \(\mathbf{f}_{k}^{\mathrm{H}}\in\mathbb{C}^{1\times N}\) characterize the channels between the BS and the RIS, between the BS and user \(k\), and between the RIS and user \(k\), respectively; and \(z_{k}\) denotes the additive white Gaussian noise (AWGN) at user \(k\) with zero mean and variance \(\sigma^{2}\).

Fig. 1: The downlink transmission in an active RIS aided MU-MISO system.

## III Performance Analysis

In this section, we analyze the performance of active RISs to reveal their notable capacity gains compared to passive RISs. To make the problem analytically tractable and obtain insightful results, in this section we consider a single-user single-input single-output (SU-SISO) system with \(M=1\) BS antenna and \(K=1\) user, while the general MU-MISO case is studied in Section IV.

### _Asymptotic SNR for Passive RISs and Active RISs_

To illustrate the capacity gain provided by passive/active RIS aided reflected links, for the moment, we ignore the direct link by setting \(\mathbf{h}_{k}\) to zero, as was done in, e.g., [6]. Furthermore, for simplicity, we assume that each active RIS element has the same amplification factor (i.e., \(p_{n}:=p\)). For a fair comparison with the asymptotic performance of passive RISs, similar to [6], we assume Rayleigh-fading channels. We first redefine the BS-RIS channel matrix and the RIS-user channel vector as \(\mathbf{G}:=\mathbf{g}\in\mathbb{C}^{N\times 1}\) and \(\mathbf{f}_{k}:=\mathbf{f}\in\mathbb{C}^{N\times 1}\), respectively. Then, we recall the following lemma from [6] for the asymptotic SNR achieved by passive RISs.
**Lemma 1 (Asymptotic SNR for passive RISs):** Assuming \(\mathbf{f}\sim\mathcal{CN}\left(\mathbf{0}_{N},\varrho_{f}^{2}\mathbf{I}_{N}\right)\), \(\mathbf{g}\sim\mathcal{CN}\left(\mathbf{0}_{N},\varrho_{g}^{2}\mathbf{I}_{N}\right)\) and letting \(N\rightarrow\infty\), the asymptotic SNR \(\gamma_{\text{passive}}\) of a passive RIS aided SU-SISO system is given by

\[\gamma_{\text{passive}}\to N^{2}\frac{P_{\text{BS}}^{\max}\pi^{2}\varrho_{f}^{2}\varrho_{g}^{2}}{16\sigma^{2}}, \tag{4}\]

where \(P_{\text{BS}}^{\max}\) denotes the maximum transmit power at the BS.

_Proof:_ The proof can be found in [6, Proposition 2].

For comparison, under the same transmission conditions, we provide the asymptotic SNR of an active RIS aided SU-SISO system in the following lemma.

**Lemma 2 (Asymptotic SNR for active RISs):** Assuming \(\mathbf{f}\sim\mathcal{CN}\left(\mathbf{0}_{N},\varrho_{f}^{2}\mathbf{I}_{N}\right)\), \(\mathbf{g}\sim\mathcal{CN}\left(\mathbf{0}_{N},\varrho_{g}^{2}\mathbf{I}_{N}\right)\) and letting \(N\rightarrow\infty\), the asymptotic SNR \(\gamma_{\text{active}}\) of an active RIS aided SU-SISO system is given by

\[\gamma_{\text{active}}\to N\frac{P_{\text{BS}}^{\max}P_{\text{A}}^{\max}\pi^{2}\varrho_{f}^{2}\varrho_{g}^{2}}{16\left(P_{\text{A}}^{\max}\sigma_{v}^{2}\varrho_{f}^{2}+P_{\text{BS}}^{\max}\sigma^{2}\varrho_{g}^{2}+\sigma^{2}\sigma_{v}^{2}\right)}, \tag{5}\]

where \(P_{\text{A}}^{\max}\) denotes the maximum reflect power of the active RIS.

_Proof:_ Please see the journal version [4, Appendix A].

**Remark 1:** From (5) we observe that the asymptotic SNR of an active RIS aided SU-SISO system depends on both the BS transmit power \(P_{\text{BS}}^{\max}\) and the reflect power of the active RIS \(P_{\text{A}}^{\max}\). When \(P_{\text{BS}}^{\max}\rightarrow\infty\), the asymptotic SNR is upper-bounded by \(\gamma_{\text{active}}\to NP_{\text{A}}^{\max}\pi^{2}\varrho_{f}^{2}/\left(16\sigma^{2}\right)\), which is independent of the BS-RIS channel \(\mathbf{g}\) and the noise power at the active RIS \(\sigma_{v}^{2}\). Similarly, if \(P_{\text{A}}^{\max}\rightarrow\infty\), the asymptotic SNR is upper-bounded by \(\gamma_{\text{active}}\to NP_{\text{BS}}^{\max}\pi^{2}\varrho_{g}^{2}/\left(16\sigma_{v}^{2}\right)\), which is independent of the RIS-user channel \(\mathbf{f}\) and the noise power at the user \(\sigma^{2}\). These results reveal that, to increase the sum-rate of active RIS aided systems, the negative impact of a weak \(\mathbf{g}\) and a large \(\sigma_{v}^{2}\) on the system performance can be alleviated by increasing the BS transmit power \(P_{\text{BS}}^{\max}\), and the negative impact of a weak \(\mathbf{f}\) and a large \(\sigma^{2}\) can be reduced by increasing the reflect power of the active RIS \(P_{\text{A}}^{\max}\).

### _Comparisons between Passive RISs and Active RISs_

We can observe from _Lemma 1_ and _Lemma 2_ that, compared to the asymptotic SNR for passive RISs \(\gamma_{\text{passive}}\) in (4), which is proportional to \(N^{2}\), the asymptotic SNR for active RISs \(\gamma_{\text{active}}\) in (5) is proportional to \(N\) due to the noise introduced by the use of active components. At first glance, it may seem that the SNR achieved by passive RISs \(\gamma_{\text{passive}}\) always exceeds the SNR achieved by active RISs \(\gamma_{\text{active}}\). However, this is actually not the case.
The reason behind this counterintuitive behavior is that, due to the large path loss caused by the "multiplicative fading" effect and thanks to the use of the reflection-type amplifiers in active RISs, only when \(N\) is unaffordably large can passive RISs outperform active RISs. To illustrate this claim, let us consider two different SU-SISO systems, which are aided by an active RIS and a passive RIS, respectively. Then, the following lemma specifies the condition that has to be met for passive RISs to outperform active RISs.

**Lemma 3 (Case when passive RISs outperform active RISs):** Assuming the number of RIS elements \(N\) is large, the required number of elements \(N\) for a passive RIS to outperform an active RIS has to satisfy

\[N\geq\frac{P_{\text{BS-A}}^{\max}}{P_{\text{BS-P}}^{\max}}\frac{P_{\text{A}}^{\max}\sigma^{2}}{\left(P_{\text{A}}^{\max}\sigma_{v}^{2}\varrho_{f}^{2}+P_{\text{BS-A}}^{\max}\sigma^{2}\varrho_{g}^{2}+\sigma^{2}\sigma_{v}^{2}\right)}, \tag{6}\]

where \(P_{\text{BS-A}}^{\max}\) denotes the maximum BS transmit power for the active RIS aided system and \(P_{\text{BS-P}}^{\max}\) denotes that for the passive RIS aided system.

_Proof:_ Please see the journal version [4, Appendix B].

Next, we consider a specific setup to compare the user's achievable SNRs in the above two systems. For a fair comparison, we constrain the total power consumption of the two systems to \(2\) W by setting \(P_{\text{BS-P}}^{\max}=2\) W for the passive RIS aided system and \(P_{\text{BS-A}}^{\max}=P_{\text{A}}^{\max}=1\) W for the active RIS aided system, respectively. Therefore, when \(\sigma^{2}=\sigma_{v}^{2}=-70\) dBm and \(\varrho_{f}^{2}=\varrho_{g}^{2}=-70\) dB, the required number of elements \(N\) for the passive RIS to outperform the active RIS is \(2.5\times 10^{6}\) according to (6), which is impractical to realize with current technology. Conversely, for a more practical number of elements of \(N=256\), according to (4) and (5), the SNR achieved by the passive RIS is \(\gamma_{\text{passive}}\approx 9.0\) dB, while the SNR achieved by the active RIS is \(\gamma_{\text{active}}\approx 49.0\) dB, which is about \(10,000\) times higher than \(\gamma_{\text{passive}}\).

## IV Joint Transmit Precoding and Reflect Beamforming Design

To investigate the capacity gain enabled by the use of active RISs in typical wireless communication scenarios, in this section we consider more general MU-MISO systems. According to the model in (3), the signal-to-interference-plus-noise ratio (SINR) at user \(k\) can be obtained as

\[\gamma_{k}=\frac{\left|\overline{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{w}_{k}\right|^{2}}{\sum_{j=1,j\neq k}^{K}\left|\overline{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{w}_{j}\right|^{2}+\left\|\mathbf{f}_{k}^{\mathrm{H}}\mathbf{P}\mathbf{\Theta}\right\|^{2}\sigma_{v}^{2}+\sigma^{2}}, \tag{7}\]

wherein \(\overline{\mathbf{h}}_{k}^{\mathrm{H}}=\mathbf{h}_{k}^{\mathrm{H}}+\mathbf{f}_{k}^{\mathrm{H}}\mathbf{P}\mathbf{\Theta}\mathbf{G}\in\mathbb{C}^{1\times M}\) is the equivalent channel from the BS to user \(k\), which includes both the direct link and the reflected link.
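The SINR expression (7) maps directly into code. The sketch below assumes channels stored column-wise, which is our own convention rather than the paper's:

```python
import numpy as np

def sinr_user_k(k, W, H, F, G, Psi, sigma_v2, sigma2):
    """SINR (7) of user k. W: (M, K) precoders; H: (M, K) direct channels h_k;
    F: (N, K) RIS-user channels f_k; G: (N, M) BS-RIS channel; Psi = P @ Theta."""
    h_eff = np.conj(H[:, k]) + np.conj(F[:, k]) @ Psi @ G  # equivalent channel
    signal = np.abs(h_eff @ W[:, k]) ** 2
    interference = sum(np.abs(h_eff @ W[:, j]) ** 2
                       for j in range(W.shape[1]) if j != k)
    ris_noise = np.linalg.norm(np.conj(F[:, k]) @ Psi) ** 2 * sigma_v2
    return signal / (interference + ris_noise + sigma2)
```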
Therefore, the original problem of sum-rate maximization, subject to the power constraints at the BS and the active RIS, can be formulated as follows:

\[\mathcal{P}_{o}:\ \max_{\mathbf{w},\mathbf{\Theta},\mathbf{P}}\ R_{\mathrm{sum}}=\sum\nolimits_{k=1}^{K}\log_{2}\left(1+\gamma_{k}\right), \tag{8a}\]
\[\mathrm{s.t.}\ \mathrm{C}_{1}:\sum\nolimits_{k=1}^{K}\left\|\mathbf{w}_{k}\right\|^{2}\leq P_{\text{BS}}^{\max}, \tag{8b}\]
\[\mathrm{C}_{2}:\sum\nolimits_{k=1}^{K}\left\|\mathbf{P}\mathbf{\Theta}\mathbf{G}\mathbf{w}_{k}\right\|^{2}+\left\|\mathbf{P}\mathbf{\Theta}\right\|^{2}\sigma_{v}^{2}\leq P_{\text{A}}^{\max}, \tag{8c}\]

where \(\mathbf{w}:=\left[\mathbf{w}_{1}^{\mathrm{T}},\cdots,\mathbf{w}_{K}^{\mathrm{T}}\right]^{\mathrm{T}}\) is the overall transmit precoding vector for the \(K\) users, and \(\mathrm{C}_{1}\) and \(\mathrm{C}_{2}\) are the power constraints at the BS and the active RIS, respectively. Due to the non-convexity and the highly coupled variables in problem \(\mathcal{P}_{o}\) in (8), the joint design of \(\mathbf{w}\), \(\mathbf{P}\), and \(\mathbf{\Theta}\) is challenging. To efficiently solve this problem, we develop a joint precoding and beamforming algorithm based on alternating optimization and fractional programming (FP).

Note that \(\mathbf{P}\) and \(\mathbf{\Theta}\) always appear in product form in problem \(\mathcal{P}_{o}\) in (8). Therefore, \(\mathbf{P}\) and \(\mathbf{\Theta}\) can be merged as \(\mathbf{\Psi}=\mathbf{P}\mathbf{\Theta}=\mathrm{diag}\left(p_{1}e^{j\theta_{1}},\cdots,p_{N}e^{j\theta_{N}}\right)\in\mathbb{C}^{N\times N}\). We refer to \(\mathbf{\Psi}\) as the RIS beamforming matrix. Next, to deal with the non-convex sum-of-logarithms and fractions in (8), we exploit the FP methods proposed in [7] to decouple the variables in problem \(\mathcal{P}_{o}\) in (8). This leads to the following lemma.

**Lemma 4 (Equivalent problem for sum-rate maximization):** _By introducing auxiliary variables \(\boldsymbol{\rho}:=[\rho_{1},\cdots,\rho_{K}]\in\mathbb{R}^{K}\) and \(\boldsymbol{\varpi}:=[\varpi_{1},\cdots,\varpi_{K}]\in\mathbb{C}^{K}\), the original problem \(\mathcal{P}_{o}\) in (8) can be equivalently reformulated as_

\[\mathcal{P}_{1}:\ \max_{\mathbf{w},\mathbf{\Psi},\boldsymbol{\rho},\boldsymbol{\varpi}}\ \sum\nolimits_{k=1}^{K}\log_{2}\left(1+\rho_{k}\right)-\sum\nolimits_{k=1}^{K}\rho_{k}+\sum\nolimits_{k=1}^{K}g(\mathbf{w},\mathbf{\Psi},\rho_{k},\varpi_{k}),\quad\mathrm{s.t.}\ \mathrm{C}_{1},\mathrm{C}_{2}, \tag{9}\]

_where the function \(g(\mathbf{w},\mathbf{\Psi},\rho_{k},\varpi_{k})\) is defined as_

\[g(\mathbf{w},\mathbf{\Psi},\rho_{k},\varpi_{k})=2\sqrt{(1+\rho_{k})}\,\mathfrak{R}\left\{\varpi_{k}^{*}\overline{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{w}_{k}\right\}-\left|\varpi_{k}\right|^{2}\left(\sum\nolimits_{j=1}^{K}\left|\overline{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{w}_{j}\right|^{2}+\left\|\mathbf{f}_{k}^{\mathrm{H}}\mathbf{\Psi}\right\|^{2}\sigma_{v}^{2}+\sigma^{2}\right). \tag{10}\]

_Proof:_ A constructive proof can be found in [7, Subsection III-C].

Strong convergence of the FP methods was proved in [7]. Thus, a locally optimal solution to (9) can be obtained by alternately optimizing the variables. For clarity, we summarize the proposed joint precoding and beamforming algorithm in **Algorithm 1**, and the specific solutions for the variables \(\mathbf{w}\), \(\mathbf{\Psi}\), \(\boldsymbol{\rho}\), and \(\boldsymbol{\varpi}\) are given in the following four steps, respectively.

**Algorithm 1** Proposed joint transmit precoding and reflect beamforming algorithm

```
Input: Channels \(\mathbf{G}\), \(\mathbf{h}_{k}\), and \(\mathbf{f}_{k}\), \(\forall k\in\{1,\cdots,K\}\).
Output: Optimized BS precoding vector \(\mathbf{w}\), amplification factor matrix of the active RIS \(\mathbf{P}\), phase shift matrix of the active RIS \(\mathbf{\Theta}\), and sum-rate \(R_{\mathrm{sum}}\).
1: Randomly initialize \(\mathbf{w}\), \(\mathbf{P}\), and \(\mathbf{\Theta}\);
2: while no convergence of \(R_{\mathrm{sum}}\) do
3:   Update \(\boldsymbol{\rho}\) by (11);
4:   Update \(\boldsymbol{\varpi}\) by (12);
5:   Update \(\mathbf{w}\) by solving problem \(\mathcal{P}_{2}\) in (14);
6:   Update \(\mathbf{\Psi}\) by solving problem \(\mathcal{P}_{3}\) in (15);
7: end while
8: Obtain \(\mathbf{P}\) and \(\mathbf{\Theta}\) from \(\mathbf{\Psi}\);
9: return Optimized \(\mathbf{w}\), \(\mathbf{P}\), \(\mathbf{\Theta}\), and \(R_{\mathrm{sum}}\).
```

#### IV-1 Fix \((\mathbf{w},\mathbf{\Psi},\boldsymbol{\varpi})\) and optimize \(\boldsymbol{\rho}\)

After fixing the precoding vector \(\mathbf{w}\), the beamforming matrix \(\mathbf{\Psi}\), and the auxiliary variable \(\boldsymbol{\varpi}\), the optimal \(\boldsymbol{\rho}\) can be obtained by solving \(\frac{\partial R_{\mathrm{sum}}^{\prime}}{\partial\rho_{k}}=0\) as

\[\rho_{k}^{\mathrm{opt}}=\frac{\xi_{k}^{2}+\xi_{k}\sqrt{\xi_{k}^{2}+4}}{2},\quad\forall k\in\{1,\cdots,K\}, \tag{11}\]

where \(\xi_{k}=\Re\left\{\varpi_{k}^{*}\overline{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{w}_{k}\right\}\).

#### IV-2 Fix \((\mathbf{w},\mathbf{\Psi},\boldsymbol{\rho})\) and optimize \(\boldsymbol{\varpi}\)

After fixing the precoding vector \(\mathbf{w}\), the beamforming matrix \(\mathbf{\Psi}\), and the auxiliary variable \(\boldsymbol{\rho}\), the optimal \(\boldsymbol{\varpi}\) can be derived by solving \(\frac{\partial R_{\mathrm{sum}}^{\prime}}{\partial\varpi_{k}}=0\) as

\[\varpi_{k}^{\mathrm{opt}}=\frac{\sqrt{(1+\rho_{k})}\,\overline{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{w}_{k}}{\sum\nolimits_{j=1}^{K}\left|\overline{\mathbf{h}}_{k}^{\mathrm{H}}\mathbf{w}_{j}\right|^{2}+\left\|\mathbf{f}_{k}^{\mathrm{H}}\mathbf{\Psi}\right\|^{2}\sigma_{v}^{2}+\sigma^{2}}. \tag{12}\]

#### IV-3 Fix \((\mathbf{\Psi},\boldsymbol{\rho},\boldsymbol{\varpi})\) and optimize \(\mathbf{w}\)

To simplify the notation, we first introduce the following definitions:

\[\mathbf{b}_{k}^{\mathrm{H}}=\sqrt{(1+\rho_{k})}\,\varpi_{k}^{*}\overline{\mathbf{h}}_{k}^{\mathrm{H}},\quad\mathbf{b}=\left[\mathbf{b}_{1}^{\mathrm{T}},\mathbf{b}_{2}^{\mathrm{T}},\cdots,\mathbf{b}_{K}^{\mathrm{T}}\right]^{\mathrm{T}}, \tag{13a}\]
\[\mathbf{A}=\mathbf{I}_{K}\otimes\sum\nolimits_{k=1}^{K}\left|\varpi_{k}\right|^{2}\overline{\mathbf{h}}_{k}\overline{\mathbf{h}}_{k}^{\mathrm{H}},\quad\mathbf{\Xi}=\mathbf{I}_{K}\otimes\left(\mathbf{G}^{\mathrm{H}}\mathbf{\Psi}^{\mathrm{H}}\mathbf{\Psi}\mathbf{G}\right), \tag{13b}\]
\[P_{\mathrm{m}}^{\max}=P_{\mathrm{A}}^{\max}-\left\|\mathbf{\Psi}\right\|^{2}\sigma_{v}^{2}, \tag{13c}\]

where \(\otimes\) denotes the Kronecker product. Then, problem \(\mathcal{P}_{1}\) in (9) can be reformulated as follows:

\[\mathcal{P}_{2}:\ \max_{\mathbf{w}}\ \mathfrak{R}\left\{2\mathbf{b}^{\mathrm{H}}\mathbf{w}\right\}-\mathbf{w}^{\mathrm{H}}\mathbf{A}\mathbf{w},\quad\mathrm{s.t.}\ \mathrm{C}_{1}:\left\|\mathbf{w}\right\|^{2}\leq P_{\text{BS}}^{\max},\ \ \mathrm{C}_{2}:\mathbf{w}^{\mathrm{H}}\mathbf{\Xi}\mathbf{w}\leq P_{\mathrm{m}}^{\max}. \tag{14}\]

Problem \(\mathcal{P}_{2}\) in (14) is a standard quadratically constrained quadratic programming (QCQP) problem; thus, the optimal solution \(\mathbf{w}^{\mathrm{opt}}\) can be obtained by adopting the alternating direction method of multipliers (ADMM).

#### IV-4 Fix \((\mathbf{w},\boldsymbol{\rho},\boldsymbol{\varpi})\) and optimize \(\mathbf{\Psi}\)

Let \(\boldsymbol{\psi}:=\left[p_{1}e^{j\theta_{1}},\cdots,p_{N}e^{j\theta_{N}}\right]^{\mathrm{T}}\in\mathbb{C}^{N}\) collect the diagonal entries of \(\mathbf{\Psi}\). With the other variables fixed, problem \(\mathcal{P}_{1}\) in (9) can be rewritten as

\[\mathcal{P}_{3}:\ \max_{\boldsymbol{\psi}}\ \mathfrak{R}\left\{2\boldsymbol{\upsilon}^{\mathrm{H}}\boldsymbol{\psi}\right\}-\boldsymbol{\psi}^{\mathrm{H}}\boldsymbol{\Omega}\boldsymbol{\psi},\quad\mathrm{s.t.}\ \boldsymbol{\psi}^{\mathrm{H}}\boldsymbol{\Pi}\boldsymbol{\psi}\leq P_{\mathrm{A}}^{\max}, \tag{15}\]

wherein

\[\boldsymbol{\upsilon}=\sum\nolimits_{k=1}^{K}\sqrt{(1+\rho_{k})}\,\mathrm{diag}\left(\varpi_{k}^{*}\mathbf{f}_{k}^{\mathrm{H}}\right)\mathbf{G}\mathbf{w}_{k}-\sum\nolimits_{k=1}^{K}\left|\varpi_{k}\right|^{2}\mathrm{diag}\left(\mathbf{f}_{k}^{\mathrm{H}}\right)\mathbf{G}\sum\nolimits_{j=1}^{K}\mathbf{w}_{j}\mathbf{w}_{j}^{\mathrm{H}}\mathbf{h}_{k}, \tag{16a}\]
\[\boldsymbol{\Omega}=\sum\nolimits_{k=1}^{K}\left|\varpi_{k}\right|^{2}\mathrm{diag}\left(\mathbf{f}_{k}^{\mathrm{H}}\right)\mathrm{diag}\left(\mathbf{f}_{k}\right)\sigma_{v}^{2}+\sum\nolimits_{k=1}^{K}\left|\varpi_{k}\right|^{2}\sum\nolimits_{j=1}^{K}\mathrm{diag}\left(\mathbf{f}_{k}^{\mathrm{H}}\right)\mathbf{G}\mathbf{w}_{j}\mathbf{w}_{j}^{\mathrm{H}}\mathbf{G}^{\mathrm{H}}\mathrm{diag}\left(\mathbf{f}_{k}\right), \tag{16b}\]
\[\boldsymbol{\Pi}=\sum\nolimits_{k=1}^{K}\mathrm{diag}\left(\mathbf{G}\mathbf{w}_{k}\right)\left(\mathrm{diag}\left(\mathbf{G}\mathbf{w}_{k}\right)\right)^{\mathrm{H}}+\sigma_{v}^{2}\mathbf{I}_{N}. \tag{16c}\]

Note that problem \(\mathcal{P}_{3}\) in (15) is also a standard QCQP problem. Thus, the optimal solution \(\boldsymbol{\psi}^{\mathrm{opt}}\) can be obtained by adopting ADMM.
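To make the structure of Algorithm 1 concrete, the sketch below wires together the closed-form updates (11) and (12) and delegates the two QCQPs (14) and (15) to caller-supplied solvers standing in for ADMM. The array conventions, initialization, and fixed iteration count are our own simplifications:

```python
import numpy as np

def algorithm_1(G, H, F, solve_P2, solve_P3, sigma_v2, sigma2,
                n_iters=50, seed=0):
    """Skeleton of Algorithm 1. G: (N, M) BS-RIS channel; H: (M, K) direct
    channels h_k as columns; F: (N, K) RIS-user channels f_k as columns.
    solve_P2 / solve_P3 stand in for the ADMM solvers of (14) and (15)."""
    rng = np.random.default_rng(seed)
    (N, M), K = G.shape, H.shape[1]
    W = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) * 0.1
    psi = np.exp(1j * rng.uniform(0, 2 * np.pi, N))  # diagonal of Psi = P @ Theta
    varpi = np.ones(K, dtype=complex)

    for _ in range(n_iters):
        Hbar = H.conj().T + (F.conj().T * psi) @ G   # row k is hbar_k^H, (K, M)
        Y = Hbar @ W                                  # Y[k, j] = hbar_k^H w_j
        ris = (np.abs(F.conj().T) ** 2 * np.abs(psi) ** 2).sum(1) * sigma_v2
        denom = (np.abs(Y) ** 2).sum(axis=1) + ris + sigma2

        xi = np.real(np.conj(varpi) * np.diag(Y))       # update rho by (11)
        rho = (xi ** 2 + xi * np.sqrt(xi ** 2 + 4)) / 2
        varpi = np.sqrt(1 + rho) * np.diag(Y) / denom   # update varpi by (12)

        W = solve_P2(Hbar, rho, varpi)                  # QCQP (14)
        psi = solve_P3(W, rho, varpi)                   # QCQP (15)
    return W, psi
```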
## V Validation Results

### _Validation Results for Signal Model_

To validate the signal model (2), we designed and fabricated an active RIS element with an integrated reflection-type amplifier for experimental measurements in [8]. Note that this design can be directly extended to the large-array case. Particularly, since the phase-shifting ability of RISs has been widely verified, we focus on studying the reflection gain and the noise introduced by an active RIS element. Thus, the validation of signal model (2) is equivalent to validating

\[P_{y}=\underbrace{GP_{x}}_{\text{Desired-signal power}}+\underbrace{G\sigma_{v}^{2}+\sigma_{s}^{2}}_{\text{Noise power}}, \tag{17}\]

where \(P_{y}\) is the power of the signal reflected by the active RIS element; \(P_{x}\) is the power of the incident signal; \(G:=p^{2}\) is the reflection gain of the active RIS element; and \(G\sigma_{v}^{2}\) and \(\sigma_{s}^{2}\) are the powers of the dynamic noise and static noise introduced by the active RIS element, respectively.

#### V-A1 Hardware platform

To validate the model in (17), we first establish the hardware platform used for our experimental measurements, shown in Fig. 2. Due to space constraints, we refer the reader to the journal version of this paper [4, Fig. 4] for detailed information about the hardware platform.

#### V-A2 Reflection gain measurement

Using the measurement system for the reflection gain depicted in [4, Fig. 4 (b)], we first investigate the reflection gain \(G\) of the active RIS element. Note that the reflection gain \(G\) can be reconfigured by the input power of the pump source \(P_{\mathrm{p}}\). By setting the input power of the vector network analyzer to \(P_{x}=-50\) dBm, the reflection gain \(G\) as a function of the signal frequency can be directly measured.

Fig. 2: The experimental devices and environment used for validating the signal model (2) of active RISs.

Fig. 3: Experimental measurement result for reflection gain \(G\) versus signal frequency \(f\).
Then, in Fig. 3, we show the measurement results for the reflection gain \(G\) as a function of the signal frequency \(f\) for different input powers of the pump source \(P_{\rm p}\). We observe that the active RIS element can achieve a reflection gain \(G\) of more than 25 dB when \(P_{\rm p}=18.24\) dBm, which confirms the significant reflection gains enabled by active RISs.

#### V-A3 Noise power measurement

We further study the noise power introduced by the active RIS element, i.e., \(G\sigma_{v}^{2}+\sigma_{s}^{2}\) in (17), where \(G\sigma_{v}^{2}\) and \(\sigma_{s}^{2}\) are the powers of the dynamic noise and the static noise introduced at the active RIS element, respectively. Using the noise measurement system in [4, Fig. 4 (c)], we show the measurement results for the spectral density of the noise power \(G\sigma_{v}^{2}+\sigma_{s}^{2}\) as a function of \(G\) for different operating frequencies in Fig. 4. We can observe that the noise power increases nearly linearly with \(G\), which verifies the noise model \(G\sigma_{v}^{2}+\sigma_{s}^{2}\) in (17). Particularly, for \(f=2.3601\) GHz, the spectral density of \(\sigma_{s}^{2}\) is about \(-174\) dBm/Hz, while that of \(\sigma_{v}^{2}\) is about \(-160\) dBm/Hz, i.e., about 14 dB higher. The reason for this is that the input noise is amplified by the noise factor, and additional noise is also introduced by the other active components, such as the DC source used to power the active RIS.

### _Simulation Results for Sum-Rate_

#### V-B1 Simulation setup

We consider an active RIS aided MU-MISO system. Particularly, we consider two scenarios with different channel conditions. In scenario 1, the direct link is weak due to severe obstruction, while in scenario 2 the direct link is strong. Specifically, two different path loss models from the 3GPP TS 36.814 standard are utilized to characterize the large-scale fading of the channels: i) \(\mathrm{PL}_{s}=37.3+22.0\log d\); ii) \(\mathrm{PL}_{w}=41.2+28.7\log d\), where \(d\) is the distance between two devices. Path loss model \(\mathrm{PL}_{w}\) is used to generate the weak BS-user link in scenario 1, while \(\mathrm{PL}_{s}\) is used to generate the strong BS-user link in scenario 2. For both scenarios, \(\mathrm{PL}_{s}\) is used to generate the BS-RIS and the RIS-user channels. To account for small-scale fading, we adopt the Ricean fading channel model for all channels involved, and we set the Ricean factor to \(\kappa=1\). The BS and the active/passive RIS are located at (0, -60 m) and (200 m, 30 m), respectively. Four users are randomly located in a circle of radius 5 m centered at (200 m, 0). The numbers of BS antennas and RIS elements are set to \(M=4\) and \(N=256\), respectively. The noise power is set to \(\sigma^{2}=\sigma_{v}^{2}=-70\) dBm. For a fair comparison, we constrain the total power consumption \(P^{\rm max}:=P_{\rm BS}^{\rm max}+P_{\rm A}^{\rm max}\). For the active RIS, **Algorithm 1** is employed for the joint precoding and beamforming design, while for the passive RIS, the algorithm from [2] is adopted.
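The large-scale and small-scale fading models above combine into a simple channel generator. The sketch below interprets \(\log\) as \(\log_{10}\) of the distance in meters and uses an all-ones LOS component, both of which are our simplifying assumptions:

```python
import numpy as np

def path_loss_db(d_m: float, strong: bool = True) -> float:
    """3GPP TS 36.814 models quoted above: PL_s = 37.3 + 22.0 log10(d),
    PL_w = 41.2 + 28.7 log10(d)."""
    return (37.3 + 22.0 * np.log10(d_m)) if strong else (41.2 + 28.7 * np.log10(d_m))

def ricean_channel(shape, pl_db: float, kappa: float = 1.0, seed: int = 0):
    """Ricean small-scale fading with factor kappa, scaled by the path loss."""
    rng = np.random.default_rng(seed)
    nlos = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
    los = np.ones(shape)  # simplified LOS steering component
    h = np.sqrt(kappa / (1 + kappa)) * los + np.sqrt(1 / (1 + kappa)) * nlos
    return 10 ** (-pl_db / 20) * h

# Example: BS at (0, -60), RIS at (200, 30) -> BS-RIS distance and channel G.
d = np.hypot(200 - 0, 30 - (-60))
G = ricean_channel((256, 4), path_loss_db(d, strong=True))
```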
#### V-B2 Simulation results

In Fig. 5 and Fig. 6, we plot the sum-rate versus the total consumed power \(P^{\rm max}\) for the two considered scenarios, where the direct link is weak and strong, respectively. Firstly, in scenario 1 with a weak direct link, the passive RIS can indeed achieve a performance improvement, while the active RIS achieves a much higher sum-rate gain. Secondly, in scenario 2 with a strong direct link, the passive RIS achieves only a negligible sum-rate gain, while the active RIS still realizes a noticeable sum-rate gain. For example, when \(P^{\rm max}=10\) dBW, the capacities without RIS, with the passive RIS, and with the active RIS in scenario 1 are 5.34 bps/Hz, 7.00 bps/Hz, and 32.41 bps/Hz, respectively, while in scenario 2, these values are 19.87 bps/Hz, 20.51 bps/Hz, and 32.18 bps/Hz, respectively. In this case, the passive RIS provides a 31% gain in scenario 1 and a negligible 3% gain in scenario 2. By contrast, the active RIS achieves noticeable sum-rate gains of 507% in scenario 1 and 62% in scenario 2, which are much higher than those achieved by the passive RIS.

Fig. 5: Simulation results for the sum-rate versus total power consumption \(P^{\rm max}\) in scenario 1 with a weak direct link.

Fig. 6: Simulation results for the sum-rate versus total power consumption \(P^{\rm max}\) in scenario 2 with a strong direct link.

### _Field Test for a 64-Element Active RIS Aided Wireless Communication Prototype_

#### V-C1 64-element active RIS aided communication prototype

To validate the significant gain of active RISs, we develop a 64-element active RIS aided wireless communication prototype, as shown in Fig. 7. Specifically, the hardware structure of this prototype consists of three parts: a BS, a 64-element active RIS, and a user. For the BS and the user, two horn antennas with 13 dBi antenna gain are used to transmit and receive the signals, and universal software radio peripherals (USRPs) are deployed to generate and process the baseband and RF signals (hardware version: USRP-2953R). The 64-element active RIS, obtained by periodically extending the active RIS element designed in [8], is an 8\(\times\)8 planar array, each element of which has a reflection gain of \(G=10\) dB.

#### V-C2 Experimental environment

Based on the developed prototype, we establish the experimental environment for further validation. To match the transceivers, we configure the operating frequency of the active RIS to \(f=3.5\) GHz and the bandwidth to 40 MHz by adjusting the circuit impedance of the active elements. The polarization of the antenna at the BS and that at the user are selected as vertical and horizontal, respectively. The transmit power is set to \(-10\) dBm. We fix the heights of the BS, the RIS, and the user at 1 m. The horizontal distance of the BS-RIS link and that of the RIS-user link are set to 2 m and 3.5 m, respectively. The angle of arrival (AoA) at the active RIS is fixed at \(0^{\circ}\), and the angle of departure (AoD) will be specified to evaluate the performance gain of active RISs at different orientations. To observe the reflection gain of the active RIS, we use a metal plate with the same aperture size as the active RIS for performance comparison.

#### V-C3 Experimental results

By moving the user to different AoDs and configuring the phase shifts of the active RIS with a discrete Fourier transform (DFT) codebook, we obtain the experimental results shown in Table I. One can observe that, compared with the received power for the metal plate, the active RIS can always achieve a gain of about \(10\) dB. The data rate for the active RIS holds at about 30 Mbps, while that for the metal plate only ranges from 1 Mbps to 2 Mbps.
The reason is that the beamforming at the active RIS shapes the reflected beam with both a high array gain and a high reflection gain, while the metal plate can only reflect the incident signals randomly, without in-phase combining or amplification. This validates the significant gain of active RISs.

## VI Conclusions

In this paper, we have studied the concept of active RISs to overcome the fundamental limitation of the "multiplicative fading" effect. Firstly, we have verified the signal model of active RISs through experimental measurements on a fabricated active RIS element. Based on the verified signal model, we have formulated the sum-rate maximization problem for an active RIS aided MU-MISO system, and a joint precoding and beamforming algorithm has been proposed to solve this problem. Simulation results have shown that, in a typical application scenario, the existing passive RIS can realize only a negligible sum-rate gain of about 3%, while the active RIS can achieve a substantial sum-rate gain of about 62%, thus indeed overcoming the "multiplicative fading" effect. Finally, we have developed a wireless communication prototype aided by a 64-element active RIS, and the significant gain of active RISs has been validated by field tests. In the future, many research directions for active RISs are worth pursuing, such as hardware design, prototype development, channel estimation, and energy efficiency analysis.

## Acknowledgment

This work was supported in part by the National Key Research and Development Program of China (Grant No. 2020YFB1805005), in part by the National Natural Science Foundation of China (Grant No. 62031019), and in part by the European Commission through the H2020-MSCA-ITN META WIRELESS Research Project under Grant 956256.
2309.04525
Mapping dusty galaxy growth at $z>5$ with FRESCO: Detection of H$\alpha$ in submm galaxy HDF850.1 and the surrounding overdense structures
We report the detection of a 13$\sigma$ H$\alpha$ emission line from HDF850.1 at $z=5.188\pm0.001$ using the FRESCO NIRCam F444W grism observations. Detection of H$\alpha$ in HDF850.1 is noteworthy, given its high far-IR luminosity, substantial dust obscuration, and the historical challenges in deriving its redshift. HDF850.1 shows a clear detection in the F444W imaging data, distributed between a northern and southern component, mirroring that seen in [CII] from the Plateau de Bure Interferometer. Modeling the SED of each component separately, we find that the northern component has a higher mass, star formation rate (SFR), and dust extinction than the southern component. The observed H$\alpha$ emission appears to arise entirely from the less-obscured southern component and shows a similar $\Delta$v$\sim$+130 km/s velocity offset to that seen for [CII] relative to the source systemic redshift. Leveraging H$\alpha$-derived redshifts from FRESCO observations, we find that HDF850.1 is forming in one of the richest environments identified to date at $z>5$, with 100 $z=5.17-5.20$ galaxies distributed across 10 structures and a $\sim$(15 cMpc)$^3$ volume. Based on the evolution of analogous structures in cosmological simulations, the $z=5.17-5.20$ structures seem likely to collapse into a single $>$10$^{14}$ $M_{\odot}$ cluster by $z\sim0$. Comparing galaxy properties forming within this overdensity with those outside, we find the masses, SFRs, and $UV$ luminosities inside the overdensity to be clearly higher. The prominence of H$\alpha$ line emission from HDF850.1 and other known highly-obscured $z>5$ galaxies illustrates the potential of NIRCam-grism programs to map both the early build-up of IR-luminous galaxies and overdense structures.
Thomas Herard-Demanche, Rychard J. Bouwens, Pascal A. Oesch, Rohan P. Naidu, Roberto Decarli, Erica J. Nelson, Gabriel Brammer, Andrea Weibel, Mengyuan Xiao, Mauro Stefanon, Fabian Walter, Jorryt Matthee, Romain A. Meyer, Stijn Wuyts, Naveen Reddy, Pablo Arrabal Haro, Helmut Dannerbauer, Alice E. Shapley, John Chisholm, Pieter van Dokkum, Ivo Labbe, Garth Illingworth, Daniel Schaerer, Irene Shivaei
2023-09-08T18:00:02Z
http://arxiv.org/abs/2309.04525v1
Mapping dusty galaxy growth at \(z>5\) with FRESCO: Detection of H\(\alpha\) in submm galaxy HDF850.1 and the surrounding overdense structures ###### Abstract We report the detection of a 13\(\sigma\) H\(\alpha\) emission line from HDF850.1 at \(z=5.188\pm 0.001\) using the FRESCO NIRCam F444W grism observations. Detection of H\(\alpha\) in HDF850.1 is noteworthy, given its high far-IR luminosity, substantial dust obscuration, and the historical challenges in deriving its redshift. HDF850.1 shows a clear detection in the F444W imaging data, distributed between a northern and southern component, mirroring that seen in [CII] from the Plateau de Bure Interferometer. Modeling the SED of each component separately, we find that the northern component has a higher mass, star formation rate (SFR), and dust extinction than the southern component. The observed H\(\alpha\) emission appears to arise entirely from the less-obscured southern component and shows a similar \(\Delta\)v\(\sim\)+130 km/s velocity offset to that seen for [CII] relative to the source systemic redshift. Leveraging H\(\alpha\)-derived redshifts from FRESCO observations, we find that HDF850.1 is forming in one of the richest environments identified to date at \(z>5\), with 100 \(z=5.17\)-5.20 galaxies distributed across 10 structures and a \(\sim\)(15 cMpc)\({}^{3}\) volume. Based on the evolution of analogous structures in cosmological simulations, the \(z=5.17\)-5.20 structures seem likely to collapse into a single \(>\)10\({}^{14}\)\(M_{\odot}\) cluster by \(z\sim 0\). Comparing the properties of galaxies forming within this overdensity with those outside, we find the masses, SFRs, and _UV_ luminosities inside the overdensity to be clearly higher. The prominence of H\(\alpha\) line emission from HDF850.1 and other known highly-obscured \(z>5\) galaxies illustrates the potential of NIRCam-grism programs to map both the early build-up of IR-luminous galaxies and overdense structures. keywords: galaxies: evolution - galaxies: high-redshift - large scale structures, protoclusters ## 1 Introduction One of the most important frontiers in extragalactic astronomy has been understanding the formation and evolution of massive galaxies in the early universe. Despite the many significant insights that have been gained into both the build-up of the star formation rates (SFRs) and stellar masses of \(UV\)-bright galaxies from both space and ground-based telescopes (e.g., Madau & Dickinson, 2014; Davidzon et al., 2017; Stefanon et al., 2021; Bouwens et al., 2021; Harikane et al., 2022), it is essential that we achieve an equally complete census of star formation from obscured galaxies, given how significantly obscured star formation contributes to galaxies with masses \(>\)10\({}^{10}\)\(M_{\odot}\) (e.g. Reddy et al., 2006, 2008; Whitaker et al., 2017). In addition to the great strides made by ALMA in this area (e.g., Casey et al., 2021; Smail et al., 2021; Bouwens et al., 2022; Dayal et al., 2022), JWST is further revolutionizing this science area thanks to its sensitive near-IR photometric and spectroscopic capabilities to 5\(\mu\)m and beyond, sampling bright Balmer and Paschen series lines like H\(\alpha\), Pa\(\alpha\), and Pa\(\beta\) to \(z\sim 7\) (e.g. Helton et al., 2023; Alvarez-Marquez et al., 2023; Reddy et al., 2023). Through the identification of bright line emission from e.g.
H\(\alpha\) (e.g., the candidate \(z=5.58\) dusty galaxy shown in Oesch et al., 2023), one can assemble substantial spectroscopic samples of far-IR luminous, dusty star-forming galaxies in the early universe with JWST. Particularly useful for this endeavor are NIRCam grism observations that allow for a probe of H\(\alpha\) emission over the full morphology of sources over a \(\sim\)9 arcmin\({}^{2}\) NIRCam field, facilitating redshift determinations even when the escape of H\(\alpha\) occurs over just an isolated region. In this paper, we report on the successful detection of H\(\alpha\) emission from HDF850.1, one of the first sub-millimeter galaxies to be identified in the high-redshift universe, leveraging new grism observations from the First Reionization Era Spectroscopically Complete Observations (FRESCO: Oesch et al., 2023) program. HDF850.1 was initially identified using sensitive sub-millimeter observations over the Hubble Deep Field (HDF) North with the SCUBA camera (Hughes et al., 1998), but no optical counterpart could be identified in the deepest HST images available at the time. Based on the available information, HDF850.1 appeared to have a redshift \(z>2\), implying that the star formation rate of this single source could rival the total star formation activity of all coeval galaxies in the HDF combined. This discovery triggered a \(>\)10-year effort to improve the location of the source from the coarse position provided by the 17\({}^{\prime\prime}\) FWHM SCUBA beam, to search for possible counterparts, and to pin down its redshift (Richards, 1999; Downes et al., 1999; Dunlop et al., 2004; Wagg et al., 2007; Cowie et al., 2009). A redshift for HDF850.1 was finally obtained through a millimeter line-scan obtained at the IRAM Plateau de Bure Interferometer (PdBI) by Walter et al. (2012), who detected multiple lines in the rest-frame sub-millimeter ([CII] as well as multiple CO lines), thus unambiguously pinpointing the redshift to \(z=5.183\). This study also revealed that the system is part of a fairly significant galaxy overdensity at these redshifts. Higher resolution imaging of the [CII] line emission from HDF850.1 (Neri et al., 2014) provided evidence that HDF850.1 is a merging system. In addition to enabling the detection of the H\(\alpha\) line from HDF850.1, the new NIRCam imaging observations from FRESCO allow us to measure the continuum flux from HDF850.1 in the rest-optical while facilitating a modeling of the overall SED of both components of the source. Moreover, the NIRCam grism observations allow for the identification of a large number of H\(\alpha\) emitters over a \(\sim\)62 arcmin\({}^{2}\) FRESCO area in the GOODS North field, allowing us to map out the \(z=5.17\)-5.20 overdensity much more extensively than had been possible in earlier work by Walter et al. (2012), Arrabal Haro et al. (2018), and Calvi et al. (2021). We organize the paper as follows. In §2, we provide a summary of the observational data we utilize for this analysis and describe our procedures for constructing multi-wavelength catalogs. In §3, we describe our discovery of a detection of H\(\alpha\) from HDF850.1, the photometry we derive for the two components of HDF850.1 and inferences based on SED modeling, and then present information on both the structure of the \(z=5.17\)-5.20 overdensity in which HDF850.1 resides, and on the characteristics of the large number of other member galaxies.
In §4, we briefly discuss the potential of NIRCam grism surveys for mapping out the build-up of massive galaxies in the early universe given the discovery of H\(\alpha\) emission from HDF850.1 and a significant fraction of other well-known far-IR luminous \(z>5\) galaxies over the GOODS-North field. In §5, we include a short summary of our primary results and provide a brief look to the future. For consistency with previous work, we express all quantities using the so-called concordance cosmology with \(\Omega_{m}=0.3\), \(\Omega_{\Lambda}=0.7\), and \(H_{0}=70\) km/s/Mpc. Stellar masses and SFRs are presented assuming a Chabrier (2003) IMF. All magnitudes are presented using the AB magnitude system (Oke & Gunn, 1983). ## 2 Data ### FRESCO NIRCam Grism and Imaging Data In this analysis we make use of NIRCam data obtained by the JWST FRESCO survey (GO-1895; see Oesch et al., 2023 for details). FRESCO covered both the GOODS-South and the GOODS-North fields with \(\sim\)62 arcmin\({}^{2}\) of NIRCam/grism spectroscopy in the F444W filter. This coverage is achieved with two 2\(\times\)4 mosaics of NIRCam/grism observations with significant column overlap in order to maximize the wavelength coverage over the field. Only the GRISMR was used due to overhead costs. The maximal wavelength coverage is from 3.8 to 5.0 \(\mu\)m, which is achieved over 73% of the full mosaic. The exposure times per pointing are 7043 s, resulting in an average 5\(\sigma\) line sensitivity of \(\sim\)2\(\times\)10\({}^{-18}\) erg s\({}^{-1}\) cm\({}^{-2}\) at a resolution of R\(\sim\)1600. The grism data of GOODS-N used here were obtained in February 2023. The NIRCam/grism observations are reduced using the publicly available grizli code1 (see also Brammer et al., in prep). Specifically, we start from the rate files that are obtained from the MAST archive. These are then aligned to a Gaia-matched reference frame. The direct images of a given visit are used to align the associated grism exposures. Before combination of the long-wavelength data, a custom bad-pixel map is applied, which masks residual bad pixels. Footnote 1: [https://github.com/brammer/grizli](https://github.com/brammer/grizli) Following Kashino et al. (2022), we use a median filtering technique to remove the continuum spectra for each row of the individual grism exposures. The filter uses a 12 pixel central gap, which avoids self-subtraction in the case of strong emission lines. After the first pass of filtering, pixels with significant line flux are identified and masked, before running the median filtering again. For each source of interest from the photometric catalog (see next section), we derive an extraction kernel based on the image morphology in the F444W band and the segmentation map. This kernel is used to perform an optimal extraction of 1D spectra from the aligned and combined grism exposures. We use slightly modified sensitivity curves and spectral traces based on the v4 grism configuration files2. Footnote 2: [https://github.com/mpirzkal/GRISMRCONF](https://github.com/mpirzkal/GRISMRCONF) In addition to the grism observations, FRESCO obtained imaging in the two short-wavelength filters F182M and F210M, as well as direct imaging in the long-wavelength filter F444W. The 5\(\sigma\) depths of these images are respectively 28.3, 28.1, and 28.2 mag, as measured in circular apertures of 0.32\({}^{\prime\prime}\) diameter.
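The row-wise, gap-aware median filtering described above is straightforward to sketch. The following is a minimal illustration rather than the grizli implementation; the window half-width is an assumed value, and only the first filtering pass is shown:

```python
import numpy as np

def filter_row(row, half_window=71, gap=12):
    """Median-filter one detector row with a central gap.

    For each pixel, the median is taken over a window centered on the
    pixel but excluding a ~`gap`-pixel region around it, so that flux
    from a strong emission line does not subtract itself.
    """
    n = row.size
    continuum = np.empty(n)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        window = np.concatenate([row[lo:max(lo, i - gap // 2)],
                                 row[min(hi, i + gap // 2 + 1):hi]])
        continuum[i] = np.median(window) if window.size else 0.0
    return row - continuum

# Toy example: flat continuum plus a narrow "emission line" at pixel 100
row = np.ones(200) + 5.0 * (np.abs(np.arange(200) - 100) < 2)
residual = filter_row(row)
print(residual[100])  # line flux survives (~5); the continuum is removed
```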
We note that deeper NIRCam imaging data are available over \(\sim\)50% of the FRESCO mosaics thanks to observations from the JADES team (Robertson, 2022). However, these data are not yet publicly available for inclusion in this analysis. ### FRESCO Multi-Wavelength Catalogs In addition to the new JWST NIRCam data, we also make use of all ancillary HST imaging available in the GOODS-North field. Being a key extragalactic field for more than a decade, GOODS North has been targeted by all the main telescope facilities through a large number of programs. A complete listing of HST programs can be found on the Hubble Legacy Field (HLF) release page3 (see also Whitaker et al., 2019 and Illingworth et al., 2016). Most importantly, the field was covered with ACS imaging from GOODS (Giavalisco et al., 2004), with ACS and WFC3/IR imaging by the CANDELS survey (Koekemoer et al., 2011; Grogin et al., 2011), as well as WFC3/IR grism imaging by the AGHAST survey (Weiner & AGHAST Team, 2014). Footnote 3: [https://archive.stsci.edu/prepds/hlf/](https://archive.stsci.edu/prepds/hlf/) Here, we use a re-reduction of all available HST ACS and WFC3/IR data in the archive in filters that cover the FRESCO pointings, which we drizzled to the same common pixel frame of 40 mas/pixel as the JWST NIRCam data. The 5\(\sigma\) depths (in the same 0.32\({}^{\prime\prime}\) diameter apertures) for the ancillary data are well-matched to the FRESCO NIRCam imaging: they range from \(\sim\)28.6 mag for the ACS data (F435W, F606W, F775W, F814W, and F850LP) to \(\sim\)28.2 mag for the WFC3/IR data (F105W, F125W, F160W), with the exception of F140W, which reaches to \(\sim\)27.4 mag. In total, we derive photometry in 12 bands for the GOODS-North data set. The F444W images are used as a detection image, and we run SExtractor (Bertin & Arnouts, 1996) in dual image mode to measure matched-aperture photometry. The images are all PSF-matched to the F444W detection image, and fluxes are corrected to total using the default SExtractor AUTO parameters in addition to a small correction for remaining flux outside the AUTO aperture based on WebbPSF's F444W curve of growth (Perrin et al., 2014). ### Spectroscopic Sample of H\(\alpha\) Emitters We briefly summarize the construction of the H\(\alpha\) catalog at \(z\simeq 4.9-6.6\) over the GOODS-North FRESCO field and refer readers to Brammer et al. (2023, in prep) and Naidu et al. (2023, in prep) for a more detailed description of the overall methods and catalog validation. Line extractions from the grism data are based on EAZY photometric redshifts (\(z_{\rm phot}\)). For each object, we search for lines in a window around \(z_{\rm phot}\pm 0.05\times(1+z_{\rm phot})\). The photometric redshifts are derived from the PSF-matched photometry described above (§2.2). Additionally, we also use the default grizli photometric redshift catalog that uses a stack of the FRESCO imaging (F182M+F210M+F444W) as a detection image and makes two different choices in performing the photometry (e.g., with different source deblending parameters). We use both sets of photometric redshifts to marginalize over such choices. In practice, the best-fit redshifts at \(z_{\rm phot}>4.5\) have a small median difference of \(|\Delta z|<0.1\) and in most cases result in the same line candidates, but a fraction (\(\approx 25\%\)) have \(|\Delta z|>0.3\).
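As a simple illustration of the line search just described (this is a sketch, not the survey pipeline), the photometric-redshift window translates into an observed-frame H\(\alpha\) wavelength window that can be intersected with the grism coverage:

```python
# Illustrative sketch: the z_phot +/- 0.05(1+z_phot) window for H-alpha,
# clipped to the 3.8-5.0 micron F444W grism coverage quoted above.
HALPHA_REST_UM = 0.65628  # rest wavelength of H-alpha in microns

def halpha_search_window(z_phot, dz=0.05, cover=(3.8, 5.0)):
    z_lo = z_phot - dz * (1 + z_phot)
    z_hi = z_phot + dz * (1 + z_phot)
    lam_lo = HALPHA_REST_UM * (1 + z_lo)
    lam_hi = HALPHA_REST_UM * (1 + z_hi)
    # clip to the wavelength range actually covered by the grism
    return max(lam_lo, cover[0]), min(lam_hi, cover[1])

print(halpha_search_window(5.19))  # ~ (3.86, 4.27) microns
```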
Every line extracted with S/N\(>5\) is visually inspected to verify that its morphology is consistent with the source and also to look for false positives (e.g., from broad PAH features in the foreground that slip through the median filtering). Physical properties (e.g., stellar masses) are derived by jointly fitting the photometry and line fluxes with a non-parametric star-formation history and Chabrier IMF using the Prospector SED-fitting code (Leja et al., 2017). We refer readers to Naidu et al. (2023, in prep) for further details about the modeling choices. ## 3 Results ### Identification of H\(\alpha\) Emission from HDF850.1 One of the first sources we examined after creating catalogs of line-emitting sources over the GOODS-North FRESCO field was the well-known dusty star-forming galaxy HDF850.1, with a spectroscopic redshift \(z=5.1853\) from the 157.74\(\mu\)m [CII] line (Walter et al., 2012; Neri et al., 2014). Figure 1: Images centered on HDF850.1 from both _HST_ in the F814W band (_left panel_) and _JWST_/NIRCam in the F182M+F210M (_center panel_) and F444W bands (_right panel_). The blue and red solid contours in the left panel show the 3, 4, 5, 6, 7, and 8\(\sigma\) contours for [CII] emission from the northern and southern components, respectively, of HDF850.1 as derived by Neri et al. (2014) using PdBI. Crosses give the spatial position where the [CII] emission for each component reaches a peak. The dashed contours shown in the center and right panels delineate the regions used in our extractions of near-IR spectra for the northern (_blue_) and southern (_red_) components. These two regions have been designed to mirror the shape and morphology of the two components in the F444W imaging data. The white arrow in the upper-right corner of the right panel shows the dispersion direction of the NIRCam grism. As we discussed in §1, that source evaded a direct redshift determination for \(>\)10 years following its initial discovery (e.g. Wagg et al., 2007), and a redshift only came in 2012 thanks to a dedicated spectral scan for [CII] and various CO lines with PdBI (Walter et al., 2012). The _JWST_ F182M, F210M, and F444W NIRCam imaging we have available for HDF850.1 from FRESCO is shown in the center and right panels of Figure 1 together with the blueshifted and redshifted high spatial resolution [CII] contours from PdBI. Also shown is the imaging data of HDF850.1 at 0.8\(\mu\)m from F814W, again demonstrating that HDF850.1 radiates essentially no flux at \(<\)1\(\mu\)m. From this imaging data, it is clear that while both components are detected in the NIRCam F444W data, only the southern component shows a clear detection in the F182M and F210M imaging data. We made use of the two prominent components of HDF850.1 in the F444W imaging data to construct segmentation maps, shown in the central and right panels of Figure 1 as red and blue dashed lines. We then used grizli to extract spectra of each component. In Figure 2, we show both a two-dimensional and a one-dimensional spectral extraction for the southern component of HDF850.1. In the lower panel of Figure 2, we present a collapsed one-dimensional spectrum which not only reveals a 13\(\sigma\) detection of the H\(\alpha\) line, but also shows the detection of [NII]\({}_{6583}\) at 4\(\sigma\) from HDF850.1. No significant detection of [NII]\({}_{6548}\) is apparent in the FRESCO grism spectra, but this is not surprising given the fact that [NII]\({}_{6548}\) is \(\approx\)3\(\times\) fainter than [NII]\({}_{6583}\) (e.g.
Dojcinovic et al., 2023). For H\(\alpha\), we measure a total flux of \((6.4\pm 0.5)\times 10^{-18}\) erg/s/cm\({}^{2}\) for the southern component of HDF850.1 and derive a redshift of 5.188\(\pm\)0.001. The [NII]/H\(\alpha\) ratio we measure for the southern component of HDF850.1 is \(0.4\pm 0.1\). Such a high [NII]/H\(\alpha\) ratio is commonly found for galaxies at higher masses with solar metallicities (e.g. Shapley et al., 2015). Shocks could also be a contributing factor to the high [NII]/H\(\alpha\) ratio we find (e.g. Kewley et al., 2013; Freeman et al., 2019), which would not be especially surprising given the apparent merging activity in HDF850.1 based on its two-component structure (Neri et al., 2014). The redshift we derive from the H\(\alpha\) line implies a velocity offset of 133\(\pm\)34 km/s relative to the systemic redshift measurement \(z=5.1853\) derived by Neri et al. (2014). Neri et al. (2014) find a velocity offset of 130 km/s for the [CII] line from the southern component of HDF850.1, which is almost exactly the same velocity offset as we find here for H\(\alpha\). A detailed comparison of the line profiles and intensity of our new H\(\alpha\) detection is shown in Figure 3 for the southern component of HDF850.1, both in terms of the raw extraction (_gray histogram_) and Gaussian fits to the lines (_yellow lines_). Figure 2: 2D spectrum from the southern component of HDF850.1 (_upper panel_) along with our 1D extraction from the direct image morphology (_lower panel_). The upper panel shows a zoomed-in 2D spectrum around the H\(\alpha\) line after subtracting the continuum using a median-filtering technique following Kashino et al. (2022: see §2.1). The black dashed lines in the lower panel show the positions of the H\(\alpha\) line (13\(\sigma\) detection) and the [NII]\({}_{6583}\) line (4\(\sigma\) detection) at the fitted redshift of \(z=5.188\). The expected wavelength of the [NII]\({}_{6548}\) line (\(\approx\)3\(\times\) fainter than [NII]\({}_{6583}\): e.g. Dojcinovic et al., 2023) is also shown but does not show a significant detection in the FRESCO data. Figure 3: Comparison between the [CII] detection from Neri et al. (2014) and our H\(\alpha\) detection for both components. The upper two rows show the [CII] \({}^{2}P_{3/2}\rightarrow{}^{2}P_{1/2}\) line above the dust continuum from both components of HDF850.1 (_black lines_) as well as the Gaussian fits to the lines (_blue and red lines_) derived by Neri et al. (2014). The lowest row shows our detected H\(\alpha\) line (_black_) along with a Gaussian fit for the southern component (_yellow_). Velocities for the [CII]\({}_{158\mu m}\) and H\(\alpha\) lines are shown relative to the [CII] redshift determined by Neri et al. (2014) of \(z=5.1853\) (307.267 GHz). The dashed and dotted vertical lines are shown at the centroids of the Gaussian fits to [CII] for the northern and southern components of HDF850.1, respectively, to highlight the coinciding feature in the southern component. Figure 3 also shows the line profiles for the northern and southern components of HDF850.1 as found by Neri et al. (2014). To further emphasize this similarity, a vertical dotted line is shown at the velocity offset for the southern component found by Neri et al. (2014).
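As a quick, illustrative cross-check of the velocity offset quoted above (not part of the original analysis):

```python
# Velocity offset of the H-alpha redshift relative to the systemic
# [CII] redshift; values are taken from the text.
C_KMS = 299_792.458  # speed of light [km/s]

z_sys = 5.1853  # systemic redshift (Neri et al. 2014)
z_ha = 5.188    # H-alpha redshift measured here

dv = C_KMS * (z_ha - z_sys) / (1 + z_sys)
print(f"dv ~ {dv:.0f} km/s")  # ~130 km/s, consistent with 133 +/- 34 km/s
```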
From Figure 4, it is furthermore clear that the H\(\alpha\) emission we detect (_white contours_) appears to mostly originate from the southern component of HDF850.1, being offset from both the [CII] and continuum IR emission lying to the north (_blue and orange contours, respectively_). Additionally, the H\(\alpha\) line we extract from the southern component of HDF850.1 appears to be very broad overall. Ignoring for the moment the impact spatial extension has on the width of the lines, the best-fit FWHM we find for the southern component is 7.6\(\times\)10\({}^{2}\) km/s. To account for the impact that source morphology has in broadening the line, we make use of grizli to forward model the source based on its direct image morphology using the same dispersion direction as in the observations. Based on this forward modeling, we find that the non-zero size of HDF850.1 contributes 6.2\(\times\)10\({}^{2}\) km/s to the measured FWHMs. Subtracting this contribution in quadrature from the southern component, we derive a FWHM of (4.4 \(\pm\) 0.9) \(\times\) 10\({}^{2}\) km/s. The width of this line is therefore consistent with what Neri et al. (2014) derive for [CII] from the southern component. Given similar velocity offsets for both lines (130 km/s for [CII] vs. (1.3\(\pm\)0.3) \(\times\) 10\({}^{2}\) km/s for H\(\alpha\)), it seems clear that the ISM material producing both lines shows a very similar kinematic structure. In contrast to the southern component, the northern component of HDF850.1 does not show a clear, localized detection of H\(\alpha\) line emission. Due to the proximity of the two components and the dispersion direction (shown in Figure 1), line emission from the southern component partially extends into the same spatial region where H\(\alpha\) line flux measurements need to be made for the northern component. As a result of these challenges, we only report an upper limit to the H\(\alpha\) flux for the northern component of \(f_{\rm H\alpha}<2.1\times 10^{-18}\) erg/s/cm\({}^{2}\) (3\(\sigma\)). We derived this upper limit by comparing the observations with the spatial profile expected for H\(\alpha\) emission from the northern component assuming a similar spatial distribution to the continuum light in the direct image. By computing the 2D least squares residuals between the expected and observed line morphologies, we concluded that there is no significant H\(\alpha\) line flux emanating from the northern component, and any line flux evident in the segmentation map for the northern component is consistent with contamination from the southern component. The flux measurements for the two components are provided in Table 1. ### Impact of Dust Obscuration on the H\(\alpha\) Line Emission Thanks to our new measurements of the H\(\alpha\) fluxes for both components of HDF850.1, we can compute an observed SFR for each component. By comparing this SFR to the SFR implied by the respective [CII] luminosities, we can attempt to estimate the approximate dust obscuration of each component. We use the conversion factor from Kennicutt & Evans (2012):4 Footnote 4: While we present SFRs and stellar masses using the Chabrier (2003) IMF and Kennicutt & Evans (2012) present their SFR relations using a Kroupa & Weidner (2003) IMF, Kennicutt & Evans (2012) emphasize that the relation for a Chabrier (2003) IMF is virtually identical.
\[{\rm SFR}_{\rm H\alpha}=L_{\rm H\alpha}/(10^{41.27}\,{\rm erg/s})\;\;M_{\odot}/{\rm yr} \tag{1}\] The SFR we estimate from the observed H\(\alpha\) flux for the southern component is 6.0\(\pm\)0.5 (1.5/\(\mu\)) M\({}_{\odot}\)/yr, while for the northern component the SFR we estimate is \(<\)1.8 (1.7/\(\mu\)) M\({}_{\odot}\)/yr. In specifying the SFR for each source, we have divided the result by the fiducial magnification factors for the two components derived in Neri et al. (2014) based on the isothermal model they constructed for a nearby \(z=1.224\) elliptical galaxy, i.e., 1.7 for the northern component and 1.5 for the southern component. Our quoted results in Table 1 include the factors (1.7/\(\mu\)) and (1.5/\(\mu\)) for clarity and to allow for a scaling of the results in case of updated lensing magnification factors. We can estimate the approximate dust extinction for each component by relying on the measured [CII] luminosities for HDF850.1 from Neri et al. (2014) to estimate the total SFRs. The ALPINE program (Le Fevre et al., 2020; Bethermin et al., 2020; Faisst et al., 2020) provided the following calibration of the \(L_{[CII]}\)-SFR relation (Schaerer et al., 2020): \[{\rm SFR}_{\rm[CII]}=(L_{\rm[CII]}/10^{6.61}L_{\odot})^{0.855}\;\;M_{\odot}/{\rm yr} \tag{2}\] We infer [CII] SFRs of (1.6 \(\pm\) 0.5)\(\times\)10\({}^{2}\) (1.7/\(\mu\)) M\({}_{\odot}\)/yr and (0.9 \(\pm\) 0.3)\(\times\)10\({}^{2}\) (1.5/\(\mu\)) M\({}_{\odot}\)/yr for the northern and southern components of HDF850.1. Dividing the apparent H\(\alpha\) SFRs by the [CII] SFRs, we infer SFR\({}_{\rm H\alpha}\)/SFR\({}_{\rm[CII]}\) ratios of \(<\)0.01 and 0.07\(\pm\)0.03, respectively, demonstrating that even in the case of the clear detection of H\(\alpha\) from the southern component of HDF850.1, dust attenuation would nevertheless appear to have a substantial impact on the observed line emission. The SFR\({}_{\rm H\alpha}\)/SFR\({}_{\rm[CII]}\) ratios we infer here are similar to what would be suggested by the \(A_{V}\)'s we derive for the two components in §3.3.2, i.e., \(\sim\)0.02\({}^{+0.04}_{-0.01}\) and \(\sim\)0.12\({}^{+0.25}_{-0.10}\). ### UV+Optical+far-IR SED Model of HDF850.1 #### 3.3.1 Photometry on the Individual Components of HDF850.1 Because of the very extended wings of the profile of a bright nearby elliptical galaxy, accurate flux measurements for HDF850.1 are challenging to obtain using simple aperture photometry, and we therefore elected to measure the flux for HDF850.1 by modeling the light with analytic Sersic profiles using galfit (Peng et al., 2002). Figure 4: Spatial distribution of line and dust emission relative to the _JWST_ NIRCam F444W imaging observations of HDF850.1. The extracted H\(\alpha\) map is shown in white contours over the _JWST_ NIRCam F444W imaging. The velocity-averaged [CII]\({}_{158\mu m}\) line detection and the 0.98mm dust continuum from Neri et al. (2014) are shown for comparison. All contours start at 3\(\sigma\) and increase in steps of 1\(\sigma\).
This provides a very effective way of coping with contamination from the bright elliptical spilling over onto our aperture measurements. We begin by using galfit to model the light in the NIRCam F444W band, where both the northern and southern components of HDF850.1 are clearly detected, while also fitting the light in the nearby elliptical galaxy. We model the light in the southern component of HDF850.1 with a single Sersic profile and the light from the northern component as the sum of three different Sersic profiles. After using these fits to measure the flux of both components of HDF850.1 in the F444W band, we fix the centers and shapes of the different contributors to both components and then refit the amplitudes in each passband. To account for the uncertainties associated with the extended wings of the nearby elliptical galaxy, we alternatively make use of two different Sersic parameters \(n=2\) and \(n=5\) in fitting for the contribution from that galaxy. We take the flux uncertainty to be equal to the difference between the flux measurements in the two fits, and in cases where there is more than a 0.4 mag difference between the flux measurements, we treat a component as undetected in a given passband. We present the flux measurements we obtained for the two components of HDF850.1 in Table 2. We measure a F444W-band magnitude of 24.0\(\pm\)0.1 mag for the northern component of HDF850.1 and 26.0\(\pm\)0.2 mag for the southern component. We remark that the F444W flux we measure for the southern component is \(\sim\)4\(\times\) what we would expect accounting for the line emission from H\(\alpha\) alone, suggesting that stellar continuum from HDF850.1 contributes substantially to the F444W flux that we measure for the southern component. For the northern component, the contribution from the stellar continuum seems to dominate the F444W flux given the absence of a clear line detection at its location. Table 1: Measured and derived properties of the two components of HDF850.1. Northern component: lensing magnification \(\mu\) = 1.7\({}^{a}\); \(L_{\rm[CII]}\) = 1.6\(\times\)10\({}^{9}\) (1.7/\(\mu\))\({}^{a}\) \(L_{\odot}\); SFR\({}_{\rm[CII]}\) = (1.6 \(\pm\) 0.5)\(\times\)10\({}^{2}\) (1.7/\(\mu\))\({}^{a}\) M\({}_{\odot}\)/yr; \(v_{\rm[CII]}\) = \(-\)200 km/s; FWHM\({}_{\rm[CII]}\) = 300 km/s; \(f_{\rm H\alpha}\) \(<\) 2.1\(\times\)10\({}^{-18}\) erg/s/cm\({}^{2}\) [3\(\sigma\)]; SFR\({}_{\rm H\alpha}\) \(<\) 1.8 (1.7/\(\mu\)) \(M_{\odot}\)/yr; SFR\({}_{\rm H\alpha}\)/SFR\({}_{\rm[CII]}\) \(<\) 0.01; \(\tau_{V}\) = 3.9\({}^{+0.6}_{-0.8}\); \(\log_{10}M^{*}/M_{\odot}\) = 10.3\({}^{+0.3}_{-0.3}\) + \(\log_{10}(1.7/\mu)\)\({}^{b}\); \(\log_{10}L_{IR}\) = 11.9\({}^{+0.2}_{-0.2}\) + \(\log_{10}(1.7/\mu)\)\({}^{b}\); SFR\({}_{\rm MAGPHYS}\) = 64\({}^{+38}_{-17}\) (1.7/\(\mu\))\({}^{b}\) M\({}_{\odot}\)/yr. Southern component: lensing magnification \(\mu\) = 1.5\({}^{a}\); \(L_{\rm[CII]}\) = 0.8\(\times\)10\({}^{9}\) (1.5/\(\mu\))\({}^{a}\) \(L_{\odot}\); SFR\({}_{\rm[CII]}\) = (0.9 \(\pm\) 0.3)\(\times\)10\({}^{2}\) (1.5/\(\mu\))\({}^{a}\) M\({}_{\odot}\)/yr; \(v_{\rm[CII]}\) = 130 km/s; FWHM\({}_{\rm[CII]}\) = 410 km/s; \(f_{\rm H\alpha}\) = (6.5 \(\pm\) 0.4)\(\times\)10\({}^{-18}\) erg/s/cm\({}^{2}\); \(v_{\rm H\alpha}\) = (1.3 \(\pm\) 0.3)\(\times\)10\({}^{2}\) km/s; FWHM\({}_{\rm H\alpha}\) (corrected for morphological broadening) = (4.4 \(\pm\) 0.9)\(\times\)10\({}^{2}\) km/s. [The remaining rows of the table are not recoverable from this version of the source.]
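Equations (1)-(2) in §3.2 and the Table 1 values can be combined into a short, self-contained recomputation of the attenuation estimate for the southern component. The following sketch (not the paper's code) adopts the concordance cosmology stated in §1:

```python
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)  # concordance cosmology adopted here

z, mu = 5.188, 1.5                      # southern component
f_ha = 6.5e-18 * u.erg / u.s / u.cm**2  # observed H-alpha flux (Table 1)

d_l = cosmo.luminosity_distance(z).to(u.cm)
L_ha = (4 * np.pi * d_l**2 * f_ha / mu).to(u.erg / u.s)
sfr_ha = L_ha.value / 10**41.27          # Eq. (1); ~6-7 Msun/yr, cf. text

L_cii = 0.8e9 / mu                       # [CII] luminosity in Lsun (Table 1)
sfr_cii = (L_cii / 10**6.61) ** 0.855    # Eq. (2); ~90 Msun/yr

ratio = sfr_ha / sfr_cii
print(f"SFR_Ha/SFR_[CII] ~ {ratio:.2f}; ~{100*(1-ratio):.0f}% attenuated")
```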
#### 3.3.2 SED Modeling of UV+Optical and far-IR Light from HDF850.1 The JWST NIRCam observations presented here provide us with the first direct constraints on the stellar content of HDF850.1. We complement the photometry listed in Table 2 with the flux limits in the Herschel bands reported in Walter et al. (2012) and the 1 mm continuum flux density reported for the two components of the system in Neri et al. (2014). We model the spectral energy distribution of the northern and southern components of HDF850.1 using the high-redshift implementation of MAGPHYS (da Cunha et al., 2008, 2015). MAGPHYS assumes energy balance between the energy absorbed by dust in the rest-frame optical range and that re-emitted in the far-infrared. The stellar population is modeled based on the Bruzual & Charlot (2003) spectral libraries, assuming a delayed exponential function as the star formation history. The best-fit model is shown in Fig. 5. We infer the best fit of the free parameters and their uncertainties from the distribution of the posterior probabilities, interpolated at the 16%, 50%, and 84% levels. Our best-fit stellar masses, star formation rates, far-IR luminosities, and dust attenuations for the two components of HDF850.1 are provided in Table 1. The NIRCam observations sample the stellar continuum around the Balmer break, which lies at 2.1\(\mu\)m. Our MAGPHYS fits suggest that both components of HDF850.1 are significantly dust reddened. The northern component of HDF850.1 appears more massive and star forming, albeit with a similar specific SFR, compared to the southern component. The precise flux measurements made possible thanks to our new NIRCam photometry pin down the stellar component of the fit; this, however, combined with the energy balance assumption in the code and the large uncertainties associated with the Herschel photometry of HDF850.1 as a whole, leads to the dust continuum emission in the southern component of HDF850.1 being underestimated. A better sampling of the IR continuum emission separately for the two components of HDF850.1 is necessary to improve our characterization of the overall SED. ### Extended Galaxy Structures Surrounding HDF850.1 #### 3.4.1 Redshift Overdensity at \(z\sim 5.2\) and Comparison with Earlier Studies Given the substantial clustering of other star-forming galaxies expected around massive galaxies like HDF850.1 and earlier results showing a significant overdensity of galaxies at \(z\sim 5.2\) (Walter et al., 2012; Arrabal Haro et al., 2018; Calvi et al., 2021), it is logical to make use of the substantial number of sources with spectroscopic redshifts in the GOODS North field from FRESCO to investigate this matter more extensively. Using the techniques described in §2.3, we constructed catalogs of H\(\alpha\) emitters over the GOODS North FRESCO field. Figure 6 shows the number of sources as a function of redshift, and it is clear there is a huge overdensity of sources at \(z=5.17\)-5.20, with 100 sources found in that narrow redshift interval. Walter et al. (2012) and Calvi et al. (2021) had both previously reported the same overdensity of galaxies at \(z\sim 5.2\), adding 13 and 6 spectroscopic members, respectively. Analysis of the narrow-band SHARDS observations indicates 44 additional sources whose redshifts are consistent with lying in this overdensity (Arrabal Haro et al., 2018), but the redshift measurements from SHARDS are much less precise, i.e., \(\Delta z\sim 0.07\), and thus much more difficult to tie to the distinct individual structures identified here.
Nineteen of the 100 H\(\alpha\) emitters that we identified in the redshift range \(z=5.16\) to \(z=5.20\) were previously flagged as probable members of the \(z\sim 5.2\) overdensity by Walter et al. (2012), Arrabal Haro et al. (2018), and Calvi et al. (2021). These earlier studies appear to have been most efficient at identifying those member galaxies in the \(z\sim 5.2\) overdensity with the highest H\(\alpha\) fluxes, likely as a result of these same sources showing more prominent Ly\(\alpha\) emission lines for redshift determinations. Figure 6: Redshift distribution of star-forming galaxies over the GOODS-North FRESCO field detected in H\(\alpha\) at \(\simeq\)5\(\sigma\) (red histogram). Sources shown in black in the histogram were identified earlier as part of the spectroscopic sample of Walter et al. (2012) and Calvi et al. (2021). Clearly, there is strong evidence for a substantial spike in the redshift distribution of galaxies at \(z=5.17\)-5.20, centered on the redshift of HDF850.1. Twenty-four of the sources in this redshift spike were identified earlier as part of the Arrabal Haro et al. (2018) analysis leveraging the SHARDS data set. Table 3: Extended structures identified from \(z=5.16\) to \(z=5.31\) in the FRESCO GOODS North field. Columns give, for each structure: ID; structure center (RA, Dec, \(\Delta\alpha\) [cMpc], \(\Delta\delta\) [cMpc]); \(z_{\rm mean}\); \(\sigma_{v}\) [km/s]; radius [cMpc]; \(\delta_{g}\); \(\log_{10}M_{\rm halo}/M_{\odot}\); number of members; and noteworthy members. [The body of the table is not recoverable from this version of the source.] Figure 7: Spatial distribution of H\(\alpha\) emitters (_left and center panels_) in the prominent overdensity at \(z=5.17\)-5.20 in the GOODS-North FRESCO field. The black lines enclose the area covered by the FRESCO F444W grism observations. Sources are shown as filled circles of various colors depending on the extended structure to which each was tentatively identified to belong based on its redshift and spatial position on the sky. Extended structures 1, 2, 3, 4, and 5 are presented in the left panel, while structures 6, 7, 8, 9, and 10 are presented in the center panel. The larger and smaller blue stars show the positions of HDF850.1 and another dusty star-forming galaxy S3 (see Figure 9), which appears to be part of the same extended structure 3. The orange, light blue, and cyan stars in the center panel show the positions of three dusty galaxies, dusty-1257, dusty-7162, and dusty-16116, identified in extended structures 1, 6, and 7 by Xiao et al. (2023). The horizontal bars towards the lower region of the left and center panels indicate the comoving radii (3 cMpc) of the most extended structures identified here and the comoving diameter (18 cMpc: Chiang et al. 2017) of the regions of the universe at \(z\sim 5.2\) that collapse into \(>\)\(10^{14}\) M\({}_{\odot}\) galaxy clusters by \(z\sim 0\), respectively. The small open circle indicates the expected size of \(10^{12.2}\)\(M_{\odot}\) halos at \(z\sim 5.2\). The rightmost panel shows the number of sources as a function of redshift. Sources in extended structures 1-5, shown as the red histogram, are distributed more towards the southern and western parts of the FRESCO GOODS-North field and mostly have redshifts between 5.165 and 5.185.
Meanwhile, sources in structures 6-10, shown as the black histogram, are distributed more towards the eastern and northern parts of the FRESCO field and mostly have redshifts between 5.185 and 5.196. The horizontal bar shows the comoving length scale (Chiang et al. 2017) of structures that collapse into a \(>\)\(10^{14}\)\(M_{\odot}\) galaxy cluster by \(z\sim 0\). The Chiang et al. (2017) results suggest that all 10 extended structures identified here likely collapse into a single \(>\)\(10^{14}\)\(M_{\odot}\) galaxy cluster by \(z\sim 0\). Three out of the seven brightest H\(\alpha\) emitters (i.e., 43%) were identified as part of earlier spectroscopic studies, as were five out of the 14 brightest H\(\alpha\) emitters (i.e., 36%). This compares with just 19 out of 100 sources (i.e., 19%) that appeared in these earlier compilations. Comparing our redshift measurements to those from Walter et al. (2012) and Calvi et al. (2021), the redshift measurements we derive are \(\Delta z=-0.009\pm 0.011\) lower in the mean than those in the literature. This is consistent with what we would expect for Ly\(\alpha\) velocity offsets of 436\(\pm\)582 km/s, in the general range of what has been found in many studies at \(z\sim\) 3-8 (e.g., Erb et al., 2014; Tang et al., 2023). Somewhat unexpectedly, only six of the nineteen sources in the spectroscopic redshift catalogs of Walter et al. (2012) and Calvi et al. (2021) with coverage from FRESCO show up in our own catalogs of H\(\alpha\) emitters. Given the high completeness levels expected for our H\(\alpha\)-emitter searches, this suggests that previous spectroscopic samples were dominated by sources with high Ly\(\alpha\) escape fractions. There are three sources over the GOODS North field discussed by Walter et al. (2012) and Calvi et al. (2021) which have spectroscopic redshifts suggesting they are part of the prominent \(z=5.17\)-5.20 overdensity discussed here, but which lie outside the \(\sim\)62 arcmin\({}^{2}\) FRESCO mosaic. These include a QSO at \(z\sim 5.18\) (Barger et al., 2002) and two other galaxies, SHARDS20008777 and a source at 12:36:00.0, +62:12:26.1. Nineteen of the plausible 50 sources with redshifts in the range \(z=5.12\)-5.27 from Arrabal Haro et al. (2018) lie outside the FRESCO mosaic. Of the 31 sources from Arrabal Haro et al. (2018) which lie in the redshift range \(z=5.12\)-5.27 and within the FRESCO mosaic, 17 are members of our spectroscopic sample of H\(\alpha\) emitters in the redshift range \(z=5.16\) to \(z=5.20\). With 100 spectroscopic members, the overdensity at \(z=5.17\)-5.20 is particularly extreme: \(\sim\)28% of the total number of \(z=5.0\)-6.0 H\(\alpha\) emitters found over the HDF-North field lie in a \(\Delta z\sim 0.03\) interval. This exceeds even the 45 galaxies present in the \(z\sim 5.4\) overdensity identified over the GOODS-South field (Helton et al., 2023) and also the 20-galaxy and 24-galaxy overdensities identified at \(z\sim 6.19\) and \(z\sim 6.33\), respectively, over the J0100+2802 field by Kashino et al. (2022). Given the much larger number of star-forming galaxies in the immediate vicinity of HDF850.1 than are present even over the J0100+2802 field, this may suggest that the halo masses associated with this overdensity are even more extreme than those associated with the bright \(z\sim 6.33\) QSO J0100+2802 that the EIGER program targeted.
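For reference, the mean redshift offset quoted above converts to the stated Ly\(\alpha\) velocity offset as follows (an illustrative check, not part of the original analysis):

```python
# Convert the mean redshift offset of the literature Lya-based redshifts
# into a velocity at z ~ 5.185 (values from the text).
C_KMS = 299_792.458

dz, z = 0.009, 5.185
v_offset = C_KMS * dz / (1 + z)
print(f"~{v_offset:.0f} km/s")  # ~436 km/s, as quoted above
```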
In particular, the \(z=5.17\)-5.20 overdensity extends over the entire GOODS-North FRESCO field, i.e., an 18 cMpc \(\times\) 18 cMpc \(\times\) 15 cMpc comoving volume. #### 3.4.2 Extended Structures within the Overdensities Taking advantage of both the spatial and redshift information we have for galaxies in the \(z=5.17\)-5.20 and slightly higher redshift (\(z=5.22\)-5.31) overdensities, we organize the sources into extended structures (Table 3) and compute radii, masses, and velocity dispersions for the structures. The median 1\(\sigma\) dispersion of the identified structures in redshift and line-of-sight velocity is 0.0035 and 167 km/s, only slightly lower than the dispersions of 212 km/s and 197 km/s, for the overdensities in the J0100+2802 field at \(z\sim 6.19\) and \(z\sim 6.33\), respectively. Following Long et al. (2020) and Calvi et al. (2021), we also use the stellar masses of the member galaxies to estimate the integrated halo masses of the extended structures that host them. We do so by computing the total stellar mass of the member galaxies and then supposing an integrated star formation efficiency of 5%, based on detailed studies comparing the galaxy stellar mass functions to the halo mass functions (Behroozi and Silk, 2018); we note, however, that detailed studies (Stefanon et al., 2021; McLeod et al., 2021) of the ratio of the stellar mass density to halo mass density from galaxy stellar mass functions at \(z\sim 0\)-10 integrated down to \(10^{8}\)\(M_{\odot}\) give factors between 0.5% and 2%.
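A minimal sketch of the integrated halo mass estimate just described, assuming the 5% integrated star formation efficiency adopted above (the member masses below are hypothetical):

```python
# Integrated halo mass from the summed stellar mass of a structure's
# members, assuming an integrated star formation efficiency (SFE) of 5%.
import numpy as np

def integrated_halo_mass(stellar_masses_msun, sfe=0.05):
    """M_halo ~ sum(M*) / SFE."""
    return np.sum(stellar_masses_msun) / sfe

# e.g., five hypothetical members with ~10^10 Msun of total stellar mass
members = np.array([2e9, 3e9, 1e9, 2.5e9, 1.5e9])
print(f"log10 M_halo ~ {np.log10(integrated_halo_mass(members)):.1f}")  # ~11.3
```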
We make use of the masses from the Naidu et al. (2023, in prep) SED fits with Prospector, except for the sources in the optically-faint samples of Xiao et al. (2023). The derived integrated halo masses range from \(10^{10.6}\)\(M_{\odot}\) to \(10^{12.6}\)\(M_{\odot}\). Not surprisingly, HDF850.1 and GN10 lie in the halos that belong to the structures with the highest integrated halo masses. Assuming a survey area of \(\sim\)62 arcmin\({}^{2}\) and a \(\Delta z\sim 1\) volume (\(\sim\)2\(\times 10^{5}\) cMpc\({}^{3}\)), we estimate that the GOODS North FRESCO volume should contain approximately one \(10^{12.2}\)\(M_{\odot}\) halo. For this calculation, we made use of the public halo mass function calculator HMFcalc by Murray et al. (2013), adopting a Planck Collaboration et al. (2018) cosmology. The radius of each extended structure \(R_{\rm structure}\) is computed using the standard formula from Heisler et al. (1985) used in computing dynamical masses of galaxy clusters based on a discrete number of galaxies with measured positions and line-of-sight velocities: \[R_{\rm structure}=\frac{\pi N}{2\Sigma_{i<j}\,\frac{1}{R_{\perp i,j}}} \tag{3}\] where \(N\) is the number of galaxies in an extended structure and \(R_{\perp ij}\) is the projected distance in the plane of the sky between galaxy \(i\) and galaxy \(j\). We also estimate overdensities for the extended structures we identified as part of Table 3 by comparing the volume density of H\(\alpha\) emitters within the estimated radius and a \(|\Delta z|<2\sigma_{z}\) redshift width of an extended structure to that found between \(z=5.0\) and 6.0 for the GOODS-North FRESCO field as a whole. We compute overdensity factors between 8 and 267, far in excess of the linear regime. Based on the volume density of H\(\alpha\) emitters we find in the GOODS North FRESCO volume, i.e., 1.6\(\times 10^{-3}\) cMpc\({}^{-3}\), we can estimate a nominal bias for this population assuming abundance matching and find an average bias factor of \(\sim\)4.7 using the Trenti and Stiavelli (2008) "cosmic variance" calculator. Adjusting for this bias factor, this is suggestive of overdensities in the matter distribution ranging from 2 to 57, which is significantly in excess of the \(\approx\)1 expected in the linear regime but less than the 200\(\times\) overdensities expected for completely collapsed structures. It is interesting to ask whether we would expect protoclusters to be present over the FRESCO fields and indeed within the GOODS North FRESCO field. Given that clusters generally have a mass of at least \(10^{14}\)\(M_{\odot}\) at \(z\sim 0\) (e.g. Kravtsov and Borgani, 2012), we can use abundance matching to estimate whether the FRESCO volume contains any such objects. Using the halo mass functions from HMFcalc, we estimate the FRESCO GOODS-North volume to include four such clusters with mass \(10^{14.0}\)\(M_{\odot}\), with one cluster reaching a mass of \(10^{14.4}\)\(M_{\odot}\). #### 3.4.3 Comparison of Structure Sizes to the Expected Sizes of Protoclusters We can compare the sizes of the extended structures we find to those expected from the detailed study of protocluster growth (Chiang et al., 2017) making use of several semi-analytic galaxy formation models (Guo et al., 2013; Henriques et al., 2015) applied to the Millennium simulations (Springel et al., 2005). Chiang et al. (2017) present both the comoving sizes of collapsed protoclusters at various redshifts and the sizes of cosmic volumes that will collapse into \(>\)\(10^{14}\)\(M_{\odot}\) galaxy clusters by \(z\sim 0\).
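Eq. (3) is straightforward to implement; the following sketch (not the paper's code) evaluates it for a hypothetical set of member positions:

```python
# Harmonic-mean structure radius (Eq. 3) from projected pairwise
# separations of N member galaxies.
import numpy as np
from itertools import combinations

def structure_radius(xy_cmpc):
    """R = pi * N / (2 * sum_{i<j} 1/R_perp_ij), positions in cMpc."""
    n = len(xy_cmpc)
    inv_sum = sum(1.0 / np.hypot(*(xy_cmpc[i] - xy_cmpc[j]))
                  for i, j in combinations(range(n), 2))
    return np.pi * n / (2.0 * inv_sum)

# Hypothetical structure: four members within ~1 cMpc of each other
pos = np.array([[0.0, 0.0], [0.8, 0.1], [0.3, 0.7], [0.9, 0.9]])
print(f"R_structure ~ {structure_radius(pos):.2f} cMpc")  # ~0.8 cMpc
```

Note that, as stated below Eq. (3), the harmonic mean gives greater weight to close pairs, so compact cores dominate the computed radius.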
Of relevance, Chiang et al. (2017) find protoclusters to have a comoving radius of \(\sim\)9 cMpc at \(z\sim 5.2\), implying overdense structures that extend over a comoving distance of \(\sim\)18 cMpc, very similar to the spatial extent of the overdensities we have identified over the GOODS North FRESCO field (Figure 7). The Chiang et al. (2017) results suggest that all 10 extended structures identified here likely collapse into a single \(>\)\(10^{14}\)\(M_{\odot}\) galaxy cluster by \(z\sim 0\). We estimate that the halo masses of the most massive collapsed halos in those extended structures would be \(10^{12.0}\)\(M_{\odot}\) and \(10^{12.2}\)\(M_{\odot}\), respectively, at \(z\sim 5.2\). Assuming that the average overdensity factor in the collapsed halos is 200, the sizes of the halos would be 0.3 cMpc and 0.4 cMpc at \(z\sim 5.2\), i.e., 8" to 13" in the plane of the sky. Figure 8: KDE density plots showing the comparison of Prospector-inferred galaxy properties for sources inside and outside of the overdensity. The "in" sample (_red_) contains 47 sources between redshifts \(z=5.17\)-5.20 in the overdensity and identified as part of extended structures, while the "out" sample (_grey_) contains 86 sources at redshift \(z=4.9\)-5.5, excluding the overdensity redshift bin and the structures identified with the method detailed in §3.4.2. Shown in each panel are differences in the fractional cumulative distribution and the associated probability that the two distributions are consistent using a Kolmogorov-Smirnov test. The stellar masses, \(UV\) luminosities, and star formation rates for galaxies inside the overdensities show a clear shift to higher values than for galaxies outside the overdensities. In computing a radius for the identified structures, the above equation gives greater weight to galaxy pairs that have smaller separations. The fact that many of these extended structures have computed sizes not especially larger than 0.4 cMpc suggests that at least a few of the member galaxies in the extended structures are part of the same halos. #### 3.4.4 Characteristics of Galaxies in the \(z=5.17\)-5.20 Overdensity vs. Those Outside To put the extreme properties of HDF850.1 in context, we use the SED-fitting code Prospector (Leja et al., 2017b; Johnson et al., 2021) to derive galaxy properties from our extensive spectroscopically confirmed sample of galaxies in FRESCO GOODS-North. Our Prospector fits use FSPS with the MIST stellar models. We do not fit for redshift and set it to the derived \(z_{\rm grism}\) value inferred by our pipeline using grizli. Along with the photometry, fluxes from emission lines identified in the grism 1D spectra are also used as an input. We make use of Prospector's more flexible non-parametric star-formation history (SFH) with 8 bins evenly separated in lookback time and a "continuity" prior for a smoother SFH. The output catalog contains all sources detected in the detection image that were visually inspected by at least one member of the FRESCO team, along with the inferred properties from SED fitting, such as the retained stellar mass \(M_{\star}\), the UV slope \(\beta\), the star formation rate (SFR), and the age at which 50% of the total stellar mass formed. To highlight the distinct properties galaxies may have within the overdensity or its structures and outside of it, we compare two samples.
One contains all the galaxies in the \(z=5.17\)-5.20 overdensity that have been identified as being part of a structure (the "in" sample, with 40 sources), while the second contains all the remaining galaxies from \(z=4.9\) to \(z=5.5\) not identified in such structures (the "out" sample, with 86 sources), also excluding the redshift bin \(z=5.17\)-5.20. We also apply a magnitude cut of 27 AB mag in F182M. While this will limit our inferences to galaxies intrinsically brighter than \(-20\) mag, this choice was made to alleviate the uncertainties on the inferred parameters from faint sources, particularly for stellar masses, and to avoid a bias towards strong H\(\alpha\) emitters only. We report the output of the Prospector fits for these two samples with kernel density estimate (KDE) plots, as shown in Figure 8. We also compare SFRs inferred from H\(\alpha\) using Eq. 1. We use the H\(\alpha\) flux corrected for dust attenuation with the empirical relation linking it to \(\beta\) slopes from Shivaei et al. (2020). Finally, we use a two-sample Kolmogorov-Smirnov (KS) test to quantify the dissimilarities between our two samples of sources. Based on the sizes of our two samples, we require a critical D-value of 0.3311 to demonstrate at 99.5% confidence (\(\alpha=0.005\)) that the distributions of the properties for sources inside and outside overdensities are different. While we do not find clear differences (at 99.5% confidence) in the distributions of the \(\beta\) slopes and the ages at 50% of the SFH, the D-values and p-values for the masses, SFRs, and \(UV\) luminosities show clear differences between the two populations of galaxies at this redshift. Our results suggest that the galaxies in the overdensity are more evolved and more massive, with higher SFRs, than the galaxies outside of these structures. The median stellar mass \(M_{\star}\) in the overdensity is \(\sim 2.5\times 10^{9}M_{\odot}\) vs. \(\sim 8\times 10^{8}M_{\odot}\) for the galaxies outside, a factor of \(\sim\)3 difference. Earlier work by Steidel et al. (2005) had shown that the masses and ages of galaxies in a similarly prominent (i.e., \(\delta_{g}\sim 7\)) overdensity at \(z=2.3\) were twice as high as those outside the overdensity, in line with our results. More evidence from FRESCO for the most extreme sources being found in overdense environments can be found in the Xiao et al. (2023) H\(\alpha\) selection of highly obscured sources over the GOODS North field. In particular, the five \(z=4.9\)-6.6 sources with the highest estimated stellar masses and SFRs all lie within the overdensities at \(z\sim 5.16\)-5.20 and \(z\sim 5.30\). ## 4 Utility of Wide-Area Grism Surveys for Mapping the Build-Up of Dust-Obscured Galaxies at \(z>5\) The identification of a 13\(\sigma\) H\(\alpha\) emission line in the well-known dust-obscured galaxy HDF850.1 clearly demonstrates the enormous utility of the NIRCam grism for deriving redshifts for dust-obscured galaxies at \(z>4\) and mapping out the build-up of these galaxies from early times. Another particularly exciting example of such a source, as it relates to HDF850.1, is shown in Figure 9 and has been given the source ID S3. That source has a redshift \(z=5.179\) derived from the detection of both H\(\alpha\) (\(>\)20\(\sigma\)) and the [SII]\({}_{6719,6730}\) doublet (\(\sim\)9\(\sigma\): Xiao et al., 2023). Given the similar redshift and spatial proximity, S3 appears to be part of the same extended structure as HDF850.1. Remarkably, this source shows even more extreme stellar masses and SFRs than HDF850.1 (Xiao et al., 2023).
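The critical D-value quoted above follows from the standard large-sample two-sample KS formula; an illustrative check (not the paper's code), using the "in" and "out" sample sizes of 40 and 86:

```python
import numpy as np

def ks_critical_d(n1, n2, alpha=0.005):
    """Large-sample critical D for a two-sample KS test.

    Uses c(alpha) = sqrt(-0.5 * ln(alpha / 2)).
    """
    c_alpha = np.sqrt(-0.5 * np.log(alpha / 2.0))
    return c_alpha * np.sqrt((n1 + n2) / (n1 * n2))

print(f"{ks_critical_d(40, 86):.4f}")  # ~0.331, matching the quoted 0.3311
```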
Earlier, Oesch et al. (2023) presented a third example of such a highly obscured source with a detected H\(\alpha\) line in their Figure 10. In parallel with the current study, Xiao et al. (2023) present a much larger sample of obscured H\(\alpha\) emitters at \(z>5\), leveraging the FRESCO NIRCam grism data to derive spectroscopic redshifts. The existence of large numbers of obscured galaxies with detectable H\(\alpha\) emission may arise because of prevalent merging activity amongst the brightest far-IR sources (e.g. Clements et al., 1996; Tacconi et al., 2008) and the possibility that the merging activity could create less obscured sight lines by which H\(\alpha\) photons could escape from galaxies (e.g., Le Reste et al., 2023). One of the components in a merging system might also be subject to substantially less dust obscuration, which would also enhance the detectability of IR luminous sources. Note that many ultra-luminous far-IR bright galaxies at \(z\sim 1\)–3 even reveal the presence of escaping Lyman-\(\alpha\) photons (e.g. Chapman et al., 2003, 2005) through sensitive rest-\(UV\) spectroscopy. Given that even Ly\(\alpha\) photons can escape such sources, the ubiquity of H\(\alpha\) line emission from \(z>5\) ULIRGs is less surprising. Of course, the non-detection of H\(\alpha\) in the rather extreme star-forming source SPT0311-58 at \(z=6.900\) with MIRI (Alvarez-Marquez et al., 2023b) and the present non-detection of the northern component to HDF850.1 suggest that not every highly obscured source will be well detected in H\(\alpha\) in NIRCam grism observations, but the detection of H\(\alpha\) at high significance (\(>\)10\(\sigma\)) from the southern component to HDF850.1, GN10, and a handful of other far-IR luminous galaxies with the FRESCO grism observations (Xiao et al., 2023) suggests that the selection of dusty star-forming galaxies could be moderately complete in very wide-area grism observations. Given the identification of 7 highly obscured galaxies populating the \(z=5.15\)–5.32 structures over the GOODS North FRESCO field (Xiao et al., 2023), the extension of such a survey over much wider areas, e.g., \(>\)600 arcmin\({}^{2}\), has the potential to identify \(>\)70 obscured far-IR bright star-forming galaxies at \(z>5\) while simultaneously allowing for a characterization of the structures in which the massive galaxies are forming. Such a survey would yield more far-IR luminous galaxies at \(z>5\) than even the \(\approx\)42 (707 sources \(\times\approx\)6% at \(z>4\)) far-IR bright galaxies estimated to lie at \(z>4\) (Dudzevičiūtė et al., 2020) over the 1 deg\({}^{2}\) AS2UDS program (Stach et al., 2019). Such a program would also yield many more IR-bright sources at \(z>4\) than the \(\approx\)4 far-IR bright galaxies identified over 184 arcmin\({}^{2}\) from the 2mm MORA program (Casey et al., 2021). Even scaling a MORA-like program to 600 arcmin\({}^{2}\) would only result in a yield of \(\approx\)13 \(z>4\) galaxies.

## 5 Summary

In this paper, we present the detection of a 13\(\sigma\) H\(\alpha\) line for HDF850.1 in the NIRCam F444W grism observations over the GOODS North field from FRESCO (Oesch et al., 2023), recovering a redshift of 5.188\(\pm\)0.001.
Detection of H\(\alpha\) in HDF850.1 is particularly noteworthy, given how obscured the star formation from the source is. HDF850.1 was one of the first submm galaxies to be identified with SCUBA (Hughes et al., 1998) and evaded efforts to pin down its redshift for \(>\)10 years until the Plateau de Bure Interferometer secured the detection of [CII] and various CO lines from a spectral scan (Walter et al., 2012). In addition, HDF850.1 is clearly detected in the F444W imaging observations available over the source with NIRCam, with the emission segregated into a distinct northern and southern component. These two distinct components were also evident in the earlier observations of [CII] from HDF850.1 with PdBI (Neri et al., 2014). Modeling the SEDs of the two components of HDF850.1 based on the available HST + NIRCam F182M, F210M, and F444W imaging observations, we find a much higher SFR, stellar mass, and dust obscuration for the northern component than the southern component. The majority of the H\(\alpha\) emission for HDF850.1 appears consistent with originating from the southern component, not only due to the spatial localization of the emission but also due to its showing a very similar \(\Delta v\)\(\sim\)130 km/s velocity offset to that seen in the southern component from [CII] emission (Neri et al., 2014). Comparison of the SFR inferred from the observed H\(\alpha\) emission with that seen from [CII] is suggestive of the H\(\alpha\) emission from the southern component being 93\(\pm\)3% attenuated and that from the northern component \(>\)98% attenuated. Leveraging redshift determinations possible from the FRESCO NIRCam grism observations in F444W over the GOODS North field, we note the existence of 100 galaxies in total in the \(z=5.17-5.20\) interval, indicative of a huge 8\(\times\) overdensity of galaxies in that redshift interval relative to the \(z=5.0-6.0\) population of H\(\alpha\) emitters we see over GOODS North. Earlier work by Walter et al. (2012), Arrabal Haro et al. (2018), and Calvi et al. (2021) had previously demonstrated the existence of a substantial overdensity around HDF850.1, but with a much smaller number of spectroscopically confirmed sources than we find as part of this study. Taking advantage of both the spatial and redshift information we have for galaxies in the \(z=5.17\)–5.20 and slightly higher redshift (\(z=5.22\)–5.31) overdensities, we have organized sources into 18 extended structures and computed radii, masses, and velocity dispersions for the structures. The median 1\(\sigma\) dispersions of the identified structures in redshift and \(v_{los}\) are 0.0035 and 167 km/s, respectively, which is only slightly (18\(\pm\)8%) lower than what Kashino et al. (2022) find for the extended structures in the J0100+2802 field at \(z\sim 6.19\) and \(z\sim 6.33\). Interestingly, the extended structures we find around HDF850.1 extend over an 18 cMpc \(\times\) 18 cMpc \(\times\) 15 cMpc comoving volume, very similar to the \(\sim\)18 cMpc comoving size Chiang et al. (2017) expect for protoclusters based on an analysis of various semi-analytic galaxy formation results building on the Millennium simulation (Springel et al., 2005; see §3.4.3). The case for the \(z=5.17\)–5.20 overdensity being the progenitor of a \(>\)10\({}^{14}\)\(M_{\odot}\)\(z\sim 0\) galaxy cluster is further strengthened by noting that within the FRESCO GOODS North volume we would expect four \(>\)10\({}^{14}\)\(M_{\odot}\) clusters and one \(>\)10\({}^{14.4}\)\(M_{\odot}\) cluster to form by \(z\sim 0\).
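As a quick arithmetic check of the dispersion values just quoted, a small redshift spread converts to a line-of-sight velocity dispersion via \(\sigma_{v}\approx c\,\sigma_{z}/(1+z)\); a minimal sketch:

```python
# Sketch: convert the quoted median redshift dispersion of the structures
# into a line-of-sight velocity dispersion via sigma_v ~ c * sigma_z / (1+z).
C_KM_S = 299_792.458  # speed of light in km/s

def vlos_dispersion(sigma_z: float, z: float) -> float:
    """Line-of-sight velocity dispersion in km/s for a small redshift spread."""
    return C_KM_S * sigma_z / (1.0 + z)

# sigma_z = 0.0035 at z ~ 5.19 gives ~169 km/s, consistent with the
# 167 km/s median dispersion quoted above.
print(round(vlos_dispersion(0.0035, 5.19)))
```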
Additionally, we have made a systematic comparison of galaxy masses, SFRs, \(UV\) luminosities, ages, and apparent dust extinctions inside the \(z=5.17\)–5.20 overdensity to those outside the \(z=5.17\)–5.20 overdensity and other overdensities in the field. We find strong evidence (\(>\)3\(\sigma\)) that galaxies inside the overdensities have higher stellar masses, SFRs, and \(UV\) luminosities than those outside these overdensities. This conclusion is strengthened by the inclusion of optically faint, massive, high-SFR dust-obscured galaxies at \(z=\)5.0–6.0, almost all of which lie inside some overdensity within GOODS North.

Figure 9: (_upper panels_) Color composite image of another highly obscured star-forming source S3 at \(z=5.179\) in the same extended structure #3 as HDF850.1, as seen in a false color image (60"\(\times\)30") made with the _JWST_ NIRCam F182M, F210M, and F444W data. The redshift \(z=5.179\) of this source is based on the detection of both H\(\alpha\) (\(>\)20\(\sigma\)) and the [SII]\({}_{6719,6730}\) doublet (\(\sim\)9\(\sigma\)) in the FRESCO grism data. Another H\(\alpha\) emitter (ID 3946) at \(z=5.177\) from the same extended structure (indicated with the yellow star) also lies within this same footprint and could reside in the same dark matter halo as S3 (being separated by 7" [or 0.26 cMpc] in the plane of the sky). Redshift measurements have been obtained for a large number of other highly obscured galaxies over the \(\sim\)124 arcmin\({}^{2}\) FRESCO mosaic using detected H\(\alpha\) lines (Xiao et al., 2023), with typical S/N's of \(>\)10 for the sources with the highest inferred SFRs. The detection of H\(\alpha\) line emission from sources like HDF850.1 and S3 – and many similarly dust-obscured and high-SFR sources – shows the huge potential wide-area NIRCam grism observations have for mapping the build-up of massive, dust-enshrouded star forming galaxies in the early universe.

In the future, it should be possible to significantly expand the number of dusty star-forming galaxies identified in the \(z>5\) universe thanks to on-going NIRCam grism observations from EIGER (Kashino et al., 2022) and ASPIRE (Wang et al., 2023), a new NIRCam grism program from the MIRI GTO team over GOODS South (program ID 4549), as well as deeper NIRCam grism observations approved over a deep NIRCam parallel field (program ID 4540) and various HFF clusters (program IDs 2883, 3516, 3538). Of course, to obtain the strongest constraints, a substantially wider area NIRCam program would be ideal, especially over areas that already have sensitive observations of the far-IR continuum, as has been obtained by the ASPIRE program over clusters (Wang et al., 2023), the ALMA GOODS program over the CANDELS Deep region of GOODS South (Franco et al., 2020), and the ex-MORA program over the COSMOS field (Casey et al., 2021).

## Acknowledgements

We are grateful to Roberto Neri and collaborators for providing us with spatially resolved information on both the dust-continuum and [CII] line emission from their high spatial resolution PdBI observations. RJB acknowledges support from NWO grants 600.065.140.11N211 (vrije competitie) and TOP grant TOP1.16.057. The Cosmic Dawn Center (DAWN) is funded by the Danish National Research Foundation under grant No. 140. Cloud-based data processing and file storage for this work are provided by the AWS Cloud Credits for Research program.
Support for this work was provided by NASA through grant JWST-GO-01895 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. RPN acknowledges funding from JWST programs GO-1933 and GO-2279. Support for this work was also provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51515.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. MS acknowledges support from the CIDEGENT/2021/059 grant and from project PID2019-109592GB-100/AEI/10.13039/501100011033 from the Spanish Ministerio de Ciencia e Innovación - Agencia Estatal de Investigación. This study forms part of the Astrophysics and High Energy Physics programme and was supported by MCIN with funding from European Union NextGenerationEU (PRTR-C17.I1) and by Generalitat Valenciana under project no. ASFAE/2022/025. RAM acknowledges support from the ERC Advanced Grant 740246 (Cosmic_Gas) and the Swiss National Science Foundation through project grant 200020_207349. This work is based on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1895. This paper made use of several publicly available software packages. We are indebted to the respective authors for their work: IPython (Perez & Granger, 2007), matplotlib (Hunter, 2007), numpy (Oliphant, 2015), scipy (Virtanen et al., 2020), jupyter (Kluyver et al., 2016), Astropy (Astropy Collaboration et al., 2013, 2018), grizli (v1.7.11; Brammer, 2018; Brammer et al., 2022), EAZY (Brammer et al., 2008), and SExtractor (Bertin & Arnouts, 1996).

## Data Availability

All data used here are available from the Barbara A. Mikulski Archive for Space Telescopes (MAST: [https://mast.stsci.edu](https://mast.stsci.edu)), both in the form of raw data and high-level science products.
2301.00175
Bounded Littlewood identity related to alternating sign matrices
An identity that is reminiscent of the Littlewood identity plays a fundamental role in recent proofs of the facts that alternating sign triangles are equinumerous with totally symmetric self-complementary plane partitions and that alternating sign trapezoids are equinumerous with holey cyclically symmetric lozenge tilings of a hexagon. We establish a bounded version of a generalization of this identity. Further, we provide combinatorial interpretations of both sides of the identity. The ultimate goal would be to construct a combinatorial proof of this identity (possibly via an appropriate variant of the Robinson-Schensted-Knuth correspondence) and its unbounded version as this would improve the understanding of the relation between alternating sign trapezoids and plane partition objects.
Ilse Fischer
2022-12-31T10:52:12Z
http://arxiv.org/abs/2301.00175v2
# Bounded Littlewood identity related to alternating sign matrices ###### Abstract. An identity that is reminiscent of the Littlewood identity plays a fundamental role in recent proofs of the facts that alternating sign triangles are equinumerous with totally symmetric self-complementary plane partitions and that alternating sign trapezoids are equinumerous with holey cyclically symmetric lozenge tilings of a hexagon. We establish a bounded version of a generalization of this identity. Further, we provide combinatorial interpretations of both sides of the identity. The ultimate goal would be to construct a combinatorial proof of this identity (possibly via an appropriate variant of the Robinson-Schensted-Knuth correspondence) and its unbounded version as this would improve the understanding of the relation between alternating sign trapezoids and plane partition objects. The author acknowledges financial support from the Austrian Science Foundation FWF, grant P34931. ## 1. Introduction Littlewood's identity reads as \[\sum_{\lambda}s_{\lambda}(X_{1},\ldots,X_{n})=\prod_{i=1}^{n}\frac{1}{1-X_{i}}\prod_{1\leq i<j\leq n}\frac{1}{1-X_{i}X_{j}}, \tag{1.1}\] where \(s_{\lambda}(X_{1},\ldots,X_{n})\) denotes the Schur polynomial associated with the partition \(\lambda\) and the sum is over all partitions \(\lambda\). In fact, the identity was already known to Schur, see [14, p. 163] or [14, p. 456], and written down by Littlewood in [13, p. 238]. This identity has a beautiful combinatorial proof that is based on the Robinson-Schensted-Knuth correspondence and exploits its symmetry; see Appendix A and, e.g., [10] for details. In recent papers [11, 12, 13], where "alternating sign matrix objects" (namely, _alternating sign triangles_ and _alternating sign trapezoids_) have been connected to certain "plane partition objects" (namely, _totally symmetric self-complementary plane partitions_ and _column strict shifted plane partitions of fixed class_, which generalize the better known _descending plane partitions_), a very similar identity played a crucial role in establishing this still mysterious [10] connection. None of these proofs is of a combinatorial nature, and they all involve rather complicated calculations, so the study of the combinatorics of our Littlewood-type identity is very likely to lead to a better understanding of the combinatorics of this relation. In order to formulate the identity, we rewrite (1.1) using the bialternant formula for the Schur polynomial [10, 7.15.1] \[s_{(\lambda_{1},\ldots,\lambda_{n})}(X_{1},\ldots,X_{n})=\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{\lambda_{j}+n-j}\right)}{\prod_{1\leq i<j\leq n}(X_{i}-X_{j})}=\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{i=1}^{n}X_{i}^{\lambda_{i}+n-i}\right]}{\prod_{1\leq i<j\leq n}(X_{i}-X_{j})},\] allowing zeros at the end of \((\lambda_{1},\ldots,\lambda_{n})\), with \[\mathbf{ASym}_{X_{1},\ldots,X_{n}}f(X_{1},\ldots,X_{n})=\sum_{\sigma\in\mathcal{S}_{n}}\operatorname{sgn}\sigma\cdot f(X_{\sigma(1)},\ldots,X_{\sigma(n)})\] as follows. \[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}}X_{1}^{k_{1}}X_{2}^{k_{2}}\cdots X_{n}^{k_{n}}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}=\prod_{i=1}^{n}\frac{1}{1-X_{i}}\prod_{1\leq i<j\leq n}\frac{1}{1-X_{i}X_{j}}\] We have used the following identity in [11, 12]. There it is proved by induction with respect to \(n\).
\[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}(1+X_{j}+X_{i}X_{j})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}}X_{1}^{k_{1}}X_{2}^{k_{2}}\cdots X_{n}^{k_{n}}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}\] \[=\prod_{i=1}^{n}\frac{1}{1-X_{i}}\prod_{1\leq i<j\leq n}\frac{1+X_{i}+X_{j}}{1-X_{i}X_{j}} \tag{1.2}\] In [10], an additional parameter has been introduced, which has to be set to \(1\) to obtain (1.2). \[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}(Q+(Q-1)X_{i}+X_{j}+X_{i}X_{j})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}}\prod_{i=1}^{n}\left(\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right)^{k_{i}}\right]}{\prod_{1\leq i<j\leq n}\left(X_{j}-X_{i}\right)}\\ =\prod_{i=1}^{n}\frac{Q+X_{i}}{Q-X_{i}^{2}}\frac{\prod_{1\leq i<j\leq n}(Q(1+X_{i})(1+X_{j})-X_{i}X_{j})}{\prod_{1\leq i<j\leq n}(Q-X_{i}X_{j})}. \tag{1.3}\] Among other things, we will see in this paper that we can also introduce another parameter in (1.2) as follows: \[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}(1+wX_{i}+X_{j}+X_{i}X_{j})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}}X_{1}^{k_{1}}X_{2}^{k_{2}}\cdots X_{n}^{k_{n}}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}\\ =\prod_{i=1}^{n}\frac{1}{1-X_{i}}\prod_{1\leq i<j\leq n}\frac{1+X_{i}+X_{j}+wX_{i}X_{j}}{1-X_{i}X_{j}}, \tag{1.4}\] in fact, there is even the following common generalization of (1.3) and (1.4). \[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}(Q+(Q+r)X_{i}+X_{j}+X_{i}X_{j})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}}\prod_{i=1}^{n}\left(\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right)^{k_{i}}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}\\ =\prod_{i=1}^{n}\frac{Q+X_{i}}{Q-X_{i}^{2}}\frac{\prod_{1\leq i<j\leq n}(Q(1+X_{i})(1+X_{j})+rX_{i}X_{j})}{\prod_{1\leq i<j\leq n}(Q-X_{i}X_{j})} \tag{1.5}\] The main purpose of this paper is to derive bounded versions of these identities and to provide combinatorial interpretations of the identities that would allow us to approach them with a combinatorial proof, possibly by a variant of the Robinson-Schensted-Knuth correspondence that mimics the proof for the classical Littlewood identity. By bounded version we mean that the sums \(\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}}\) are restricted to, say, \(\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m}\). Macdonald [14] has provided such a bounded version of the classical identity (1.1), namely \[\sum_{\lambda\subseteq(m^{n})}s_{\lambda}(X_{1},\ldots,X_{n})=\sum_{0\leq k_{1}\leq k_{2}\leq\ldots\leq k_{n}\leq m}s_{(k_{n},k_{n-1},\ldots,k_{1})}(X_{1},\ldots,X_{n})\\ =\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{j-1}-X_{i}^{m+2n-j}\right)}{\prod_{i=1}^{n}(1-X_{i})\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(1-X_{i}X_{j})}, \tag{1.6}\] which he used to prove MacMahon's conjecture. Very recent work on bounded Littlewood identities can be found in [13]. More specifically, we will prove the following.
**Theorem 1.1**.: _For \(n\geq 1\), we have_ \[\frac{1}{\prod\limits_{1\leq i<j\leq n}(X_{j}-X_{i})}\mathbf{ASym}_{X_{1},\ldots,X_{n}}\Bigg{[}\prod\limits_{1\leq i<j\leq n}(Q+(Q+r)X_{i}+X_{j}+X_{i}X_{j})\\ \times\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m}\left(\frac{X_{1}(1+X_{1})}{Q+X_{1}}\right)^{k_{1}}\left(\frac{X_{2}(1+X_{2})}{Q+X_{2}}\right)^{k_{2}}\cdots\left(\frac{X_{n}(1+X_{n})}{Q+X_{n}}\right)^{k_{n}}\Bigg{]}\\ =\frac{\det_{1\leq i,j\leq n}\left(a_{j,m,n}(Q,r;X_{i})\right)}{\prod\limits_{1\leq i\leq j\leq n}(Q-X_{i}X_{j})\prod\limits_{1\leq i<j\leq n}(X_{j}-X_{i})} \tag{1.7}\] _with_ \[a_{j,m,n}(Q,r;X)=(1+QX^{-1})X^{j}(1+X)^{j-1}(Q+rX+QX)^{n-j}\\ -X^{2n}Q^{-n}\left(\frac{(1+X)X}{Q+X}\right)^{m}(1+X)\left(QX^{-1}\right)^{j}(1+QX^{-1})^{j-1}(Q+rQX^{-1}+Q^{2}X^{-1})^{n-j}.\] Setting \(Q=1\) and \(r=w-1\), we obtain, after simplifying the right-hand side, the following corollary. **Corollary 1.2**.: _For \(n\geq 1\), we have_ \[\frac{1}{\prod\limits_{1\leq i<j\leq n}(X_{j}-X_{i})}\mathbf{ASym}_{X_{1},\ldots,X_{n}}\Bigg{[}\prod\limits_{1\leq i<j\leq n}(1+wX_{i}+X_{j}+X_{i}X_{j})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m}X_{1}^{k_{1}}X_{2}^{k_{2}}\cdots X_{n}^{k_{n}}\Bigg{]}\\ =\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{j-1}(1+X_{i})^{j-1}(1+wX_{i})^{n-j}-X_{i}^{m+2n-j}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}\right)}{\prod\limits_{i=1}^{n}(1-X_{i})\prod\limits_{1\leq i<j\leq n}(1-X_{i}X_{j})(X_{j}-X_{i})}. \tag{1.8}\] In the second part of the paper, we will then provide combinatorial interpretations for both sides of the identity in the corollary. **Outline**.: In Section 2, we give a proof of Theorem 1.1. In Appendix A, we discuss a point of view on the combinatorics of the classical Littlewood identity (1.1) and its bounded version (1.6) that is beneficial for possible combinatorial proofs of the Littlewood-type identities that we establish in this paper. Recall that this is of interest because such identities have been used several times [17, 18, 19] to establish connections between alternating sign matrix objects and plane partition objects. To approach this, we offer combinatorial interpretations of the left-hand sides of (1.4) and (1.8) in Section 3 and in Appendix B. Then, in Section 4, we offer a combinatorial interpretation of the right-hand sides of (1.4) and (1.8). These interpretations are nicest in the cases \(w=0,1\). In Section 5, we offer an outlook on related work on the cases \(w=0,-1\), which will appear in a forthcoming paper with Florian Schreier-Aigner. ## 2. Proof of Theorem 1.1 Bressoud's elementary proof [1] of (1.6) turned out to be useful for obtaining the following (still elementary, but admittedly very complicated) proof of Theorem 1.1. Conceptually the proof is not difficult: We use induction with respect to \(n\) and show that both sides satisfy the same recursion. ### The case \(m\to\infty\) We start by proving the \(m\to\infty\) case of Theorem 1.1. This is equivalent to proving that \[\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}(Q+(Q+r)X_{i}+X_{j}+X_{i}X_{j})\prod_{i=1}^{n}\left(\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right)^{i-1}\prod_{i=1}^{n}\frac{1}{1-\prod_{j=i}^{n}\frac{X_{j}(1+X_{j})}{Q+X_{j}}}\right]\\ =\prod_{i=1}^{n}\frac{Q+X_{i}}{Q-X_{i}^{2}}\frac{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(Q(1+X_{i})(1+X_{j})+rX_{i}X_{j})}{\prod_{1\leq i<j\leq n}(Q-X_{i}X_{j})}, \tag{2.1}\] which is just (1.5) multiplied on both sides with \(\prod_{1\leq i<j\leq n}(X_{j}-X_{i})\).
To see this, we rewrite the left-hand side of (1.7) by using the summation formula for the geometric series \(n\) times. As \(m\to\infty\), \(a_{j,m,n}(Q,r;X_{i})\) simplifies to \[(1+QX_{i}^{-1})X_{i}^{j}(1+X_{i})^{j-1}(Q+rX_{i}+QX_{i})^{n-j}\] in a formal power series sense, and, since \[\det_{1\leq i,j\leq n}\left((1+QX_{i}^{-1})X_{i}^{j}(1+X_{i})^{j-1}(Q+rX_{i}+QX_{i})^{n-j}\right)\] can be computed using the Vandermonde determinant evaluation, we are eventually led to the right-hand side of (2.1). We denote by \(L_{n}(X_{1},\ldots,X_{n})\) the left-hand side of (2.1) and observe that the following recursion is satisfied: \[L_{n}(X_{1},\ldots,X_{n})=\sum_{k=1}^{n}(-1)^{k-1}\frac{1}{1-\prod_{i=1}^{n}\frac{X_{i}(1+X_{i})}{Q+X_{i}}}L_{n-1}(X_{1},\ldots,\widehat{X_{k}},\ldots,X_{n})\\ \times\prod_{1\leq j\leq n,\atop j\neq k}\frac{X_{j}(1+X_{j})(Q+(Q+r)X_{k}+X_{j}+X_{k}X_{j})}{Q+X_{j}}, \tag{2.2}\] where \(\widehat{X_{k}}\) means that we omit \(X_{k}\). Indeed, suppose more generally that \[P(X_{1},\ldots,X_{n})=\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}s(X_{i},X_{j})\prod_{i=1}^{n}t(X_{i})^{i-1}\prod_{i=1}^{n}\frac{1}{1-\prod_{j=i}^{n}u(X_{j})}\right],\] then
\end{split}\] We multiply by \(\Big{(}1-\prod_{i=1}^{n}\frac{X_{i}(1+X_{i})}{Q+X_{i}}\Big{)}\prod_{1\leq i \leq j\leq n}(Q-X_{i}X_{j})\) and obtain \[\begin{split}&\left(\prod_{i=1}^{n}(Q+X_{i})-\prod_{i=1}^{n}X_{i }(1+X_{i})\right)\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(Q(1+X_{i})(1+X_{j})+rX_{ i}X_{j})\\ &=\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(Q(1+X_{i})(1+X_{j})+rX_{i} X_{j})\\ &\qquad\times\sum_{k=1}^{n}(Q-X_{k}^{2})\prod_{\begin{subarray}{ c}1\leq j\leq n\\ j\neq k\end{subarray}}\frac{X_{j}(X_{j}+1)(Q+(Q+r)X_{k}+X_{j}+X_{k}X_{j})(Q-X_{j}X_{ k})}{(X_{j}-X_{k})(Q(1+X_{j})(1+X_{k})+rX_{j}X_{k})}.\end{split} \tag{2.4}\] For each \(s\in\{1,2,\dots,n\}\), both sides are polynomials in \(X_{s}\) of degree no greater than \(2n\). It is not hard to see that both sides vanish for \(X_{s}=X_{t}\) and \(X_{s}=-\frac{Q(1+X_{t})}{Q+QX_{t}+rX_{t}}\) for any \(t\in\{1,2,\dots,n\}\setminus\{s\}\). Moreover, it is also not hard to see that the evaluations also agree for \(X_{s}=0,-1\), which gives a total of \(2n\) evaluations for each \(X_{s}\). It follows that the difference of the left-hand side and the right-hand side is up to a constant in \(\mathbb{Q}(Q,r)\) equal to \[\prod_{i=1}^{n}X_{i}(1+X_{i})\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(Q(1+X_{i})(1+X _{j})+rX_{i}X_{j}). \tag{2.5}\] To show that this constant is indeed zero, we consider the following specialization \[(X_{1},X_{2},X_{3},X_{4},\ldots)=\left(X_{1},\frac{Q}{X_{1}},X_{3},\frac{Q}{X_{3} },\ldots\right).\] Note first that (2.5) does not vanish at this specialization, and, therefore, it suffices to show that the left-hand side and the right-hand side of (2.4) agree on this specialization. If \(n\) is even, this is particularly easy to see, because both sides vanish (on the right-hand side all summands vanish, which is due to the factor \(Q-X_{j}X_{k}\)). If \(n\) is odd, then only the last summand on the right-hand side remains and it is not hard to see that it is equal to the left-hand side. ### The general case We rewrite the identity from Theorem 1.1 that we need to prove as follows. 
\[\det_{1\leq i,j\leq n}\left(a_{j,m,n}(Q,r;X_{i})\right)\\ =\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{i=1}^{n}(Q-X_{i}^{ 2})\prod_{1\leq i<j\leq n}(Q+(Q+r)X_{i}+X_{j}+X_{i}X_{j})(Q-X_{i}X_{j})\right.\\ \times\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m}\left(\frac{X_{1 }(1+X_{1})}{Q+X_{1}}\right)^{k_{1}}\left(\frac{X_{2}(1+X_{2})}{Q+X_{2}}\right) ^{k_{2}}...\left(\frac{X_{n}(1+X_{n})}{Q+X_{n}}\right)^{k_{n}}\right] \tag{2.6}\] We set \[F(m;X_{1},\ldots,X_{n})=\prod_{i=1}^{n} \left(Q-X_{i}^{2}\right)\prod_{1\leq i<j\leq n}(Q+(Q+r)X_{i}+X_{j }+X_{i}X_{j})(Q-X_{i}X_{j})\\ \times\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m}\left(\frac{X_{1 }(1+X_{1})}{Q+X_{1}}\right)^{k_{1}}\left(\frac{X_{2}(1+X_{2})}{Q+X_{2}}\right) ^{k_{2}}...\left(\frac{X_{n}(1+X_{n})}{Q+X_{n}}\right)^{k_{n}}\] and observe that \[F(m;X_{1},\ldots,X_{n})=(Q-X_{1}^{2})\prod_{j=2}^{n} (Q+(Q+r)X_{1}+X_{j}+X_{1}X_{j})(Q-X_{1}X_{j})\\ \times\sum_{l=0}^{m}\frac{Q+X_{1}}{X_{1}(1+X_{1})}\left(\prod_{i= 1}^{n}\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right)^{l+1}F(m-1-l;X_{2},\ldots,X_{n}).\] We set \[A(m;X_{1},\ldots,X_{n})=\mathbf{ASym}_{X_{1},\ldots,X_{n}}F(m;X_{1},\ldots,X_ {n}),\] and observe that \[A(m;X_{1},\ldots,X_{n})=\sum_{k=1}^{n}\sum_{l=0}^{m}(-1)^{k+1}(Q- X_{k}^{2})\frac{Q+X_{k}}{X_{k}(1+X_{k})}\left(\prod_{i=1}^{n}\frac{X_{i}(1+X_{i})}{ Q+X_{i}}\right)^{l+1}\\ \times A(m-l-1;X_{1},\ldots,\widehat{X_{k}},\ldots,X_{n})\\ \times\prod_{1\leq i\leq n,i\neq k}(Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k}) (Q-X_{i}X_{k}),\] by the same argument that has led to (2.3). By the induction hypothesis, we have \[A(m-l-1;X_{1},\ldots,\widehat{X_{k}},\ldots,X_{n})=\det_{\begin{subarray}{c}1 \leq i\leq n,i\neq k\\ 1\leq j\leq n-1\end{subarray}}\left(a_{j,m-l-1,n-1}(Q,r;X_{i})\right).\] Therefore, the right-hand side of (2.6) is \[\sum_{k=1}^{n}(-1)^{k+1}(Q-X_{k}^{2})\prod_{1\leq i\leq n,i\neq k}(Q+ (Q+r)X_{k}+X_{i}+X_{i}X_{k})(Q-X_{i}X_{k})\\ \times\sum_{l=0}^{m}\left(\frac{X_{k}(1+X_{k})}{Q+X_{k}}\right)^{l }\det_{\genfrac{}{}{0.0pt}{}{1\leq i\leq n,i\neq k}{1\leq j\leq n-1}}\left( \left(\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right)^{l+1}a_{j,m-l-1,n-1}(Q,r;X_{i}) \right) \tag{2.7}\] and we need to show that it is equal to \(\det_{1\leq i,j\leq n}\left(a_{j,m,n}(Q,r;X_{i})\right)\). Noting that \[\left(\frac{X(1+X)}{Q+X}\right)^{l+1}a_{j,m-l-1,n-1}(Q,r;X)\\ =(1+QX^{-1})X^{j+l+1}(1+X)^{j+l}(Q+rX+QX)^{n-1-j}(Q+X)^{-l-1}\\ -X^{2n-2}Q^{-n+1}\left(\frac{(1+X)X}{Q+X}\right)^{m}(1+X)\left( QX^{-1}\right)^{j}(1+QX^{-1})^{j-1}(Q+rQX^{-1}+Q^{2}X^{-1})^{n-1-j}\\ =X^{j+l}(1+X)^{j+l}(Q+X)^{-l}(Q+rX+QX)^{n-1-j}\\ -X^{-j+m+n}(1+X)^{m+1}(Q+X)^{j-m-1}(X+r+Q)^{n-1-j}, \tag{2.8}\] we can write the determinant in (2.7) as \[\sum_{\sigma,S}(-1)^{I(\sigma)+|S|}\prod_{i\in S}X_{i}^{-\sigma(i) +m+n}\big{(}1+X_{i}\big{)}^{m+1}\big{(}Q+X_{i}\big{)}^{\sigma(i)-m-1}\big{(}X_ {i}+r+Q\big{)}^{n-1-\sigma(i)}\\ \times\prod_{i\in\overline{S}}X_{i}^{\sigma(i)+l}\big{(}1+X_{i} \big{)}^{\sigma(i)+l}\big{(}Q+X_{i}\big{)}^{-l}\big{(}Q+rX_{i}+QX_{i}\big{)}^{ n-1-\sigma(i)},\] where the sum is over all bijections \(\sigma:\{1,2,\ldots,n\}\setminus\{k\}\to\{1,2,\ldots,n-1\}\), all subsets \(S\) of \(\{1,2,\ldots,n\}\setminus\{k\}\) and \(I(\sigma)\) is the number of all inversions, i.e., pairs \(i,j\in\{1,2,\ldots,n\}\setminus\{k\}\) with \(i<j\) and \(\sigma(i)>\sigma(j)\). Moreover, \(\overline{S}\) denotes the complement of \(S\) in \(\{1,2,\ldots,n\}\setminus\{k\}\). Comparing with (2.7), we multiply by \(\left(\frac{X_{k}(1+X_{k})}{Q+X_{k}}\right)^{l}\) and take the sum over \(l\). 
\[\sum_{\sigma,S}(-1)^{I(\sigma)+|S|}\prod_{i\in S}X_{i}^{-\sigma(i) +m+n}(1+X_{i})^{m+1}(Q+X_{i})^{\sigma(i)-m-1}\big{(}X_{i}+r+Q\big{)}^{n-1- \sigma(i)}\\ \times\prod_{i\in\overline{S}}X_{i}^{\sigma(i)}(1+X_{i})^{\sigma (i)}(Q+rX_{i}+QX_{i})^{n-1-\sigma(i)}\sum_{l=0}^{m}\left(\frac{X_{k}(1+X_{k}) }{Q+X_{k}}\right)^{l}\prod_{i\in\overline{S}}\left(\frac{X_{i}(1+X_{i})}{Q+X_ {i}}\right)^{l}.\] We evaluate the sum and rearrange some terms. \[\sum_{S}(-1)^{|S|}\frac{1-\left(\frac{X_{k}(1+X_{k})}{Q+X_{k}} \right)^{m+1}\prod_{i\in\overline{S}}\left(\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right) ^{m+1}}{1-\frac{X_{k}(1+X_{k})}{Q+X_{k}}\prod_{i\in\overline{S}}\frac{X_{i}(1+ X_{i})}{Q+X_{i}}}\\ \times\prod_{i\in S}X_{i}^{m+n-1}(1+X_{i})^{m+1}(Q+X_{i})^{-m}(X_ {i}+r+Q)^{n-2}\prod_{i\in\overline{S}}X_{i}(1+X_{i})\big{(}Q+rX_{i}+QX_{i})^{n-2} \\ \times\sum_{\sigma}(-1)^{I(\sigma)}\prod_{i\in\bar{S}}(QX_{i}^{-1} )^{\sigma(i)-1}(1+QX_{i}^{-1})^{\sigma(i)-1}(Q+rQX_{i}^{-1}+Q^{2}X_{i}^{-1})^{ -\sigma(i)+1}\\ \times\prod_{i\in\overline{S}}X_{i}^{\sigma(i)-1}\big{(}1+X_{i} \big{)}^{\sigma(i)-1}\big{(}Q+rX_{i}+QX_{i}\big{)}^{-\sigma(i)+1}\] The inner sum is a Vandermonde determinant, which we evaluate. We obtain \[\sum_{S}(-1)^{|S|}\frac{1-\left(\frac{X_{k}(1+X_{k})}{Q+X_{k}} \right)^{m+1}\prod_{i\in\overline{S}}\left(\frac{X_{i}(1+X_{i})}{Q+X_{i}} \right)^{m+1}}{1-\frac{X_{k}(1+X_{k})}{Q+X_{k}}\prod_{i\in\overline{S}}\frac{X_ {i}(1+X_{i})}{Q+X_{i}}}\\ \times\prod_{i\in S}X_{i}^{m+n-1}(1+X_{i})^{m+1}(Q+X_{i})^{-m}(X _{i}+r+Q)^{n-2}\prod_{i\in\overline{S}}X_{i}(1+X_{i})\big{(}Q+rX_{i}+QX_{i})^{ n-2}\\ \times\prod_{1\leq i<j\leq n,i,j\not=k}\left(\frac{Y_{j}(1+Y_{j}) }{Q+rY_{j}+QY_{j}}-\frac{Y_{i}(1+Y_{i})}{Q+rY_{i}+QY_{i}}\right),\] with \(Y_{i}=X_{i}\) if \(i\in\overline{S}\) and \(Y_{i}=QX_{i}^{-1}\) if \(i\in S\). From (2.7), we add the sum over all \(k\) and finally have the full right-hand side of (2.6). We exchange the sum over \(k\) and \(S\): now we sum over all proper subsets \(S\subseteq[n]\) and all \(k\) not in \(S\). If we write \(i\notin S\), then we mean \(i\in\{1,2,\ldots,n\}\setminus S\). 
\[\sum_{S}(-1)^{|S|}\frac{1-\prod_{i\notin S}\left(\frac{X_{i}(1+X_ {i})}{Q+X_{i}}\right)^{m+1}}{1-\prod_{i\notin S}\frac{X_{i}(1+X_{i})}{Q+X_{i}}}\\ \times\prod_{i\in S}X_{i}^{m+n-1}(1+X_{i})^{m+1}(Q+X_{i})^{-m} \big{(}X_{i}+r+Q)^{n-2}\prod_{i\notin S}X_{i}(1+X_{i})\big{(}Q+rX_{i}+QX_{i} \big{)}^{n-2}\\ \times\sum_{k\notin S}(-1)^{k+1}(Q-X_{k}^{2})X_{k}^{-1}\big{(}1+X _{k}\big{)}^{-1}\big{(}Q+rX_{k}+QX_{k}\big{)}^{-n+2}\\ \times\prod_{1\leq i\in s,i\not=k}(Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k}) \big{(}Q-X_{i}X_{k}\big{)}\\ \times\prod_{1\leq i\in j\in s,i\not=k}\left(\frac{Y_{j}(1+Y_{j} )}{Q+rY_{j}+QY_{j}}-\frac{Y_{i}(1+Y_{i})}{Q+rY_{i}+QY_{i}}\right) \tag{2.9}\] We rewrite \[(-1)^{k-1}\prod_{\begin{subarray}{c}1\leq i\leq n,i\neq k\\ i\leq S\end{subarray}}(Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k})(Q-X_{i}X_{k})\] \[\quad=\prod_{\begin{subarray}{c}1\leq i\leq k-1\\ i\leq S\end{subarray}}(Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k})(X_{i}X_{k}-Q)\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}k+1\leq i\leq n\\ i\leq S\end{subarray}}(Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k})(Q-X_{i}X_{k})\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}i\in S\\ i\leq S\end{subarray}}X_{i}(X_{i}+r+Q)(Q+rX_{k}+QX_{k})\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}1\leq i\leq k-1\\ i\leq S\end{subarray}}\left(\frac{X_{k}(1+X_{k})}{Q+rX_{k}+QX_{k}}-\frac{QX_{i }^{-1}(1+QX_{i}^{-1})}{Q+rQX_{i}^{-1}Q^{2}X_{i}^{-1}}\right)\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}k+1\leq i\leq n\\ i\leq S\end{subarray}}\left(\frac{QX_{i}^{-1}(1+QX_{i}^{-1})}{Q+rQX_{i}^{-1}Q^ {2}X_{i}^{-1}}-\frac{X_{k}(1+X_{k})}{Q+rX_{k}+QX_{k}}\right).\] We use this to rewrite (2.9) as follows. \[\sum_{S}(-1)^{|S|}\frac{1-\prod_{i\notin S}\left(\frac{X_{i}(1+X_{ i})}{Q+X_{i}}\right)^{m+1}}{1-\prod_{i\notin S}\frac{X_{i}(1+X_{i})}{Q+X_{i}}} \prod_{i}X_{i}\] \[\quad\quad\times\prod_{i\in S}X_{i}^{m+n-1}(1+X_{i})^{m+1}(Q+X_{ i})^{-m}(X_{i}+r+Q)^{n-1}\prod_{i\notin S}(1+X_{i})(Q+rX_{i}+QX_{i})^{n-1}\] \[\quad\quad\quad\times\sum_{k\notin S}(Q-X_{k}^{2})X_{k}^{-1}(1+X _{k})^{-1}\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}1\leq i\leq k-1\\ i\notin S\end{subarray}}\frac{(Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k})(X_{i}X_{k}-Q)}{( Q+rX_{k}+QX_{k})(Q+rX_{k}+QX_{i})}\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}k+1\leq i\leq n\\ i\leq S\end{subarray}}\frac{(Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k})(Q-X_{i}X_{k})}{( Q+rX_{k}+QX_{k})(Q+rX_{i}+QX_{i})}\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}1\leq i\leq k-1\\ i\leq S\end{subarray}}\left(\frac{X_{k}(1+X_{k})}{Q+rX_{k}+QX_{k}}-\frac{QX_{i }^{-1}(1+QX_{i}^{-1})}{Q+rQX_{i}^{-1}Q^{2}X_{i}^{-1}}\right)\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}k+1\leq i\leq n\\ i\leq S\end{subarray}}\left(\frac{QX_{i}^{-1}(1+QX_{i}^{-1})}{Q+rQX_{i}^{-1}Q^ {2}X_{i}^{-1}}-\frac{X_{k}(1+X_{k})}{Q+rX_{k}+QX_{k}}\right)\] \[\quad\quad\quad\times\prod_{\begin{subarray}{c}1\leq i\leq j \leq n,i\neq k\\ i\leq S\end{subarray}}\left(\frac{Y_{j}(1+Y_{j})}{Q+rY_{j}+QY_{j}}-\frac{Y_{i}(1+Y _{i})}{Q+rY_{i}+QY_{i}}\right)\] This is further equal to \[\begin{split}&\sum_{S}(-1)^{|S|}\frac{1-\prod_{i\in S}\left(\frac{X_{i }(1+X_{i})}{Q+X_{i}}\right)^{m+1}}{1-\prod_{i\in S}\frac{X_{i}(1+X_{i})}{Q+X_{i }}}\prod_{i}X_{i}\\ &\quad\times\prod_{i\in S}X_{i}^{m+n-1}(1+X_{i})^{m+1}(Q+X_{i})^ {-m}(X_{i}+r+Q)^{n-1}\\ &\quad\times\prod_{i\notin S}(1+X_{i})\big{(}Q+rX_{i}+QX_{i})^{n- 1}\\ &\quad\times\prod_{1\leq i<j\leq n,(i,j)\cap S\not\in\partial} \left(\frac{Y_{j}(1+Y_{j})}{Q+rY_{j}+QY_{j}}-\frac{Y_{i}(1+Y_{i})}{Q+rY_{i}+QY _{i}}\right)\\ &\quad\times\sum_{k\notin 
S}(Q-X_{k}^{2})X_{k}^{-1}(1+X_{k})^{-1}\\ &\quad\times\prod_{1\leq i<j\leq n,i,j\notin S\cup\{k\}}\left(\frac{Q+(Q+r)X_{k}+X_{i}+X_{i}X_{k}}{Q+rX_{k}+QX_{i}}-\frac{X_{i}(1+X_{i})}{Q+rX_{i}+QX_{i}}\right)\\ &\quad\times\prod_{1\leq i<j\leq n,i,j\notin S\cup\{k\}}\left(\frac{X_{j}(1+X_{j})}{Q+rX_{j}+QX_{j}}-\frac{X_{i}(1+X_{i})}{Q+rX_{i}+QX_{i}}\right).\end{split} \tag{2.10}\] We divide (2.4) by \(\prod_{i=1}^{n}(Q+rX_{i}+QX_{i})^{n-1}X_{i}(1+X_{i})\), and, after some further modifications, we obtain \[\begin{split}&\left(\prod_{i=1}^{n}\frac{Q+X_{i}}{X_{i}(1+X_{i})}-1\right)\prod_{1\leq i<j\leq n}\left(\frac{X_{j}(1+X_{j})}{Q+rX_{j}+QX_{j}}-\frac{X_{i}(1+X_{i})}{Q+rX_{i}+QX_{i}}\right)\\ &=\sum_{k=1}^{n}(Q-X_{k}^{2})X_{k}^{-1}(1+X_{k})^{-1}\\ &\quad\times\prod_{j=1}^{k-1}\frac{(Q+(Q+r)X_{k}+X_{j}+X_{k}X_{j})(X_{j}X_{k}-Q)}{(Q+rX_{j}+QX_{j})(Q+rX_{k}+QX_{k})}\\ &\quad\times\prod_{j=k+1}^{n}\frac{(Q+(Q+r)X_{k}+X_{j}+X_{k}X_{j})(Q-X_{j}X_{k})}{(Q+rX_{j}+QX_{j})(Q+rX_{k}+QX_{k})}\\ &\quad\times\prod_{1\leq i<j\leq n,i,j\neq k}\left(\frac{X_{j}(1+X_{j})}{Q+rX_{j}+QX_{j}}-\frac{X_{i}(1+X_{i})}{Q+rX_{i}+QX_{i}}\right).\end{split}\] We can use this to replace the sum over all \(k\notin S\) in (2.10) by something simpler. \[\sum_{S}(-1)^{|S|}\frac{1-\prod_{i\notin S}\left(\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right)^{m+1}}{1-\prod_{i\notin S}\frac{X_{i}(1+X_{i})}{Q+X_{i}}}\prod_{i}X_{i}\\ \times\prod_{i\in S}X_{i}^{m+n-1}(1+X_{i})^{m+1}(Q+X_{i})^{-m}(X_{i}+r+Q)^{n-1}\prod_{i\notin S}(1+X_{i})(Q+rX_{i}+QX_{i})^{n-1}\\ \times\prod_{1\leq i<j\leq n,\{i,j\}\cap S\neq\emptyset}\left(\frac{Y_{j}(1+Y_{j})}{Q+rY_{j}+QY_{j}}-\frac{Y_{i}(1+Y_{i})}{Q+rY_{i}+QY_{i}}\right)\left(\prod_{i\notin S}\frac{Q+X_{i}}{X_{i}(1+X_{i})}-1\right)\\ \times\prod_{1\leq i<j\leq n,i,j\notin S}\left(\frac{X_{j}(1+X_{j})}{Q+rX_{j}+QX_{j}}-\frac{X_{i}(1+X_{i})}{Q+rX_{i}+QX_{i}}\right)\] This can be further simplified as follows, \[\sum_{S}(-1)^{|S|}\left(1-\prod_{i\notin S}\left(\frac{X_{i}(1+X_{i})}{Q+X_{i}}\right)^{m+1}\right)\\ \times\prod_{i\in S}X_{i}^{m+n}(1+X_{i})^{m+1}(Q+X_{i})^{-m}(X_{i}+r+Q)^{n-1}\prod_{i\notin S}(Q+X_{i})(Q+rX_{i}+QX_{i})^{n-1}\\ \times\prod_{1\leq i<j\leq n}\left(\frac{Y_{j}(1+Y_{j})}{Q+rY_{j}+QY_{j}}-\frac{Y_{i}(1+Y_{i})}{Q+rY_{i}+QY_{i}}\right),\] recalling that \(Y_{i}=X_{i}\) if \(i\notin S\) and \(Y_{i}=QX_{i}^{-1}\) if \(i\in S\). We write this as \[\sum_{S,\sigma}(-1)^{|S|+I(\sigma)}\prod_{i\in S}X_{i}^{m+n-\sigma(i)+1}(1+X_{i})^{m+1}(Q+X_{i})^{-m+\sigma(i)-1}(X_{i}+r+Q)^{n-\sigma(i)}\\ \times\prod_{i\notin S}X_{i}^{\sigma(i)-1}(1+X_{i})^{\sigma(i)-1}(Q+X_{i})(Q+rX_{i}+QX_{i})^{n-\sigma(i)}\\ -\sum_{S,\sigma}(-1)^{|S|+I(\sigma)}\prod_{i\in S}X_{i}^{m+n-\sigma(i)+1}(1+X_{i})^{m+1}(Q+X_{i})^{-m+\sigma(i)-1}(X_{i}+r+Q)^{n-\sigma(i)}\\ \times\prod_{i\notin S}X_{i}^{m+\sigma(i)}(1+X_{i})^{m+\sigma(i)}(Q+X_{i})^{-m}(Q+rX_{i}+QX_{i})^{n-\sigma(i)}. \tag{2.11}\] Recall that the sums are over all proper subsets \(S\), but since the sums are equal for \(S=\{1,2,\ldots,n\}\) we can also sum over all subsets \(S\). Now the second sum is equal to \[\prod_{i=1}^{n}X_{i}^{m+1}(1+X_{i})^{m+1}(Q+X_{i})^{-m}\\ \times\det_{1\leq i,j\leq n}\left(X_{i}^{j-1}(1+X_{i})^{j-1}(Q+rX_{i}+QX_{i})^{n-j}-X_{i}^{n-j}(Q+X_{i})^{j-1}(X_{i}+r+Q)^{n-j}\right).\] The determinant can be seen to vanish as follows: First observe that it is a polynomial in \(X_{1},\ldots,X_{n}\) of degree no greater than \(2n-2\) in each \(X_{i}\).
For \(1\leq i<j\leq n\), the \(i\)-th row and the \(j\)-th row of the underlying matrix are collinear when setting \(X_{i}=X_{j}\) or \(X_{i}=QX_{j}^{-1}\). Moreover, the \(i\)-th row vanishes when setting \(X_{i}^{2}=Q\). It follows that \(\prod_{i=1}^{n}(X_{i}^{2}-Q)\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(Q-X_{i}X_{j})\) is a divisor of the determinant, but since it is of degree \(2n\) in each \(X_{i}\), the determinant vanishes. The first sum in (2.11) remains and it can easily be seen to be equal to \(\det_{1\leq i,j\leq n}\left(a_{j,m,n}(Q,r;X_{i})\right)\). This concludes the proof of Theorem 1.1. ## 3. Combinatorial interpretations of the left-hand sides ### Arrowed Gelfand-Tsetlin patterns To continue the analogy with the ordinary Littlewood identity (1.1) and Macdonald's bounded version (1.6) of it, both sides of the identities (1.4) and (1.8) will be interpreted combinatorially. For the left-hand side, this was accomplished in another recent paper [10], and we will describe the result and adapt it to our context next. In order to motivate the definition for the combinatorial objects, recall the combinatorial interpretation of the left-hand sides of (1.1) and (1.6) in terms of Gelfand-Tsetlin patterns, which is described in Appendix A.3. We need to extend the discussion from there insofar as there is also a sensible extension of the definition of Gelfand-Tsetlin patterns to arbitrary integer sequences \((\lambda_{1},\ldots,\lambda_{n})\). The notion of signed intervals is crucial for this: \[\underline{[a,b]}=\begin{cases}[a,b],&a\leq b\\ \varnothing,&b=a-1\\ [b+1,a-1],&b<a-1\end{cases}\] If we are in the last case, then the interval is said to be _negative_. The condition that defines Gelfand-Tsetlin patterns can also be written as \(a_{i,j}\in[a_{i+1,j},a_{i+1,j+1}]\). If the bottom row is weakly increasing, we can replace this condition also by \(a_{i,j}\in\underline{[a_{i+1,j},a_{i+1,j+1}]}\) (since we then have \(a_{i+1,j}\leq a_{i+1,j+1}\) as can be seen inductively with respect to \(n\)). We use this now as the definition for arbitrary bottom rows: A (generalized) Gelfand-Tsetlin pattern is a triangular array \(A=(a_{i,j})_{1\leq j\leq i\leq n}\) of integers with \(a_{i,j}\in\underline{[a_{i+1,j},a_{i+1,j+1}]}\) for all \(i,j\). Then the sign of a Gelfand-Tsetlin pattern \(A\) is \[(-1)^{\#\text{ of negative intervals }\underline{[a_{i+1,j},a_{i+1,j+1}]}}=:\operatorname{sgn}A.\] Then \[s_{(\lambda_{1},\ldots,\lambda_{n})}(X_{1},\ldots,X_{n})=\sum_{A=(a_{i,j})_{1\leq j\leq i\leq n}}\operatorname{sgn}A\prod_{i=1}^{n}X_{i}^{\sum_{j=1}^{i}a_{i,j}-\sum_{j=1}^{i-1}a_{i-1,j}}, \tag{3.1}\] where the sum is over all Gelfand-Tsetlin patterns \(A=(a_{i,j})_{1\leq j\leq i\leq n}\) with bottom row \((\lambda_{n},\lambda_{n-1},\ldots,\lambda_{1})\) and \[s_{(\lambda_{1},\ldots,\lambda_{n})}(X_{1},\ldots,X_{n})=\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{\lambda_{j}+n-j}\right)}{\prod_{1\leq i<j\leq n}(X_{i}-X_{j})}.\] This result is a special case of Theorem 3.4 below that will also cover the combinatorial interpretation of the left-hand sides of (1.4) and (1.8). However, this special case appeared essentially also earlier in [12] (with some details missing). **Definition 3.1**.: _An arrowed Gelfand-Tsetlin pattern (AGTP)1 is a triangular array of the following form_ Footnote 1: They appeared first in [10] as extended arrowed monotone triangles.
\[\begin{array}{ccccccccc}&&&&a_{1,1}&&&&\\ &&&a_{2,1}&&a_{2,2}&&&\\ &&\iddots&&&&\ddots&&\\ &a_{n-1,1}&&\ldots&&\ldots&&a_{n-1,n-1}&\\ a_{n,1}&&\ldots&&\ldots&&\ldots&&a_{n,n}\end{array}\]

_where each entry \(a_{i,j}\) is an integer decorated with an element from \(\{\,\nwarrow,\,\nearrow,\,\nwarrow\nearrow,\,\varnothing\}\) and the following is satisfied for each entry \(a\) not in the bottom row: Suppose \(b\) is the \(\swarrow\)-neighbor of \(a\) and \(c\) is the \(\searrow\)-neighbor of \(a\). Then \(a\in\underline{[b',c']}\), where \(b'=b+1\) if the decoration of \(b\) contains an arrow pointing towards \(a\) (i.e., \(\operatorname{decor}(b)\in\{\,\nearrow,\,\nwarrow\nearrow\}\)) and \(b'=b\) otherwise, while \(c'=c-1\) if the decoration of \(c\) contains an arrow pointing towards \(a\) (i.e., \(\operatorname{decor}(c)\in\{\,\nwarrow,\,\nwarrow\nearrow\}\)) and \(c'=c\) otherwise. The sign of an arrowed Gelfand-Tsetlin pattern is \(-1\) raised to the number of negative intervals \(\underline{[b',c']}\) among these conditions. The weight is \(t^{\#\varnothing}u^{\#\nearrow}v^{\#\nwarrow}w^{\#\nwarrow\nearrow}\prod_{i=1}^{n}X_{i}^{r_{i}}\), where \(r_{i}\) is the sum of the entries in row \(i\) minus the sum of the entries in row \(i-1\), increased by the number of \(\nearrow\) in row \(i\) and decreased by the number of \(\nwarrow\) in row \(i\)._

**Proposition 3.2**.: _Arrowed Gelfand-Tsetlin patterns with weakly increasing bottom row are precisely the objects obtained from ordinary Gelfand-Tsetlin patterns by decorating the entries subject to the following restrictions: an entry \(b\) may carry a decoration containing an arrow pointing towards its \(\nearrow\)-neighbor \(a\) only if \(b<a\), and an entry \(c\) may carry a decoration containing an arrow pointing towards its \(\nwarrow\)-neighbor \(a\) only if \(a<c\), with the single exception that an entry \(a\) may be equal to both of its bottom neighbors \(b\) and \(c\) while \(b\) is decorated with \(\nearrow\) or \(\nwarrow\nearrow\) and \(c\) is decorated with \(\nwarrow\) or \(\nwarrow\nearrow\). The sign is \(-1\) raised to the number of entries \(a\) that are equal to their \(\swarrow\)-neighbor \(b\) as well as to their \(\searrow\)-neighbor \(c\) such that \(b\) is decorated with \(\nearrow\) or \(\nwarrow\nearrow\) and \(c\) is decorated with \(\nwarrow\) or \(\nwarrow\nearrow\)._

Proof.: Suppose \((a_{i,j})_{1\leq j\leq i\leq n}\) is an AGTP. If \(a_{i+1,j}<a_{i+1,j+1}\) for particular \(i,j\), then \(a_{i+1,j}\leq a_{i,j}\leq a_{i+1,j+1}\). The first inequality has to be strict if the decoration of \(a_{i+1,j}\) contains an arrow pointing towards \(a_{i,j}\) (i.e., \(\operatorname{decor}(a_{i+1,j})\in\{\,\nearrow,\,\nwarrow\nearrow\}\)), while the second inequality has to be strict if the decoration of \(a_{i+1,j+1}\) contains an arrow pointing towards \(a_{i,j}\) (i.e., \(\operatorname{decor}(a_{i+1,j+1})\in\{\,\nwarrow,\,\nwarrow\nearrow\}\)). On the other hand, if \(a_{i+1,j}=a_{i+1,j+1}\) for particular \(i,j\), then \(a_{i+1,j}=a_{i,j}=a_{i+1,j+1}\).
In this case \[(\operatorname{decor}(a_{i+1,j}),\operatorname{decor}(a_{i+1,j+1}))\in\{\varnothing,\,\nwarrow\}\times\{\varnothing,\,\nearrow\}\] or \[(\operatorname{decor}(a_{i+1,j}),\operatorname{decor}(a_{i+1,j+1}))\in\{\,\nearrow,\,\nwarrow\nearrow\}\times\{\,\nwarrow,\,\nwarrow\nearrow\}, \tag{3.2}\] where in the second case there is a contribution of \(-1\) to the sign of the object. These observations imply that, if the bottom row is weakly increasing, then the underlying undecorated triangular array is an ordinary Gelfand-Tsetlin pattern and that the properties on the decoration stated in the proposition are satisfied. The only instance when we have a contribution to the sign is in the case of (3.2). Conversely, a decoration of a given Gelfand-Tsetlin pattern that follows the rules given in the statement of the proposition is eligible for an arrowed Gelfand-Tsetlin pattern according to Definition 3.1. **Remark 3.3**.: In the case that the bottom row of an arrowed Gelfand-Tsetlin pattern is strictly increasing and we forbid the decoration \(\varnothing\), we have that all rows are strictly increasing and we obtain a monotone triangle. Recall that monotone triangles are defined as Gelfand-Tsetlin patterns with strictly increasing rows; their significance comes from the fact that monotone triangles with bottom row \(1,2,\ldots,n\) are in easy bijective correspondence with \(n\times n\) alternating sign matrices, see, e.g., [1]. In such a case, there is no instance where we gain a \(-1\) that contributes to the sign. These objects were used in [12] to study alternating sign matrices. Among other things, the generating function of these decorated monotone triangles can be interpreted as a generating function of (undecorated) monotone triangles, thus of alternating sign matrices. The following explicit formula for the generating function of arrowed Gelfand-Tsetlin patterns with fixed bottom row \(k_{1},k_{2},\ldots,k_{n}\) is proved in [12]. **Theorem 3.4**.: _The generating function of arrowed Gelfand-Tsetlin patterns with bottom row \(k_{1},\ldots,k_{n}\) is_ \[\prod_{i=1}^{n}(t+uX_{i}+vX_{i}^{-1}+w)\prod_{1\leq i<j\leq n}\ \left(t+u\mathrm{E}_{k_{i}}+v\mathrm{E}_{k_{j}}^{-1}+w\mathrm{E}_{k_{i}}\mathrm{E}_{k_{j}}^{-1}\right)s_{(k_{n},k_{n-1},\ldots,k_{1})}(X_{1},\ldots,X_{n}),\] _where \(\mathrm{E}_{x}\) denotes the shift operator, defined as \(\mathrm{E}_{x}p(x)=p(x+1)\)._ The formula has to be applied as follows: First interpret \(k_{1},\ldots,k_{n}\) as variables and apply the operator \(\prod_{1\leq i<j\leq n}\ \left(t+u\mathrm{E}_{k_{i}}+v\mathrm{E}_{k_{j}}^{-1}+w\mathrm{E}_{k_{i}}\mathrm{E}_{k_{j}}^{-1}\right)\) to \(s_{(k_{n},k_{n-1},\ldots,k_{1})}(X_{1},\ldots,X_{n})\). This will result in a linear combination of expressions of the form \(s_{(k_{n}+i_{n},k_{n-1}+i_{n-1},\ldots,k_{1}+i_{1})}(X_{1},\ldots,X_{n})\) for some (varying) integers \(i_{j}\). The \(k_{j}\) are only specialized to the actual integers after that. Note that we do not necessarily have \(k_{n}+i_{n}\geq k_{n-1}+i_{n-1}\geq\ldots\geq k_{1}+i_{1}\) even if \(k_{n}\geq k_{n-1}\geq\ldots\geq k_{1}\), so that the extension of the Schur polynomial in (3.1) is necessary. **Example 3.5**.: We illustrate the theorem on the example \((k_{1},k_{2},k_{3})=(1,2,3)\).
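Before listing the patterns, here is a quick computational cross-check of (3.1) for this bottom row: a brute-force sketch in Python/SymPy (the helper `gt_patterns` is ad hoc for this check, not taken from the paper) that enumerates the Gelfand-Tsetlin patterns with bottom row \((1,2,3)\), confirms that there are exactly 8 of them, and verifies that their generating function agrees with the bialternant formula for \(s_{(3,2,1)}\).

```python
# Brute-force check of (3.1) for bottom row (1,2,3), i.e. lambda = (3,2,1).
from itertools import product
import sympy as sp

X = sp.symbols('X1:4')  # (X1, X2, X3)

def gt_patterns(bottom):
    # Build patterns from the bottom row upwards; for a weakly increasing
    # bottom row the interlacing condition a_{i+1,j} <= a_{i,j} <= a_{i+1,j+1}
    # applies in every row.
    patterns = [[list(bottom)]]
    for _ in range(len(bottom) - 1):
        extended = []
        for p in patterns:
            top = p[0]
            choices = [range(top[j], top[j + 1] + 1) for j in range(len(top) - 1)]
            for row in product(*choices):
                extended.append([list(row)] + p)
        patterns = extended
    return patterns

pats = gt_patterns((1, 2, 3))
assert len(pats) == 8  # the 8 patterns listed below

# Weight of a pattern: prod_i X_i^(sum of row i - sum of row i-1), cf. (3.1);
# all signs are +1 here since the bottom row is weakly increasing.
lhs = sp.Integer(0)
for p in pats:
    prev, mono = 0, sp.Integer(1)
    for i, row in enumerate(p):
        mono *= X[i] ** (sum(row) - prev)
        prev = sum(row)
    lhs += mono

# Bialternant formula: bottom row (1,2,3) corresponds to lambda = (3,2,1).
lam, n = (3, 2, 1), 3
num = sp.Matrix(n, n, lambda i, j: X[i] ** (lam[j] + n - j - 1)).det()
den = sp.prod(X[i] - X[j] for i in range(n) for j in range(i + 1, n))
assert sp.simplify(lhs - sp.cancel(num / den)) == 0
```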
We list the 8 Gelfand-Tsetlin patterns with bottom row \(1,2,3\) and indicate the possible decorations (one will be listed twice with a disjoint set of decorations), where \(L=\{\varnothing,\,\nwarrow\}\), \(R=\{\varnothing,\,\nearrow\}\) and \(LR=\{\varnothing,\,\nwarrow,\,\nearrow,\,\nwarrow\nearrow\}\), and on the right we indicate the generating function restricted to the particular underlying Gelfand-Tsetlin patterns with the indicated decorations, where we use \[L(X)=t+vX^{-1},R(X)=t+uX\quad\text{and}\quad LR(X)=t+uX+vX^{-1}+w.\] **Corollary 3.6**.: _The generating function of arrowed Gelfand-Tsetlin patterns with bottom row \(k_{1},\ldots,k_{n}\) is_ \[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i\leq j\leq n}\ \left(v+wX_{i}+tX_{j}+uX_{i}X_{j}\right)\prod_{i=1}^{n}X_{i}^{k_{i}-1}\right]}{\prod_{1\leq i<j\leq n}\left(X_{j}-X_{i}\right)}. \tag{3.3}\] Proof.: Observe that \[\prod_{i=1}^{n}(t+uX_{i}+vX_{i}^{-1}+w)\prod_{1\leq i<j\leq n}\ \left(t+u\mathrm{E}_{k_{i}}+v\mathrm{E}_{k_{j}}^{-1}+w\mathrm{E}_{k_{i}}\mathrm{E}_{k_{j}}^{-1}\right)s_{(k_{n},k_{n-1},\ldots,k_{1})}(X_{1},\ldots,X_{n})\] \[=\prod_{i=1}^{n}(t+uX_{i}+vX_{i}^{-1}+w)\prod_{1\leq i<j\leq n}\ \left(t+u\mathrm{E}_{k_{i}}+v\mathrm{E}_{k_{j}}^{-1}+w\mathrm{E}_{k_{i}}\mathrm{E}_{k_{j}}^{-1}\right)\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{i=1}^{n}X_{i}^{k_{i}+i-1}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}\] \[=\prod_{i=1}^{n}(t+uX_{i}+vX_{i}^{-1}+w)\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}\ \left(t+uX_{i}+vX_{j}^{-1}+wX_{i}X_{j}^{-1}\right)\prod_{i=1}^{n}X_{i}^{k_{i}+i-1}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}\] \[=\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i\leq j\leq n}\ \left(v+wX_{i}+tX_{j}+uX_{i}X_{j}\right)\prod_{i=1}^{n}X_{i}^{k_{i}-1}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}\] and the assertion follows. **Remark 3.7**.: Suppose \(\left(k_{1}-1,k_{2}-1,\ldots,k_{n}-1\right)\) is a partition (allowing zero parts). Then, setting \(u=v=0\) and \(w=1\) and replacing \(t\) by \(-t\) in (3.3), we obtain the Hall-Littlewood polynomials [10] up to a factor that is a rational function in \(t\). ### Generating function with respect to a Schur polynomial weight We are now ready to obtain our first interpretation.
Multiplying (1.4) and (1.8) with \(\prod_{i=1}^{n}(X_{i}^{-1}+1+w+X_{i})\) gives \[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i\leq j \leq n}(1+wX_{i}+X_{j}+X_{i}X_{j})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}}X_{1}^{ k_{1}-1}X_{2}^{k_{2}-1}\cdots X_{n}^{k_{n}-1}\right]}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})}\] \[=\prod_{i=1}^{n}(X_{i}^{-1}+1+w+X_{i})\prod_{i=1}^{n}\frac{1}{1-X _{i}}\prod_{1\leq i<j\leq n}\frac{1+X_{i}+X_{j}+wX_{i}X_{j}}{1-X_{i}X_{j}}, \tag{3.4}\] and \[\frac{\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i\leq j \leq n}(1+wX_{i}+X_{j}+X_{i}X_{j})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m}X_ {1}^{k_{1}-1}X_{2}^{k_{2}-1}\cdots X_{n}^{k_{n}-1}\right]}{\prod_{1\leq i<j \leq n}(X_{j}-X_{i})}\\ =\prod_{i=1}^{n}(X_{i}^{-1}+1+w+X_{i})\\ \times\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{j-1}(1+X_{i})^{j-1 }(1+wX_{i})^{n-j}-X_{i}^{m+2n-j}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j} \right)}{\prod_{i=1}^{n}(1-X_{i})\prod_{1\leq i<j\leq n}(1-X_{i}X_{j})(X_{j}-X_ {i})}, \tag{3.5}\] respectively, and we can now interpret the left-hand sides as the generating function of arrowed Gelfand-Tsetlin patterns with non-negative strictly increasing bottom row, where we need to specialize \(t=u=v=1\) in the weight and in the second case the entries in the bottom row are less than or equal to \(m\). **Remark 3.8**.: 1. For \(\mathbf{X}=(X_{1},\ldots,X_{n})\), let \(\mathcal{AGTP}(t,u,v,w;\mathbf{k};\mathbf{X})\) denote the generating function of arrowed Gelfand-Tsetlin patterns with bottom row \(\mathbf{k}=(k_{1},\ldots,k_{n})\). Then, using (3.3), it follows by changing \((X_{1},\ldots,X_{n})\) to \((X_{n},X_{n-1},\ldots,X_{1})\) that \[\mathcal{AGTP}(t,u,v,w;\mathbf{k};\mathbf{X})=(-1)^{\binom{n}{2}}\mathcal{AGTP }(w,u,v,t;\overline{\mathbf{k}};\mathbf{X}),\] where \(\overline{\mathbf{k}}=(k_{n},\ldots,k_{1})\). Therefore, the left-hand sides are up to the sign \((-1)^{\binom{n}{2}}\) also the generating function of AGTPs with strictly _decreasing_ bottom row of non-negative integers, where we need to set \(u=v=w=1\) and replace \(t\) by \(w\) in the weight, and, in the case of (1.8), the entries in the bottom row are less than or equal to \(m\). 2. For the case \(t=0\), there is worked out a possibility in [10] to get around the multiplication with the extra factor \(\prod_{i=1}^{n}(X_{i}^{-1}+1+w+X_{i})\) by working with "down arrows" as decorations. In our application, this can be used in combination with our second combinatorial interpretation concerning AGTPs with strictly decreasing bottom row to give combinatorial interpretations of the left-hand sides of (1.4) and (1.8) in the special case \(w=0\). It is an open problem to explore whether the down-arrowed array can be extended to general \(t\). In Appendix B, we develop some other (maybe less interesting) combinatorial interpretations of the left-hand sides, which we include for the sake of completeness. ## 4. Combinatorial interpretations of the right-hand sides of (3.4) and (3.5) ### Right-hand side of (3.4) For the right-hand side of (3.4), which is \[\prod_{i=1}^{n}\frac{X_{i}^{-1}+1+w+X_{i}}{1-X_{i}}\prod_{1\leq i<j\leq n} \frac{1+X_{i}+X_{j}+wX_{i}X_{j}}{1-X_{i}X_{j}}, \tag{4.1}\] it is straightforward to give a combinatorial interpretation as a generating function. 
Recall that, in the ordinary case (1.1), the right-hand side \(\prod_{i=1}^{n}\frac{1}{1-X_{i}}\prod_{1\leq i<j\leq n}\frac{1}{1-X_{i}X_{j}}\) is interpreted as two-line arrays with entries in \(\{1,2,\ldots,n\}\), ordered lexicographically, with the top element of each column being greater than or equal to its bottom element. The exponent of \(X_{i}\) in the weight is computed by subtracting from the total number of \(i\)'s in the two-line array the number of columns with \(i\) as top and bottom element. To extend this to an interpretation of (4.1), we have one additional column \(\binom{j}{i}\) for all pairs \(i\leq j\), which are either overlined, underlined, both or neither. An overlined column \(\binom{j}{i}\) with \(i<j\), contributes an additional multiplicative \(X_{j}\) to the weight, while an underlined column with \(i\) as bottom element contributes an additional \(X_{i}\), and if a column is overlined and underlined then such a column contributes, in addition to \(X_{i}X_{j}\), \(w\). Moreover, an overlined column \(\binom{i}{i}\) contributes an additional \(X_{i}\) to the weight and if it is underlined then it contributes \(X_{i}^{-1}\) to the weight, and, again, if the column is overlined and underlined, then it contributes also \(w\). In both cases, if the column is neither underlined nor overlined, it contributes nothing in addition. ### Right-hand side of (3.5) The following theorem provides an interpretation of the right-hand side of (3.5) as a weighted count of (partly non-intersecting) lattice paths. This right-hand side differs from the right-hand side of (1.8) by a simple multiplicative factor. We work as long as possible with general \(w\), however, it will turn out that we need to specialize to \(w=0,1\) at some point to obtain a nicer interpretation. We present two different proofs to obtain the result, where the second one is only sketched. Figure 1 seeks to illustrate the theorem in the case that \(m\) is odd. **Theorem 4.1**.: _(1) Assume that \(m=2l+1\). Then the right-hand side of (3.5) has the following interpretation as weighted count of families of \(n\) lattice paths._ * _The_ \(i\)_-th lattice path starts in one point in the set_ \(A_{i}=\{(-3i+1,-i+1),(-i+1,-3i+1)\}\)_,_ \(i=1,2,\ldots,n\)_, and the end points of the paths are_ \(E_{j}=(n-j+l+1,j-l-2)\)_,_ \(j=1,2,\ldots,n\)_._ * _Below and on the line_ \(x+y=0\)_, the step set is_ \(\{(1,1),(-1,1)\}\) _for steps that start in_ \((-3i+1,-i+1)\) _and it is_ \(\{(1,1),(1,-1)\}\) _for steps that start in_ \((-i+1,-3i+1)\)_. Steps of type_ \((-1,1)\) _and_ \((1,-1)\) _with distance_ \(0,2,4,\ldots\) _from_ \(x+y=0\) _are equipped with the weights_ \(X_{1},X_{2},X_{3},\ldots\)_, respectively, while such steps with distance_ \(1,3,5,\ldots\) _are equipped with the weights_ \(X_{1}^{-1},X_{2}^{-1},X_{3}^{-1},\ldots\)_, respectively._ * _Above the line_ \(x+y=0\)_, the step set is_ \(\{(1,0),(0,1)\}\)_. Above the line_ \(x+y=j-1\)_, horizontal steps of the path that ends in_ \(E_{j}\) _are equipped with the weight_ \(w\)_._ * _The paths can be assumed to be non-intersecting below the line_ \(x+y=0\)_. In case_ \(w=1\)_, we can also assume them to be non-intersecting above the line_ \(x+y=0\)_. 
In case_ \(w=0\)_,_ \(E_{j}\) _can be replaced by_ \(E_{j}^{\prime}=(n-j+l+1,2j-n-l-2)\)_,_ \(j=1,2,\ldots,n\)_, and then we can also assume the paths to be non-intersecting above the line_ \(x+y=0\)_._ * _The sign of family of paths is the sign of the permutation_ \(\sigma\) _with the property that the_ \(i\)_-th path connects_ \(A_{i}\) _to_ \(E_{\sigma(i)}\) _with an extra contribution of_ \(-1\) _if we choose_ \((-i+1,-3i+1)\) _from_ \(A_{i}\)_. Moreover, we have an overall factor of_ \[(-1)^{n+1\choose 2}\prod_{i=1}^{n}X_{i}^{l}\ (X_{i}^{-1}+1+w+X_{i})(1+X_{i}).\] * _In case_ \(w=0,1\)_, when restricting to non-intersecting paths, let_ \(1\leq i_{1}<i_{2},\ldots<i_{m}<n\) _be the indices for which we chose_ \((-3i+1,-i+1)\) _from_ \(A_{i}\)_. Then the sign can assumed to be_ \((-1)^{i_{1}+\ldots+i_{m}}\) _and the overall factor is_ \[\prod_{i=1}^{n}X_{i}^{l}\ (X_{i}^{-1}+1+w+X_{i})(1+X_{i}).\] _(2) Assume that \(m=2l\). Then, to obtain an interpretation for the right-hand side of (3.5), we only need to replace \(E_{j}\) by a set of two possible endpoints \(E_{j}=\{(n-j+l+1,j-l-2),(n-j+l,j-l-1)\}\). The overall factor is_ \[(-1)^{n+1\choose 2}\prod_{i=1}^{n}X_{i}^{l}\ (X_{i}^{-1}+1+w+X_{i})\] _in the case when we do not specialize \(w\). The endpoints are replaced by \(E_{j}^{\prime}=\{(n-j+l+1,2j-n-l-2),(n-j+l,2j-n-l-1)\}\) if \(w=0\). In case \(w=0,1\) if we restrict to non-intersecting paths and the sign is taken care of as above, then the overall factor is_ \[\prod_{i=1}^{n}X_{i}^{l}\ (X_{i}^{-1}+1+w+X_{i}).\] We discuss the weight and the sign on the example in Figure 1. The weights that come from the individual paths are \[X_{1}^{-1}\cdot X_{1}^{-1}\cdot X_{2}\cdot X_{1}X_{3}\cdot X_{2}X_{3}^{-1} \cdot X_{5}^{-2},\] where the factors are arranged in a manner that the \(i\)-th factor is the weight of the path that starts in the set \(A_{i}\). To compute the sign, observe that \(\sigma=\left(6\,5\,4\,3\,2\,1\right)\) in one-line notation so that \(\operatorname{sgn}\sigma=-1\) and that we choose the second starting point in \(A_{i}\) except for \(i=1\), so that the total sign is \((-1)\cdot(-1)^{5}=1\). In the case that \(m\) is odd, we always need to choose the second lattice point in \(A_{i}\) if \(l\geq n-2\) because then all \(E_{i}\) have a non-positive \(y\)-coordinate and this implies that they cannot be reached by any of the first lattice points in \(A_{i}\) since any lattice path starting from the first lattice point in \(A_{i}\) intersects the line \(x+y=0\) in a lattice point with positive \(y\)-coordinate. This implies that, in the non-intersecting case, the sign is always \(1\). In the case that \(m\) is even, the condition is \(l\geq n-1\). In theses cases and when we have in addition \(w=0\), we can translate the lattice paths easily into pairs of plane partitions. The case \(m=2l+1\) is illustrated in Figure 2, while the case \(m=2l\) is illustrated in Figure 3. A similar result can in principal be derived for the case \(w=1\), but we omit this here. **Corollary 4.2**.: _Let \(w=0\)._ _(1) Assume that \(m=2l+1\). 
In case \(l\geq n-2\), the right hand side of (3.5) is the generating function of plane partitions \((P,Q)\) of shapes \(\lambda,\mu\), respectively, where \(\mu\) is the complement of \(\lambda\) in the \(n\times l\)-rectangle, \(P\) is a column-strict plane partition such that the entries in the \(i\)-th row are bounded by \(2n+2-2i\), and \(Q\) is a row-strict plane partition of positive integers such that the entries in the \(i\)-th row are bounded by \(n-i\). The weight is_ \[\prod_{i=1}^{n}X_{i}^{l}(X_{i}^{-1}+1+X_{i})(1+X_{i})X_{i}^{\#\text{ of }2i-1\text{ in }P}X_{i}^{-\#\text{ of }2i\text{ in }P}.\] Figure 1. An example of families of lattice paths in Theorem 4.1. (2) Assume that \(m=2l\). In case \(l\geq n-1\), the right-hand side of (3.5) is the generating function of plane partitions \((P,Q)\) of (straight) shape \(\lambda\) and skew shape \(\mu\), respectively, such that \(\mu\) is the complement of \(\lambda\) in the \(n\times(l-1)\)-rectangle after possibly deleting the first column of \(\mu\), \(P\) is a column strict plane partition such that the entries in the \(i\)-th row are bounded by \(2n+2-2i\) and \(Q\) is a row-strict plane partition such that the entries in the \(i\)-th row are bounded by \(n-i\). The weight is_ \[\prod_{i=1}^{n}X_{i}^{l}(X_{i}^{-1}+1+X_{i})X_{i}^{\#\text{ of }2i-1\text{ in }P}X_{i}^{-\#\text{ of }2i\text{ in }P}.\] Figure 2. Illustration of Corollary 4.2 (1) for \(n=7\) and \(l=12\). Proof.: We consider the case \(m\) is odd. Assume that \(1\leq k_{1}<k_{2}<\ldots<k_{n}\) are chosen such that \((k_{i},-k_{i})\) is the last point in the intersection of the line \(x+y=0\) with the path that connects \(A_{i}\) to \(E^{\prime}_{n+1-i}\) when traversing the path from \(A_{i}\) to \(E^{\prime}_{n+1-i}\). Note that the portion of the path from \(A_{i}\) to \((k_{i},-k_{i})\) has \(k_{i}-i\) steps of type \((1,-1)\) and \(2i-1\) steps of type \((1,1)\). These portions correspond to the plane partition \(P\) is follows: The \(i\)-th path corresponds to the \((n+1-i)\)-th row where the \((1,-1)\)-steps correspond to the parts, where we fill the cells in the Ferrers diagram from left to right when traversing the path from \(A_{i}\) to \((k_{i},-k_{i})\), and a \((1,-1)\)-step at distance \(d\) from \(x+y=0\) gives the entry \(d+1\). It follows that the length of row \(i\) is \(k_{n+1-i}-n-1+i\) and that the entries in row \(i\) are bounded by \(2n+2-2i\). Figure 3. Illustration of Corollary 4.2 (1) for \(n=7\) and \(l=13\). Now the portion of the path from \((k_{i},-k_{i})\) to \(E^{\prime}_{n+1-i}\) corresponds to the \(i\)-th row of the plane partition \(Q\). More precisely, the horizontal steps correspond to the parts, where we fill the cells in the Ferrers diagram from right to left when traversing the path from \((k_{i},-k_{i})\) to \(E^{\prime}_{n+1-i}\), where the \(j\)-th step gives the entry \(j\). Note that there are \(i-k_{i}+l\) steps of type \((1,0)\) in this portion, while there are \(n-i\) steps in total, so that the length of the \(i\)-th row is \(i-k_{i}+l\) and the entries in row \(i\) are bounded by \(n-i\). The case \(m\) is even is very similar and it is therefore omitted here. **Remark 4.3**.: (1) The plane partitions \(P\) in the corollary are in easy bijection with symplectic tableaux as defined in [14, Section 4]. Also the weight is up to an overall multiplicative factor essentially just the weight that is used for symplectic tableaux. 
As a consequence, the corollary can be interpreted as to provide the expansion of the generating function of arrowed Gelfand-Tsetlin into symplectic characters. This is in the vein of main results in [11] and in [12, Remark 2.6]. (2) In the case \(m\) is odd, the plane partitions \(Q\) are in easy bijective correspondence with \(2n\times 2n\times 2n\) totally symmetric self-complementary plane partitions. The bijection is provided in [12, Remark 2.6]. In the case \(m\) is even, we place the part \(n+1-i\) into the cell in the \(i\)-th row of the inner shape and that way we obtain plane partitions that are in easy bijective correspondence with \((2n+2)\times(2n+2)\times(2n+2)\) totally symmetric self-complementary plane partitions. ### The cases \(n=2\) and \(m=2,3\) In this section, we give a list of all objects for the left-hand side and right-hand side of (3.5) in the case \(n=2\) and \(m=2,3\). We start with the case that \(m=3\), since this is easier on the right-hand side. Note that \(m=3\) implies \(l=1\). The arrowed monotone triangles are as follows, using the notation from Section 3. \[\begin{array}{ \[-w,-X_{1},-X_{1}^{-1},-X_{2},-X_{2}^{-1},-X_{1}^{-1}-X_{2},-X_{1}^{-1}X_{2}^{-1},-1, -X_{1}X_{2},-X_{1}X_{2}^{-1}, \tag{4.3}\] up to the overall factor \[-X_{1}X_{2}(1+X_{1})(1+X_{2})(X_{1}^{-1}+1+w+X_{1})(X_{2}^{-1}+1+w+X_{2})=-X_{1} X_{2}(1+X_{1})(1+X_{2})LR(X_{1})LR(X_{2}),\] and, as can easily be seen, the sum of weights agrees with those for the arrowed Gelfand-Tsetlin patterns. Now we consider the case \(m=2\). We have \(l=1\). The arrowed monotone triangles are as follows. \[\begin{array}{ where the last two weights come from the last picture, first by interpreting the endpoint of the path that starts in \(A_{1}\) as element of \(E_{2}\) and second as element of \(E_{1}\). ### First proof of Theorem 4.1 The approach of the first proof of Theorem 4.1 is closely related to the approach we used in the proof of Theorem 2.2 in [10]. We consider the following bases for Laurent polynomials in \(X\) that are invariant under the transformation \(X\to X^{-1}\): let \[q_{i}(X)=\frac{X^{i}-X^{-i}}{X-X^{-1}}\qquad\text{and}\qquad b_{i}(X)=(X+X^{-1 })^{i},\] then \((q_{i}(X))_{i\geq 0}\) and \((b_{i}(X))_{i\geq 0}\) are two such bases. It is not hard to verify that \[q_{m}(X)=\sum_{r=0}^{(m-1)/2}(-1)^{r}\binom{m-r-1}{r}b_{m-1-2r}(X). \tag{4.4}\] In order to derive a combinatorial interpretation of the right-hand side of (3.5), consider \[\det_{1\leq i,j\leq n}\left(X_{i}^{j-1}(1+X_{i})^{j-1}(1+wX_{i})^{n-j}-X_{i}^{ m+2n-j}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}\right). \tag{4.5}\] _We start by considering the case that \(m\) is odd:_ We set \(m=2l+1\), and pull out \(\prod_{i=1}^{n}X_{i}^{l+n}\). \[\prod_{i=1}^{n}X_{i}^{l+n}\det_{1\leq i,j\leq n}\left(X_{i}^{j-l-n-1}(1+X_{i})^ {j-1}(1+wX_{i})^{n-j}-X_{i}^{-j+l+n+1}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}\right) \tag{4.6}\] The entry in the \(i\)-th row and \(j\)-th column of the matrix underlying the determinant is obtained from \[\frac{X^{j-l-n-1}(1+X)^{j-1}(1+wX)^{n-j}-X^{-j+l+n+1}(1+X^{-1})^{ j-1}(1+wX^{-1})^{n-j}}{X-X^{-1}}\\ =\sum_{p,q\geq 0}\binom{j-1}{p}\binom{n-j}{q}w^{q}\frac{X^{j-l-n+ p+q-1}-X^{-j+l+n-p-q+1}}{X-X^{-1}}\] by multiplying with \(X-X^{-1}\) and then setting \(X=X_{i}\). Note that this expression is invariant under replacing \(X\) by \(X^{-1}\). 
From (4.4), it follows that this is further equal to \[\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |j-l-n+p+q-1|-1-2r\geq 0\end{subarray}}\operatorname{sgn}(j-l-n+p+q-1)(-1)^{r}w^ {q}\binom{j-1}{p}\binom{n-j}{q}\\ \times\binom{|j-l-n+p+q-1|-r-1}{r}b_{|j-l-n+p+q-1|-1-2r}(X).\] We apply the following lemma. A proof can be found in [11, Lemma 7.2]. Note that the lemma also involves complete homogeneous symmetric polynomials \(h_{k}\) with negative \(k\) as defined in [11, Section 5]. Concretely, we define \(h_{k}(X_{1},\ldots,X_{n})=0\) for \(k=-1,-2,\ldots,-n+1\) and \[h_{k}(X_{1},\ldots,X_{n})=(-1)^{n+1}X_{1}^{-1}\ldots X_{n}^{-1}h_{-k-n}(X_{1}^ {-1},\ldots,X_{n}^{-1}) \tag{4.7}\] for \(k\leq-n\). Note that a consequence of this definition is that the latter relation is true for any \(k\). **Lemma 4.4**.: _Let \(f_{j}(Y)\) be formal Laurent series for \(1\leq j\leq n\), and define_ \[f_{j}[Y_{1},\dots,Y_{i}]=\sum_{k\in\mathbb{Z}}(Y^{k})f_{j}(Y)\cdot h_{k-i+1}(Y_{1 },\dots,Y_{i}),\] _where \((Y^{k})f_{j}(Y)\) denotes the coefficient of \(Y^{k}\) in \(f_{j}(Y)\) and \(h_{k-i+1}\) denotes the complete homogeneous symmetric polynomial of degree \(k-i+1\). Then_ \[\frac{\det_{1\leq i,j\leq n}\left(f_{j}(Y_{i})\right)}{\prod_{1\leq i<j\leq n} \left(Y_{j}-Y_{i}\right)}=\det_{1\leq i,j\leq n}\left(f_{j}[Y_{1},\dots,Y_{i}] \right).\] Noting that a Laurent polynomial in \(X\) that is invariant under the replacement \(X\to X^{-1}\) can be written as a polynomial in \(X+X^{-1}\), we use the lemma to basically rewrite (4.6) as follows. \[\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{j-l-n-1}(1+X_{i})^{j-1}( 1+wX_{i})^{n-j}-X_{i}^{-j+l+n+1}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j} \right)}{\prod_{1\leq i<j\leq n}(X_{j}+X_{j}^{-1}-X_{i}-X_{i}^{-1})}\\ =\prod_{i=1}^{n}(X_{i}-X_{i}^{-1})\det_{1\leq i,j\leq n}\left( \sum_{\genfrac{}{}{0.0pt}{}{p,q,r\geq 0}{|j-l-n+p+q-1|-i-2r\geq 0}}\operatorname{ sgn}(j-l-n+p+q-1)(-1)^{r}w^{q}\binom{j-1}{p}\binom{n-j}{q}\right.\\ \left.\times\binom{|j-l-n+p+q-1|-r-1}{r}\right)h_{|j-l-n+p+q-1|- i-2r}(X_{1}+X_{1}^{-1},\dots,X_{i}+X_{i}^{-1})\right)\] Now, as \(X_{j}+X_{j}^{-1}-X_{i}-X_{i}^{-1}=(X_{i}-X_{j})(1-X_{i}X_{j})X_{i}^{-1}X_{j}^{-1}\), in order to find a combinatorial interpretation for the right-hand side of (3.5), we need to find a combinatorial interpretation of \[(-1)^{\binom{n}{2}}\prod_{i=1}^{n}X_{i}^{l+1}\ (X_{i}^{-1}+1+w+X_{i})(X_{i}-X_{i}^{-1})(1-X_{i})^{-1}\\ \times\det_{1\leq i,j\leq n}\left(\sum_{\genfrac{}{}{0.0pt}{}{p,q,r\geq 0}{|j-l-n+p+q-1|-i-2r\geq 0}}\operatorname{sgn}(j-l-n+p+q-1)(-1)^{r}w^{q} \binom{j-1}{p}\binom{n-j}{q}\right.\\ \left.\times\binom{|j-l-n+p+q-1|-r-1}{r}h_{|j-l-n+p+q-1|-i-2r}(X_{ 1}+X_{1}^{-1},\dots,X_{i}+X_{i}^{-1})\right)\\ =(-1)^{\binom{n+1}{2}}\prod_{i=1}^{n}X_{i}^{l}\ (X_{i}^{-1}+1+w+X_{i})(1+X_{i})\\ \times\det_{1\leq i,j\leq n}\left(\sum_{\genfrac{}{}{0.0pt}{}{p,q,r\geq 0}{|j-l-n+p+q-1|-i-2r\geq 0}}\operatorname{sgn}(j-l-n+p+q-1)(-1)^{r}w^{q} \binom{j-1}{p}\binom{n-j}{q}\right.\\ \left.\times\binom{|j-l-n+p+q-1|-r-1}{r}h_{|j-l-n+p+q-1|-i-2r}(X_ {1}+X_{1}^{-1},\dots,X_{i}+X_{i}^{-1})\right).\] For this purpose, we find a combinatorial interpretation of the entry of the underlying matrix, i.e., \[\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |j-l-n+p+q-1|-i-2r\geq 0\end{subarray}}\operatorname{sgn}(j-l-n+p+q-1)(-1)^{r} w^{q}\binom{j-1}{p}\binom{n-j}{q}\binom{|j-l-n+p+q-1|-r-1}{r}\] \[\times h_{|j-l-n+p+q-1|-i-2r}\big{(}X_{1}+X_{1}^{-1},\ldots,X_{i} +X_{i}^{-1}\big{)}\] in terms of a lattice paths generating function. 
We simplify the expression using the transformation \(q\to n-j-q\). \[\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |p-q-l-1|-i-2r\geq 0\end{subarray}}\operatorname{sgn}(p-q-l-1)(-1)^{r}w^{n-j-q} \binom{j-1}{p}\binom{n-j}{q}\binom{|p-q-l-1|-r-1}{r}\\ \times h_{|p-q-l-1|-i-2r}\big{(}X_{1}+X_{1}^{-1},\ldots,X_{i}+X_ {i}^{-1}\big{)} \tag{4.8}\] We simplify the expression further using the following lemma. A combinatorial proof of it using a sign-reversing involution is provided in [12, Lemma 7.7]. **Lemma 4.5**.: _Let \(a,i\) be positive integers with \(i\leq a\). Then_ \[\sum_{r=0}^{(a-i)/2}(-1)^{r}\binom{a-r-1}{r}h_{a-i-2r}\big{(}X_{1}+X_{1}^{-1}, \ldots,X_{i}+X_{i}^{-1}\big{)}=h_{a-i}(X_{1},X_{1}^{-1},\ldots,X_{i},X_{i}^{-1 }).\] Therefore, the sum in (4.8) is equal to \[\sum_{p,q}\operatorname{sgn}(p-q-l-1)w^{n-j-q}\binom{j-1}{p}\binom{n-j}{q}h_{ |p-q-l-1|-i}(X_{1},X_{1}^{-1},\ldots,X_{i},X_{i}^{-1}). \tag{4.9}\] We claim the following: If \(p-q-l-1\geq 0\), then (4.9) is the generating function of lattice paths from \((-3i+1,-i+1)\) to \((n-j+l+1,j-l-2)\) such that the following is satisfied. * Below and on the line \(x+y=0\), the step set is \(\{(1,1),(-1,1)\}\). Steps of type \((-1,1)\) with distances \(0,2,4,\ldots\) from \(x+y=n\) are equipped with the weights \(X_{1},X_{2},X_{3},\ldots\), respectively, while steps of type \((-1,1)\) with distances \(1,3,5,\ldots\) are equipped with the weights \(X_{1}^{-1},X_{2}^{-1},X_{3}^{-1},\ldots\), respectively. * Above the line \(x+y=0\), the step set is \(\{(1,0),(0,1)\}\). Above the line \(x+y=j-1\), horizontal steps are equipped with the weight \(w\). Namely, if we assume that there are \(q\) steps of type \((0,1)\) above the line \(x+y=j-1\), and, therefore, \(n-j-q\) steps of type \((1,0)\), then the path intersects the line \(x+y=j-1\) in the lattice point \((l+1+q,j-l-2-q)\), assuming that the endpoint of the path is \((n-j+l+1,j-l-2)\), and there are \(\binom{n-j}{q}\) of such paths each of them contributing \(w^{n-j-q}\) to the weight. Note that this weight depends on \(j\) if \(w\not=0,1\), and this causes complications when applying the Lindstrom-Gessel-Viennot lemma. If we further assume that there are \(p\) steps of type \((1,0)\) below the line \(x+y=j-1\), and, therefore, \(j-p\) steps of type \((0,1)\), then the last lattice point of such a path on the line \(x+y=0\) when traversing the path from \((-3i+1,-i+1)\) to \((n-j+l+1,j-l-2)\) is \((-p+q+l+1,p-q-l-1)\). Note that by the assumption \(p-q-l-1\geq 0\), the lattice point \((-p+q+l+1,p-q-l-1)\) is in the second quadrant, i.e., \(\{(x,y)|x\leq 0,y\geq 0\}\). Finally, lattice paths from \((-3i+1,-i+1)\) to \((-p+q+l+1,p-q-l-1)\) with step set \(\{(1,1),(-1,1)\}\) have \(p-q-l-1-i\) steps of type \((-1,1)\) and \(2i-1\) steps of type \((1,1)\). The generating function of such paths is clearly \(h_{p-q-l-1\cdot i}(X_{1},X_{1}^{-1},\ldots,X_{i},X_{i}^{-1})=h_{|p-q-l-1|\cdot i}( X_{1},X_{1}^{-1},\ldots,X_{i},X_{i}^{-1})\). The situation is very similar if \(p-q-l-1\leq 0\), except that we need to replace the starting point \((-3i+1,-i+1)\) by \((-i+1,-3i+1)\) and the step set is \(\{(1,1),(1,-1)\}\) below the line \(x+y=0\). Again we can assume that \((-p+q+l+1,p-q-l-1)\) is the last lattice point on the line \(x+y=0\) when traversing the path from \((-i+1,-3i+1)\) to \((-p+q+l+1,p-q-l-1)\). In this case, \((-p+q+l+1,p-q-l-1)\) lies in the fourth quadrant \(\{(x,y)|x\leq 0,y\leq 0\}\). 
We have \(-p+q+l+1-i=|p-q-l-1|-i\) steps of type \((1,-1)\) and \(2i-1\) steps of type \((1,1)\), thus the generating function in this segment is also \(h_{|p-q-l-1|\cdot i}(X_{1},X_{1}^{-1},\ldots,X_{i},X_{i}^{-1})\). Consequently, we can conclude that the right-hand side of (3.5) has the following combinatorial interpretation: We consider families of \(n\) lattice paths from \(A_{i}=\{(-3i+1,-i+1),(-i+1,-3i+1)\}\), \(i=1,2,\ldots,n\), to \(E_{j}=(n-j+l+1,j-l-2)\), \(j=1,2,\ldots,n\), with steps sets and weights as described above. By the Lindstrom-Gessel-Viennot lemma [11, 10, 12], the paths can be assumed to be non-intersecting on and below the line \(x+y=0\). In case \(w=0,1\), we can also assume them to be non-intersecting. This is clear for \(w=1\). In case \(w=0\), we can assume that there are no steps of type \((1,0)\) above the line \(x+y=j-1\), and, therefore, we can also have \((n-j+l+1,2j-n-l-2)\) on the line \(x+y=j-1\) as endpoint since above the line all the \(n-j\) steps have to be of type \((0,1)\). Whenever we choose \((-i+1,-3i+1)\), this contributes \(-1\) to the weight. In the non-intersecting setting, suppose we choose \((-i+1,-3i+1)\) from \(A_{i}\) for \(1\leq i_{1}<\ldots<i_{m}\leq n\), then the sign of the permutation \(\sigma\) such that \(A_{i}\) is connected to \(E_{\sigma(i)}\) via the paths is \((-1)^{i_{1}+i_{2}+\ldots+i_{m}-m}\). This gives a total sign of \((-1)^{i_{1}+i_{2}+\ldots+i_{m}}\). Recall also that we have an additional overall weight of \[(-1)^{n+1\choose 2}\prod_{i=1}^{n}X_{i}^{l}\;(X_{i}^{-1}+1+w+X_{i})(1+X_{i}).\] Combining the sign from above with \((-1)^{n+1\choose 2}=(-1)^{1+2+\ldots+n}\), the sign can also be computed as follows: suppose we choose \((-3i+1,-i+1)\) from \(A_{i}\) precisely for \(i_{1},\ldots,i_{m}\), then the sign is \((-1)^{i_{1}+i_{2}+\ldots+i_{m}}\) and in this setting the overall weight is \[\prod_{i=1}^{n}X_{i}^{l}\;(X_{i}^{-1}+1+w+X_{i})(1+X_{i}).\] This concludes the proof of the first part of Theorem 4.1. _Now we consider the case that \(m\) is even:_ We set \(m=2l\) in (4.5), and pull out \(\prod_{i=1}^{n}X_{i}^{l+n}\). \[\prod_{i=1}^{n}X_{i}^{l+n}\det_{1\leq i,j\leq n}\left(X_{i}^{j-l-n-1}(1+X_{i}) ^{j-1}(1+wX_{i})^{n-j}-X_{i}^{-j+l+n}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j} \right).\] The entry in the \(i\)-th row of the \(j\)-th column of the matrix underlying the determinant is obtained from \[\frac{X^{j-l-n-1}(1+X)^{j-1}(1+wX)^{n-j}-X^{-j+l+n}(1+X^{-1})^{j-1 }(1+wX^{-1})^{n-j}}{1-X^{-1}}\\ =\sum_{p,q\geq 0}{j-1\choose p}{n-j\choose q}w^{q}\frac{X^{j- l-n+p+q-1}-X^{-j+l+n-p-q}}{1-X^{-1}}\] when multiplying with \(1-X^{-1}\) and then setting \(X=X_{i}\). Now note that \[\frac{X^{m}-X^{-m-1}}{1-X^{-1}}=q_{m+1}(X)+q_{m}(X)\] for any integer \(m\), so that we obtain \[\sum_{p,q\geq 0}\binom{j-1}{p}\binom{n-j}{q}w^{q}\left(q_{j-l-n+p+q}(X)+q_{j- l-n+p+q-1}(X)\right).\] It follows from (4.4) that this is \[\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |j-l-n+p+q|-1-2r\geq 0\end{subarray}}\operatorname{sgn}(j-l-n+p+q)(-1)^{r}w^{q} \binom{j-1}{p}\\ \times\binom{n-j}{q}\binom{|j-l-n+p+q|-r-1}{r}b_{|j-l-n+p+q|-1-2r }(X)\\ +\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |j-l-n+p+q-1|-1-2r\geq 0\end{subarray}}\operatorname{sgn}(j-l-n+p+q-1)(-1)^{r}w^{q} \binom{j-1}{p}\binom{n-j}{q}\\ \times\binom{|j-l-n+p+q-1|-r-1}{r}b_{|j-l-n+p+q-1|-1-2r}(X).\] Also here we simplify the expression using the replacement \(q\to n-j-q\). 
\[\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |-l+p-q|-1-2r\geq 0\end{subarray}}\operatorname{sgn}(-l+p-q)(-1)^{r}w^{n-j-q} \binom{j-1}{p}\binom{n-j}{q}\binom{|-l+p-q|-r-1}{r}b_{|-l+p-q|-1-2r}(X)\\ +\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |-l+p-q-1|-1-2r\geq 0\end{subarray}}\operatorname{sgn}(-l+p-q-1)(-1)^{r}w^{n-j-q} \binom{j-1}{p}\binom{n-j}{q}\binom{|-l+p-q-1|-r-1}{r}b_{|-l+p-q-1|-1-2r}(X)\] This implies the following. \[\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{j-l-n-1}(1+X_{i})^{j-1}( 1+wX_{i})^{n-j}-X_{i}^{-j+l+n}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}\right) }{\prod_{1\leq i<j\leq n}(X_{j}+X_{j}^{-1}-X_{i}-X_{i}^{-1})}\\ =\prod_{i=1}^{n}(1-X_{i}^{-1})\det_{1\leq i,j\leq n}\left(a_{i,j} \right),\] with \[a_{i,j}=\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |p-q-l|-1-2r\geq 0\end{subarray}}\operatorname{sgn}(p-q-l)(-1)^{r}w^{n-j-q} \binom{j-1}{p}\binom{n-j}{q}\\ \times\binom{|p-q-l|-r-1}{r}h_{|p-q-l|-i-2r}(X_{1}+X_{1}^{-1}, \ldots,X_{i}+X_{i}^{-1})\\ +\sum_{\begin{subarray}{c}p,q,r\geq 0\\ |p-q-l|-1-2r\geq 0\end{subarray}}\operatorname{sgn}(p-q-l-1)(-1)^{r}w^{n-j-q} \binom{j-1}{p}\binom{n-j}{q}\\ \times\binom{|p-q-l-1|-r-1}{r}h_{|p-q-l-1|-i-2r}(X_{1}+X_{1}^{-1}, \ldots,X_{i}+X_{i}^{-1}).\] Using Lemma 4.5, we see that this is equal to \[b_{i,j}=\sum_{p,q\geq 0}\operatorname{sgn}(p-q-l)w^{n-j-q}\binom{j-1}{p} \binom{n-j}{q}h_{|p-q-l|-i}(X_{1},X_{1}^{-1},\ldots,X_{i},X_{i}^{-1})\\ +\sum_{p,q\geq 0}\operatorname{sgn}(p-q-l-1)w^{n-j-q}\binom{j-1}{p }\binom{n-j}{q}h_{|p-q-l-1|-i}(X_{1},X_{1}^{-1},\ldots,X_{i},X_{i}^{-1}). \tag{4.10}\] Here we need to find a combinatorial interpretation of \[(-1)^{n+1\choose 2}\prod_{i=1}^{n}X_{i}^{l}\ (X_{i}^{-1}+1+w+X_{i})\det_{1\leq i,j\leq n}\left(b_{i,j}\right).\] The only modification compared to the odd case is that the endpoints have to be replaced by the following set of two endpoints \(E_{j}=\{(n-j+l+1,j-l-2),(n-j+l,j-l-1)\}\) and that the overall factor is \[\prod_{i=1}^{n}X_{i}^{l}\ (X_{i}^{-1}+1+w+X_{i}),\] given that the sign is taken care of as above. This concludes the proof of Theorem 4.1. ### Right-hand side of (3.5), second proof In this section, we sketch a second proof of Theorem 4.1. It is closely related to the proof of Theorem 2.4 in [11]. We only study the case \(m=2l+1\). Again, we need to consider \[\det_{1\leq i,j\leq n}\left(X_{i}^{j-l-n-1}(1+X_{i})^{j-1}(1+wX_{i})^{n-j}-X_{ i}^{-j+l+n+1}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}\right). \tag{4.11}\] We have the following lemma. 
**Lemma 4.6**.: _For \(n\geq 1\) and \(l\in\mathbb{Z}\), the following identity holds._ \[\frac{1}{\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(X_{j}^{-1}-X_{i}^{-1 })\prod_{i,j=1}^{n}(X_{j}^{-1}-X_{i})}\\ \times\det_{1\leq i,j\leq n}\left(X_{i}^{j-l-n-1}(1+X_{i})^{j-1}( 1+wX_{i})^{n-j}-X_{i}^{-j+l+n+1}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}\right) \\ \times\det_{1\leq i,j\leq n}\left(X_{i}^{j-l-n-1}(1+X_{i})^{j-1}( 1+wX_{i})^{n-j}+X_{i}^{-j+l+n+1}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}\right) \\ =\frac{(-1)^{n}}{2}\det_{1\leq i,j\leq n}\left(\sum_{k,q}\binom{j -1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}(h_{k-i+1}-h_{k+i-1-2n})\right)\\ \times\det_{1\leq i,j\leq n}\left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q +1}\binom{n-j}{q}w^{q}(h_{k+i-1-n}+h_{k-i+1-n})\right)\] Proof.: We use \[\det(A-B)\det(A+B)=\det\left(\begin{array}{c|c}A-B&B\\ \hline 0&A+B\\ \end{array}\right)=\det\left(\begin{array}{c|c}A-B&B\\ \hline B-A&A\\ \end{array}\right)=\det\left(\begin{array}{c|c}A&B\\ \hline B&A\\ \end{array}\right)\] to see that the product of determinants on the left-hand side in the assertion of the lemma is equal to \[\det\left(\frac{\left(X_{i}^{j-l-n-1}(1+X_{i})^{j-1}(1+wX_{i})^{n-j}\right)_{1 \leq i,j\leq n}}{\left(X_{i}^{-j+l+n+1}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n- j}\right)_{1\leq i,j\leq n}}\right).\] Setting \(X_{n+i}=X_{i}^{-1}\) for \(i=1,2,\ldots,n\), we can also write this is as \[\det\left(\ \left(X_{i}^{j-l-n-1}\big{(}1+X_{i}\big{)}^{j-1}\big{(}1+wX_{i} \big{)}^{n-j}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\ \Big{|}\ \big{(}X_{i}^{-j+l+n+1}\big{(}1+X_{i}^{-1}\big{)}^{j-1}\big{(}1+wX_{i}^{-1} \big{)}^{n-j}\big{)}_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}}\ \right).\] We apply Lemma 4.4 to \[\frac{\det\left(\ \left(X_{i}^{j-l-n-1}\big{(}1+X_{i}\big{)}^{j-1}\big{(}1+wX_{i} \big{)}^{n-j}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}} \ \Big{|}\ \big{(}X_{i}^{-j+l+n+1}\big{(}1+X_{i}^{-1}\big{)}^{j-1}\big{(}1+wX_{i}^{-1} \big{)}^{n-j}\big{)}_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}} \right)}{\prod_{1\leq i<j\leq 2n}(X_{j}-X_{i})}\] and obtain \[\det\left(\left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k-i+ 1}\big{(}X_{1},\ldots,X_{i}\big{)}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1 \leq j\leq n}}\right.\\ \left.\left(\sum_{k,q}\binom{j-1}{-j-k+l+n-q+1}\binom{n-j}{q}w^{q }h_{k-i+1}\big{(}X_{1},\ldots,X_{i}\big{)}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i \leq 2n}{1\leq j\leq n}}\right).\] We multiply from the left with the following matrix \[\big{(}h_{j-i}\big{(}X_{j},X_{j+1},\ldots,X_{2n}\big{)}\big{)}_{1\leq i,j\leq 2n}\] with determinant \(1\). For this purpose, note that \[\sum_{l=1}^{2n}h_{l-i}\big{(}X_{l},X_{l+1},\ldots,X_{2n}\big{)}h_{k-l+1}(X_{1},\ldots,X_{l})=h_{k-i+1}(X_{1},\ldots,X_{2n}),\] and, therefore, the multiplication results in \[\det\left(\left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k-i+ 1}\big{(}X_{1},\ldots,X_{2n}\big{)}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1 \leq j\leq n}}\right.\\ \left(\sum_{k,q}\binom{j-1}{-j-k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k- i+1}\big{(}X_{1},\ldots,X_{2n}\big{)}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1 \leq j\leq n}}\right).\] We set \(X_{i+n}=X_{i}^{-1}\) for \(i=1,2,\ldots,n\) (so that the arguments of all complete symmetric functions are \(\big{(}X_{1},\ldots,X_{n},X_{1}^{-1},\ldots,X_{n}^{-1}\big{)}\)) and omit the \(X_{i}\)'s now. 
Also note that, under this specialization, the denominator \(\prod_{1\leq i<j\leq 2n}(X_{j}-X_{i})\) specializes to the denominator on the left-hand side in the assertion of the lemma. \[\det\left(\ \left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k-i+ 1}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\ \Big{|}\ \Big{(}\sum_{k,q}\binom{j-1}{-j-k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k-i+1}\Big{)} _{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\ \right)\] With this specialization, we have \(h_{k}=-h_{-k-2n}\) using (4.7). Therefore, the above is \[(-1)^{n}\det\left(\ \left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q }h_{k-i+1}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\ \Big{|}\ \Big{(}\sum_{k,q}\binom{j-1}{-j-k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k-i+1-2n} \Big{)}_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\right)\] \[=(-1)^{n}\det\left(\ \left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q }h_{k-i+1}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\ \Big{(}\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k -i-1-2n}\Big{)}_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\right).\] Now, for \(j=1,2,\ldots,n\), we subtract the \((j+n)\)-th column from the \(j\)-th column. \[(-1)^{n}\det\left(\left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n- j}{q}w^{q}(h_{k-i+1}-h_{k+i-1-2n})\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{ 1\leq j\leq n}}\right|\\ \left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}h_{k+ i-1-2n}\right)_{\genfrac{}{}{0.0pt}{}{1\leq i\leq 2n}{1\leq j\leq n}}\right)\] For \(i=n+2,n+3,\ldots,2n\), we add the \((2n+2-i)\)-th row to the \(i\)-th row. This gives a zero block for \(\{(i,j)|n+1\leq i\leq 2n,1\leq j\leq n\}\), since \[\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}(h_{k-i+1}-h_{k+i-1-2n}+ h_{k-(2n+2-i)+1}-h_{k+(2n+2-i)-1-2n})=0.\] The lower right block is \[\det_{\genfrac{}{}{0.0pt}{}{n+1\leq i\leq 2n}{1\leq j\leq n}} \left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}(h_{k +i-1-2n}+[i\not=n+1]h_{k+2n+2-i-1-2n})\right)\] \[=\det_{\genfrac{}{}{0.0pt}{}{n+1\leq i\leq 2n}{1\leq j\leq n }} \left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}(h_{k+i-1-2n}+[i \not=n+1]h_{k-i+1})\right)\] \[=\frac{1}{2}\det_{1\leq i,j\leq n }\left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}(h_{k+i-1-n}+h_{k -i+1-n})\right).\] This concludes the proof of the lemma. The identity in the lemma involves, up to factors, a product of two determinants on the left-hand side and also a product of two determinants on the right-hand side. This suggests that each of the determinants on the left-hand side equals up to factors a determinant on the right-hand side. This is indeed the case. More specifically, one can show that (4.11) is up to factors equal to \[(-1)^{n}\prod_{i=1}^{n}(1+X_{i})X_{i}^{l}\det_{1\leq i,j\leq n}\left(\sum_{k,q }\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}(h_{k+i-1-n}+h_{k-i+1-n})\right).\] This can be shown by induction with respect to \(n\) as suggested in [10, Remark 7.4]. Thus Lemma 4.6 would actually not have been necessary, however, it explains how the expression was obtained much better than the proof by induction. 
Using Lemma 7.5 from [10], we can conclude further that the expression is equal to \[(-1)^{n}\prod_{i=1}^{n}(1+X_{i})X_{i}^{l}\\ \times\det_{1\leq i,j\leq n}\left(\sum_{k,q}\binom{j-1}{-j+k+l+n-q +1}\binom{n-j}{q}w^{q}h_{k+i-1-n}(X_{1},X_{1}^{-1},\ldots,X_{n-i+1},X_{n-i+1}^ {-1})\right).\] Also this formula can be proven directly by induction with respect to \(n\). Our next goal would be to give a combinatorial interpretation of \[\sum_{k,q}\binom{j-1}{-j+k+l+n-q+1}\binom{n-j}{q}w^{q}(h_{k+i-1-n}(X_{1},X_{1}^{-1 },\ldots,X_{n-i+1},X_{n-i+1}^{-1}).\] We replace \(i\) by \(n-i+1\) and \(q\) by \(n-j-q\), and then we get rid of \(k\) by setting \(p=k+l+q+1\). \[\sum_{p,q}\binom{j-1}{p}\binom{n-j}{q}w^{n-j-q}h_{p-q-l-1-i}(X_{1},X_{1}^{-1}, \ldots,X_{i},X_{i}^{-1}).\] This is equal to (4.9) when taking into account the definition of complete symmetric functions \(h_{k}\) for negative \(k\)'s as given in (4.7). ## 5. Explicit product formulas in case \((X_{1},\ldots,X_{n})=(1,\ldots,1)\) and \(w=0,-1\) When evaluating the specializations of the LHS or RHS of (1.8) at \((X_{1},\ldots,X_{n})=(1,\ldots,1)\) in the cases \(w=0,-1\) for small values of \(n\), one observes that the numbers involve only small prime factors, and, therefore, it is likely that they are expressible by product formulas. (A similar observation is true for the case \(w=1\), but there the explanation is simple, since \(\prod_{1\leq i<j\leq n}(1+wX_{i}+X_{j}+X_{i}X_{j})\) on the left-hand side of (1.8) is symmetric then.) For the LHS and the case \(m=n-1\), these are unpublished conjectures of Florian Schreier-Aigner from 2018. For instance, in the case \(w=-1\) and \(m=n-1\), we obtain the following numbers \[1,4,60,3328,678912,\ldots=2^{n(n-1)/2}\prod_{j=0}^{n-1}\frac{(4j+2)!}{(n+2j+1)!}\] that have also appeared in recent work of Di Francesco [1]. Related conjectures for arbitrary \(m\) have now been proven and will appear in forthcoming work with Florian Schreier-Aigner. The approach we have been successful with involves the transformation of the bialternant formula on the RHS of (1.8) into a Jacobi-Trudi type determinant. Then we can easily set \(X_{i}=1\) and we were able to guess the LU-decompositions of the relevant matrices. Proving these guesses involves the evaluation of certain triple sums, which is possible using Sister Celine's algorithm [10] and the fabulous Mathematica packages provided by RISC [11]. We state the results next. First we deal with the case \(w=0\). **Theorem 5.1**.: _The specialization of the generating function of arrowed Gelfand-Tsetlin patterns with \(n\) rows and strictly increasing non-negative bottom row where the entries are bounded by \(m\) at \((X_{1},\ldots,X_{n})=(1,\ldots,1)\), \(u=v=t=1\) and \(w=0\) is equal to_ \[3^{\binom{n+1}{2}}\prod_{i=1}^{n}\frac{(2n+m+2-3i)_{i}}{(i)_{i}}.\] Now we turn to the case \(w=-1\). **Theorem 5.2**.: _The specialization of the generating function of arrowed Gelfand-Tsetlin patterns with \(n\) rows and strictly increasing non-negative bottom row where the entries are bounded by \(m\) at \((X_{1},\ldots,X_{n})=(1,\ldots,1)\), \(u=v=t=1\) and \(w=-1\) is equal to_ \[2^{n}\prod_{i=1}^{n}\frac{(m-n+3i+1)_{i-1}(m-n+i+1)_{i}}{\left(\frac{m-n+i+2}{ 2}\right)_{i-1}(i)_{i}}.\] Other future work concerns extending results that have been obtained using (1.2) by replacing this identity by Theorem 1.1 or specializations thereof. 
Appendix A Aspects of the combinatorics of the classical Littlewood identity and its bounded version ### The combinatorics of the classical Littlewood identity We start by reviewing the classical combinatorial proof of (1.1): one can interpret the Schur polynomial \(s_{\lambda}(X_{1},\ldots,X_{n})\) as the (multivariate) generating function of semistandard Young tableaux of shape \(\lambda\) with entries in \(\{1,2,\ldots,n\}\), where the exponent of \(X_{i}\) is just the number of occurrences of \(i\) in a given semistandard Young tableau. Then the left-hand side of (1.1) is simply the generating function of all such semistandard Young tableaux of any shape \(\lambda\). The right-hand side can be interpreted as the generating function of symmetric \(n\times n\) matrices with non-negative integer entries: expanding \(\frac{1}{1-X_{i}X_{j}}=\sum_{a_{i},j\geq 0}(X_{i}X_{j})^{a_{i,j}}\) corresponds to the entries \(a_{i,j}=a_{j,i}\) for \(i<j\), while expanding \(\frac{1}{1-X_{i}}=\sum_{a_{i},i\geq 0}X_{i}^{a_{i,i}}\) corresponds to the diagonal entries \(a_{i,i}\). Then such a matrix determines a two-line array with \(a_{i,j}\) occurrences of the pair \(\left(\begin{smallmatrix}i\\ j\end{smallmatrix}\right)\) such that the pairs are ordered lexicographically. The semistandard Young tableau \(P\) is simply obtained by applying the Robinson-Schensted-Knuth (RSK) algorithm to the bottom row of the two-line array. It suffices to construct the so-called insertion tableau because by the symmetry of the RSK algorithm, it is equal to the recording tableau. Thus, to reconstruct the two-line array, we apply the inverse Robinson-Schensted-Knuth algorithm to \((P,P)\). ### Simpler decription of the classical bijection Now we discuss a related, but simpler bijective proof of (1.1) that does not invoke the symmetry of the RSK algorithm. After its description, we will actually discover that "only" the description of the algorithm is simpler as we will show that the bijection agrees with the classical one. However, this second version could be of interest for developing the combinatorics of (1.4) and (1.8). As discussed above, the right-hand side of (1.1) can be interpreted as the generating function of symmetric \(n\times n\) matrices \(A=(a_{i,j})_{1\leq i,j\leq n}\) with non-negative integer entries. They are also equivalent to lexicographically ordered two-line arrays with the property that the upper entry in each column is no smaller than the lower entry: For \(i\leq j\), let \(a_{i,j}=a_{j,i}\) be the number of columns of type \(\left(\begin{smallmatrix}j\\ i\end{smallmatrix}\right)\). Comparing to the two-line array from the classical proof, we just have to delete all columns \(\left(\begin{smallmatrix}j\\ i\end{smallmatrix}\right)\) with \(i>j\). Now we apply the following variant of RSK, which transforms a lexicographically ordered two-line array such that no upper element is smaller than the corresponding lower element into a semistandard Young tableau. * As usual, we work through the columns of the two-line array from left to right. * Suppose \(\left(\begin{smallmatrix}j\\ i\end{smallmatrix}\right)\), \(i\leq j\), is our current column. We use the usual RSK algorithm to insert \(i\) in to the current tableau. * If \(i<j\), we additionally place \(j\) into the tableau as follows: Suppose that the insertion of \(i\) ends with adding an entry to row \(r\), then we add \(j\) to row \(r+1\) in the leftmost column where there is no entry so far. 
**Example A.1**.: To give an example, observe that the symmetric matrix \[A=\begin{pmatrix}1&0&2&1\\ 0&0&1&4\\ 2&1&2&0\\ 1&4&0&1\end{pmatrix}\] is equivalent to the following two-line array \[\left(\begin{array}{cccccccccccc}1&3&3&3&3&3&4&4&4&4&4&4\\ 1&1&1&2&3&3&1&2&2&2&2&4\end{array}\right)\] and that the algorithm results in the following semistandard Young tableau. \[\begin{array}{|c|c|c|c|c|c|c|c|c|}\hline 1&1&1&1&2&2&2&2&4\\ \hline 2&3&3&3&3&4&4&\\ \hline 3&4&4&\\ \hline\end{array}\] _Well-definedness of the algorithm._ We argue that the resulting tableau is always a semistandard Young tableau. For this, we need an observation that can be deduced from [11, Lemma 7.11.2 (b)], which says that if we insert a weakly increasing sequence of positive integers \(i_{1}\leq i_{2}\leq\ldots\leq i_{r}\) from left to right into a semistandard Young tableau, then the "insertion path" of an earlier element lies strictly to the left of a later element. Moreover, for \(p<q\), the insertion path of \(i_{p}\) ends in a row below and to the left of the end of the insertion path of \(i_{q}\), or in the same row to the left of the end of the insertion path of \(i_{q}\). This implies that if the \(i_{k}\)'s are the bottom elements of the columns with top element \(j\) in the two line array, then, if the insertion path of an \(i_{k}\) with \(i_{k}<j\) ends in row \(r\), the elements in row \(1,2,\ldots,r\) are in \(\{1,2,\ldots,j-1\}\). We show by induction on the number of elements in the tableau that our algorithm always leads to a semistandard Young tableau. Now, if we insert the element \(i\) of the column \(\binom{j}{i}\) using the classical RSK algorithm into the current semistandard Young tableau, then we obtain another semistandard Young tableau, see [11, Lemma 7.11.3]. Placing the top element \(j\) in case \(j>i\) into the next row will also not destroy the columnstrictness as the elements above the row of \(j\) are in \(\{1,2,\ldots,j-1\}\), as discussed in the previous paragraph. **Remark A.2**.: Note that from the proof of well-definedness it follows that we may also add all top \(j\)'s at once after we have inserted the bottom entries of columns that have \(j\)'s as top entries in our algorithm: Consider the skew shape \(\lambda/\mu\) where \(\mu\) is the shape of the tableau that we had before the insertion of all these bottom entries and \(\lambda\) is the shape of the tableau we obtain after the insertion (but not yet adding the \(j\)'s from the top row of the two-line array) except that we exclude in the latter tableau all \(j\)'s that come from the bottom of the two-line array. Now, if there are \(c\) cells in row \(r\) of the skew shape then we add \(c\)\(j\)'s in row \(r+1\) to the semistandard Young tableaux with the bottom entries inserted, now including also those that come from columns \(\binom{j}{j}\). This is because the cells of the skew shape are added to the tableau in the course of insertion from bottom to top and within a row from left to right. _Reverse algorithm._ We construct the inverse algorithm inductively, where the induction is with respect to the largest element in the tableau. Suppose \(n\) is the largest element in the semistandard Young tableau, then we want to recover the part of the two-line array that has \(n\) in the top row (which is an ending section of the array). 
Suppose \[\left(\begin{array}{ccccc}n&n&\ldots&n\\ i_{1}&i_{2}&\ldots&i_{s}\end{array}\right)\] is this section, which implies \(i_{1}\leq i_{2}\leq\ldots\leq i_{s}\), and let \(r\) be maximal with \(i_{r}<n\) so that \(i_{r+1}=i_{r+2}=\ldots=i_{s}=n\). Now, from the algorithm it follows that \(s-r\) is just the number \(n\)'s in the top row of the tableau and we can delete these elements. Again it follows from [10, Lemma 7.11.2 (b)] that we need to determine the number \(u\) of \(n\)'s in the second row, remove them, and the apply the inverse bumping algorithm to the last \(u\) element in the first row, from right to left (which means that we just remove them and put them in the bottom row of the two-line array). We continue by counting (and removing) the \(n\)'s in the third row, and, if \(v\) is this number, apply the inverse bumping to the last \(v\) elements in the second row, from right to left. We work through the rows from top to bottom in this way. Finally, we discover that this algorithm is just another description of the classical bijection. **Proposition A.3**.: _The algorithm just described establishes the same bijection between symmetric \(n\times n\) matrices \(A\) with non-negative integer entries and semistandard Young tableaux with entries in \(\{1,2,\ldots,n\}\) as the classical one._ Sketch of proof.: The proof is by induction with respect to \(n\). For \(n=1\), there is nothing to prove since the two algorithms coincide in this case. We perform the step from \(n-1\) to \(n\). We can assume \(a_{n,n}=0\) since increasing \(a_{n,n}\) has the same effect in both algorithms as in both cases we just add \(a_{n,n}\) columns \(\binom{n}{n}\) at the end of the two-line arrays and apply the same procedure to these columns, in both cases at the end of the algorithm. Suppose \(B\) is the restriction of \(A\) to the first \(n-1\) rows and the first \(n-1\) columns. By the induction hypothesis, we know that \(B\) is transformed into the same semistandard Young tableau \(P\) under both algorithms. Moreover, let \(a\) be the two-line array that corresponds to \(A\) in the classical algorithm and \(a^{\prime}\) be the initial section that disregards all columns with an \(n\) in the top row. Clearly, we can obtain \(P\) also by applying RSK to the bottom row of \(a^{\prime}\) and then deleting all \(n\)'s because the two-line array \(b\) that corresponds to \(B\) under the classical algorithm is obtained from \(a^{\prime}\) by deleting all columns that have an \(n\) in the bottom row and the \(n\)'s will never bump an element, but at most be bumped in final steps of insertions. Let \(Q\) denote the semistandard Young tableau where the \(n\)'s are kept (i.e., what we obtain after applying RSK to the bottom row of \(a^{\prime}\)). Now note that the final sections of the two-line array with \(n\) in the top row agree for both two-line arrays, and denote it by \(s\). Since we assume \(a_{n,n}=0\), the bottom row of \(s\) does not contain any \(n\). It is also clear that we will obtain the same tableau if we apply the following two different procedures: Insert the bottom row of \(s\) to \(P\), or, insert the bottom row of \(s\) to \(Q\) and then delete the \(n\)'s. This is because \(P\) and \(Q\) agree on all entries different from \(n\) and \(n\)'s are at most bumped in final steps in the second case. This implies that the two procedures (namely, the "classical" one and the one that is the subject of this section) result in the same two tableaux when disregarding the \(n\)'s. 
Therefore, it remains to show that they also agree on the \(n\)'s. Now we use the fact that the positions of the \(n\)'s (as for any other entry) can also be determined by considering the recording tableau (which is due to the symmetry of the classical RSK algorithm), in particular we need to study how the recording tableau is built up when adding \(s\) since this is the only time when \(n\)'s are added to the recording tableau. These \(n\)'s are added in the final cells of the insertion paths when inserting the bottom row of \(s\) into \(Q\). Such an insertion path can either agree with the corresponding insertion path in \(P\) or it has one additional step where an \(n\) gets bumped. As we already know that up to the \(n\)'s we obtain the same tableaux in both cases, we are always in the case that \(n\)'s are bumped and this proves the assertion. ### RSK in terms of Gelfand-Tsetlin patterns It is well-known that semistandard Young tableaux can be replaced by Gelfand-Tsetlin patterns in the definition of Schur polynomials (and thus in the combinatorial interpretation of the left-hand sides of (1.1) and (1.6)) as there is an easy bijective correspondence, which will be described next. This point of view is valuable for us because the left-hand sides of our Littlewood-type identities can also be interpreted combinatorially as generating functions of Gelfand-Tsetlin-pattern-type objects (see Section 3). The purpose of the current section is to indicate how the classical RSK algorithm works on (classical) Gelfand-Tsetlin patterns, with the hope that something similar can be established for our variant (i.e., arrowed Gelfand-Tsetlin patterns, see Section 3.1). A Gelfand-Tsetlin pattern is a finite triangular array of integers with centered rows as follows \[\begin{array}{ccccc}&&a_{1,1}&&\\ &&a_{2,1}&&a_{2,2}&\\ &\ddots&&\ldots&&\ddots&\\ a_{n,1}&&a_{n,2}&&\ldots&&a_{n,n}\end{array}\] such that we have a weak increase in \(\,\varkappa\)-direction as well as in \(\,\searrow\,\)-direction, i.e., \(a_{i+1,j}\leq a_{i,j}\leq a_{i+1,j+1}\), for all \(1\leq j\leq i\leq n-1\). The bijection between semistandard Young tableaux of shape \((\lambda_{1},\lambda_{2},\ldots,\lambda_{n})\) (we allow zero entries here) and parts in \(\{1,2,\ldots,n\}\), and Gelfand-Tsetlin patterns with bottom row \((\lambda_{n},\lambda_{n-1},\ldots,\lambda_{1})\) is as follows: reading the \(i\)-th row of a Gelfand-Tsetlin pattern in reverse order gives a partition, and this is precisely the shape constituted by the entries less than or equal to \(i\) in the corresponding semistandard Young tableau. Under this bijection, the number of entries equal to \(i\) in the semistandard Young tableau is equal to the difference of the \(i\)-th row sum and the \((i-1)\)-st row sum in the Gelfand-Tsetlin pattern. Therefore, \[s_{(\lambda_{1},\ldots,\lambda_{n})}(X_{1},\ldots,X_{n})=\sum\prod_{i=1}^{n}X_ {i}^{\sum_{j-1}^{i}a_{i,j}-\sum_{j=1}^{i-1}a_{i-1,j}},\] where the sum is over all Gelfand-Tsetlin patterns \((a_{i,j})_{1\leq j\leq i\leq n}\) with bottom row \((\lambda_{n},\lambda_{n-1},\ldots,\lambda_{1})\). 
To give an example, observe that the Gelfand-Tsetlin pattern corresponding to the following semistandard Young tableaux (A.1) \[\begin{array}{|c|c|c|c|c|c|c|}\hline 1&1&2&2&3&5\\ \hline 2&2&4&5&7&8\\ \hline 4&5&5&7&8\\ \hline 5&6&6&8\\ \hline 7&8\\ \hline\end{array}\] is \[\begin{array}{ccccccccccccc}&&&&3&&&&\\ &&&&2&&5&&&&\\ &&&&0&&2&&6&&\\ &&&&0&&1&&3&&6&&\\ &&&&0&&1&&3&&4&&7&&\\ &&&&0&&0&&3&&3&&4&&7\\ &&&&0&&0&&1&&3&&4&&5&&7\\ &&&&0&&0&&1&&3&&4&&5&&7\\ &&&&0&&0&&2&&4&&5&&6&&7\\ &&&&0&&0&&2&&4&&5&&6&&7\\ \end{array}.\] Now suppose we use the RSK algorithm to insert the integer \(m\) into a semistandard Young tableau. On the corresponding Gelfand-Tsetlin pattern, we have to do the following. * If the number \(n\) of rows of the pattern is less than \(m\) and the bottom row of the pattern is \(k_{1},\ldots,k_{n}\), then we add rows of the form \(0,\ldots,0,k_{1},\ldots,k_{n}\) with the appropriate number of \(0\)'s until we have \(m\) rows. * Now we start a path in the pattern that starts at the last entry in row \(m\) with (unit) steps in \(\searrow\)-direction or \(\swarrow\)-direction progressing from one entry to a neighboring entry in this direction. The rule is as follows: Whenever the \(\searrow\)-neighbor of the current entry is equal to the current entry we extend our path to the next entry in \(\searrow\)-direction, otherwise we go to the next entry in \(\swarrow\)-direction. We continue with this path until we reach the bottom row. * Finally, we add \(1\) to all entries in the path. To give an example, if we use RSK to insert \(3\) into the semistandard Young tableau from (A.1), we obtain the following tableau, where the insertion path is indicated in red. \[\begin{array}{|c|c|c|c|c|c|c|}\hline 1&1&1&2&3&3\\ \hline 2&2&4&5&5&8&\\ \hline 4&5&5&7&7&\\ \hline 5&6&6&8&8&\\ \hline 7&8&&\\ \hline\end{array}\] On the corresponding Gelfand-Tsetlin pattern, we obtain the following. \[\begin{array}{ccccccccccccc}&&&&3&&&&\\ &&&&2&&5&&\\ &&&&0&&2&&7&&\\ &&&&0&&1&&3&&7&&\\ &&&&0&&1&&3&&5&&7\\ &&&&0&&0&&3&&3&&5&&7\\ &&&&0&&0&&1&&3&&5&&5&&7\\ &&&&0&&0&&1&&3&&5&&5&&7\\ &&&&0&&0&&2&&5&&5&&6&&7\\ \end{array}\] It corresponds to the tableau with the \(3\) inserted. Now suppose in our simplified algorithm to prove (1.1), we "insert" the column \(\binom{j}{i}\) into the Gelfand-Tsetlin pattern. At this point, the Gelfand-Tsetlin pattern should have \(j\) rows. Then we apply the algorithm just described to insert \(i\) into the pattern. To insert also \(j\) (in case \(j\not\pm i\)), add \(1\) to the entry immediately left of the entry that is the end of the path that is induced by the insertion of \(i\). Whenever we progress to the first column with \(j\) as top element in the two-line array, we add one row to the Gelfand-Tsetlin by copying the current bottom row and adding one \(0\) at the beginning. ### The right-hand side of the bounded Littlewood identity (1.6) The irreducible characters of the special orthogonal group \(SO_{2n+1}(\mathbb{C})\) associated with the partition \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) are \[so_{\lambda}^{\operatorname{odd}}(X_{1},\dots,X_{n})=\prod_{i=1}^{n}X_{i}^{n- 1/2}\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{-\lambda_{j}-n+j-1/2}-X_{i}^{ \lambda_{j}+n-j+1/2}\right)}{\left(1+[\lambda_{n}=0]\right)\prod_{i=1}^{n}(1-X _{i})\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(1-X_{i}X_{j})},\] see [11, Eq. (24.28)]. These characters can be seen as generating functions of certain halved Gelfand-Tsetlin patterns that are defined next. 
This can even be extended to so-called half-integer partitions as will be explained also. A half-integer partition is a finite, weakly decreasing sequence of positive half-integers. **Definition A.4**.: _For a positive integer \(n\), a \(2n\)-split orthogonal (Gelfand-Tsetlin) pattern is an array of non-negative integers or non-negative half-integers with \(2n\) rows of lengths \(1,1,2,2,\dots,n,n\), which are aligned as follows for \(n=3\)_ \[\begin{array}{ccccc}a_{1,1}&&&&\\ &a_{2,1}&&&&\\ a_{3,1}&&a_{3,2}&&\\ &a_{4,1}&&a_{4,2}&&\\ a_{5,1}&&a_{5,2}&&a_{5,3}&\\ &a_{6,1}&&a_{6,2}&&a_{6,3}\end{array},\] _such that the entries are weakly increasing along \(\,\mathcal{\gamma}\,\)-diagonals and \(\,\searrow\,\)-diagonals, and in which the entries, except for the first entries in the odd rows (called odd starters), are either all non-negative integers or all non-negative half-integers. Each starter is independently either a non-negative integer or a non-negative half-integer. The weight of a \(2n\)-split orthogonal pattern is_ \[\prod_{i=1}^{n}X_{i}^{r_{2i}-2r_{2i-1}+r_{2i-2}}\] _where \(r_{i}\) is the sum of entries in row \(i\) and \(r_{0}=0\)._ The following theorem is the first part of Theorem 7.1 in [10]. **Theorem A.5**.: _Let \(\lambda=(\lambda_{1},\dots,\lambda_{n})\) be a partition (allowing zero entries) or a half-integer partition. Then the generating function of \(2n\)-split orthogonal patterns with respect to the above weight that have \(\lambda\) as bottom row, written in increasing order, is_ \[\prod_{i=1}^{n}X_{i}^{n-1/2}\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{-\lambda_ {j}-n+j-1/2}-X_{i}^{\lambda_{j}+n-j+1/2}\right)}{\left(1+[\lambda_{n}=0] \right)\prod_{i=1}^{n}(1-X_{i})\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(1-X_{i}X_ {j})}.\] Now the right-hand side of (1.6) can be written as \[\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{j-1}-X_{i}^{m+2n-j} \right)}{\prod_{i=1}^{n}(1-X_{i})\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(1-X_{i} X_{j})}\\ =\prod_{i=1}^{n}X_{i}^{(m-1)/2+n}\frac{\det_{1\leq i,j\leq n} \left(X_{i}^{j-n-(m+1)/2}-X_{i}^{-j+n+(m+1)/2}\right)}{\prod_{i=1}^{n}(1-X_{i} )\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(1-X_{i}X_{j})},\] so that we can deduce from Theorem A.5 that it is equal to \[\prod_{i=1}^{n}X_{i}^{m/2}so^{\text{odd}}_{(m/2,m/2,\ldots,m/2)}(X_{1},\ldots,X_ {n}).\] From (1.6), it now follows that (A.2) \[\sum_{\lambda\subseteq(m^{n})}s_{\lambda}(X_{1},\ldots,X_{n})=\prod_{i=1}^{n}X _{i}^{m/2}so^{\text{odd}}_{(m/2,m/2,\ldots,m/2)}(X_{1},\ldots,X_{n}).\] A combinatorial proof of this fact can be found in [12, Corollary 7.4]. It would be interesting to see whether there is a bijective proof of (A.2) that uses RSK. More concretely, under the bijection that is used in the classical bijective proof of (1.1) semistandard Young tableaux whose shape is in \((m^{n})\) correspond to two-line arrays such that the longest increasing subsequence of the bottom row has at most \(m\) elements, see [12, Proposition 7.23.10]. Next we argue that we can also read off the \(m\) from the two-line array we use for our simplified proof of (1.1), in the following sense. The longest increasing subsequence of the bottom row of the "classical" two-line array can be read off the corresponding matrix \(A\) with non-negative integers as follows: we consider walks through the matrix with unit \(\to\)-steps and unit \(\downarrow\)-steps and add up the entries we traverse. 
The maximal sum we can achieve with such a path is the length of the longest increasing subsequence of the bottom row of the classical two-line array. Now, if the matrix \(A\) is symmetric, we can confine such walks to be weakly above the main diagonal and the two-line array of the simplified algorithm is constituted by this part of the matrix. Finally, we give a bijective proof of (A.2) in the case \(n=2\). The left-hand side can be seen as the generating function of semistandard Young tableaux with entries in \(\{1,2\}\), with the weight \[X_{1}^{\#\text{ of }1\text{'s}}X_{2}^{\#\text{ of }2\text{'s}}.\] Such tableaux have at most \(2\) rows and can be encoded by three non-negative integers \(x,y,z\): let \(y\) be the number of \(2\)'s in the second row, \(z\) be the number of \(2\)'s in the first row and \(x+y\) be the number of \(1\)'s, which are necessarily in the first row. The two-line array that corresponds to such a tableau under our simplified algorithm is constituted by \(x\) columns \(\binom{1}{1}\), \(y\) columns \(\binom{2}{1}\) and \(z\) columns \(\binom{2}{2}\), ordered lexicographically. The corresponding \(4\)-split pattern can be obtained as follows: Add \(\frac{x+y+z}{2}\) to all entries of the following \(4\)-split pattern. \[\begin{array}{ccccc}\frac{-x-y-\min(x,z)}{2}&&&&\\ &-\min(x,z)&&&\\ \frac{-y-z-\min(x,z)}{2}&&0&&\\ &0&&0&\end{array}\] ## Appendix B Further combinatorial interpretations of the left-hand sides ### Generating function of AGTPs with respect to the bottom row Setting \(X_{1}=X_{2}=\ldots=X_{n}=1\) in Theorem 3.4, we see that the generating function of AGTPs with bottom row \(k_{1},\ldots,k_{n}\) and with respect to the weight (B.1) \[\operatorname{sgn}(A)\,t^{\varnothing}u^{\nearrow}v^{\nwarrow}w^{\nearrow\nwarrow}\] is (B.2) \[(t+u+v+w)^{n}\prod_{1\leq i<j\leq n}\ \big{(}t+u\mathrm{E}_{k_{i}}+v\mathrm{E}_{k_{j}}^{-1}+w\mathrm{E}_{k_{i}}\mathrm{E}_{k_{j}}^{-1}\big{)}\prod_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}\\ =\big{(}t+u+v+w\big{)}^{n}\prod_{1\leq i<j\leq n}\ \big{(}t\mathrm{E}_{k_{j}}+u\mathrm{E}_{k_{i}}\mathrm{E}_{k_{j}}+v+w\mathrm{E}_{k_{i}}\big{)}\prod_{j=1}^{n}\mathrm{E}_{k_{j}}^{-j+1}\prod_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}\\ =\big{(}t+u+v+w\big{)}^{n}\prod_{1\leq i<j\leq n}\ \big{(}t\mathrm{E}_{k_{j}}+u\mathrm{E}_{k_{i}}\mathrm{E}_{k_{j}}+v+w\mathrm{E}_{k_{i}}\big{)}\prod_{1\leq i<j\leq n}\frac{k_{j}-k_{i}}{j-i},\] using the fact \(s_{(k_{n},k_{n-1},\ldots,k_{1})}(1,\ldots,1)=\prod_{1\leq i<j\leq n}\frac{k_{j}-k_{i}+j-i}{j-i}\), which follows from [10, (7.105)] when taking the limit \(q\to 1\). Generalizing a computation in Section 6 of [10] slightly, it can be seen that the coefficient of \(X_{1}^{k_{1}}X_{2}^{k_{2}}\cdots X_{n}^{k_{n}}\) in \[(t+u+v+w)^{n}\prod_{i=1}^{n}X_{i}^{-n+1}\ \big{(}1-X_{i}\big{)}^{-n}\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(u+tX_{i}+wX_{j}+vX_{i}X_{j})\] is the generating function of AGTPs with bottom row \(k_{1},k_{2},\ldots,k_{n}\) as given in (B.2), when interpreting the rational function as a formal Laurent series in \(X_{1},X_{2},\ldots,X_{n}\) with \((1-X_{i})^{-1}=\sum_{k\geq 0}X_{i}^{k}\) and assuming \((k_{1},k_{2},\ldots,k_{n})\geq 0\).
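As an aside, the case \(n=2\) of this coefficient statement can be checked symbolically before we rephrase it. The following sketch is our own illustrative code (not part of the paper); it compares the Laurent coefficient with the operator formula (B.2), which for \(n=2\) reduces to \(t+u\mathrm{E}_{k_{1}}+v\mathrm{E}_{k_{2}}^{-1}+w\mathrm{E}_{k_{1}}\mathrm{E}_{k_{2}}^{-1}\) applied to \(k_{2}-k_{1}+1\).

```python
import sympy as sp

t, u, v, w, X1, X2 = sp.symbols('t u v w X1 X2')

def lhs_coeff(k1, k2, order=8):
    # truncated expansion of (1 - X)^(-2) = sum_{k >= 0} (k + 1) X^k
    s1 = sum((k + 1) * X1**k for k in range(order))
    s2 = sum((k + 1) * X2**k for k in range(order))
    # the prefactor X1^{-1} X2^{-1} (i.e. prod X_i^{-n+1} for n = 2) is
    # absorbed by shifting the extracted exponents by one
    F = sp.expand((t + u + v + w)**2 * s1 * s2 * (X2 - X1)
                  * (u + t*X1 + w*X2 + v*X1*X2))
    return sp.expand(F.coeff(X1, k1 + 1).coeff(X2, k2 + 1))

def rhs(k1, k2):
    f = lambda a, b: b - a + 1  # s_{(k2, k1)}(1, 1) = k2 - k1 + 1
    # t + u E_{k1} + v E_{k2}^{-1} + w E_{k1} E_{k2}^{-1} applied to f
    return sp.expand((t + u + v + w)**2 * (t*f(k1, k2) + u*f(k1 + 1, k2)
                                           + v*f(k1, k2 - 1) + w*f(k1 + 1, k2 - 1)))

print(sp.expand(lhs_coeff(0, 1) - rhs(0, 1)))  # 0
print(sp.expand(lhs_coeff(2, 3) - rhs(2, 3)))  # 0
```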
Phrased differently, for any \((k_{1},\ldots,k_{n}),(m_{1},\ldots,m_{n})\in\mathbb{Z}^{n}\) with \((k_{1}+m_{1},\ldots,k_{n}+m_{n})\geq 0\), the coefficient of \(X_{1}^{m_{1}}\cdots X_{n}^{m_{n}}\) in \[(t+u+v+w)^{n}\prod_{i=1}^{n}X_{i}^{-n+1-k_{i}}\ (1-X_{i})^{-n}\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(u+tX_{i}+wX_{j}+vX_{i}X_{j})\] is the generating function of AGTPs with bottom row \((k_{1}+m_{1},\ldots,k_{n}+m_{n})\). Therefore, the coefficient of \(X_{1}^{m_{1}}\cdots X_{n}^{m_{n}}\) in \[(t+u+v+w)^{n}\\ \times\mathbf{Sym}_{X_{1},\ldots,X_{n}}\left[\prod_{i=1}^{n}X_{i}^{-n+1-k_{i}}\ (1-X_{i})^{-n}\prod_{1\leq i<j\leq n}(X_{j}-X_{i})(u+tX_{i}+wX_{j}+vX_{i}X_{j})\right]\] is the generating function of pairs of AGTPs and permutations \(\sigma\), where the difference of the bottom row and \((k_{1},\ldots,k_{n})\) is the permutation of \(\{m_{1},\ldots,m_{n}\}\) given by \(\sigma\), assuming \((k_{1}+m_{\sigma(1)},\ldots,k_{n}+m_{\sigma(n)})\geq 0\) for every permutation \(\sigma\). The latter is always satisfied if \((k_{1},\ldots,k_{n}),(m_{1},\ldots,m_{n})\geq 0\). The above expression is equal to \[(t+u+v+w)^{n}\prod_{i=1}^{n}\ \big{(}1-X_{i}\big{)}^{-n}\prod_{1\leq i<j\leq n}(X_{j}-X_{i})\\ \times\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{i=1}^{n}X_{i}^{-k_{i}}\ \prod_{1\leq i<j\leq n}(v+wX_{i}^{-1}+tX_{j}^{-1}+uX_{i}^{-1}X_{j}^{-1})\right].\] We sum over all \(0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m\). \[(t+u+v+w)^{n}\prod_{i=1}^{n}\ (1-X_{i})^{-n}\prod_{1\leq i<j\leq n}(X_{j}-X_{i})\\ \times\mathbf{ASym}_{X_{1},\ldots,X_{n}}\left[\prod_{1\leq i<j\leq n}(v+wX_{i}^{-1}+tX_{j}^{-1}+uX_{i}^{-1}X_{j}^{-1})\sum_{0\leq k_{1}<k_{2}<\ldots<k_{n}\leq m}X_{1}^{-k_{1}}X_{2}^{-k_{2}}\cdots X_{n}^{-k_{n}}\right]\] For \((m_{1},\ldots,m_{n})\geq 0\), the coefficient of \(X_{1}^{m_{1}}\cdots X_{n}^{m_{n}}\) is the generating function of pairs of AGTPs \(A\) and permutations \(\sigma\) of \(\{1,2,\ldots,n\}\) such that, if \((m_{\sigma(1)},\ldots,m_{\sigma(n)})\) is added to the bottom row of \(A\), we obtain a strictly increasing sequence of non-negative integers. In particular, the constant term is the generating function of AGTPs (with respect to the weight (B.1)), whose bottom row is a strictly increasing sequence of non-negative integers, multiplied by \(n!\). Setting \(t=u=v=1\), this is by (1.8) equal to \[(3+w)^{n}\prod_{i=1}^{n}\ (1-X_{i})^{-n}\prod_{1\leq i<j\leq n}(X_{j}-X_{i})\\ \times\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{-j+1}(1+X_{i}^{-1})^{j-1}(1+wX_{i}^{-1})^{n-j}-X_{i}^{-m-2n+j}(1+X_{i})^{j-1}(1+wX_{i})^{n-j}\right)}{\prod_{i=1}^{n}(1-X_{i}^{-1})\prod_{1\leq i<j\leq n}(1-X_{i}^{-1}X_{j}^{-1})}\\ =(3+w)^{n}\prod_{i=1}^{n}\ (1-X_{i})^{-n}\prod_{1\leq i<j\leq n}(X_{j}-X_{i})\\ \times\frac{\det_{1\leq i,j\leq n}\left(X_{i}^{-j+2}(1+X_{i})^{j-1}(w+X_{i})^{n-j}-X_{i}^{-m-n+j}(1+X_{i})^{j-1}(1+wX_{i})^{n-j}\right)}{\prod_{i=1}^{n}(X_{i}-1)\prod_{1\leq i<j\leq n}(X_{i}X_{j}-1)}.\] ### Generating function of alternating sign triangles with respect to the positions of the \(1\)-columns Alternating sign triangles have been introduced recently in [1].
**Definition B.1**.: _An alternating sign triangle (AST) with \(n\geq 1\) rows is a triangular array with \(n\) centered rows of the following shape_ \[\begin{array}{ccccccccc}a_{1,1}&a_{1,2}&\ldots&\ldots&\ldots&\ldots&a_{1,2n-1}\\ &a_{2,2}&\ldots&\ldots&\ldots&a_{2,2n-2}\\ &&\ldots&\ldots&\ldots\\ &&&a_{n,n}\end{array}\] _such that \(a_{i,j}\in\{0,1,-1\}\), non-zero entries alternate in each row and column, all rows sum to \(1\) and the topmost non-zero entry (if any) in each column is \(1\)._ Next we give an example of an AST with \(4\) rows. \[\begin{array}{ccccccccc}0&0&0&1&0&0&0\\ &0&1&-1&0&1\\ &&0&0&1\\ &&&1\end{array}\] It is known that there is the same number of \(n\times n\) ASMs as there are ASTs with \(n\) rows, but no bijection is known so far. It has even been possible to identify certain equidistributed statistics, see [11, 12, 13]. The columns of an AST sum to \(0\) or \(1\). A column that sums to \(1\) is said to be a \(1\)-column. The central column is always a \(1\)-column. Since the sum of all entries in an AST with \(n\) rows is \(n\), there are precisely \(n-1\) other \(1\)-columns. A certain type of generating function with respect to the \(1\)-columns has been derived in [12, Theorem 7]. It involves one other statistic, which we introduce next: A \(11\)-column is a \(1\)-column with \(1\) as bottom element, while a \(10\)-column is a \(1\)-column with \(0\) as bottom element. For an AST \(T\), we define \[\rho(T)=\#\,11\text{-columns left of the central column}+\#\,10\text{-columns right of the central column}+1.\] **Theorem B.2**.: _Let \(n\) be a positive integer, \(0\leq r\leq n-1\) and \(0\leq j_{1}<j_{2}<\ldots<j_{n-1}\leq 2n-3\). The coefficient of \(t^{r-1}X_{1}^{j_{1}}X_{2}^{j_{2}}\cdots X_{n-1}^{j_{n-1}}\) in_ (B.3) \[\prod_{i=1}^{n-1}(t+X_{i})\prod_{1\leq i<j\leq n-1}(1+X_{i}+X_{i}X_{j})(X_{j}-X_{i})\] _is the number of ASTs \(T\) with \(n\) rows, \(\rho(T)=r\) and \(1\)-columns in positions \(j_{1},j_{2},\ldots,j_{n-1}\), where we exclude the central column and count from the left starting with \(0\)._ For what follows, the crucial question is whether we can also give the coefficient of \(t^{r-1}X_{1}^{j_{1}}X_{2}^{j_{2}}\cdots X_{n-1}^{j_{n-1}}\) of (B.3) a meaning if \((j_{1},\ldots,j_{n-1})\) is not strictly increasing. Such an interpretation does not exist so far. Phrased differently, the theorem states that the coefficient of \(X_{1}^{m_{1}}X_{2}^{m_{2}}\cdots X_{n-1}^{m_{n-1}}\) in \[\prod_{i=1}^{n-1}(t+X_{i}^{-1})X_{i}^{j_{i}}\prod_{1\leq i<j\leq n-1}(1+X_{i}^{-1}+X_{i}^{-1}X_{j}^{-1})(X_{j}^{-1}-X_{i}^{-1})\\ =\prod_{i=1}^{n-1}(1+tX_{i})X_{i}^{j_{i}-2n+1}\prod_{1\leq i<j\leq n-1}(1+X_{j}+X_{i}X_{j})(X_{i}-X_{j})\] is the generating function of ASTs with 1-columns in positions \(j_{1}-m_{1},j_{2}-m_{2},\ldots,j_{n-1}-m_{n-1}\) with respect to \(\rho(T)-1\), provided that \(j_{1}-m_{1}<j_{2}-m_{2}<\cdots<j_{n-1}-m_{n-1}\). Therefore, the coefficient of \(X_{1}^{m_{1}}X_{2}^{m_{2}}\cdots X_{n-1}^{m_{n-1}}\) in \[\operatorname{\mathbf{Sym}}_{X_{1},\ldots,X_{n-1}}\left[\prod_{i=1}^{n-1}(1+tX_{i})X_{i}^{j_{i}-2n+1}\prod_{1\leq i<j\leq n-1}(1+X_{j}+X_{i}X_{j})(X_{i}-X_{j})\right]\] is the generating function of pairs of ASTs and permutations \(\sigma\) of \(\{1,2,\ldots,n-1\}\), such that \(j_{1}-m_{\sigma(1)},j_{2}-m_{\sigma(2)},\ldots,j_{n-1}-m_{\sigma(n-1)}\) are the positions of 1-columns, provided that \((j_{1}-m_{\sigma(1)},j_{2}-m_{\sigma(2)},\ldots,j_{n-1}-m_{\sigma(n-1)})\) is strictly increasing for all \(\sigma\).
Note that it is possible to satisfy the strictly increasing condition, for instance if \((j_{1},\ldots,j_{n-1})\) is strictly increasing and the differences between consecutive \(j_{l}\) are large while the \(m_{l}\) are small. The expression is equal to \[\prod_{i=1}^{n-1}(1+tX_{i})X_{i}^{-2n+1}\prod_{1\leq i<j\leq n-1}(X_{i}-X_{j}) \operatorname{\mathbf{ASym}}_{X_{1},\ldots,X_{n-1}}\left[\prod_{i=1}^{n-1}X_{ i}^{j_{i}}\prod_{1\leq i<j\leq n-1}(1+X_{j}+X_{i}X_{j})\right].\] We sum over all \(p\leq j_{1}<j_{2}<\ldots<j_{n-1}\leq q\). (B.4) \[\prod_{i=1}^{n-1}(1+tX_{i})X_{i}^{-2n+1+p}\prod_{1\leq i<j\leq n- 1}(X_{i}-X_{j})\\ \times\operatorname{\mathbf{ASym}}_{X_{1},\ldots,X_{n-1}}\left[ \prod_{1\leq i<j\leq n-1}(1+X_{j}+X_{i}X_{j})\sum_{0\leq j_{1}<j_{2}<\cdots<j _{n-1}\leq q-p}X_{1}^{j_{1}}\cdots X_{n-1}^{j_{n-1}}\right]\] Now, the coefficient of \(X_{1}^{m_{1}}X_{2}^{m_{2}}\cdots X_{n-1}^{m_{n-1}}\) in this expression is the generating function of pairs of, let us say, _extended_ ASTs and permutations of \(\{1,2,\ldots,n-1\}\) such that if \(m_{\sigma(1)},\ldots,m_{\sigma(n-1)}\) is added to the positions of the 1-columns, we obtain a strictly increasing sequence of integers between \(p\) and \(q\). Extended refers to the fact that we now would need an extended version of Theorem B.2 as indicated above, as we cannot guarantee that \((j_{1}-m_{\sigma(1)},j_{2}-m_{\sigma(2)},\ldots,j_{n-1}-m_{\sigma(n-1)})\) are strictly increasing when we sum over all \(p\leq j_{1}<j_{2}<\ldots<j_{n-1}\leq q\). An exception in this respect is the case when all \(m_{l}=0\). It follows that the constant term of (B.4) is the generating function of ASTs with \(n\) rows whose \(1\)-columns are between \(p\) and \(q\). Using (1.8), this is equal to \[\prod_{i=1}^{n-1}\frac{\left(1+tX_{i}\right)X_{i}^{-2n+1+p}}{1-X_{i}}\prod_{1 \leq i<j\leq n-1}\frac{X_{i}-X_{j}}{1-X_{i}X_{j}}\det_{1\leq i,j\leq n-1}\left( X_{i}^{j-1}(1+X_{i})^{j-1}-X_{i}^{q-p+2n-2j+1}(1+X_{i})^{j-1}\right).\]
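As a computational aside, the ASTs of Definition B.1 can be enumerated by brute force for small \(n\). The following sketch is our own illustrative code (not part of the paper); it checks the defining conditions directly and reproduces the alternating sign matrix numbers \(1,2,7\), consistent with the equinumerosity of ASTs and ASMs mentioned above.

```python
from itertools import product

def alternating(entries):
    """Non-zero entries must alternate in sign."""
    nz = [x for x in entries if x != 0]
    return all(nz[i] == -nz[i + 1] for i in range(len(nz) - 1))

def is_ast(rows, n):
    # every row sums to 1 and its non-zero entries alternate
    if any(sum(r) != 1 or not alternating(r) for r in rows):
        return False
    # column j of the centered triangle; row i covers columns i..2n-2-i (0-indexed)
    for j in range(2 * n - 1):
        col = [rows[i][j - i] for i in range(n) if i <= j <= 2 * n - 2 - i]
        if not alternating(col):
            return False
        nz = [x for x in col if x != 0]
        if nz and nz[0] != 1:  # topmost non-zero entry of each column must be 1
            return False
    return True

def count_asts(n):
    lengths = [2 * n - 1 - 2 * i for i in range(n)]
    count = 0
    for flat in product((-1, 0, 1), repeat=sum(lengths)):
        rows, pos = [], 0
        for L in lengths:
            rows.append(flat[pos:pos + L])
            pos += L
        count += is_ast(rows, n)
    return count

print([count_asts(n) for n in (1, 2, 3)])  # should print [1, 2, 7], the ASM numbers
```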
2309.04531
Dissipationless counterflow currents above T_c in bilayer superconductors
We report the existence of dissipationless currents in bilayer superconductors above the critical temperature $T_c$, assuming that the superconducting phase transition is dominated by phase fluctuations. Using a semiclassical $U(1)$ lattice gauge theory, we show that thermal fluctuations cause a transition from the superconducting state at low temperature to a resistive state above $T_c$, accompanied by the proliferation of unbound vortices. Remarkably, while the proliferation of vortex excitations causes dissipation of homogeneous in-plane currents, we find that counterflow currents, flowing in opposite direction within a bilayer, remain dissipationless. The presence of a dissipationless current channel above $T_c$ is attributed to the inhibition of vortex motion by local superconducting coherence within a single bilayer, in the presence of counterflow currents. Our theory presents a possible scenario for the pseudogap phase in bilayer cuprates.
Guido Homann, Marios H. Michael, Jayson G. Cosme, Ludwig Mathey
2023-09-08T18:00:05Z
http://arxiv.org/abs/2309.04531v1
# Dissipationless counterflow currents above \(T_{c}\) in bilayer superconductors ###### Abstract We report the existence of dissipationless currents in bilayer superconductors above the critical temperature \(T_{c}\), assuming that the superconducting phase transition is dominated by phase fluctuations. Using a semiclassical \(U(1)\) lattice gauge theory, we show that thermal fluctuations cause a transition from the superconducting state at low temperature to a resistive state above \(T_{c}\), accompanied by the proliferation of unbound vortices. Remarkably, while the proliferation of vortex excitations causes dissipation of homogeneous in-plane currents, we find that counterflow currents, flowing in opposite direction within a bilayer, remain dissipationless. The presence of a dissipationless current channel above \(T_{c}\) is attributed to the inhibition of vortex motion by local superconducting coherence within a single bilayer, in the presence of counterflow currents. Our theory presents a possible scenario for the pseudogap phase in bilayer cuprates. _Introduction_ - Underdoped cuprates exhibit two characteristic temperature scales. While these materials are superconducting only below the critical temperature \(T_{c}\), they feature a gap-like suppression of the density of low-energy electronic states up to a significantly higher temperature \(T^{*}\). The precise nature of this pseudogap regime is still under debate [1; 2]. As the density of superconducting charge carriers is relatively small in underdoped cuprates, it was proposed that the breakdown of superconductivity at \(T_{c}\) is dominated by phase fluctuations [3]. This scenario, in which Cooper pairs exist up to \(T^{*}\), is consistent with the similarity between the symmetries of the superconducting gap and the pseudogap [4; 5]. Remarkably, measurements of the Nernst effect [6; 7; 8; 9], magnetization experiments [10; 11] and optical spectroscopy [12; 13] indicate the existence of superconducting fluctuations well above \(T_{c}\). Further evidence for superconducting fluctuations in the pseudogap regime is provided by pump-probe experiments involving parametric amplification of Josephson plasmons in YBa\({}_{2}\)Cu\({}_{3}\)O\({}_{7-\delta}\) (YBCO) [14; 15; 16; 17]. To contribute to the ongoing debate, we investigate the phenomenology of the pseudogap phase in bilayer superconductors, under the assumption that the transition is dominated by phase fluctuations while the pairing amplitude remains essentially constant up to temperatures close to \(T^{*}\). We utilize a semiclassical \(U(1)\) lattice gauge theory [18; 19; 20] to simulate dynamics of the superconducting phase of a bilayer superconductor in the presence of thermal fluctuations. We find a crossover from an ordered state to a highly fluctuating state without global phase coherence at high temperatures, which we associate with the pseudogap phase. Our simulations show that in the pseudogap phase, long-range coherence is destroyed by the proliferation of vortex excitations, consistent with studies on the anisotropic \(XY\) model [21; 22; 23; 24; 25; 26]. We find that the loss of long-range coherence is accompanied by a resistive transition where superconductivity is lost. However, our simulations show that short-range intrabilayer coherence persists far above \(T_{c}\). A striking consequence of short-range intrabilayer superconducting correlations is the presence of a dissipationless counterflow current. 
To be precise, the in-plane conductivity of a bilayer superconductor can be divided into a symmetric and an anti-symmetric component as depicted in Fig. 1(a). Our simulations, presented in Fig. 1(b), show that the symmetric in-plane conductivity \(\sigma_{+}(\omega)\) no longer features a \(1/\omega\) divergence at temperatures \(T\gtrsim T_{c}\), signaling the emergence of a resistive state. In contrast, in-plane currents with opposite directions in the lower and upper layer flow without dissipation. This phenomenon manifests itself in a \(1/\omega\) divergence of the antisymmetric conductivity \(\sigma_{-}(\omega)\), as shown in Fig. 1(b). Figure 1: Dissipationless counterflow in a bilayer superconductor. (a) Current configurations in the copper oxide layers characterized by the symmetric and the antisymmetric conductivity, respectively. (b) Imaginary part of the symmetric and the antisymmetric conductivity at \(36\) K \(\sim 1.4T_{c}\). The error bars indicate the standard errors of the ensemble averages. The effect that we present here is conceptually related to a variety of other phenomena, such as antisymmetric quasi-order in a bilayer of superfluids [27], counterflow superfluidity in one-dimensional Bose mixtures [28; 29], and Bose-Einstein condensation of excitons in bilayer electron systems [30; 31; 32; 33]. We clarify that these phenomena are distinct from the normal-superfluid counterflow of the two-fluid model, which occurs within the superfluid phase [34]. _Model_ - Following the Ginzburg-Landau theory of superconductivity [35], we describe the superconducting state by a complex order parameter \(\psi_{\bf r}=|\psi_{\bf r}|e^{i\phi_{\bf r}}\), which is discretized on a three-dimensional lattice with \({\bf r}\) being the lattice site. Each superconducting layer is represented by a square lattice as depicted in Fig. 2(a). The crystalline \(c\) axis is oriented along the \(z\) direction. Due to the Cooper pair charge of \(-2e\), the order parameter is coupled to the electromagnetic field. We employ the Peierls substitution such that the electromagnetic vector potential \({\bf A_{r}}\) enters the gauge-invariant phase differences between the lattice sites. The gauge-invariant phase differences, which are defined below, govern the nearest-neighbor tunneling of Cooper pairs. In our model, we fit our parameters to the bilayer cuprate YBCO [14; 15; 16; 36]. The interlayer distances are \(d_{s}\) for intrabilayer (strong) junctions and \(d_{w}\) for interbilayer (weak) junctions. Here we choose \(d_{s}\) and \(d_{w}\) such that the interlayer distances approximately reproduce the spacing of CuO\({}_{2}\) layers in YBCO. The in-plane lattice constant \(d_{ab}\) is introduced as a short-range cutoff below the in-plane coherence length. The tunneling coefficients are \(t_{ab}\) for in-plane junctions, \(t_{s}\) for intrabilayer junctions, and \(t_{w}\) for interbilayer junctions. We choose \(t_{s}\) and \(t_{w}\) such that the Josephson plasma frequencies of the simulated bilayer superconductor are comparable to those of YBCO: \(\omega_{\rm J1}/2\pi\approx 1\) THz and \(\omega_{\rm J2}/2\pi\approx 14\) THz. The in-plane tunneling coefficient \(t_{ab}\) determines the in-plane plasma frequency, which we take to be \(\omega_{ab}/2\pi\sim 75\) THz at zero temperature. The Lagrangian and the equations of motion are presented in the Supplemental Material [36]. We add Langevin noise and damping terms to the equations of motion and employ periodic boundary conditions.
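Purely for orientation, the following is a minimal sketch of a stochastic Heun (predictor-corrector) integration step of the kind used below, applied here to a toy overdamped XY-type phase model rather than the full \(U(1)\) lattice gauge dynamics of this work; all function names and parameter values are our own illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def heun_step(x, drift, noise_amp, dt):
    """One stochastic Heun step for dx = drift(x) dt + noise_amp dW (same dW in both stages)."""
    dW = rng.normal(0.0, np.sqrt(dt), size=x.shape)
    x_pred = x + drift(x) * dt + noise_amp * dW
    return x + 0.5 * (drift(x) + drift(x_pred)) * dt + noise_amp * dW

def xy_drift(phi, J=1.0):
    """Deterministic torque -dH/dphi for H = -J sum_<ij> cos(phi_i - phi_j), periodic lattice."""
    torque = np.zeros_like(phi)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        torque += np.sin(np.roll(phi, shift, axis=axis) - phi)
    return J * torque

# illustrative run: one 40x40 layer, temperature in units of J (k_B = damping = 1)
phi = rng.uniform(-np.pi, np.pi, size=(40, 40))
dt, temperature = 1e-3, 0.5
noise_amp = np.sqrt(2.0 * temperature)
for _ in range(1000):
    phi = heun_step(phi, xy_drift, noise_amp, dt)
```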
We then integrate the stochastic differential equations using Heun's method with a step size of \(\Delta t=1.25\) as (attoseconds). In the following, we consider a bilayer superconductor with \(N_{z}=4\) layers and \(N_{xy}=40\times 40\) sites per layer. The layers \(n=1\) and \(n=2\) form a bilayer, as do the layers \(n=3\) and \(n=4\). The complete set of model parameters is specified in the Supplemental Material [36]. _Interlayer phase coherence_ - To characterize the coherence of the gauge-invariant intra- and interbilayer phase differences, we introduce effective interlayer tunneling coefficients. First, we define the unitless vector potential \(a_{j,{\bf r}}=-2ed_{j,{\bf r}}A_{j,{\bf r}}/\hbar\) on the bond between the lattice site \({\bf r}\equiv(l,m,n)\) and its nearest neighbor in the \(j\in\{x,y,z\}\) direction, where \(d_{j,{\bf r}}\) denotes the length of the bond. The gauge-invariant intrabilayer phase differences between layers \(n=1\) and \(n=2\) are \(\theta_{l,m}^{s}=\mathcal{P}(\phi_{l,m,1}-\phi_{l,m,2}+a_{l,m,1}^{z})\), and the gauge-invariant interbilayer phase differences between layers \(n=2\) and \(n=3\) are \(\theta_{l,m}^{w}=\mathcal{P}(\phi_{l,m,2}-\phi_{l,m,3}+a_{l,m,2}^{z})\). Note that the gauge-invariant phase differences are mapped onto the interval \((-\pi,\pi]\) by the projection operator \(\mathcal{P}(\cdot)\). In the presence of thermal fluctuations, we determine the effective interlayer tunneling coefficients \[t_{s,\rm eff} =t_{s}\left<\cos\theta_{l,m}^{s}\right>, \tag{1}\] \[t_{w,\rm eff} =t_{w}\left<\cos\theta_{l,m}^{w}\right>. \tag{2}\] The temperature dependence of the effective tunneling coefficients is shown in Fig. 2(b), where we average the cosine of the interlayer phase differences over the \(xy\) plane, for a time interval of 2 ps, and an ensemble of 100 trajectories. The onset of strong phase fluctuations dramatically suppresses the interbilayer tunneling coefficient around a crossover temperature of 25 K, which we take to be the transition temperature \(T_{c}\). While the advent of strong phase fluctuations also suppresses the intrabilayer coefficient \(t_{s}\), it remains nonzero up to large temperatures. This indicates that while long-range order is lost across the transition, the pseudogap phase still retains strong local phase coherence within each bilayer. Figure 2: Semiclassical simulation of a bilayer superconductor. (a) Schematic illustration of the lattice gauge model. (b) Temperature dependence of the effective interlayer tunneling coefficients. The intra-bilayer coherence remains non-zero above \(T_{c}\). (c) Snapshot of the vorticity of the superconducting order parameter at 36 K \(\sim 1.4T_{c}\). (d) Temperature dependence of the number of vortices per layer. Vortices and antivortices contribute equally to this number. The vortex number rises sharply around \(T_{c}\), suggesting that the transition is driven by vortex unbinding. The consequences of phase fluctuations on the plasma resonances as well as the temperature dependence of the in-plane tunneling coefficient and the amplitude of the order parameter in our model are presented in the Supplemental Material [36]. _Vortices_ - To understand the microscopic nature of the phase transition, we turn our attention to the role of vortices in the pseudogap phase.
In continuum theories, a vortex is defined through the phase winding of the order parameter along a closed path, \[\Phi=\oint\mathbf{\nabla}\phi\cdot\mathrm{d}\mathbf{r}=\oint\left(\mathbf{\nabla}\phi+\frac{2e}{\hbar}\mathbf{A}\right)\cdot\mathrm{d}\mathbf{r}-\oint\frac{2e}{\hbar}\mathbf{A}\cdot\mathrm{d}\mathbf{r}. \tag{3}\] In our simulation, we use the latter representation as it is based on quantities that directly enter the Lagrangian. We define the vorticity of a single plaquette in the \(xy\) plane as \[\begin{split} v_{l,m,n}&=\frac{1}{2\pi}\left(a_{l,m,n}^{x}+a_{l+1,m,n}^{y}-a_{l,m+1,n}^{x}-a_{l,m,n}^{y}\right)\\ &\quad-\frac{1}{2\pi}\left(\theta_{l,m,n}^{x}+\theta_{l+1,m,n}^{y}-\theta_{l,m+1,n}^{x}-\theta_{l,m,n}^{y}\right),\end{split} \tag{4}\] where \(\theta_{l,m,n}^{x}=\mathcal{P}(\phi_{l,m,n}-\phi_{l+1,m,n}+a_{l,m,n}^{x})\) and \(\theta_{l,m,n}^{y}=\mathcal{P}(\phi_{l,m,n}-\phi_{l,m+1,n}+a_{l,m,n}^{y})\). The vorticity can assume the values \(-1\), \(0\), and \(+1\). A vorticity of \(+1\) corresponds to a vortex, while a vorticity of \(-1\) corresponds to an antivortex. In Fig. 2(c), we show a snapshot of the vorticity in the lowest layer at a temperature of \(36\) K \(\sim 1.4T_{c}\). Even though most vortices and antivortices form pairs or clusters (seen as blue and red squares next to each other), we crucially also find isolated vortices and antivortices, indicating that the phase transition is driven by vortex-antivortex unbinding. In Figure 2(d), we plot the number of vortices per layer as a function of temperature. The number of vortices exhibits a rapid increase between \(15\) K and \(30\) K. Details of the behavior of vortices in the pseudogap phase are captured by computing in-plane and out-of-plane correlations, which can be found in the Supplemental Material [36]. In addition, the presence of vortices leads to a disordered Josephson intrabilayer potential. The strength and spectral behavior of the vortex-induced disorder is also presented in the Supplemental Material [36]. Here we focus on the consequences of the phase transition on the conductivity. _In-plane conductivity_ - We separate the in-plane conductivity of a bilayer superconductor into a symmetric and an antisymmetric component as shown in Fig. 1(a). To calculate the two components of the conductivity, we introduce an oscillating symmetric (antisymmetric) current, \(J_{\pm}\). Once a steady state is reached, we compute \(\sigma_{\pm}(\omega)=J_{\pm}(\omega)/E_{\pm}(\omega)\), where \(E_{\pm}\) is the symmetric (antisymmetric) electric field. Details of the conductivity measurements are provided in the Supplemental Material [36]. The symmetric and the antisymmetric conductivity are shown for different temperatures in Fig. 3. At a temperature of \(15\) K \(\sim 0.6T_{c}\), \(\sigma_{+}\) and \(\sigma_{-}\) are in good agreement with each other. The imaginary part of \(\sigma_{+}\) and \(\sigma_{-}\) exhibits the characteristic \(1/\omega\) behavior of a superconductor. While the real part of both conductivities is relatively flat for frequencies above \(10\) THz, it tends to slowly increase with decreasing frequency below \(10\) THz. The values of \(\mathrm{Re}\,\sigma_{+}\) and \(\mathrm{Re}\,\sigma_{-}\) at \(1\) THz bear some uncertainty due to slow numerical convergence. Figure 3: Symmetric and antisymmetric conductivity at different temperatures. (a)–(d) Imaginary part. (e)–(h) Real part. The error bars indicate the standard errors of the ensemble averages. The crossover temperature is \(T_{c}\sim 25\) K. The antisymmetric conductivity indicates superconductivity above \(T_{c}\).
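As an aside, the \(\sigma_{\pm}(\omega)=J_{\pm}(\omega)/E_{\pm}(\omega)\) extraction described above can be sketched in a few lines of FFT-based post-processing; the function and variable names below are our own assumptions, and in practice one averages over an ensemble of thermal-noise trajectories and reads off \(\sigma\) near the drive frequency, where the field spectrum is well conditioned.

```python
import numpy as np

def split_layers(j_layer1, j_layer2):
    """Symmetric and antisymmetric in-plane current combinations within one bilayer."""
    return 0.5 * (j_layer1 + j_layer2), 0.5 * (j_layer1 - j_layer2)

def conductivity(current, field, dt):
    """Estimate sigma(omega) = J(omega) / E(omega) from steady-state time traces.

    current, field: 1D arrays holding the (symmetric or antisymmetric) current
    and electric-field time series after the transient has been discarded.
    """
    omega = 2.0 * np.pi * np.fft.rfftfreq(len(current), d=dt)
    sigma = np.fft.rfft(current) / np.fft.rfft(field)
    return omega, sigma
```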
At temperatures \(T\gtrsim T_{c}\), the phase transition is accompanied by a dissipative transition, where the imaginary part of \(\sigma_{+}\) no longer diverges as \(1/\omega\). At temperatures close to \(T_{c}\), the real part of \(\sigma_{+}\) rises significantly at small frequencies. By contrast, the imaginary part of \(\sigma_{-}\) exhibits a \(1/\omega\) divergence up to temperatures well above \(T_{c}\), while the real part has no significant temperature dependence. This manifestation of remnant superconductivity above \(T_{c}\) is the key result of the present work. _Origin of dissipationless counterflow_ - The breakdown of the \(1/\omega\) divergence of the imaginary part of \(\sigma_{+}\) reveals a transition to a resistive state at \(T_{c}\). This indicates an unbinding of planar vortex-antivortex pairs above \(T_{c}\), similar to the resistive transition in superconducting thin films [37, 38]. The underlying mechanism of this transition is the following. In the presence of a current \(\mathbf{J}\), a single vortex is exposed to a Magnus force \(\mathbf{F}=\mathbf{J}\times\mathbf{\Phi}_{0}\), where \(\mathbf{\Phi}_{0}\) has the magnitude of a flux quantum \(\Phi_{0}=\pi\hbar/e\) and points in the direction of the magnetic field inside the vortex core. In the case of a DC current, unbound vortices and antivortices drift in opposite directions perpendicular to the current, dissipating energy. The simultaneous observation of dissipationless counterflow suggests that unbound vortex lines cut through an entire bilayer rather than just a single layer. Equivalently, a vortex in one layer of a bilayer is paired with a vortex of the same vorticity in the other layer of the same bilayer. In this scenario, the dissipation of currents in the two layers of the same bilayer is different depending on whether the currents flow in the same or in opposite directions. If the currents flow in the same direction, the Magnus force points in the same direction for all vortices in the two layers with the same vorticity. Thus, each vortex-vortex pair experiences a drift motion perpendicular to the current direction as depicted in Fig. 4(a). Analogously to the case of a thin film, the vortex motion dissipates energy, implying a nonzero resistivity. If the currents flow in opposite directions, however, the Magnus force points into opposite directions for the two vortices of each intrabilayer pair. The remnant intrabilayer coherence leads to an effective potential with a linear dependence on the pair size, acting as a string tension that opposes the motion of the two vortices away from each other in the presence of counterflow currents [39]. Thus, the intrabilayer vortex pairs experience no net force as highlighted by Fig. 4(b). Since the vortices do not move in this case, the flow of counterdirected currents is dissipationless, consistent with the observation of a \(1/\omega\) divergence of the imaginary part of \(\sigma_{-}\) above \(T_{c}\). We note that the previous paragraph provides only a simplified description of the vortex dynamics in the presence of in-plane currents. In fact, the vortex dynamics is very complicated due to the high density of vortices in the layers and fast creation/annihilation processes; see Supplemental Material [36].
Nonetheless, the scenario of intrabilayer vortex-vortex pairs that are essentially unbound from any antivortices is supported by several correlation functions [36]. In the Supplemental Material, we also calculate conductivities at finite momentum along the \(z\) direction [36]. The results are momentum independent, which corroborates the picture of decoupled bilayers, where vortex lines between different bilayers are uncorrelated. _Conclusion_ - We have discovered the existence of dissipationless counterflow currents in a \(U(1)\) gauge-invariant model for bilayer superconductors. Experimental verification of the existence of dissipationless counterflow in bilayer cuprates would provide smoking gun evidence that the pseudogap phase corresponds to phase-fluctuating superconductivity with strong intrabilayer superconducting correlations up to high temperatures. We expect counterflow currents to appear when a magnetic field is applied in parallel to the layers, giving rise to a diamagnetic response [40]. The results presented here open up interesting research questions about the full range of consequences of such dissipationless currents and whether they can be technologically exploited. We would like to acknowledge useful discussions with Patrick A. Lee and Eugene Demler. This work is supported by the Deutsche Forschungsgemeinschaft (DFG) in the framework of SFB 925, Project No. 170620586, and the Cluster of Excellence "Advanced Imaging of Matter" (EXC 2056), Project No. 390715994. M.H.M. is grateful for the support from the Alexander von Humboldt foundation and the hospitality of the Max Planck Institute for the Structure and Dynamics of Matter. J.G.C. is funded by the UP System Balik PhD Program (OVPAA-BPhD-2021-04). Figure 4: Dynamics of intrabilayer vortex pairs in the presence of in-plane currents, sustaining dissipationless counterflow. (a) Vortex dynamics in the presence of unidirected in-plane currents. (b) Vortex dynamics in the presence of counterdirected in-plane currents. Vortices in different superconducting layers within the same bilayer are pinned relative to each other due to the residual superconducting coherence within a single bilayer.
2309.07038
Efficient Reinforcement Learning for Jumping Monopods
In this work, we consider the complex control problem of making a monopod reach a target with a jump. The monopod can jump in any direction and the terrain underneath its foot can be uneven. This is a template of a much larger class of problems, which are extremely challenging and computationally expensive to solve using standard optimisation-based techniques. Reinforcement Learning (RL) could be an interesting alternative, but the application of an end-to-end approach in which the controller must learn everything from scratch is impractical. The solution advocated in this paper is to guide the learning process within an RL framework by injecting physical knowledge. This expedient brings widespread benefits, such as a drastic reduction of the learning time, and the ability to learn and compensate for possible errors in the low-level controller executing the motion. We demonstrate the advantage of our approach with respect to both optimization-based and end-to-end RL approaches.
Riccardo Bussola, Michele Focchi, Andrea Del Prete, Daniele Fontanelli, Luigi Palopoli
2023-09-13T15:46:40Z
http://arxiv.org/abs/2309.07038v4
# Efficient Reinforcement Learning for Jumping Monopods ###### Abstract In this work, we consider the complex control problem of making a monopod reach a target with a jump. The monopod can jump in any direction and the terrain underneath its foot can be uneven. This is a template of a much larger class of problems, which are extremely challenging and computationally expensive to solve using standard optimisation-based techniques. Reinforcement Learning (RL) could be an interesting alternative, but the application of an end-to-end approach in which the controller must learn everything from scratch is impractical. The solution advocated in this paper is to guide the learning process within an RL framework by injecting physical knowledge. This expedient brings widespread benefits, such as a drastic reduction of the learning time, and the ability to learn and compensate for possible errors in the low-level controller executing the motion. We demonstrate the advantage of our approach with respect to both optimization-based and end-to-end RL approaches. ## I Introduction Legged robots have become a popular technology to navigate unstructured terrains, but are complex devices for which control design is not trivial. Remarkable results have been reached for locomotion tasks like walking and trotting [1]. Other tasks, like performing jumps, are more challenging because even small deviations from the desired trajectory can have a large impact on the landing location and orientation [2]. This problem has received some attention in the last few years. A line of research has produced heuristic approaches relying on physical intuitions and/or on simplified models to be used in the design of controllers or planners [3, 4]. However, the hand-crafted motion actions produced by these approaches are not guaranteed to be physically implementable. Another common approach is to use full-body numerical optimisation [5]. Particularly remarkable is the rich set of aerial motions produced by MIT Mini Cheetah in [6, 7, 8] using a centroidal momentum-based nonlinear optimisation. A problem with optimisation-based approaches on high-dimensional nonlinear problems is the high computational cost, which makes them unsuitable for a real-time implementation, especially to replan trajectories over a receding horizon. Recent advances [9, 5] have brought significant improvements in the efficiency of Model Predictive Control (MPC) for jumping tasks. However, the price to pay is to introduce some artificial constraints such as fixing the contact sequence, the time-of-flight, or optimising the contact timings offline. A third set of approaches is based on Reinforcement Learning (RL). The seminal work of Lillicrap [10] showed that a Deep Deterministic Policy Gradient (DDPG) algorithm combined with a Deep Q network could be successfully applied to learn end-to-end policies for a continuous action domain, using an actor-critic setting. In view of these results, several groups have then applied RL to quadrupeds for locomotion tasks [11, 12, 13, 14, 15], and to _in-place_ hopping legs [16]. As with most model-free reinforcement learning approaches, DDPG requires a large number of training steps (on the order of millions) to find good solutions. Other approaches [17, 18, 19] seek to improve the efficiency and the robustness of the learning process by combining Trajectory Optimization (TO) with RL: they use the former to generate initial trajectories to bootstrap the exploration of alternatives made by the latter.
As a final remark, the efficiency and the robustness of the RL learning process are heavily affected by a correct choice of the action space [20, 21]. Some approaches require that the controller directly generates the torques [22], while others suggest that the controller should operate in Cartesian or joint space [12, 23]. **Paper Contribution**. This work proposes an RL framework capable of producing an omni-directional jump trajectory (from standstill) on _uneven_ terrain, computed within a few milliseconds, unlocking real-time usage with the current controller rates (e.g., on the order of \(kHz\)). The main objective of this paper is to reduce the duration of the learning phase without sacrificing the system's performance. A reduced length of the learning phase has at least two indisputable advantages: lowering the barriers to access this technology for professionals and companies with a limited availability of computing power, and addressing the environmental concerns connected with the carbon footprint of learning technologies [24]. Our strategy is based on the following ideas: first, learning is performed in Cartesian space rather than in joint space so that the agent can more directly verify the effect of its actions. Second, we notice that while the system is airborne, its final landing point is dictated by simple mechanical laws (ballistics). Therefore, the learning process can focus solely on the _thrusting_ phase. Third, we know from biology [25] that mammals are extremely effective in learning how to walk because of "prior" knowledge in their genetic background. This means that the learning process can be _guided_ by an approximate knowledge of what the resulting motion should "look like". Specifically, we parametrise the thrusting trajectory (i.e. from standstill to lift-off) for the Center of Mass (CoM) with a (3\({}^{rd}\) order) Bezier curve. This choice of a Bezier curve to parametrize the action space is not uncommon in the literature [26, 27, 28], and in our case is motivated by the physical intuition that a jump motion, to exploit the full joint range, needs a "charging" phase to compress the legs followed by an extension phase where the CoM is accelerated both upwards and in the jump direction. Clearly, by making this restriction, we prevent the learning phase from exploring alternative options. However, as we will see in the paper, the final result is very close to optimal and the system retains good generalisation abilities. Our approach is based on TD3, a state-of-the-art Deep Reinforcement Learning (Deep-RL) algorithm [29] trained to minimise a cost very similar to the ones typically used in optimal control. The Cartesian trajectory generated by our RL agent is translated into joint space via inverse kinematics, and tracked by a low-level joint-space Proportional-Derivative (PD) controller with gravity compensation. Our results reveal that possible inaccuracies in the controller can be learned and compensated by the RL agent. In this paper we do not focus on the landing phase, which we assume to be managed by a different controller, such as [2]. We compare our approach (that we will call Guided Reinforcement Learning (GRL)) with both a baseline TO controller with a _fixed_ duration of the thrusting phase, and a "standard" End-to-end Reinforcement Learning (E2E) approach (which uses joint references as its action space). In the first case, we achieved better or equal performance, and a dramatic reduction in the online computation time.
With respect to E2E _locomotion_ approaches [22, 12], we observed a substantial reduction in the number of episodes (without considering parallelization) needed to achieve good learning performance. Instead, an E2E implementation specific for a _jump_ motion (Section IV-2) did not provide satisfactory learning results. The paper is organized as follows: Section II presents the GRL approach, detailing the core components of the MDP (action, reward functions). Section III provides implementation details. In Section IV we showcase our simulation results comparing with state-of-the-art approaches. Finally, we draw the conclusions in Section V. ## II Problem description and solution overview Simple notions of bio-mechanics suggest that legged animals execute their jumps in three phases: 1. _thrust:_ an initial compression is followed by an explosive extension of the limbs in order to gain sufficient momentum for the lift-off; the phase finishes when the foot leaves the ground; 2. _flight:_ the body, subject only to gravity, reaches an apex where the vertical CoM velocity changes its sign and the robot adjusts its posture to prepare for landing; 3. _landing:_ the body realizes a touch-down, which means that the foot re-establishes contact with the ground. For the sake of simplicity, we consider a simplified and yet realistic setting: a monopod robot, whose base link is sustained by passive _prismatic_ joints preventing any change in its orientation (see Section III-C). The extension to a full quadruped (which requires considering also angular motions) is left for future work. In this paper, _we focus on the thrust phase_. The flight phase is governed by the ballistic law. Let \(\mathbf{c}_{tg}\) be the target location and let the CoM state at lift-off be \((\mathbf{c}_{lo},\dot{\mathbf{c}}_{lo})\). After lift-off, the trajectory lies on the vertical plane containing \(\mathbf{c}_{lo}\) and \(\mathbf{c}_{tg}\). The set of possible landing CoM positions is a function of \(\mathbf{c}_{lo}\) and \(\dot{\mathbf{c}}_{lo}\) and is given by the following equation: \[\begin{cases}\mathbf{c}_{tg,xy}=\mathbf{c}_{lo,xy}+\dot{\mathbf{c}}_{lo,xy}T_{fl}\\ \mathbf{c}_{tg,z}=\mathbf{c}_{lo,z}+\dot{\mathbf{c}}_{lo,z}T_{fl}-\frac{1}{2}gT_{fl}^{2}\end{cases} \tag{1}\] where \(T_{fl}=(\mathbf{c}_{tg,xy}-\mathbf{c}_{lo,xy})/\dot{\mathbf{c}}_{lo,xy}\) is the flight time. In this setting, our problem can be stated as follows: **Problem 1**: _Synthesize a thrust phase that produces a lift-off configuration (i.e. CoM position and velocity) that: 1. satisfies (1), 2. copes with the potentially adverse conditions posed by the environment (i.e. contact stability, friction constraints), 3. satisfies the physical and control constraints._ Nonlinear optimisation is frequently used for similar problems. However, it has two important limitations that discourage its application in our specific case: 1. the computation requirements are very high, complicating both the real-time execution and the use of low-cost embedded hardware, 2. the problem is strongly non-convex, which can lead the solvers to local minima. ### _Overview of the approach_ In this work, we use RL to learn optimal joint trajectories to realise a jump motion, which is then tracked by a lower-level controller. We adopted the state-of-the-art Twin Delayed Deep Deterministic Policy Gradient (TD3), a Deep-RL technique. This algorithm is based on the well-known Actor-Critic architecture.
The main idea behind TD3 is to adopt two Deep Neural Networks (NNs) to approximate the policy, represented by the Actor, and two Deep NNs to approximate the action-value function, represented by the Critic. Our RL pipeline is depicted in Fig. 1. Fig. 1: Diagram of the RL Framework. The framework is split into two levels: the RL agent and the planner. The RL agent produces an action for the planner, based on a desired target. The planner computes a Bezier reference curve that is mapped into joint motion via inverse kinematics and tracked by the PD controller that provides the joint torques to feed the robot. During the training, at the end of each episode a reward is computed and fed back to the RL agent. ## III Design and implementation of the RL agent Based on the discussion in the previous section, the thrust phase is characterised by the lift-off position \(\mathbf{c}_{lo}\) and velocity \(\dot{\mathbf{c}}_{lo}\) and by the thrust time \(T_{th}\), which is the time spent to reach the lift-off configuration from the initial state. The state of the environment is defined as (\(\mathbf{c}\), \(\mathbf{c}_{tg}\)) where \(\mathbf{c}\in\mathbb{R}^{3}\) is the CoM position and \(\mathbf{c}_{tg}\in\mathbb{R}^{3}\) the CoM at the landing location (_target_). The objective of the RL agent is to find the jump parameters (\(\mathbf{c}_{lo}\), \(\dot{\mathbf{c}}_{lo}\), \(T_{th}\in\mathbb{R}\)) that minimise the landing error at touch-down \(\|\mathbf{c}-\mathbf{c}_{tg}\|\) while satisfying the physical constraints. Our jumping scenario, though, can be seen as a _single-step_ trajectory where the only action performed always leads to the end state. ### _The Action Space_ The dimension of the action space has a strong impact on the performance of the RL algorithm. Indeed, a NN with a smaller number of outputs is usually faster to train. What is more, a smaller action space reduces the complexity of the mapping, speeding up the learning process. A first way to reduce the complexity of the action space is by expressing \(\mathbf{c}_{lo}\) and \(\dot{\mathbf{c}}_{lo}\) in spherical coordinates. Because of the peculiar nature of a jump task, the trajectory lies in the plane containing the CoM \(\mathbf{c}\) and its desired target location \(\mathbf{c}_{tg}\). Hence, the yaw angle \(\varphi\) remains constant (\(\bar{\varphi}\)) throughout the flight and we can further restrict the coordinates to a convex bi-dimensional space: \[\begin{cases}\mathbf{c}_{lo,x}=r\ \cos(\theta)\cos(\bar{\varphi})\\ \mathbf{c}_{lo,y}=r\ \cos(\theta)\sin(\bar{\varphi})\\ \mathbf{c}_{lo,z}=r\ \sin(\theta)\\ \end{cases}\quad\begin{cases}\dot{\mathbf{c}}_{lo,x}=r_{v}\ \cos(\theta_{v})\cos(\bar{\varphi})\\ \dot{\mathbf{c}}_{lo,y}=r_{v}\ \cos(\theta_{v})\sin(\bar{\varphi})\\ \dot{\mathbf{c}}_{lo,z}=r_{v}\ \sin(\theta_{v})\\ \end{cases} \tag{2}\] As shown in Fig. 2, the lift-off position \(\mathbf{c}_{lo}\) is identified by: the radius \(r\) (i.e., the maximum leg extension), the yaw angle \(\varphi\) and the pitch angle \(\theta\). Likewise, the lift-off velocity \(\dot{\mathbf{c}}_{lo}\) is described by its magnitude \(r_{v}\), and the pitch angle \(\theta_{v}\) with respect to the ground. Therefore, by using this assumption, we have reduced the dimension of the action space from \(7\) to \(5\): \(\mathbf{a}=(T_{th},r,\theta,r_{v},\theta_{v})\in\mathbb{R}^{5}\). The action space can be further restricted by applying some domain knowledge, as detailed after the following sketch of the mapping (2).
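For illustration, the mapping (2) from an action \(\mathbf{a}=(T_{th},r,\theta,r_{v},\theta_{v})\) and the constant yaw \(\bar{\varphi}\) to the lift-off state can be written down directly; the function name and array layout below are our own assumptions, not code from the paper.

```python
import numpy as np

def liftoff_state(action, yaw):
    """Map action a = (T_th, r, theta, r_v, theta_v) and constant yaw to the
    lift-off CoM position c_lo and velocity cdot_lo, following Eq. (2)."""
    T_th, r, theta, r_v, theta_v = action
    c_lo = r * np.array([np.cos(theta) * np.cos(yaw),
                         np.cos(theta) * np.sin(yaw),
                         np.sin(theta)])
    cdot_lo = r_v * np.array([np.cos(theta_v) * np.cos(yaw),
                              np.cos(theta_v) * np.sin(yaw),
                              np.sin(theta_v)])
    return T_th, c_lo, cdot_lo
```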
The radius \(r\) has to be smaller than a value \(r_{max}\) (\(0.32\ m\)) to prevent boundary singularity due to over-extension, and greater than a value \(r_{min}\) (\(0.25\ m\)) to avoid complete leg retraction. The bounds on the velocity \(\dot{\mathbf{c}}_{lo}\), represented by \(r_{v}\in[0.1,4]\ m/s\), and \(\theta_{v}\in\left[\frac{\pi}{6},\frac{\pi}{2}\right]\ rad\), and the bounds on pitch angle \(\theta\in\left[\frac{\pi}{4},\frac{\pi}{2}\right]\ rad\) are set to rule out jumps that involve excessive foot slippage and useless force effort. Specifically, restricting to a positive \(\theta_{v,min}\) ensures a non-negligible vertical component for the velocity, while bounding \(\theta_{v}\) to the positive quadrant ensures that the lift-off velocity is oriented "toward" the target. #### Iii-A1 Trajectory Parametrisation in Cartesian Space Our strategy to tackle the problem of generating a compression-extension trajectory for the leg, to achieve a given lift-off configuration \(\mathbf{c}_{lo}\), is based on two important choices: 1. making the RL agent learn the trajectory in Cartesian Space and then finding the joint trajectories through inverse kinematics, 2. restricting the search of the Cartesian space evolution to curves generated by known parametric functions. This aims to reduce the search space and simplify convergence. The analytical and geometric properties of \(3^{rd}\) order Bezier curves make them a perfect fit for our problem. A \(3^{rd}\) order Bezier curve is defined by _four_ control points. In our case, the first and the final points are constrained to be the initial and final CoM positions, respectively. The derivative of a \(3^{rd}\) degree Bezier curve is itself a Bezier curve of \(2^{nd}\) degree with 3 control points defined as \(3(\mathbf{P}_{i+1}-\mathbf{P}_{i})\). The curve domain is defined only in the normalised time interval: \(t\in\left[0,1\right]\). Defining the following Bernstein polynomials: \[\boldsymbol{\eta}(t) =\left[(1-t)^{3}\quad 3(1-t)^{2}t\quad 3(1-t)t^{2}\quad t^{3}\right]^{T} \tag{3}\] \[\boldsymbol{\dot{\eta}}(t) =\left[(1-t)^{2}\quad 2(1-t)t\quad t^{2}\right]^{T} \tag{4}\] we can compactly write the Bezier curve as a function of its \(\mathbf{P}_{i}\in\mathbb{R}^{3}\) control points: \[\mathbf{c}=\left[\mathbf{P}_{0}\quad\mathbf{P}_{1}\quad\mathbf{P}_{2}\quad\mathbf{P}_{3}\right]\boldsymbol{\eta}(t) \tag{5}\] Since we are considering an execution time \(T_{exe}\in\left[0,T_{th}\right]\) and \(t=\frac{T_{exe}}{T_{th}}\), the derivative reads: \[\dot{\mathbf{c}}=\frac{1}{T_{th}}\left[\mathbf{P}_{0}^{\prime}\quad\mathbf{P}_{1}^{\prime}\quad\mathbf{P}_{2}^{\prime}\right]\boldsymbol{\dot{\eta}}(t) \tag{6}\] From the definition of the curve (5) and its derivative (6), we can compute the control points \(\mathbf{P}_{i}\) by setting the boundary conditions of the initial/lift-off CoM position \(\mathbf{c}_{0}\), \(\mathbf{c}_{lo}\) and initial/lift-off CoM velocity \(\dot{\mathbf{c}}_{0}\), \(\dot{\mathbf{c}}_{lo}\) in (7).
\[\begin{cases}\mathbf{P}_{0}^{\prime}=\frac{3}{T_{th}}(\mathbf{P}_{1}-\mathbf{P}_{0})=\dot{\mathbf{c}}_{0}=\mathbf{0}\\ \mathbf{P}_{1}^{\prime}=\frac{3}{T_{th}}(\mathbf{P}_{2}-\mathbf{P}_{1})\\ \mathbf{P}_{2}^{\prime}=\frac{3}{T_{th}}(\mathbf{P}_{3}-\mathbf{P}_{2})=\dot{\mathbf{c}}_{lo}\end{cases}\quad\begin{cases}\mathbf{P}_{0}=\mathbf{c}_{0}\\ \mathbf{P}_{1}=\frac{T_{th}}{3}\mathbf{P}_{0}^{\prime}+\mathbf{P}_{0}=\frac{T_{th}}{3}\dot{\mathbf{c}}_{0}+\mathbf{c}_{0}\\ \mathbf{P}_{2}=-\frac{T_{th}}{3}\dot{\mathbf{c}}_{lo}+\mathbf{c}_{lo}\\ \mathbf{P}_{3}=\mathbf{c}_{lo}\end{cases} \tag{7}\] ### _A physically informative reward function_ In RL, an appropriate choice of the reward function is key to the final outcome. Furthermore, we can use the reward function as a means to inject prior knowledge into the learning process. In our case, the reward function was designed to penalise the violations of the physical constraints while giving a positive reward to the executions that make the robot land in proximity of the target point. Fig. 2: Action parametrization and its bounds. On the left, the top view; on the right, the side view of the jumping plane. The constraints that must be enforced throughout the whole thrust phase are called _path_ constraints. To transform the violations into costs, we introduce a linear activation function \(A(x,\underline{x},\bar{x})\) of the evaluated constraint, as a function of the value \(x\) and its lower and upper limits \(\underline{x}\), \(\bar{x}\): \[A(x,\underline{x},\bar{x})=|\min(x-\underline{x},0)+\max(x-\bar{x},0)|\] The output of the activation function is zero if the value is within the allowed range, and equal to the magnitude of the violation otherwise. **Physical feasibility check**: Before starting each episode, we perform a sanity check on the action **a**: if the vertical velocity is not sufficient to reach the target height, we abort the simulation returning a high penalty cost \(C_{ph}\). This can be computed by obtaining the time to reach the apex \(T_{fup}=\dot{\mathbf{c}}_{lo,z}/g\) and substituting it in the ballistic equation: \[\bar{\mathbf{c}}_{z}(T_{fup})=\mathbf{c}_{lo,z}+\dot{\mathbf{c}}_{lo,z}T_{fup}+\frac{1}{2}(-g)T_{fup}^{2} \tag{8}\] this results in \(\bar{\mathbf{c}}_{z}(T_{fup})=\mathbf{c}_{lo,z}+\frac{1}{2}\frac{\dot{\mathbf{c}}_{lo,z}^{2}}{g}\), which is the apex elevation. If \(\mathbf{c}_{tg,z}>\bar{\mathbf{c}}_{z}(T_{fup})\), the episode is aborted. **Unilaterality constraint**: in a legged robot, a leg can only push on the ground, not pull; hence the component of the force \(\mathbf{F}\) along the contact normal (\(Z\) for flat terrains) must be positive. **Friction constraint**: To avoid slippage, the tangential component of the contact force \(\|\mathbf{F}_{x,y}\|\) is bounded by the foot-terrain friction coefficient \(\mu\): \(\|\mathbf{F}_{x,y}\|\leq\mu\mathbf{F}_{z}\). **Joint range and torque constraints**: the three joint kinematic limits \(\bar{q}_{i}\) must not be exceeded. Similarly, each of the joint actuator torque limits \(\bar{\tau}_{i}\) must be respected. **Singularity constraint**: the singularity constraint avoids the leg being completely stretched. During the thrusting phase the CoM **c** must stay in the hemisphere of radius equal to the maximum leg extension. This condition prevents the robot from getting close to points with reduced mobility that produce high joint velocities in the inverse kinematic computation.
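Schematically, the constraint machinery above can be implemented in a few lines; the following is a minimal sketch under our own naming conventions (the paper provides no code), combining the activation function \(A\), a friction-cone residual as one example of a path constraint, and the pre-episode apex feasibility check of Eq. (8).

```python
import numpy as np

def activation(x, lower, upper):
    """Linear activation A(x, lower, upper): zero inside the range, |violation| outside."""
    return abs(min(x - lower, 0.0) + max(x - upper, 0.0))

def friction_violation(F, mu):
    """Friction-cone residual: ||F_xy|| <= mu * F_z must hold (F is the contact force)."""
    return activation(np.linalg.norm(F[:2]) - mu * F[2], -np.inf, 0.0)

def apex_feasible(c_lo_z, cdot_lo_z, c_tg_z, g=9.81):
    """Physical feasibility check, Eq. (8): the ballistic apex must reach the target height."""
    apex_z = c_lo_z + 0.5 * cdot_lo_z**2 / g
    return c_tg_z <= apex_z
```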
Even though the singularity constraint is enforced by construction in the action generation, the actual trajectory might still violate it due to tracking inaccuracies. If a singular configuration is reached, the episode is interrupted and a high cost is returned. The costs caused by the violation of path constraints are evaluated for each time step of the thrust phase and accumulated into the feasibility cost \(C_{f}\). In addition to these path constraints, we also want to take into account the error between actual and desired lift-off state. This penalty \(C_{lo}\) encourages lift-off configurations that are better tracked by the controller. Another penalty \(C_{td}\) is introduced when an episode does not result in a touchdown. This is done to force in-place jumps, and avoid the robot staying stationary. The positive component of the reward function is the output of a nonlinear _landing target_ reward function, which evaluates how close the CoM arrived to the desired target. This reward grows exponentially when this distance approaches zero: \[R_{lt}(\mathbf{c},\mathbf{c}_{tg})=\frac{\beta}{k\left\|\mathbf{c}-\mathbf{c}_{tg}\right\|+\epsilon}, \tag{9}\] where \(k\) is a gain to encourage jumps closer to the target position, and \(\beta\) is an adjustable parameter to bound the max value of \(R_{lt}\) and scale it. An infinitesimal value \(\epsilon\) is added at the denominator to avoid division by zero. Hence, the total reward function is: \[R=1_{\mathbb{R}^{+}}\left[R_{lt}(\mathbf{c},\mathbf{c}_{tg})-\sum_{i=0}^{n_{c}}C_{i}\right] \tag{10}\] with \(n_{c}=8\), and where \(C_{i}\) are the previously introduced feasibility costs. We decided to perform _reward shaping_ [30], by clamping the total reward to \(\mathbb{R}^{+}\) by means of an indicator function \(1_{\mathbb{R}^{+}}\). This aims to promote the actions that induce constraint satisfaction. ### _Implementation details_ The training of the RL agent and the sim-to-sim validation of the learned policy were performed on top of a Gazebo simulator. Because we are considering only translational motions, we modelled a 3 Degrees of Freedom (DoFs) monopod with three passive prismatic joints attached at the base. These prismatic joints constrain the robot base's movements to planes parallel to the ground. For the sake of simplicity, we also considered the landing phase under the responsibility of a different controller (e.g., see [2]). Our interest was simply on the touch-down event, which is checked by verifying that the contact force exceeds a positive threshold \(f_{th}\). Therefore, the termination of the episode is determined by the occurrence of three possible conditions: execution timeout, singularity, or touch-down event. The control policy (_default NN_) is implemented as a neural network that takes the _state_ as an input, and outputs the _actions_. The NN is a multi-layer perceptron with three hidden layers of sizes 256, 512 and 256 with ReLU activations between each layer, and with _tanh_ output activation to map the output between -1 and 1. A low-level PD plus gravity compensation controller generates the torques that are sent to the Gazebo simulator at 1 kHz. The joint reference positions at the lift-off are reset to the initial configuration \(\mathbf{q}_{0}\) to enable the natural retraction of the leg and avoid stumbling. Landing locations at different heights are achieved by making a 5x5 cm platform appear at the desired landing location only at the apex moment.1
To train the RL agent, interaction with the simulation environment is needed. The communication between the planner component and the Gazebo simulator is managed by the Locosim framework [31]. To interact with the planner, and, consequently, with the environment, we developed a ROS node called _JumplegAgent_, where we implemented the RL agent. The code is available at 2. During the initial stage of the training process, the action is randomly generated to allow for an initial broad exploration of the action space for \(N_{exp}\) episodes. Footnote 1: Making the platform appear only at apex is needed for purely vertical jumps, because it avoids impacts with the platform during the thrusting phase. ## IV Simulation Results In this section, we discuss some simulation results that show the validity of the proposed approach and compare it with state-of-the-art approaches. We used a computer with the following hardware specifications: CPU AMD Ryzen 5 3600, GPU Nvidia GTX1650 4GB, RAM 16 GB DDR4 3200 MHz. During training we generated targets inside a predefined _training_ region. These samples are generated _randomly_ inside a cylinder centered on the robot's initial CoM, with a radius from 0 to \(0.65\) m and a height from \(0.25\) m to \(0.5\) m. The size was selected to push the system to its performance limits. The parameters of the robot, controller and simulation are presented in Table I. #### Iv-1 Nonlinear Trajectory Optimisation The first approach to compare with is a standard optimal control strategy based on FDDP. FDDP is one of the most efficient optimisation algorithms for whole-body control [32] because it takes full advantage of the intrinsic sparse structure of the optimal control problem. The FDDP solver is implemented with the optimal control library Crocoddyl [33] and uses the library Pinocchio [34] to enable fast computations of costs, dynamics, and their derivatives. For the problem at hand, we discretised the trajectory into \(N\) successive knots with a timestep \(dT=0.001\ s\) to ensure high precision. As decision variables we chose the joint torques. We split the jump into three phases: thrusting, flying, and landed. The constraints in FDDP are encoded as soft penalties in the cost. We encoded friction cones, and tracking of foot locations and velocities at the thrusting/landed stages, respectively. We added a tracking cost for the CoM reference during the flying phase to encourage the robot to lift off. We regularized control inputs and states throughout the horizon. #### Iv-2 End-to-end RL At the opposite side of the spectrum from optimal control is the application of approaches entirely based on Deep Learning, i.e., using RL end-to-end _without_ injecting any prior domain knowledge. The RL agent sets joint position references for a low-level PD plus gravity compensation controller. The use of a PD controller allows the system to inherit the stabilisation properties of the feedback controller, but at the same time, it allows for explosive torques (by regulating the references to have a bigger error w.r.t. the actual positions). We query the action until the apex moment because we need to have set-points for the joints also when the leg is airborne. After apex, we set the default configuration \(\mathbf{q}_{0}\) for the landing.
To be more specific, instead of directly setting the joint references \(\mathbf{q}^{d}\), the control policy produces as action \(\mathbf{a}\) joint angle deviations \(\mathbf{\tilde{q}}\in\mathbb{R}^{3}\) w.r.t. the nominal joint angle configuration \(\mathbf{q}_{0}\). As suggested by [35, 36] to ease the learning, we run the policy at a frequency that is 1/5th of the controller one (1 \(kHz\)). We aggregate the reward for each control loop iteration and perform a training step every \(N_{train}=100\) queries. We terminate the episode if: 1) a touchdown is detected, 2) the robot has fallen (i.e. the base link is close to the ground), 3) we reach a singularity, or 4) a timeout of 2.5 \(s\) is reached. We include in the state the CoM and joint positions/velocities. Since the domain is not changing (and the state is Markovian), augmenting the state with the history of some past samples [37, 12] was not necessary, therefore we tried to keep the problem dimensionality as low as possible. Hence, the observation state becomes (\(\mathbf{c}\), \(\mathbf{q}\), \(\dot{\mathbf{c}}\), \(\dot{\mathbf{q}}\), \(\mathbf{c}_{tg}\)). As in the case of the GRL approach, the initial state at the start of each episode is set at the nominal joint pose (\(\mathbf{c}_{0},\mathbf{q}_{0}\)), with zero velocity. We also encourage _smoothness_ by penalizing the quantity \(\mathbf{\tilde{q}}^{j}-\mathbf{\tilde{q}}^{j-1}\). Because of the different units, to have better conditioning in the gradient of the NN, we scale each state variable by its range. With respect to the GRL approach we increase the batch size to 512 to collect more observations for training. We provide the feasibility rewards \(C_{i}\) at each loop as a _differential_ w.r.t. the previous loop. This approach is meant to ease learning while converging to the same solution [38]. At the end of each episode, we strongly penalize the lack of a touch-down event, and use the same target function (9) to encourage landing close to the target. The fact that we provide this quantity at every step enables us to achieve an informative reward [38] also _before_ the touchdown. Following the _curriculum learning_ idea [39], we gradually increase the difficulty of the jump, enlarging the bounds of the training region (where the targets are sampled) in accordance with the number of episodes and the average reward. ### _Policy performance: the feasibility region_ We tested the agent in _inference_ mode, for omni-directional jumps at different heights, for 726 target positions _uniformly_ spaced on a grid (_test region_) of the same shape as the training region. The _test region_ is 20\(\%\) bigger than the training region, in order to demonstrate the generalisation abilities of the system. The policy was periodically evaluated on the _test region_ set in order to assess the evolution of the models stored during the training phase. To measure the quality of a jump, we used the Relative Percentual Error (RPE), which we define as the distance between the touch-down and the desired target point, divided by the jump length. The feasibility region represents the area where the agent is capable of an accurate landing, i.e. RPE \(\leq 10\%\). #### Iv-A1 Performance baseline: Trajectory Optimisation We compared the approach with the baseline FDDP approach, repeating the same optimisation for all the points in the _test region_ without changing the costs' weights and limiting the number of iterations to \(500\).
For optimal control, the average computation time was \(17\)\(s\) for back jumps and \(7.6\) s for front jumps, while a single evaluation of the NN requires only \(0.7\)\(ms\). Fig. 3 (right) shows that a reasonable accuracy is obtained for landing locations in front of the robot, while FDDP behaves poorly for locations in the back of the robot. Computing the mean RPE separately for the back region and the front region, we obtained 52 \(\%\) and 16.5\(\%\), respectively. #### Iv-B2 Performance of the GRL approach We repeated the simulations on the test region using the GRL approach with the _default_ NN and with a NN where we halved the number of neurons in each hidden layer (_half NN_). Fig. 4 shows that, in both cases, with our GRL approach the RPE decreases (i.e. accuracy increases) monotonically with the number of episodes, going in the front case from approximately 40\(\%\) to 16\(\%\) after 100k episodes. A satisfactory level (i.e. RPE below 20\(\%\) for front jumps) is already achieved after 50k episodes. All the feasibility constraints turn out to be mostly satisfied after 10k episodes. The figure also shows that the GRL approach always outperforms the standard optimisation method in terms of jump accuracy in the case of back jumps, achieving a comparable accuracy for front jumps. Halving the number of neurons, the two models behave with similar accuracy, showing that the RL model could ideally be further simplified. From Fig. 3 we can observe that the feasibility region expands with the number of training epochs. In the same figure we can see that the _test region_ (black cylinder) is bigger than the region where we trained the NN (gray cylinder), demonstrating its extrapolation capabilities. It is important to remark that the GRL approach is also capable, to some extent, of learning the dynamics and compensating for the tracking inaccuracies of the underlying low-level controller. This is shown in the accompanying video 3 by the _purple_ ball that represents the _ideal_ landing location (i.e. if the CoM lift-off velocity associated to each action was perfectly tracked). The _purple_ ball is different from the target location (_blue_ ball) because of tracking inaccuracies, but the agent learned to provide a lift-off velocity that compensates for these, managing to accurately reach the desired location (blue ball). In the same video we show that the quality of the jumps steadily improves with the number of training episodes. Footnote 3: Link to the accompanying video #### Iv-B3 Performance baseline: end-to-end RL We did not achieve satisfactory results with the E2E approach. The reward function had an erratic behaviour during the training, and even after 1M training steps there were only a few targets out of \(726\) where the algorithm managed to attain an error below \(10\)\(\%\). More details are reported in the accompanying video. ## V Conclusions In this work we proposed a guided RL based strategy to perform _omni-directional_ jumps on uneven terrain with a legged robot. Exploiting some domain knowledge and taking a few assumptions on the shape of the jump trajectory, we have shown that, in a few thousand episodes of training, the agent obtains the ability to jump in a large area, maintaining high accuracy while reaching the boundary of its performance (deriving from the physical limitations of the machine). The approach also manages to learn and proficiently compensate for the tracking inaccuracies of the low-level controller.
The proposed approach is very efficient (it requires a small number of training episodes to reach a good performance), it achieves good generalisation (e.g., by executing jumps in a region 20\(\%\) larger than the one used for training), and it outperforms a standard end-to-end RL approach that proved unable to learn the jumping motion. Compared to optimal control, the GRL approach 1) achieves the same level of performance in front jumps, but is also able to perform backward jumps (optimal control is not), and 2) requires several orders of magnitude lower computation time. In the future, we plan to extend the approach to a full quadruped robot, considering not only linear but also angular motions. Leveraging the angular part we can build a framework that is able to perform a variety of jumping motions (e.g. twist, somersault, barrel jumps) on inclined surfaces. We are also seeking ways to improve robustness by including robot non-idealities in the learning phase, and to speed up the training phase by leveraging parallel computation. Fig. 4: Plot of the average RPE as a function of the number of training episodes. Fig. 3: Top-view of the feasibility region: (1-5) for different numbers of episodes of the training phase (the number of reachable points is computed for each \(X\),\(Y\) pair) and (right) in the case of the baseline FDDP.
2307.06852
Optimization of Speed and Network Deployment for Reliable V2I Communication in the Presence of Handoffs and Interference
Vehicle-to-infrastructure (V2I) communication is becoming indispensable for successful roll-out of connected and autonomous vehicles (CAVs). While increasing the CAVs' speed improves the average CAV traffic flow, it increases communication handoffs (HOs) thus reducing wireless data rates. Furthermore, unplanned density of active base-stations (BSs) may result in severe interference which negatively impacts CAV data rate. In this letter, we first characterize macroscopic traffic flow by considering log-normal distribution of the spacing between CAVs. We then derive novel closed-form expressions for the exact HO-aware rate outage probability and ergodic capacity in a large-scale network with interference. Then, we formulate a traffic flow maximization problem to optimize the speed of CAVs and deployment density of BSs with HO-aware rate constraints and collision avoidance constraints. Our numerical results validate the closed-form analytical expressions, extract useful insights about the optimal speed and BS density, and highlight the key trade-offs between the HO-aware data rates and CAV traffic flow.
Haider Shoaib, Hina Tabassum
2023-05-31T16:23:48Z
http://arxiv.org/abs/2307.06852v1
Optimization of Speed and Network Deployment for Reliable V2I Communication in the Presence of Handoffs and Interference ###### Abstract Vehicle-to-infrastructure (V2I) communication is becoming indispensable for successful roll-out of connected and autonomous vehicles (CAVs). While increasing the CAVs' speed improves the average CAV traffic flow, it increases communication handoffs (HOs) thus reducing wireless data rates. Furthermore, unplanned density of active base-stations (BSs) may result in severe interference which negatively impacts CAV data rate. In this letter, we first characterize macroscopic traffic flow by considering log-normal distribution of the spacing between CAVs. We then derive novel closed-form expressions for the exact HO-aware rate outage probability and ergodic capacity in a large-scale network with interference. Then, we formulate a traffic flow maximization problem to optimize the speed of CAVs and deployment density of BSs with HO-aware rate constraints and collision avoidance constraints. Our numerical results validate the closed-form analytical expressions, extract useful insights about the optimal speed and BS density, and highlight the key trade-offs between the HO-aware data rates and CAV traffic flow. Connected automated vehicles, vehicular networks, network planning, handoffs, handoff-aware data rate, traffic flow, speed optimization, interference. ## I Introduction Vehicle-to-infrastructure (V2I) communication is pivotal to enable autonomous and well-informed decision-making in connected and autonomous vehicles (CAVs); thus, improving road safety and traffic flow [1]. Nevertheless, optimizing the speed of CAVs while maximizing the CAV traffic flow and achieving reliable connectivity at the same time is challenging. The reason is that while increasing the CAVs' speed improves the traffic flow, it also increases communication handoffs (HOs) as the CAVs switch from one base-station (BS) to another, thus reducing data rates. Furthermore, unplanned deployment of network infrastructure such as roadside units or BSs may result in severe interference which negatively impacts CAV data rate [2]. A fundamental trade-off thus exists between the communication data rates and CAV traffic flow. In this context, this letter aims to answer the following questions, i.e., **(i)**_how to analyze the performance of V2I communication considering HOs and interference?_ and **(ii)**_what is the optimal BS density and average CAV speed that maximize traffic flow?_ Prior research works in the realm of V2I communications have considered mobility in terms of data rates; however, traffic flow is typically overlooked. For instance, in [3], Arshad et al. derived HO-aware data rates as a function of the speed of the user device and the HO cost from the BSs. In [2], Hossan et al. presented a stochastic geometry framework to derive the HO-aware coverage probability of a two-tier wireless network with RF and THz BSs. In [4], Lin et al. proposed a random way-point mobility model to characterize the HO rate and sojourn time using stochastic geometry considering randomly distributed BSs. Recently, in [5], Yan et al. proposed a reinforcement learning approach for joint V2I network selection and autonomous driving policies considering both RF and THz BSs. Their results demonstrated the inter-dependency of a CAV's motion dynamics, HOs, and data rate in adopting safe driving behaviours for CAVs.
Another series of research contributions characterizes traffic flow using macroscopic models [6, 7]; however, V2I communication is typically overlooked. None of the aforementioned research works considered the problem of _CAVs traffic flow maximization_ with log-normally distributed CAVs' spacing and _HO-aware data rate_ constraints in a large-scale interference-limited wireless network. To this end, our contributions can be summarized as follows: \(\bullet\) We characterize macroscopic traffic flow by considering log-normal distribution of the spacing between CAVs. We then derive novel and tractable closed-form expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the signal-to-interference-plus-noise ratio (SINR), the HO-aware rate outage probability, and the ergodic capacity in a large-scale network with interference. The derived expressions capture network parameters such as the height of the BSs, the safety distance of BSs from the road, the interference from neighboring BSs, and channel fading. \(\bullet\) We develop a novel optimization framework to jointly optimize the deployment density of the BSs and the speed of CAVs to maximize the CAVs' traffic flow with collision avoidance and minimum HO-aware data rate constraints. \(\bullet\) Numerical results confirm the accuracy of the derived expressions and extract useful insights related to the dynamics of BS density, CAV minimum data rate requirements, average CAV speed, BS heights and safety distances, etc. ## II System Model and Performance Metrics We consider a road of length \(L_{R}\) on which \(N_{c}\) CAVs travel. We consider \(N\) BSs deployed alongside the road at a certain distance \(d_{\rm safe}\), with a density defined as the number of BSs deployed per unit distance, i.e., \(\mu=N/L_{R}\). The distance between BSs is \(1/\mu\), as shown in Fig. 1. The CAVs' density \(k\) on the road is defined as the number of CAVs per unit distance, and the CAVs' speed is \(v\). According to macroscopic traffic flow theory, the flow of vehicles is defined as \(q=kv\) vehicles per unit time [8]. Note that \(k\) is inversely proportional to \(s\), i.e., \(k=1/s\), where \(s\) is the spacing between neighboring vehicles. Given the PDF of the density of vehicles \(k\) on the road \(f_{K}(k)\), the traffic flow can be defined as follows: \[Q=\int_{0}^{\infty}kvf_{K}(k)\mathrm{d}k. \tag{1}\] In this paper, we model the inter-vehicle spacing \(s\) with a log-normal distribution. This model has been proven to be accurate for daytime hours through various empirical studies [9]. These research works observed the traffic flow behaviour during different times of the day, and showed that the inter-vehicle spacing is log-normally distributed during daytime hours (i.e., moderate traffic). Therefore, we consider the inter-vehicle spacing \(s\) to be log-normally distributed during daytime hours with PDF given as: \[f_{S}(s)=\frac{1}{s\sigma_{\mathrm{LN}}\sqrt{2\pi}}\exp{\left(-\frac{(\ln{(s)}-\mu_{\mathrm{LN}})^{2}}{2\sigma_{\mathrm{LN}}^{2}}\right)},\] where \(\mu_{\mathrm{LN}}\) and \(\sigma_{\mathrm{LN}}\) are the logarithmic average and scatter parameters of the log-normal distribution, respectively, and erf(\(\cdot\)) denotes the error function used in the sequel. Therefore, (1) can be rewritten as follows: \[Q=\int_{0}^{\infty}\frac{1}{s}vf_{S}(s)\mathrm{d}s=v\exp{\left(\frac{\sigma_{\mathrm{LN}}^{2}-2\mu_{\mathrm{LN}}}{2}\right)}. \tag{2}\] Each CAV is assumed to be connected to a single nearest BS at any given time.
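As a quick sanity check of (2), the following Python sketch (with \(\mu_{\rm LN}=0\), \(\sigma_{\rm LN}=1\) from the numerical section and an assumed speed of 20 m/s) compares the closed form against a Monte Carlo estimate of \(\mathbb{E}[v/s]\) under log-normal spacing:

```python
import math
import random

# mu_LN = 0, sigma_LN = 1 follow the paper's numerical section;
# v = 20 m/s is an assumed example speed.
MU_LN, SIGMA_LN, V = 0.0, 1.0, 20.0

def flow_closed_form(v, mu_ln, sigma_ln):
    """Eq. (2): Q = v * E[1/s] for log-normal spacing s."""
    return v * math.exp((sigma_ln ** 2 - 2.0 * mu_ln) / 2.0)

def flow_monte_carlo(v, mu_ln, sigma_ln, n=200_000, seed=0):
    """Estimate Q = E[v/s] by sampling s ~ LogNormal(mu_ln, sigma_ln)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        s = math.exp(rng.gauss(mu_ln, sigma_ln))  # log-normal spacing
        total += v / s
    return total / n

print(flow_closed_form(V, MU_LN, SIGMA_LN))  # ~32.97 vehicles/s
print(flow_monte_carlo(V, MU_LN, SIGMA_LN))  # close to the above
```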
Considering the distance-based path-loss and short-term multi-path fading of the transmission channel, the received signal power at a given CAV \(i\) from a given BS \(j\) in the downlink can be modeled as follows: \[S_{i,j}=G_{R}^{\text{tx}}G_{R}^{\text{rx}}\left(\frac{c}{4\pi f_{R}}\right)^{2}\frac{P_{j}^{\text{tx}}}{d_{i,j}^{\alpha}}\chi_{i,j}=\gamma_{R}P_{j}^{\text{tx}}d_{i,j}^{-\alpha}\chi_{i,j}, \tag{3}\] The signal-to-interference-plus-noise ratio (SINR) received at the \(i\)-th CAV from the \(j\)-th BS can thus be modeled as follows: \[\mathrm{SINR}_{i,j}=\frac{S_{i,j}}{N_{R}+I_{i}}=\frac{\gamma_{R}P_{j}^{\text{tx}}d_{i,j}^{-\alpha}\chi_{i,j}}{N_{R}+I_{i}}, \tag{4}\] where \(G_{R}^{\text{tx}}\) and \(G_{R}^{\text{rx}}\) represent the transmitting and receiving antenna gains, respectively, \(P_{j}^{\text{tx}}\) represents the transmit power of the BS \(j\), \(\chi_{i,j}\) represents the short-term channel fading of BS \(j\) modeled with Rayleigh distribution, \(c\) and \(f_{R}\) represent the speed of an electromagnetic wave and the RF carrier frequency, respectively, \(d_{i,j}\) represents the distance between the \(j\)-th BS and the \(i\)-th CAV, and \(\alpha\) represents the path-loss exponent. Furthermore, \(N_{R}\) is the thermal noise power at the receiver and \(I_{i}=\sum_{k\neq j}P_{k}^{\text{tx}}\gamma_{R}d_{i,k}^{-\alpha}\chi_{i,k}\) is the cumulative interference at the \(i\)-th CAV from the interfering BSs, where \(\gamma_{R}=G_{R}^{\text{tx}}G_{R}^{\text{rx}}(c/4\pi f_{R})^{2}\), \(d_{i,k}\) represents the distance between the \(i\)-th CAV and the \(k\)-th interfering BS, and \(\chi_{i,k}\) is the power of the fading from the \(k\)-th interfering BS to the CAV. Furthermore, as detailed in Fig. 1, the distance between a CAV \(i\) and BS \(j\) can be calculated using their respective coordinates as \(d_{i,j}=\sqrt{x_{i,j}^{2}+h_{\mathrm{bs}}^{2}+d_{\mathrm{safe}}^{2}}\), where \(h_{\mathrm{bs}}\) represents the height of the BSs, \(d_{\mathrm{safe}}\) represents the safety distance from the CAV on the road to the BS, and \(x_{i,j}\) is the distance parallel to the road with respect to the location of the CAV on the road to the BS. Subsequently, given the Shannon-Hartley theorem for the infinite block-length regime, the data rate without mobility between a BS and a CAV can be defined as \(R_{i,j}=W\log_{2}(1+\text{SINR}_{i,j})\), where \(W\) represents the bandwidth of the channel. As CAVs drive along the road, HOs between different BSs occur, and due to HO delays and failures, an increase in the rate of HOs can negatively impact the CAV data rate. Therefore, we incorporate the effect of these HOs through the HO-related cost. The HO cost is proportional to the HO delay (\(h_{d}\)) measured in seconds per HO, and the HO rate (\(H\)) measured in number of HOs per second, i.e., \(H_{c}=h_{d}\times H\). Since we assume nearest-BS association for CAVs, we define \(H\) as the _number of cell boundaries a CAV crosses per second_, where a cell boundary is the coverage range of a BS. Note that the number of boundaries per unit distance is the same as the BS density \(\mu\). By using the speed of CAVs \(v\), we can calculate the number of boundaries covered by a CAV per second as \(H=\mu v\). Thus, we model the HO-aware data rate [3] as: \[M_{i,j}=R_{i,j}(1-H_{c,\text{max}})=R_{i,j}\left(1-h_{d}\frac{\mu v}{\mu_{\text{max}}V_{\text{max}}}\right), \tag{5}\] where \(H_{c,\text{max}}=h_{d}\frac{H}{\mu_{\text{max}}V_{\text{max}}}=h_{d}\frac{\mu v}{\mu_{\text{max}}V_{\text{max}}}\) is the normalized HO cost in equation (5), ensuring \(0\leq H_{c,\text{max}}<1\), \(\mu_{\text{max}}\) is the maximum regulated BS density, and \(V_{\text{max}}\) is the maximum regulated speed of the CAVs. Finally, a stable CAV connection with the nearest BS requires a minimum HO-aware data rate \(R_{\text{th}}\), which ensures that every CAV achieves the required QoS.
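To illustrate how the pieces of (3)–(5) fit together numerically, the following Python sketch (a simplified illustration under stated assumptions: a single serving BS, fading omitted, and the interference power passed in as a given value) computes the HO-aware data rate with the parameter values used later in the numerical section:

```python
import math

C = 3e8  # speed of light [m/s]

def gamma_r(g_tx_db=1.0, g_rx_db=1.0, f_r=2.1e9):
    """gamma_R = G_tx * G_rx * (c / (4*pi*f_R))^2, gains given in dB."""
    g = 10 ** ((g_tx_db + g_rx_db) / 10.0)
    return g * (C / (4.0 * math.pi * f_r)) ** 2

def distance(x, h_bs=8.0, d_safe=5.0):
    """d_ij = sqrt(x^2 + h_bs^2 + d_safe^2); x = along-road offset [m]."""
    return math.sqrt(x ** 2 + h_bs ** 2 + d_safe ** 2)

def ho_aware_rate(x, v, mu, p_tx=1.0, alpha=3.0, n_r=1.507e-13,
                  interference=0.0, w=40e6, h_d=3.0,
                  mu_max=0.01, v_max=30.0):
    """M = W*log2(1+SINR) * (1 - h_d*mu*v/(mu_max*V_max)), eq. (5)."""
    sinr = gamma_r() * p_tx * distance(x) ** (-alpha) / (n_r + interference)
    ho_cost = h_d * mu * v / (mu_max * v_max)  # normalized HO cost
    return w * math.log2(1.0 + sinr) * max(1.0 - ho_cost, 0.0)

# Example: a CAV halfway between two BSs (x = 1/(2*mu)) at 20 m/s.
mu = 0.005
print(ho_aware_rate(x=1.0 / (2.0 * mu), v=20.0, mu=mu) / 1e6, "Mbps")
```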
## III HO-Aware Rate Outage Probability Analysis In this section, we derive novel closed-form expressions for the PDF and CDF of the SINR, the HO-aware rate outage probability, as well as the ergodic HO-aware data rate. The HO-aware rate outage probability \(P_{\mathrm{out}}\) is defined as: \[P_{\mathrm{out}}=\Pr{(M_{i,j}\leq R_{\mathrm{th}})}=\Pr{\left(Z=\frac{S_{i,j}}{N_{R}+I_{i}}\leq\gamma_{\mathrm{th}}\right)}, \tag{6}\] where \(\gamma_{\mathrm{th}}\) is the desired SINR threshold given as \(\gamma_{\mathrm{th}}=2^{\frac{R_{\mathrm{th}}}{W(1-H_{c,\text{max}})}}-1\) and \(H_{c,\text{max}}=h_{d}\frac{\mu v}{\mu_{\text{max}}V_{\text{max}}}\). The outage expression can then be derived as shown in the following lemma. Fig. 1: Graphical illustration of the V2I communication model for CAVs. **Lemma 1** (HO-aware Rate Outage Probability).: _Given the PDF and CDF of the SINR of the \(i\)-th CAV, the closed-form outage expression can be given as follows:_ \[P_{\rm out}=1-\sum_{k=1,k\neq j}^{N}\frac{a_{i,j}e^{\frac{\lambda N_{R}}{b_{i,k}}}}{b_{i,k}\gamma_{\rm th}+a_{i,j}}\prod_{l=1,l\neq k}\frac{b_{i,k}}{b_{i,k}-b_{i,l}}. \tag{7}\] Proof.: To derive the SINR outage, we first calculate the PDF and CDF of the SINR. In the sequel, we first determine the PDF and CDF of \(S_{i,j}\) and \(I_{i}\). The random variable \(S_{i,j}\) is a scaled exponential random variable, i.e., \(S_{i,j}=Y=a_{i,j}\chi_{i,j}\) where \(a_{i,j}=\gamma_{R}P_{j}^{\rm tx}d_{i,j}^{-\alpha}\). By using a single variable transformation, the PDF and CDF of \(Y\) can be given, respectively, as follows: \[f_{Y}(y)=\frac{\lambda}{a_{i,j}}e^{-\frac{\lambda}{a_{i,j}}y},\quad F_{Y}(y)=1-e^{-\frac{\lambda}{a_{i,j}}y}. \tag{8}\] The interference \(I_{i}=\sum_{k\neq j}P_{k}^{\rm tx}\gamma_{R}d_{i,k}^{-\alpha}\chi_{i,k}=\sum_{k\neq j}b_{i,k}\chi_{i,k}\) follows the hypoexponential distribution [10], as \(I_{i}\) is the weighted sum of \(n\) independent but non-identical exponential random variables. Each exponential is scaled with a different factor due to the different distance between the CAV and the interfering BSs. The PDF of the interference can thus be given by: \[f_{I_{i}}(I)=\sum_{k=1,k\neq j}^{N}\frac{\lambda}{b_{i,k}}e^{-\frac{\lambda}{b_{i,k}}I}\prod_{l=1,l\neq k}\frac{b_{i,k}}{b_{i,k}-b_{i,l}}, \tag{9}\] Now, we define \(X=N_{R}+I_{i}\). The PDF of \(X\) can be given after a single random variable transformation as \(f_{X}(x)=f_{I}(x-N_{R})\). Finally, given the statistics of \(X\) and \(Y\), we derive the PDF of \(Z=Y/X\) [11] as: \[f_{Z}(z)=\int_{0}^{\infty}xf_{X}(x)f_{Y}(xz)dx=\sum_{k=1,k\neq j}^{N}\frac{a_{i,j}\,b_{i,k}e^{\frac{\lambda N_{R}}{b_{i,k}}}}{(b_{i,k}z+a_{i,j})^{2}}\prod_{l=1,l\neq k}\frac{b_{i,k}}{b_{i,k}-b_{i,l}}. \tag{10}\]
Furthermore, the CDF of \(Z\) is as follows: \[F_{Z}(z)=1-\sum_{k=1,k\neq j}^{N}\frac{a_{i,j}e^{\frac{\lambda N_{R}}{b_{i,k}}}}{b_{i,k}z+a_{i,j}}\prod_{l=1,l\neq k}\frac{b_{i,k}}{b_{i,k}-b_{i,l}}. \tag{11}\] Finally, the outage in **Lemma 1** can be calculated by substituting \(z=\gamma_{\rm th}\) in (11). In the following, we characterize the average HO-aware data rate using the statistics of the SINR. **Lemma 2** (Ergodic HO-Aware Data Rate).: _Given the PDF and CDF of the SINR, the closed-form ergodic rate expression can be given as follows:_ \[M_{\rm avg}(\mu)=W(1-\bar{h}_{d}\mu v)\sum_{k=1,k\neq j}^{N}\frac{\beta_{k}a_{i,j}e^{\frac{\lambda N_{R}}{b_{i,k}}}}{a_{i,j}-b_{i,k}}\prod_{l=1,l\neq k}\frac{b_{i,k}}{b_{i,k}-b_{i,l}}, \tag{12}\] _where \(\beta_{k}=\ln(a_{i,j}/b_{i,k})+{\rm atan2}(\lambda/b_{i,k},0)\), and \({\rm atan2}(y,x)\) is the 2-argument arctangent function._ Proof.: We begin by defining the ergodic rate as follows: \[M_{\rm avg}(\mu)=\mathbb{E}\left[W\log_{2}\left(1+Z\right)(1-\bar{h}_{d}\mu v)\right]=W(1-\bar{h}_{d}\mu v)\mathbb{E}\left[\log_{2}\left(1+Z\right)\right], \tag{13}\] where \(\bar{h}_{d}=h_{d}/(\mu_{\rm max}V_{\rm max})\) and \(Z\) is a function of \(\mu\). Given the definition of the ergodic rate [12], we have: \[R_{\rm avg}(\mu)=\mathbb{E}\left[\log_{2}\left(1+Z\right)\right]=\int_{0}^{\infty}\log_{2}(1+z)f_{Z}(z)dz=\frac{1}{\ln(2)}\int_{0}^{\infty}\frac{1-F_{Z}(z)}{1+z}dz=\frac{1}{\ln(2)}\int_{0}^{\infty}\sum_{k=1,k\neq j}^{N}\frac{a_{i,j}e^{\frac{\lambda N_{R}}{b_{i,k}}}\prod_{l=1,l\neq k}\frac{b_{i,k}}{b_{i,k}-b_{i,l}}}{(b_{i,k}+a_{i,j})z+b_{i,k}z^{2}+a_{i,j}}dz, \tag{14}\] where (14) is a function of \(\mu\) because \(d_{i,j},d_{i,k}\), and in turn \(a_{i,j},b_{i,k}\), are functions of \(\mu\). The final step is derived by substituting (11) into (14). The closed-form ergodic capacity can be derived by solving the integral as follows: \[R_{\rm avg}(\mu)=\sum_{k=1,k\neq j}^{N}\beta_{k}\frac{a_{i,j}e^{\frac{\lambda N_{R}}{b_{i,k}}}}{a_{i,j}-b_{i,k}}\prod_{l=1,l\neq k}\frac{b_{i,k}}{b_{i,k}-b_{i,l}}. \tag{15}\] Finally, the HO-aware data rate can be given by substituting (15) into (13). To simplify the optimization, we also provide a worst-case bound on the interference. **Lemma 3** (Worst-Case Data Rate at CAV \(i\)).: _The worst-case signal power is observed at a CAV when the CAV is located at the halfway point between two BSs (i.e., at \(\frac{1}{2\mu}\)), such that \(d_{i,j}=d_{\rm max}=\sqrt{h_{\rm bs}^{2}+d_{\rm safe}^{2}+\frac{1}{4\mu^{2}}}\). The worst-case interference is also observed at this location, since the CAV is then closest to the nearest interfering BS. Finally, the ergodic worst-case data rate is given as_ \[R_{\rm worst}(\mu)=\sum_{k=1,k\neq j}^{N}\frac{\beta_{k}\,d_{\rm max}^{-\alpha}\,e^{\frac{\lambda N_{R}}{b_{i,k}}}}{d_{\rm max}^{-\alpha}-d_{i,k}^{-\alpha}}\prod_{l=1,l\neq k}\frac{d_{i,k}^{-\alpha}}{d_{i,k}^{-\alpha}-d_{i,l}^{-\alpha}}.\]
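Before using these expressions in the optimization below, Lemma 1 can be verified numerically. The following Python sketch (an assumed four-BS layout and CAV position; the remaining parameter values follow the numerical section) compares the closed form (7) against a Monte Carlo simulation of the SINR:

```python
import math, random

# Illustrative check of the closed-form outage (7) against Monte Carlo.
# The BS layout and CAV position below are assumptions for the example.
GAMMA_R = 10 ** 0.2 * (3e8 / (4 * math.pi * 2.1e9)) ** 2
P_TX, ALPHA, N_R, LAM = 1.0, 3.0, 1.507e-13, 1.0
H_BS, D_SAFE = 8.0, 5.0

def dist(x_cav, x_bs):
    return math.sqrt((x_cav - x_bs) ** 2 + H_BS ** 2 + D_SAFE ** 2)

bs_x = [0.0, 210.0, 430.0, 650.0]   # BS positions along the road [m]
x_cav = 80.0                        # CAV position (generic point)
d = [dist(x_cav, x) for x in bs_x]
j = d.index(min(d))                 # nearest-BS association
a = GAMMA_R * P_TX * d[j] ** (-ALPHA)
b = [GAMMA_R * P_TX * dk ** (-ALPHA) for k, dk in enumerate(d) if k != j]

def outage_closed_form(g_th):
    s = 0.0
    for k, bk in enumerate(b):
        prod = 1.0
        for l, bl in enumerate(b):
            if l != k:
                prod *= bk / (bk - bl)  # requires distinct b_k values
        s += a * math.exp(LAM * N_R / bk) / (bk * g_th + a) * prod
    return 1.0 - s

def outage_monte_carlo(g_th, n=200_000, seed=1):
    rng, cnt = random.Random(seed), 0
    for _ in range(n):
        y = a * rng.expovariate(LAM)                    # signal power
        i = sum(bk * rng.expovariate(LAM) for bk in b)  # interference
        cnt += (y / (N_R + i)) <= g_th
    return cnt / n

g_th = 2.0  # example SINR threshold
print(outage_closed_form(g_th), outage_monte_carlo(g_th))
```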
## IV QoS-Constrained Traffic Flow Maximization In this section, we formulate the macroscopic traffic flow maximization problem with various constraints to jointly optimize the CAV speed \(v\) and the BS density \(\mu\) in the presence of interference. Note that BS density optimization can also be implemented in practice by dynamically switching the BSs. We present a closed-form expression for the optimal CAV speed and a numerical method to compute the optimal \(\mu\). The traffic flow maximization problem is formulated as: \[\begin{split}&\textbf{(P1)}\quad\max_{v,\mu}\quad\quad Q=v\exp\left(\frac{\sigma_{\rm LN}^{2}-2\mu_{\rm LN}}{2}\right)\\ &{\rm s.t.}\ \textbf{(C1)}\quad v\leq\frac{\exp\left(\sigma_{\rm LN}\sqrt{2}\operatorname{erf}^{-1}\left(2\epsilon-1\right)+\mu_{\rm LN}\right)}{\tau}=V_{\rm safe}\end{split}\] \[\textbf{(C2)}\quad v\leq\frac{1}{\bar{h}_{d}\mu}\left(1-\frac{R_{\rm th}}{WR_{\rm worst}(\mu)}\right)=V_{\rm data}(\mu)\] \[\textbf{(C3)}\quad R_{\rm th}\leq WR_{\rm worst}(\mu)\] \[\textbf{(C4)}\quad 0<v\leq V_{\rm max}\] \[\textbf{(C5)}\quad 0\leq\mu\leq\mu_{\rm max}\] where **C1** is the collision avoidance constraint which ensures that the speed of the CAVs should not exceed \(s/\tau\), i.e., \[\Pr\Big{(}v\geq\frac{s}{\tau}\Big{)}=\Pr\left(s\leq v\tau\right)\leq\epsilon, \tag{16}\] where \(s\) is the distance between two CAVs and is always greater than zero, \(\tau\) represents the processing time for the CAVs to act on a decision, and \(\epsilon\) is the crash tolerance level. In addition, equation (16) can be rewritten by substituting the CDF of \(s\), where \(\Pr\left(v\geq\frac{s}{\tau}\right)=\frac{1}{2}\left(1+\mathrm{erf}\left(\frac{\ln\left(v\tau\right)-\mu_{\mathrm{LN}}}{\sigma_{\mathrm{LN}}\sqrt{2}}\right)\right)\). By using the inverse error function, taking the inverse log of both sides, and factoring for \(v\), (16) can be rewritten as in **C1**. **C1** indicates that the speed is capped to create an adequate safety distance that is traversed during the processing time. We refer to the right side of **C1** as the maximum safe speed that keeps the crash probability below \(\epsilon\). The condition **C1** shows that the maximum safe speed increases with \(\epsilon\), but decreases with the processing time. Furthermore, **C2** is the worst-case HO-aware data rate constraint of the CAV based on **Lemma 3**, i.e., \[M_{\mathrm{worst}}=W(1-\bar{h}_{d}\mu v)R_{\mathrm{worst}}(\mu)\geq R_{\mathrm{th}}. \tag{17}\] By factoring out \(v\) we can rewrite (17) as in **C2**, where we refer to the right side of **C2** as the maximum speed, denoted by \(V_{\mathrm{data}}(\mu)\), that ensures the minimum data rate requirement of the CAVs. To ensure that **C2** does not attain negative values of speed and the problem remains feasible, we introduce the constraint **C3**. That is, **C3** ensures \(\frac{R_{\mathrm{th}}}{WR_{\mathrm{worst}}(\mu)}\leq 1\). The final two constraints **C4** and **C5** cap the speed and the BS density to a maximum, respectively. Note that problem **P1** is a non-linear programming problem due to constraints **C2** and **C3**, which are non-linear functions of \(\mu\). To solve this problem, we first compute the optimal speed \(v^{*}\), which is then further maximized to optimize the BS density as in the following lemma. **Lemma 4**.: _Given that \(Q\) is linearly increasing with \(v\), and that \(v\) is bounded by \(V_{\mathrm{max}}\), \(V_{\mathrm{safe}}\), and \(V_{\mathrm{data}}\), the optimal speed \(v^{*}(\mu)\) is derived as the largest feasible speed that does not violate the three constraints, such that_ \[v^{*}(\mu)=\min\{V_{\mathrm{max}},V_{\mathrm{safe}},V_{\mathrm{data}}(\mu)\}, \tag{18}\] _The optimal BS density \(\mu^{*}\) can then be computed using the fminbnd function in MATLAB, which is based on the golden-section search (GSS) algorithm [13]. The GSS method can find the global maximum or minimum of a unimodal function, whereas it converges to a local maximum or minimum for a function containing multiple extrema [14]. GSS is a one-dimensional search that works by successively shrinking the search interval by the golden ratio, with the guarantee that the sought extremum remains within the interval. In our case, we want to determine the optimal value of \(\mu\) which maximizes \(V_{\mathrm{data}}\), so we provide the fminbnd function with the negative of \(V_{\mathrm{data}}\) as the objective function. The algorithm has a computational complexity of \(\mathcal{O}(\log n)\) [15]. If \(\mu^{*}>\mu_{\mathrm{max}}\), we set \(\mu^{*}=\mu_{\mathrm{max}}\). Also, if \(\mu^{*}\) violates \(R_{\mathrm{th}}\leq WR_{\mathrm{worst}}(\mu^{*})\), the problem becomes infeasible._
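The recipe of Lemma 4 can be sketched in a few lines of Python. In the sketch below (illustrative; parameter values follow the numerical section), \(R_{\rm worst}\) is replaced by a simplified no-interference stand-in that uses only the midpoint SNR, whereas the paper plugs in the closed form of Lemma 3, under which \(V_{\rm data}\) exhibits the interior maximum shown later in Fig. 4:

```python
import math
from statistics import NormalDist

# Sketch of Lemma 4: v* = min(V_max, V_safe, V_data(mu*)), with mu*
# found by golden-section search (GSS). r_worst is a simplified
# no-interference stand-in, NOT the paper's Lemma 3 closed form.
W, R_TH, H_D = 40e6, 60e6, 3.0
MU_MAX, V_MAX = 0.01, 30.0
MU_LN, SIGMA_LN, EPS, TAU = 0.0, 1.0, 0.01, 6e-3
GAMMA_R = 10 ** 0.2 * (3e8 / (4 * math.pi * 2.1e9)) ** 2
P_TX, ALPHA, N_R, H_BS, D_SAFE = 1.0, 3.0, 1.507e-13, 8.0, 5.0
HBAR_D = H_D / (MU_MAX * V_MAX)

def r_worst(mu):  # spectral efficiency at the cell-edge midpoint
    d_max2 = H_BS ** 2 + D_SAFE ** 2 + 1.0 / (4.0 * mu ** 2)
    snr = GAMMA_R * P_TX * d_max2 ** (-ALPHA / 2.0) / N_R
    return math.log2(1.0 + snr)

def v_data(mu):   # C2: max speed meeting the HO-aware rate threshold
    return (1.0 - R_TH / (W * r_worst(mu))) / (HBAR_D * mu)

def gss_max(f, lo, hi, tol=1e-7):
    """Golden-section search for the maximizer of a unimodal f."""
    g = (math.sqrt(5.0) - 1.0) / 2.0
    a, b = lo, hi
    c, d = b - g * (b - a), a + g * (b - a)
    while b - a > tol:
        if f(c) > f(d):
            b, d = d, c; c = b - g * (b - a)
        else:
            a, c = c, d; d = a + g * (b - a)
    return (a + b) / 2.0

v_safe = math.exp(MU_LN + SIGMA_LN * NormalDist().inv_cdf(EPS)) / TAU
mu_star = min(gss_max(v_data, 1e-4, MU_MAX), MU_MAX)
v_star = min(V_MAX, v_safe, v_data(mu_star))
q_star = v_star * math.exp((SIGMA_LN ** 2 - 2.0 * MU_LN) / 2.0)
print(mu_star, v_star, q_star)
```

Note that \(\Phi^{-1}(\epsilon)=\sqrt{2}\,\mathrm{erf}^{-1}(2\epsilon-1)\), so the standard-normal inverse CDF used above is equivalent to the inverse error function appearing in **C1**.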
It is important to note that due to the interference expression and its dependency on the BS density, there is no closed-form expression for \(\mu^{*}\). Therefore, we utilize one-dimensional numerical optimization techniques to solve for \(\mu^{*}\). Finally, by substituting (18) and \(\mu^{*}\) into the objective function of **P1**, we derive the optimal traffic flow as follows: \[Q=\min\{V_{\mathrm{max}},V_{\mathrm{safe}},V_{\mathrm{data}}(\mu^{*})\}\exp\left(\frac{\sigma_{\mathrm{LN}}^{2}-2\mu_{\mathrm{LN}}}{2}\right).\] ## V Numerical Results and Discussions In this section, we validate the accuracy of the derived expressions through computer simulations. Furthermore, we demonstrate the sensitivity of the optimal traffic flow by changing key parameters such as the crash tolerance level, data rate thresholds, interference, etc. Unless stated otherwise, the following values of the system parameters [2] are used in the figures: \(V_{\mathrm{max}}=30\) m/s, \(\mu_{\mathrm{max}}=0.01\) BSs/m, \(h_{d}=3\) s/HO, \(R_{\mathrm{th}}=60\) Mbps, \(W=40\) MHz, \(G_{R}^{\mathrm{tx}}=1\) dB, \(G_{R}^{\mathrm{rx}}=1\) dB, \(c=3\times 10^{8}\) m/s, \(f_{R}=2.1\) GHz, \(N_{R}=1.507\times 10^{-13}\) W/m\({}^{2}\), \(P_{j}^{\mathrm{tx}}=1\) W, \(\alpha=3\), \(\lambda=1\), \(\epsilon=1\%\), \(\tau=6\times 10^{-3}\) sec, \(\mu_{\mathrm{LN}}=0\), \(\sigma_{\mathrm{LN}}=1\), \(d_{\mathrm{safe}}=5\) m, \(h_{\mathrm{bs}}=8\) m, \(L_{R}=2000\) m. Fig. 2 and Fig. 3 demonstrate the outage probability (7) and the ergodic capacity (15) as a function of \(\mu\) for various CAV speeds. The analytical expressions match perfectly with the simulation results. When \(\mu\) increases, the outage increases and the rate decreases due to increasing interference and HOs. Furthermore, higher speeds result in a higher probability of outage and a lower ergodic capacity when compared to lower speeds, which is due to more frequent HOs. Fig. 4 depicts \(V_{\mathrm{data}}\) as a function of \(\mu\). As \(\mu\) increases, \(V_{\mathrm{data}}\) with \(\alpha=3\) increases up to a certain point due to the vicinity of the BSs. However, beyond that point \(V_{\mathrm{data}}\) begins to decrease due to an increase in HOs and interference, and the CAVs need to lower their speed. On the other hand, when \(\alpha=4\), the benefit of rate enhancement cannot be seen and a higher \(\mu\) only causes more interference and HOs. Furthermore, we show that different CAV processing times \(\tau\) vary \(V_{\mathrm{safe}}\), where \(v^{*}=V_{\mathrm{safe}}\) for higher \(\tau\) values when \(V_{\mathrm{safe}}<V_{\mathrm{data}}\).
Fig. 5 depicts the impact of the data rate threshold \(R_{\mathrm{th}}\) on the traffic flow \(Q\) and the optimal BS density \(\mu^{*}\) for various path-loss exponents. As seen on the top of Fig. 5, as \(R_{\mathrm{th}}\) increases, \(Q\) decreases as CAVs need to reduce their speed to limit the number of HOs to meet the required data rate. For the same reason, as \(R_{\mathrm{th}}\) increases, \(\mu^{*}\) increases. The environments with higher path-loss exponents require denser BS deployment. Note that we consider 100 points (or \(R_{\mathrm{th}}\) values) along the x-axis between \(6\times 10^{7}\) bps and \(8\times 10^{7}\) bps to compute the analytical traffic flow, which requires computing the optimal density of BSs \(\mu^{*}\) numerically using MATLAB's fminbnd function at each \(R_{\mathrm{th}}\) value. Therefore, minor fluctuations are observed due to the numerical computation of \(\mu^{*}\). Fig. 6 depicts the relationship between the crash tolerance level \(\epsilon\) and the optimum traffic flow \(Q\) for various CAV processing times. At first, \(V_{\mathrm{safe}}<V_{\mathrm{data}}<V_{\mathrm{max}}\), thus the optimal speed is bound to \(V_{\mathrm{safe}}\) as in Lemma 4, where \(V_{\mathrm{safe}}\) increases as the crash tolerance level \(\epsilon\) increases since we are relaxing the crash tolerance. After a certain \(\epsilon\) value, \(V_{\mathrm{safe}}>V_{\mathrm{data}}\). The optimal speed is then bound to \(V_{\mathrm{data}}\) as in Lemma 4, which is constant with respect to \(\epsilon\) and results in a flat saturation of the curve. For different CAV processing times \(\tau\), the point at which \(V_{\mathrm{data}}\) takes over changes. When the CAV processing time is larger, \(V_{\rm safe}\) is smaller since it takes a longer time for the CAV to process decisions compared to smaller \(\tau\) values. Finally, Fig. 7 depicts the relationship between the BS density \(\mu\) and the traffic flow \(Q\) for various BS safety distances. At first, when the BS density \(\mu\) is low, \(V_{\rm safe}<V_{\rm data}<V_{\rm max}\), so the optimal speed is bound to \(V_{\rm safe}\) as in Lemma 4. \(V_{\rm safe}\) is constant with respect to \(\mu\), which results in an initially flat curve. As \(\mu\) increases, HOs and interference increase, which lowers \(V_{\rm data}\) to the point where \(V_{\rm safe}>V_{\rm data}\) and the optimal speed is bound to \(V_{\rm data}\) as in Lemma 4. For different BS safety distances \(d_{\rm safe}\), the switching point between \(V_{\rm safe}\) and \(V_{\rm data}\) changes because the further the BS is from the CAV, the lower \(V_{\rm data}\) becomes due to weaker signal strength. Furthermore, the traffic flow without interference is better than the traffic flow with interference, as the interference deteriorates the data rate, the CAV speed, and hence the traffic flow. ## VI Conclusion This letter presents a framework for V2I communications for CAVs where we derive novel expressions for the outage probability and ergodic capacity of the HO-aware data rate, and jointly optimize the speed and network deployment in the presence of HOs between BSs and interference due to neighbouring BSs. Finally, we demonstrate the trade-off between achievable wireless data rates and traffic flow.
2309.10544
Model Leeching: An Extraction Attack Targeting LLMs
Model Leeching is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced parameter model. We demonstrate the effectiveness of our attack by extracting task capability from ChatGPT-3.5-Turbo, achieving 73% Exact Match (EM) similarity, and SQuAD EM and F1 accuracy scores of 75% and 87%, respectively, for only $50 in API cost. We further demonstrate the feasibility of adversarial attack transferability from a model extracted via Model Leeching to perform ML attack staging against a target LLM, resulting in an 11% increase to attack success rate when applied to ChatGPT-3.5-Turbo.
Lewis Birch, William Hackett, Stefan Trawicki, Neeraj Suri, Peter Garraghan
2023-09-19T11:45:29Z
http://arxiv.org/abs/2309.10544v1
# Model Leeching: An Extraction Attack Targeting LLMs ###### Abstract _Model Leeching_ is a novel extraction attack targeting Large Language Models (LLMs), capable of distilling task-specific knowledge from a target LLM into a reduced parameter model. We demonstrate the effectiveness of our attack by extracting task capability from ChatGPT-3.5-Turbo, achieving 73% Exact Match (EM) similarity, and SQuAD EM and F1 accuracy scores of 75% and 87%, respectively, for only $50 in API cost. We further demonstrate the feasibility of adversarial attack transferability from a model extracted via _Model Leeching_ to perform ML attack staging against a target LLM, resulting in an 11% increase to attack success rate when applied to ChatGPT-3.5-Turbo. ## 1 Introduction Large Language Models (LLMs) have seen rapid adoption given their proficiency in handling complex natural language processing (NLP) tasks. LLMs leverage Deep Learning (DL) algorithms to process and understand a variety of natural language tasks spanning text completion, Question & Answering, and summarization [24]. While production LLMs such as ChatGPT, BARD, and LLaMA [18][1][8] have garnered substantial attention, their uptake has also highlighted pressing concerns about their growing exposure to adversarial attacks [8]. Studies on adversarial attacks against LLMs are limited, with an urgent need to investigate the risks of data leakage, model stealing (extraction), and attack transferability across models [3][31]. In this paper we propose _Model Leeching_, an extraction attack against LLMs capable of creating an extracted model via distilling task knowledge from a target LLM. Our attack is performed by designing an automated prompt generation system [12] targeting specific tasks within LLMs. The prompt system is used to create an extracted model by extracting and copying task-specific data characteristics from a target model [28]. The _Model Leeching_ attack is applicable to any LLM with a public API endpoint, and can be successfully achieved at minimal economic cost. Moreover, we demonstrate how _Model Leeching_ can be exploited to perform ML attack staging onto other LLMs (including the original target LLM). Our contributions are: * We propose the _Model Leeching_ attack method, and demonstrate its effectiveness against LLMs via experimentation using an extraction attack framework [9]. Targeting the ChatGPT-3.5-Turbo model, we distill its characteristics upon a question & answering (QA) dataset (SQuAD) into a Roberta-Large base model. Our findings demonstrate that a large QA dataset can be successfully labelled and leveraged to create an extracted model with 73% EM similarity to ChatGPT-3.5-Turbo, and achieve SQuAD EM and F1 accuracy scores of 75% and 87%, respectively, at $50 cost. * We study the capability to exploit an extracted model derived from _Model Leeching_ to perform further ML attack staging upon a production LLM. Our results show that a language attack [11] optimized for an extracted model can be successfully transferred to ChatGPT-3.5-Turbo with an 11% attack success increase. Our results highlight evidence of adversarial attack transferability between user-created models and production LLMs. ## 2 Attack Description & Threat Model ### Extraction Attacks Model extraction is the process of extracting the fundamental characteristics of a DL model [25].
An _extracted model_ is created via extracting specific characteristics (architecture, parameters, and hyper-parameters [10]) from a _target model_ of interest, which are then used to perform model recreation [16]. Once the attacker has established an extracted model, further adversarial attacks can be staged, encompassing model inversion, membership inference, leaking of private data, and model intellectual property theft [4]. ### Threat Model State-of-the-art LLMs leveraging the transformer architecture [26] typically comprise hundreds of billions of parameters [30]. Using the established taxonomy of adversaries against DL models [20], our proposed attacks assume a weak adversary capable of providing model input via an LLM API endpoint, and observing, as model output, the generated text from a target LLM. The adversary has no knowledge of the target architecture or the training data used to construct the underlying LLM parameters. Note that the threat model assumptions pertaining to potential rate limiting, or limited access to the target API, can be relaxed due to the ability to distribute data generation across multiple API keys. ## 3 Model Leeching Attack Design _Model Leeching_ is a black-box adversarial attack which seeks to create an extracted copy of the target LLM within a specific task. The attack comprises a four-phase approach, as shown in Figure 1: (1) prompt design for crafting prompts to attain task-specific LLM responses; (2) data generation for extracting model characteristics; (3) extracted model training for model recreation; and (4) ML attack staging against a target LLM. ### Prompt Design Performing _Model Leeching_ successfully requires correct prompt design. Adversaries must design well-structured prompts that accurately define the relevancy and depth of the necessary generated responses in order to identify task-specific knowledge of interest. Depending on the use case, prompt design is achieved manually or through automated methods [28]. Model Leeching leverages the following three-stage prompt design process: 1. **Knowledge Discovery.** An adversary first defines the type of task knowledge to extract. Once defined, an adversary assesses specific target LLM prompt responses to ascertain its affinity to generate task knowledge. This assessment encompasses domain (NLP, image, audio, etc.), response patterns, comprehension limitations, and instruction adherence for particular knowledge domains [7, 29, 15]. Following successful completion of this assessment, the adversary is able to devise an effective strategy to extract the desired characteristics. 2. **Construction.** Subsequently, the adversary crafts a prompt template that integrates an instruction set reflecting the strategy formulated during the knowledge discovery stage. Template design encompasses the distinctive response structure of the target LLM, its recognized limitations, and the task-specific knowledge identified for extraction. This template facilitates dynamic prompt generation within the Model Leeching process. 3. **Validation.** The adversary validates the created prompt and the response generated from the target LLM. Validation entails ensuring that the LLM responds reliably to prompts, reflected in a consistent response structure and the ability to carry out the given instructions, and ensuring that the target LLM is capable enough to carry out the required task, i.e., that it can process and act upon its given instructions.
This validation activity enables the Model Leeching method to generate responses that can be used to effectively train local models with extracted task-specific knowledge. The prompt design process follows an iterative approach, typically requiring multiple variations and refinements to devise the most effective instructions and styles for obtaining desired results from a specific LLM for a given task [29]. ### Data Generation Once a suitable prompt has been designed, the adversary targets the given LLM (\(M_{target}\)). This refined prompt is specified to capture the desired LLM purpose and task (e.g. Summarization, Chat, Question & Answers, etc.) to be instilled within the extracted model [27]. Given a ground truth dataset (\(D_{truth}\)), all examples are processed into prompts recognized as valid target LLM inputs. Once all queries have been processed by the target LLM, we generate an adversarial dataset (\(D_{adv}\)) combining the inputs with the received LLM replies, after automated validation (removing API request errors and failed or erroneous prompts). This process can be distributed and parallelised to minimize collection time as well as mitigate the impact of rate-limiting and/or detection by filtering systems when interacting with the web-based LLM API [5]. ### Extracted Model Training Using (\(D_{adv}\)), the data is split into train (\(Adv_{train}\)) and evaluation (\(Adv_{eval}\)) sets used for extracted model training and attack success evaluation. A pre-trained or empty base model (\(M_{base}\)) is selected for distilling knowledge from the target LLM. This base model is then trained upon (\(Adv_{train}\)) with selected hyper-parameters, producing an extracted model (\(M_{extracted}\)). Using the evaluation set (\(Adv_{eval}\)), the similarity and accuracy in a given task can be evaluated and compared using answers generated by (\(M_{extracted}\)) and (\(M_{target}\)). ### ML Attack Staging Access to an extracted model (local to an adversary) created from a target LLM facilitates the execution of augmented adversarial attacks. This extracted model allows an adversary to perform unrestricted model querying to test, modify or tailor adversarial attack(s) to discover exploits and vulnerabilities against a target LLM [11]. Furthermore, access to an extracted model enables an adversary to operate in a sandbox environment to conduct adversarial attacks prior to executing the same attack(s) against the target LLM in production (and, of particular concern, whilst minimizing the likelihood of detection by the provider). ## 4 Experimental Setup To demonstrate the effectiveness of _Model Leeching_, we created a set of extracted models using ChatGPT-3.5-Turbo as the target model, with Question & Answers as the target task. Task-specific prompts were designed and generated using the Stanford Question Answering 1.1 Dataset (SQuAD), containing 100k examples (85k to 15k evaluation split), representing a context and a set of questions and associated answers [21]. ### Prompt Construction A comprehensive array of prompts, encompassing the entirety of the SQuAD dataset, was produced. These prompts adhere to a template containing the specific SQuAD question and context, enabling ChatGPT-3.5-Turbo to efficiently process and respond to the given task. As seen in Figure 2, each rule instructs the target LLM to produce an output desired by the adversary, ensuring effective capture of task-specific knowledge. The template comprises:
1. The target LLM is specifically directed to provide only the precise answer to the assigned SQuAD question, drawn solely from the provided SQuAD context. This stipulation is crucial due to the inherent tendency of general chat-style LLMs (such as ChatGPT-3.5-Turbo) to produce more verbose responses than necessary. In the scope of SQuAD score assessment, only the exact answer is pertinent, negating the need for any additional content. 2. By including the sentence where the answer occurred, the LLM is required to demonstrate a degree of contextual comprehension beyond simple fact extraction, for valid data generation that contains the correct task knowledge. This requirement ensures that the model is not limited to identifying keywords, but understands the broader semantic structure of the text. In the case of assessing model performance on ChatGPT-3.5-Turbo, the index at which an answer is found within the context is required. 3. Use of a standardized JSON format for responses facilitates efficient and uniform data handling. The keys _answer_ and _sentence_ provide a clear and concise structure, making the model output easier to process and compare algorithmically and manually. 4. The ability to respond with 'UNSURE' provides a safeguard for quality control of the model response. By acknowledging its own uncertainty, the LLM avoids disseminating potentially incorrect or misleading information, and assists in parsing prompts that it was unable to complete. Figure 1: **Overview of Model Leeching**. Deep Learning models comprising architecture, parameters and hyper-parameters can be extracted via extraction attacks. Figure 2: **Example of Prompt Template**. Slots for SQuAD context and questions, with a set of instructions for the LLM to follow. ### Model Base Architectures To evaluate the effectiveness of Model Leeching, we selected three different base model architectures and several variants (with model parameter sizes ranging from 14 to 123 million) to create an extracted model of our target LLM. These six model architectures, variants of Bert [6], Albert [13], and Roberta [14], were selected due to their parameter size and respective performance upon our selected task [14]. The intention of selecting these architectures as candidate extracted models is to evaluate whether: 1) more sophisticated models (parameters, architecture) are more effective at learning target LLM characteristics; and 2) low parameter models (i.e. 100x smaller vs. ChatGPT-3.5-Turbo) can learn sufficient characteristics from a target LLM, while achieving comparable performance in a specific task. Using these candidate model architectures, we train two sets of models for evaluation: 1) extracted models, trained upon the generated \(Adv_{train}\) dataset; and 2) baseline models for performance comparison, trained directly upon the ground-truth SQuAD dataset. ### ML Attack Staging We created and deployed an adversarial attack derived from AddSent [11] that generates an adversarial context by adding non-factual yet semantically and syntactically correct sentences to the original context from a SQuAD entry (Figure 3). The goal of this attack is to cause a QA model to incorrectly answer a question when given an adversarial context. We further modified this attack to generate a larger variety of adversarial contexts, selectively chosen based on their success upon our extracted model, which are then sent to the target LLM for improved misclassification likelihood.
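To make the template rules of Section 4.1 concrete, the following Python sketch assembles a SQuAD entry into a prompt following the four rules above and validates the reply; the wording of the instructions is our illustration, not the verbatim prompt used in the attack:

```python
import json

# Illustrative prompt builder following the four template rules of
# Section 4.1 (exact answer only, supporting sentence, JSON output,
# 'UNSURE' fallback). The instruction wording is an assumption.
TEMPLATE = (
    "Answer the question using only the context below.\n"
    "Rules:\n"
    "1. Reply with the exact answer span from the context, nothing else.\n"
    "2. Also return the full sentence in which the answer occurs.\n"
    '3. Respond strictly as JSON: {{"answer": ..., "sentence": ...}}.\n'
    "4. If you cannot answer from the context, respond with 'UNSURE'.\n\n"
    "Context: {context}\n"
    "Question: {question}\n"
)

def build_prompt(context: str, question: str) -> str:
    return TEMPLATE.format(context=context, question=question)

def parse_reply(reply: str):
    """Automated validation: drop 'UNSURE' and malformed replies."""
    if reply.strip() == "UNSURE":
        return None
    try:
        obj = json.loads(reply)
        return obj["answer"], obj["sentence"]
    except (json.JSONDecodeError, KeyError, TypeError):
        return None  # erroneous reply, excluded from D_adv

print(build_prompt("Paris is the capital of France.",
                   "What is the capital of France?"))
```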
### Model Leeching Scenario We demonstrate the effectiveness of _Model Leeching_ by targeting ChatGPT-3.5-Turbo with a pre-trained Roberta-Large base architecture [14]. Using SQuAD as described in Section 4.1, we generate a new labelled adversarial dataset through automated prompt generation querying ChatGPT-3.5-Turbo, which is then used to train the base architecture, producing an extracted model. We evaluate attack performance by comparing the extracted model's performance to a baseline model directly trained on SQuAD with ground-truth answers. We demonstrate the feasibility of attack transferability across models by applying the AddSent attack [11] upon the extracted model, generating adversarial perturbations that can be further staged upon the target LLM, in order to explore the feasibility of transferring adversarial vulnerabilities across models. We leverage three metrics for evaluation: Exact Match (EM) and F1 score, used to measure the performance/similarity of our extracted model and ChatGPT-3.5-Turbo [21], and attack success rate, representing the fraction of successful adversarial prompts in further attack staging. ## 5 Results ### Data Generation From 100k examples of contexts, questions and answers within SQuAD, 83,335 total usable examples were collected, with 16,665 failing due either to API request errors or erroneous replies, corresponding to a 16.66% error rate when labelling through ChatGPT-3.5-Turbo. From these 83,335 examples, 76,130 can be used for further extracted model training (\(Adv_{train}\)), and 7,205 for evaluation (\(Adv_{eval}\)). Querying took 48 hours and cost $50 in API requests. Figure 4: **Model Similarity to ChatGPT-3.5-Turbo.** Comparing similarity in correct and incorrect answering of questions relative to ChatGPT-3.5-Turbo. Figure 3: **Example of AddSent Attack.** Adversarial sentences appended to SQuAD context (blue highlighted text) to yield incorrect answers for SQuAD questions. ### Extraction Similarity Figure 4 shows that each extracted model performed more similarly to ChatGPT-3.5-Turbo than its baseline counterpart, with each model's EM and F1 similarity scores being up to 10.49% and 5% higher, respectively. Roberta Large achieved the highest ChatGPT-3.5-Turbo similarity, with a 0.73 EM and 0.87 F1 score, denoting high similarity to the target LLM [17]. The similarity of the baseline models to ChatGPT-3.5-Turbo is lower than that of the extracted models, since the baselines were trained using the original SQuAD dataset, whereas the extracted models used a dataset derived from ChatGPT-3.5-Turbo. ### Task Performance Extracted model task performance was evaluated by comparing the SQuAD EM and F1 scores to the baseline models and ChatGPT-3.5-Turbo. Figure 5 shows that the extracted models exhibit similar performance on SQuAD when compared with their respective baselines, in terms of EM and F1 scores. Evaluating our extracted models against ChatGPT-3.5-Turbo, we observed that Roberta Large came closest to ChatGPT-3.5-Turbo's performance, achieving an EM/F1 score of 0.75/0.87 compared to 0.74/0.87, respectively. The performance of the models extracted from ChatGPT-3.5-Turbo is comparable to state-of-the-art literature on QA tasks, where, with the hyperparameters used, Roberta Large is more performant than the other architectures [14]. ### ML Attack Staging Roberta Large was used to evaluate the attack success of AddSent upon the extracted model and ChatGPT-3.5-Turbo, given its high SQuAD accuracy and similarity.
AddSent exhibited an attack success rate of 0.28 and 0.26 upon the extracted model and ChatGPT-3.5-Turbo, respectively. Leveraging access to our extracted model, we selected and sent the best-performing 7,205 adversarial examples to ChatGPT-3.5-Turbo. Our results indicate that adversarial examples augmented by AddSent increased attack success by 26% for the extracted model, and by 11% for ChatGPT-3.5-Turbo (Figure 6). Attack effectiveness is reduced across models because ChatGPT-3.5-Turbo is 100x larger in parameter size than our local models and leverages advanced training methods, such as reinforcement learning from human feedback, that were not used on our local models. ChatGPT-3.5-Turbo is therefore more task-capable and less likely to be evaded by adversarial prompts than a local model. However, despite this increased adversarial robustness, our results highlight that attack transferability exists between an extracted model and its target, demonstrating the feasibility of leveraging distilled knowledge to further stage and subsequently launch improved adversarial attacks upon a production LLM.

## 6 Discussion

### Dataset Labelling

Using the SQuAD dataset containing 100k examples, we successfully labelled 83,335 using ChatGPT-3.5-Turbo (see Section 5.1). In total, this process cost $50 and required 48 hours to complete. In comparison, using a labelling service such as Amazon SageMaker Data Labeling [2] at an estimated cost of $0.036 per example would total $3,600, demonstrating a significant reduction in cost when using generative LLMs to label datasets. We additionally note that the labelling success rate can be increased by 1) further prompt engineering and optimization to package multiple SQuAD examples into one efficient query, reducing query cost and time; and 2) re-sending failed SQuAD examples to achieve a higher number of successfully labelled examples.

Figure 5: **Baseline and Extracted SQuAD Accuracy**. Comparing the baseline and extracted models' performance on the original SQuAD dataset questions and answers.

Figure 6: **ML Attack Staging Results.** Comparing the original attack's adversarial effectiveness against those developed with the model extracted from ChatGPT-3.5-Turbo.

### Extraction Similarity

Extracted models derived from _Model Leeching_ demonstrate the ability to effectively learn the characteristics of the target model. As highlighted within Section 5.2, the noticeable gap between our extracted models and their baseline equivalents in EM/F1 similarity to the target demonstrates that the extracted models contain knowledge learned from the target that the baseline models do not. The extracted model responses closely align with those of ChatGPT-3.5-Turbo, exhibiting similar success and error rates in how they semantically and syntactically answer questions. This finding underscores the capacity of our model to replicate the behaviour of the target, especially on the given task.

### Distilled Knowledge Capability

Our findings showcase the possibility of not only extracting knowledge from an LLM, but also transferring this knowledge effectively to a model with significantly fewer parameters. ChatGPT-3.5-Turbo comprises 175 billion parameters, whilst our local models are 100x smaller (see Section 5.3). These smaller local models, when trained with the extracted dataset, demonstrated the ability to perform the given task effectively; a rough sketch of this fine-tuning step is given below.
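The sketch is illustrative only: the record format (keys `context`, `question`, `answer`, `answer_start`), the in-memory list `adv_train`, and the hyperparameters are our assumptions rather than the paper's exact pipeline, and it presumes the HuggingFace `transformers` and `datasets` libraries.

```python
# Hedged sketch of the distillation step: fine-tuning a small extractive QA
# model on leeched (context, question, answer) records.
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments, default_data_collator)
from datasets import Dataset

tok = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForQuestionAnswering.from_pretrained("roberta-large")

# `adv_train` stands in for the leeched dataset; one illustrative record shown.
adv_train = [{"context": "Normandy is a region in France.",
              "question": "Where is Normandy?",
              "answer": "France",
              "answer_start": 24}]

def encode(ex):
    # Tokenize the (question, context) pair and align the answer's character
    # span (as reported by the target LLM) to token start/end positions.
    enc = tok(ex["question"], ex["context"], truncation="only_second",
              max_length=384, padding="max_length", return_offsets_mapping=True)
    start_char = ex["answer_start"]
    end_char = start_char + len(ex["answer"])
    start_pos = end_pos = 0  # fall back to index 0 if the span was truncated away
    for i, (off, sid) in enumerate(zip(enc["offset_mapping"], enc.sequence_ids())):
        if sid != 1:  # only consider context tokens
            continue
        if off[0] <= start_char < off[1]:
            start_pos = i
        if off[0] < end_char <= off[1]:
            end_pos = i
    enc["start_positions"] = start_pos
    enc["end_positions"] = end_pos
    enc.pop("offset_mapping")
    return enc

train = Dataset.from_list(adv_train).map(
    encode, remove_columns=["context", "question", "answer", "answer_start"])
trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="leech-out",
                                         num_train_epochs=2,
                                         per_device_train_batch_size=8,
                                         learning_rate=1.5e-5),
                  train_dataset=train,
                  data_collator=default_data_collator)
trainer.train()
```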
Comparing our extracted model performance on SQuAD to ChatGPT-3.5-Turbo, we observed at worst a 13.2%/12.04% EM/F1 score difference, with our best-performing extracted model, Roberta Large, achieving near-identical SQuAD scores to ChatGPT-3.5-Turbo.

### ML Attack Staging

As demonstrated within Section 5.4, it is feasible to utilize an extracted model within an adversary's local environment to conduct further adversarial attack staging. Unfettered query access to this extracted model facilitates the enhancement of attack success. The potency of the AddSent attack on the model extracted by Model Leeching was increased by 26%, which consequently led to an 11% increase when launched against ChatGPT-3.5-Turbo. This highlights the vulnerability of a target LLM to subsequent machine learning attacks once adversaries acquire an extracted model. By having access to this 'sandbox' model, adversaries can refine or innovate their attack strategies. Consequently, LLMs deployed and served over publicly accessible APIs are at significant risk of further attack staging.

## 7 Further Work

### Analysis of Additional Production LLMs

Further work includes conducting _Model Leeching_ against a larger array of LLMs, such as BARD, LLaMA, and the available variants of GPT models from OpenAI, exploring how these models respond to _Model Leeching_ and how vulnerable they are to follow-up attacks. Such a study would demonstrate the possibility of generating ensemble models that inherit characteristics from multiple target LLMs. Optimizing a local model with task-specific performance drawn from the best-performing target would aim to maximise the local model's capability.

### Extraction By Proxy

Multiple open-source versions of popular LLMs have been produced by the ML community. This includes examples such as GPT4All [19] and Llama [24] that can be deployed on consumer-grade devices. These models typically leverage the training sets, architectures and prompts used to develop the LLM they are aiming to extract and replicate. If these models share significant characteristics with the original LLM, it may be feasible for an adversary to conduct _Model Leeching_ against such a proxy and then deploy an improved attack against a target LLM it never interacted with before attack deployment.

### LLM Defenses

There has been limited work on defending against attacks on LLMs. Previous research into defending against model extraction attacks on smaller NLP models has explored techniques such as Membership Classification [22] and Model Watermarking [23]. However, given the rapid development of new state-of-the-art adversarial attacks against LLMs, it is important that the effectiveness of currently proposed defense techniques in the literature is evaluated against newer LLMs, exploring whether the characteristics of applied defense techniques are captured within the knowledge extracted from the target model and remain detectable within a distilled extracted model.

## 8 Conclusion

In this paper we have proposed a new state-of-the-art extraction attack, _Model Leeching_, as a cost-effective means to generate an extracted model with characteristics shared with a target LLM. Furthermore, we demonstrated that it is feasible to conduct adversarial attack staging against a production LLM by interrogating an extracted model derived from the target LLM within a sandbox environment.
Our findings suggest that extracted models can be derived with high similarity and task accuracy at low query cost, and that they constitute the basis of attack transferability, enabling further successful adversarial attacks utilizing data leaked from the target LLM.
2309.11882
Almost splitting and quantitative stratification for super Ricci flow
The aim of this paper is to study almost rigidity properties of super Ricci flow whose Muller quantity is non-negative. We conclude almost splitting and quantitative stratification theorems that have been established by Bamler for Ricci flow. As a byproduct, we obtain an almost constancy for a certain integral quantity concerning scalar curvature at an almost selfsimilar point, which is new even for Ricci flow.
Keita Kunikawa, Yohei Sakurai
2023-09-21T08:32:05Z
http://arxiv.org/abs/2309.11882v3
# Almost splitting and quantitative stratification for super Ricci flow

###### Abstract.

We study almost rigidity properties of super Ricci flow whose Muller quantity is non-negative. We generalize almost splitting and quantitative stratification theorems for Ricci flow established by Bamler [4].

Key words and phrases: Super Ricci flow; Muller quantity; Almost splitting; Quantitative stratification

2020 Mathematics Subject Classification: Primary 53E20; Secondary 58J35

## 1. Introduction

Let \((M,g(t))_{t\in I}\) be a compact manifold equipped with a time-dependent Riemannian metric. Such a time-dependent manifold is called _Ricci flow_ when \[\partial_{t}g=-2\operatorname{Ric},\] which has been introduced by Hamilton [17]. It is well-known that Perelman [30] has vastly developed the Ricci flow theory in the three dimensional case, and utilized it for the resolution of the Poincare and geometrization conjectures. A supersolution to the Ricci flow is called _super Ricci flow_; namely, \((M,g(t))_{t\in I}\) is called super Ricci flow when \[\partial_{t}g\geq-2\operatorname{Ric},\] which has been introduced by McCann-Topping [27] in view of the relation between Ricci flow and mass transport. Super Ricci flow can be viewed as an interpolation between Ricci flow and (static) manifolds of non-negative Ricci curvature. Actually, it has been examined in the literature of the geometry of metric measure spaces with a lower Ricci curvature bound developed by Sturm [31], [32], Lott-Villani [26], Ambrosio-Gigli-Savare [1], Gigli [15] and so on (see the work of Sturm [33], and also subsequent works [21], [22], [23]). In a series of works by Bamler [2], [3], [4], he has proposed a new method for dealing with higher dimensional Ricci flow, recovering Perelman's three dimensional results. In [2], [3], he has developed geometric analysis and a compactness theory for super Ricci flow. In [4], he has established a structure theory for non-collapsed limits of Ricci flow. One of the key ingredients in [4] was to investigate almost rigidity phenomena concerning the monotonicity of the Nash entropy, which is similar to the strategy of the structure theory for Gromov-Hausdorff limits of manifolds with a lower Ricci curvature bound due to Cheeger-Colding [7], [8], [9], [10], Colding-Naber [12] and so on. We aim to extend the structure theory for Ricci flow in [4] to super Ricci flow. For that purpose, in this paper, we generalize almost splitting and quantitative stratification theorems obtained in [4, Part 2] for super Ricci flow whose Muller quantity is non-negative. We recall that for a vector field \(V\), the _Muller quantity_ is defined as follows (see [28, Definition 1.3]): \[\mathcal{D}(V):=\partial_{t}H-\Delta H-2|h|^{2}+4\operatorname{div}h(V)-2\langle\nabla H,V\rangle+2\operatorname{Ric}(V,V)-2h(V,V),\] where \[h:=-\frac{1}{2}\partial_{t}g,\quad H:=\operatorname{tr}h.\] The non-negativity of the Muller quantity brings various benefits such as a lower bound of the (generalized) scalar curvature, the Harnack estimate and the monotonicity of entropies (see Lemmas 2.1, 4.1 and Theorem 2.5 below). There are several examples of super Ricci flow whose Muller quantity is non-negative (see [28, Section 2], [13, Section 7]):

1. Ricci flow (for which \(\mathcal{D}\) vanishes identically; see the computation below);
2. Ricci flow coupled with the heat equation, called _List flow_ ([25]);
3. Ricci flow coupled with the harmonic map heat flow, called _Muller flow_ ([29]);
4. mean curvature flow for spacelike hypersurfaces in a Lorentzian manifold of non-negative sectional curvature;
5. (scaled) twisted Kahler-Ricci flow.
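We record, for the reader's convenience, the standard one-line check behind example (1); this computation is classical and not specific to the present paper. For Ricci flow one has \(h=\operatorname{Ric}\) and \(H=R\), so the evolution equation \(\partial_{t}R=\Delta R+2|\operatorname{Ric}|^{2}\) and the contracted Bianchi identity \(\operatorname{div}\operatorname{Ric}=\frac{1}{2}\nabla R\) give, for every vector field \(V\), \[\mathcal{D}(V)=\underbrace{\partial_{t}R-\Delta R-2|\operatorname{Ric}|^{2}}_{=0}+\underbrace{4\operatorname{div}\operatorname{Ric}(V)-2\langle\nabla R,V\rangle}_{=0}+\underbrace{2\operatorname{Ric}(V,V)-2h(V,V)}_{=0}=0;\] in particular, Ricci flow satisfies \(\mathcal{D}\geq 0\) with equality.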
In [24], the authors have extended some results of Bamler-Zhang [5] concerning geometric analysis for Ricci flow under a scalar curvature bound to this object. This paper is organized as follows: In Sections 2, 3, 4, 5, 6, we collect various estimates for the proof of our main theorems. In Section 2, we recall basics of the heat kernel. In Section 3, we review previous results for super Ricci flow obtained in [2]. In Section 4, we study properties of the Nash entropy. In Section 5, we prove some volume and heat kernel estimates. In Section 6, we obtain several estimates under a non-collapsed condition for the Nash entropy. Based on these estimates, in Section 7, we investigate properties of almost selfsimilar points. Sections 8, 9, 10 are devoted to the proof of our main theorems. In Section 8, we show an almost static cone splitting theorem (see Theorem 8.1). In Section 9, we prove an almost splitting theorem (see Theorem 9.1). In Section 10, we conclude a quantitative stratification theorem (see Theorem 10.5). In most parts, we give a proof along the lines of the corresponding argument in [4], paying attention to the appearance of the Muller quantity. On the other hand, in the proof of an almost monotonicity result for a certain integral quantity (Proposition 7.5), we refine the calculation in [4] in order to avoid the complexity. Thanks to this refinement, we obtain a new insight for Ricci flow such that the almost monotonicity can be improved, and an almost constancy holds in the special case (see Corollary 7.6).

## 2. Preliminaries

In this section, we recall some basic notions. In what follows, let \((M,g(t))_{t\in I}\) be an \(n\)-dimensional compact manifold with a time-dependent metric.

### Convention and notation

For positive constants \(a,b,c,d,\dots\), we denote by \(C_{a,b,c,d,\dots}\) positive constants depending only on \(a,b,c,d,\dots\) and the dimension \(n\) without distinction, where we stress that the dependence on the dimension \(n\) is omitted. When we specify constants, we put numbers such as \(C_{1,a,b,c,d,\dots},C_{2,a,b,c,d,\dots}\). When we assert that "there exists \(\overline{C}_{a,b,c,d,\dots}\) such that if \(C\leq\overline{C}_{a,b,c,d,\dots}\)", we abbreviate it as "if \(C\leq\overline{C}_{a,b,c,d,\dots}\)" for short. In the same manner, for a lower bound, we say "if \(C\geq\underline{C}_{a,b,c,d,\dots}\)" instead of "if \(C\leq\overline{C}_{a,b,c,d,\dots}\)". Let \(d_{t}\) and \(m_{t}\) be the Riemannian distance and Riemannian volume measure with respect to \(g(t)\), respectively. For \(x\in M,t\in I\) and \(r>0\), let \(B(x,t,r)\) denote the open ball of radius \(r\) centered at \(x\) with respect to \(g(t)\). Note that \(\partial_{t}(dm_{t})=-H\,dm_{t}\). The non-negativity of the Muller quantity (i.e., \(\mathcal{D}(V)\geq 0\) for all vector fields \(V\)) is expressed by \(\mathcal{D}\geq 0\). At the beginning of the proof of each statement, we carry out the parallel translation for time and the parabolic rescaling. Recall that for \(\mathrm{r}>0\), the parabolic rescaling (at time \(0\)) is given by \(\bar{g}(s):=\mathrm{r}^{-2}\,g(\mathrm{r}^{2}s)=\mathrm{r}^{-2}g(t)\). Note that the super Ricci flow and the non-negativity of the Muller quantity are preserved under the rescaling (cf. [24, Remark 2.2]). We notice the following lower bound for \(H\), which is a direct consequence of the maximum principle (see [13, Lemma 3.2]):

**Lemma 2.1** ([13]).: _Assume \(\mathcal{D}\geq 0\).
If \(H(\cdot,t_{0})\geq-\mathcal{H}\) for \(\mathcal{H}>0\), then for all \(t\in[t_{0},\infty)\cap I\),_ \[H(\cdot,t)\geq-\frac{n}{2}\frac{\mathcal{H}}{(n/2)+\mathcal{H}(t-t_{0})}.\] _Remark 2.2_.: The following also holds (see [13, Lemma 3.2]): Assume \(\mathcal{D}\geq 0\). If \(t_{0}\in I\), then for all \(t\in[t_{0},\infty)\cap I\), we have \[H(\cdot,t)\geq-\frac{n}{2(t-t_{0})},\quad\min_{M}H(\cdot,t)\geq\min_{M}H( \cdot,t_{0}).\] ### Heat kernels The _heat operator_ and the _conjugate heat operator_ are defined by \[\square:=\partial_{t}-\Delta,\quad\square^{*}:=-\partial_{t}-\Delta+H.\] We recall the following fundamental formula (see e.g., [6]): For any \(u,v\in C^{\infty}(M\times I)\), \[\frac{d}{dt}\int_{M}\,uv\,dm_{t}=\int_{M}\,(\square u)v\,dm_{t}-\int_{M}\,u( \square^{*}v)\,dm_{t}. \tag{2.1}\] For \(x,y\in M\) and \(s,t\in I\) with \(s<t\), we denote by \(G(x,t;y,s)\) the _heat kernel_ (see e.g., [11], [15]); namely, for a fixed \((y,s)\in M\times I\), it solves \[(\partial_{t}-\Delta_{x})G(\cdot,\cdot;y,s)=0,\quad\lim_{t\searrow s}G(\cdot,t ;y,s)=\delta_{y}.\] Notice that \(G(x,t;\cdot,\cdot)\) is the kernel for the conjugate heat equation; namely, for any \((x,t)\in M\times I\), \[(-\partial_{s}-\Delta_{y}+H)G(x,t;\cdot,\cdot)=0,\quad\lim_{s\nearrow t}G(x,t ;\cdot,s)=\delta_{x}.\] The semigroup property \[G(x,t;y,s)=\int_{M}\,G(x,t;\cdot,\mathfrak{t})G(\cdot,\mathfrak{t};y,s)\,dm_ {\mathfrak{t}} \tag{2.2}\] holds for any \(\mathfrak{t}\in(s,t)\). For a base point \((x_{0},t_{0})\in M\times I\), the (_conjugate_) _heat kernel measure_ is defined by \[\nu_{(x_{0},t_{0})}:=\{\nu_{(x_{0},t_{0});t}\}_{t\in(-\infty,t_{0}]\cap I}, \quad\nu_{(x_{0},t_{0});t}:=G(x_{0},t_{0};\cdot,t)\,m_{t},\quad\nu_{(x_{0},t _{0});t_{0}}:=\delta_{x_{0}}\] for \(t<t_{0}\), which are probability measures. _Remark 2.3_.: We frequently set \(\nu:=\nu_{(x_{0},t_{0})}\) or \(\nu^{0}:=\nu_{(x_{0},t_{0})}\). In that case, without mentioning, we use the notation \(\nu_{t}:=\nu_{(x_{0},t_{0});t}\) or \(\nu_{t}^{0}:=\nu_{(x_{0},t_{0});t}\), respectively. We define \(\tau:=t_{0}-t\), which is called the _parameter_. The _potential_\(f\in C^{\infty}(M\times((-\infty,t_{0})\cap I))\) is determined by \[G(x_{0},t_{0};\cdot,t)=(4\pi\tau)^{-n/2}e^{-f(\cdot,t)},\] which can be written as \[f(x,t)=-\log G(x_{0},t_{0};x,t)-\frac{n}{2}\log\tau-\frac{n}{2}\log 4\pi. \tag{2.3}\] The potential enjoys \[-\partial_{t}f=\Delta f-|\nabla f|^{2}+H-\frac{n}{2\tau}. \tag{2.4}\] The following can be derived from (2.1) and integration by parts (cf. [4, Lemma 4.4]): \[\frac{d}{dt}\int_{M}\,u\,d\nu_{(x_{0},t_{0});t}=\int_{M}\,\square u\,d\nu_{(x _{0},t_{0});t},\quad\int_{M}\,\operatorname{div}(V)\,d\nu_{(x_{0},t_{0});t}= \int_{M}\,\langle V,\nabla f\rangle\,d\nu_{(x_{0},t_{0});t} \tag{2.5}\] for all \(u\in C^{\infty}(M\times I)\) and vector fields \(V\). _Remark 2.4_.: We do not always introduce the notations of the parameter and the potential when it is clear from the context. We define \[\Phi:=h+\nabla^{2}f-\frac{1}{2\tau}g,\quad w:=\tau(2\Delta f-|\nabla f|^{2}+H)+f -n. \tag{2.6}\] We possess the following Perelman type Harnack estimate (see [30], [6, Theorems 1.1, 1.2]): **Theorem 2.5** ([30], [6]).: _We set \(u:=G(x_{0},t_{0};\cdot,\cdot)\). 
Then we have_ \[\square^{*}(wu)=-2\tau\left|\Phi\right|^{2}u-\tau u\mathcal{D}(\nabla f).\] _Moreover, if \(\mathcal{D}\geq 0\), then \(w\leq 0\)._ ### Wasserstein distance Let \((X,d)\) be a complete separable metric space, and let \(\mu_{1},\mu_{2}\) be two Borel probability measures with finite first moment. We denote by \(W(\mu_{1},\mu_{2})\) their (\(L^{1}\)-)_Wasserstein distance_. The canonical formulation of the Wasserstein distance is based on a coupling method. We would rather make use of the following characterization called the _Kantorovich-Rubinstein duality_ (see e.g., [34, Theorem 5.10]): \[W(\mu_{1},\mu_{2})=\sup_{\phi}\bigg{(}\int_{X}\phi\,d\mu_{2}-\int_{X}\phi\,d \mu_{1}\bigg{)}, \tag{2.7}\] where the supremum is taken over all bounded \(1\)-Lipschitz functions \(\phi:X\to\mathbb{R}\). We also denote by \(\mathrm{Var}(\mu_{1},\mu_{2})\) the _variance_ between \(\mu_{1}\) and \(\mu_{2}\) defined as \[\mathrm{Var}(\mu_{1},\mu_{2}):=\int_{X}\int_{X}d^{2}(x_{1},x_{2})\,d\mu_{1}(x_ {1})d\mu_{2}(x_{2}).\] We recall the following (see e.g., [3, Lemma 2.8]): **Lemma 2.6** ([3]).: _For any two Borel probability measures \(\mu_{1},\mu_{2}\) on \((X,d)\), we have_ \[W(\mu_{1},\mu_{2})\leq\sqrt{\mathrm{Var}(\mu_{1},\mu_{2})}.\] ## 3. Super Ricci flow In this section, we summarize several facts on super Ricci flow. ### Wasserstein monotonicity and concentration bounds Let \(W_{t}\) and \(\mathrm{Var}_{t}\) stand for the Wasserstein distance and the variance induced from \(g(t)\), respectively. We first recall the following monotonicity property for the Wasserstein distance (see [2, Lemma 2.7]): **Proposition 3.1** ([2]).: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow. Let \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\). Then \(W_{t}(\nu_{(x_{0},t_{0});t},\nu_{(x_{1},t_{1});t})\) is non-decreasing in \(t\in(-\infty,\min\{t_{0},t_{1}\}]\cap I\)._ For \((x_{0},t_{0})\in M\times I\), a point \((z,t)\in M\times((-\infty,t_{0}]\cap I)\) is said to be a (\(\mathcal{C}_{n}\)-)_center_ if \[\mathcal{C}_{n}:=\frac{(n-1)\pi^{2}}{2}+4,\quad\mathrm{Var}_{t}(\delta_{z}, \nu_{(x_{0},t_{0});t})\leq\mathcal{C}_{n}(t_{0}-t).\] For the existence of centers, we have the following (see [2, Proposition 3.12]): **Proposition 3.2** ([2]).: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow. Let \((x_{0},t_{0})\in M\times I\). Then for every \(t\in(-\infty,t_{0}]\cap I\), there is \(z\in M\) such that \((z,t)\) is a center of \((x_{0},t_{0})\)._ We possess the following concentration bounds (see [2, Proposition 3.13, Theorem 3.14]): **Proposition 3.3** ([2]).: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow. Let \((x_{0},t_{0})\in M\times I\), and let \((z,t)\) be a center of \((x_{0},t_{0})\). Then for every \(R>1\) we have_ \[\nu_{(x_{0},t_{0});t}\left(B(z,t,\sqrt{R\mathcal{C}_{n}(t_{0}-t)})\right)\geq 1 -R^{-1}.\] **Proposition 3.4** ([2]).: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow. Let \((x_{0},t_{0})\in M\times I\), and let \((z,t)\) be a center of \((x_{0},t_{0})\). Then for all \(r>0\) we have_ \[\nu_{(x_{0},t_{0});t}\left(M\setminus B(z,t,r)\right)\leq 2\exp\left(-\frac{ \left(r-\sqrt{2\mathcal{C}_{n}(t_{0}-t)}\right)_{+}^{2}}{8(t_{0}-t)}\right).\] _Remark 3.5_.: Under the same setting as in Proposition 3.4, the following more elementary bound holds (cf. [2, Proposition 3.13]): For all \(r>0\) we have \[\nu_{(x_{0},t_{0});t}\left(M\setminus B(z,t,r)\right)\leq\frac{ \operatorname{Var}_{t}(\nu_{(x_{0},t_{0});t},\delta_{z})}{r^{2}}\leq\frac{ \mathcal{C}_{n}(t_{0}-t)}{r^{2}}. 
\tag{3.1}\] ### Analytic bounds Let \(\Psi:\mathbb{R}\to(0,1)\) be a function determined by \[\Psi^{\prime}(x)=(4\pi)^{-1/2}e^{-x^{2}/4},\quad\lim_{x\to-\infty}\Psi(x)=0, \quad\lim_{x\to\infty}\Psi(x)=1. \tag{3.2}\] For \(t>0\), let \(\Psi_{t}:\mathbb{R}\to(0,1)\) be a function defined by \(\Psi_{t}(x):=\Psi(t^{-1/2}x)\), and let \(\Psi_{t}^{-1}:(0,1)\to\mathbb{R}\) be its inverse. We have the following gradient estimate (see [2, Theorem 4.1]): **Theorem 3.6** ([2]).: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow. For \([t_{0},t_{1}]\subset I\), let \(u\in C^{\infty}(M\times[t_{0},t_{1}])\) be a solution to the heat equation. Let \(T\geq 0\). Suppose that \(u\) only takes values in \((0,1)\) and \(|\nabla(\Psi_{T}^{-1}(u(\cdot,t_{0})))|\leq 1\) if \(T>0\). Then \(|\nabla(\Psi_{T+t-t_{0}}^{-1}(u(\cdot,t)))|\leq 1\) for all \(t\in[t_{0},t_{1}]\)._ We next present the following heat kernel estimates (see [2, Proposition 4.2]): **Proposition 3.7** ([2]).: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow. Let \(x\in M\) and \([s,t]\subset I\). Then for every \(p\in[1,\infty)\) we have_ \[(t-s)^{p/2}\int_{M}\left(\frac{|\nabla_{x}G(x,t;\cdot,s)|}{G(x,t;\cdot,s)} \right)^{p}d\nu_{(x,t);s}\leq C_{p}; \tag{3.3}\] _moreover, for every Borel subset \(\Omega\subset M\) we have_ \[(t-s)^{p/2}\int_{\Omega}\left(\frac{|\nabla_{x}G(x,t;\cdot,s)|}{G(x,t;\cdot,s) }\right)^{p}d\nu_{(x,t);s}\leq C_{p}\,\nu_{(x,t);s}(\Omega)(-\log(\nu_{(x,t) ;s}(\Omega)/2))^{p/2}. \tag{3.4}\] _Also, for every \(\mathrm{v}\in T_{x}M\) with \(|\mathrm{v}|_{t}=1\),_ \[(t-s)\int_{M}\left(\frac{\partial_{\mathrm{v}}G(x,t;\cdot,s)}{G(x,t;\cdot,s) }\right)^{2}d\nu_{(x,t);s}\leq\frac{1}{2}. \tag{3.5}\] _Remark 3.8_.: In (3.3) and (3.4), one can choose \(C_{2}=n/2\). We have the following Poincare inequality (see [18, Theorem 1.5], [2, Theorem 11.1], and also [19, Theorem 1.10]): **Proposition 3.9** ([18], [2]).: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow. Let \((x_{0},t_{0})\in M\times I\). Let \(\tau>0\) satisfy \([t_{0}-\tau,t_{0}]\subset I\). Then for any \(u\in C^{1}(M)\) with \(\int_{M}u\,d\nu_{(x_{0},t_{0});t_{0}-\tau}=0\) and \(p\in[1,\infty)\),_ \[\int_{M}|u|^{p}d\nu_{(x_{0},t_{0});t_{0}-\tau}\leq C_{p}\,\tau^{p/2}\,\int_{M }|\nabla u|^{p}d\nu_{(x_{0},t_{0});t_{0}-\tau}.\] Proof.: Bamler [2] has stated this assertion only for Ricci flow, but the argument also works for super Ricci flow without any changes. For \(p=2\), the desired inequality has been obtained by Haslhofer-Naber [18] (see [18, Theorem 1.5]). For \(p=1\), by noticing the following point, one can prove the desired one along the lines of the proof of [2, Theorem 11.1]: If \(\square u=0\), then the Bochner formula and the Kato inequality imply \[\square|\nabla u|^{2}=2h(\nabla u,\nabla u)-2\operatorname{Ric}(\nabla u, \nabla u)-2|\nabla^{2}u|^{2}\leq-\frac{|\nabla|\nabla u|^{2}|^{2}}{2|\nabla u| ^{2}};\] in particular, \(\square|\nabla u|\leq 0\). For general \(p\), the desired one follows from the assertion for \(p=1\) together with the same argument of the proof of [2, Theorem 11.1]. \(\Box\) _Remark 3.10_.: In Proposition 3.9, we may choose \(C_{1}=\sqrt{\pi}\) and \(C_{2}=2\). ## 4. Nash entropy In this section, we examine basic properties of the Nash entropy. ### Entropy monotonicity Fix a base point \((x_{0},t_{0})\in M\times I\). 
For \(\tau>0\) with \([t_{0}-\tau,t_{0}]\subset I\), the _pointed Nash-entropy_ is defined by \[\mathcal{N}_{(x_{0},t_{0})}(\tau):=\int_{M}\,f\,d\nu_{(x_{0},t_{0});t_{0}-\tau}-\frac{n}{2}.\] We set \(\mathcal{N}_{(x_{0},t_{0})}(0):=0\) such that \(\mathcal{N}_{(x_{0},t_{0})}(\tau)\) is continuous in \(\tau\) (cf. [2, Proposition 5.2]). The _pointed \(\mathcal{W}\)-entropy_ is defined by \[\mathcal{W}_{(x_{0},t_{0})}(\tau):=\int_{M}\left(\tau(|\nabla f|^{2}+H)+f-n\right)d\nu_{(x_{0},t_{0});t_{0}-\tau}.\] We recall the following monotonicity properties (cf. [2, Proposition 5.2], [17, Theorem 3.1], [16, Theorem 5.2], [13, Lemma 3.1]):

**Lemma 4.1**.: _Assume \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). If \(\tau>0\) with \([t_{0}-\tau,t_{0}]\subset I\), then_ \[\frac{d}{d\tau}\left(\tau\mathcal{N}_{(x_{0},t_{0})}(\tau)\right)=\mathcal{W}_{(x_{0},t_{0})}(\tau)\leq 0, \tag{4.1}\] \[\frac{d^{2}}{d\tau^{2}}\left(\tau\mathcal{N}_{(x_{0},t_{0})}(\tau)\right)=-\tau\int_{M}\left(2\left|\Phi\right|^{2}+\mathcal{D}(\nabla f)\right)d\nu_{(x_{0},t_{0});t_{0}-\tau}\leq 0, \tag{4.2}\] \[\frac{d}{d\tau}\mathcal{N}_{(x_{0},t_{0})}(\tau)\leq 0, \tag{4.3}\] \[\mathcal{W}_{(x_{0},t_{0})}(\tau)\leq\mathcal{N}_{(x_{0},t_{0})}(\tau), \tag{4.4}\] _where \(\Phi\) is defined as (2.6)._

Proof.: The formulas (4.1) and (4.2) are well-known (see e.g., [17, Theorem 3.1], [16, Theorem 5.2], [13, Lemma 3.1]). Furthermore, (4.3) and (4.4) can be derived from the same calculation as in the proof of [2, Proposition 5.2] together with (4.1) and (4.2). \(\Box\)

We also see the following:

**Lemma 4.2**.: _Assume \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). Let \(\tau_{1},\tau_{2}>0\) satisfy \(\tau_{1}\leq\tau_{2}\) and \([t_{0}-\tau_{2},t_{0}]\subset I\). For \(\mathcal{H}>0\), we assume \(H(\cdot,t_{0}-\tau_{2})\geq-\mathcal{H}\). Then we have_ \[\mathcal{N}_{(x_{0},t_{0})}(\tau_{1})-\left((\tau_{2}-\tau_{1})\mathcal{H}+\frac{n}{2}\log\left(\frac{\tau_{2}}{\tau_{1}}\right)\right)\leq\mathcal{N}_{(x_{0},t_{0})}(\tau_{2}).\]

Proof.: From (4.1) we deduce \[\frac{d}{d\tau}\left(\tau\mathcal{N}_{(x_{0},t_{0})}(\tau)\right)=\mathcal{W}_{(x_{0},t_{0})}(\tau)=\int_{M}\tau\left(|\nabla f|^{2}+H\right)d\nu_{(x_{0},t_{0});t_{0}-\tau}+\mathcal{N}_{(x_{0},t_{0})}(\tau)-\frac{n}{2}\geq-\tau\mathcal{H}+\mathcal{N}_{(x_{0},t_{0})}(\tau)-\frac{n}{2},\] and hence \[\frac{d}{d\tau}\mathcal{N}_{(x_{0},t_{0})}(\tau)\geq-\left(\mathcal{H}+\frac{n}{2\tau}\right).\] This implies \[\mathcal{N}_{(x_{0},t_{0})}(\tau_{2})-\mathcal{N}_{(x_{0},t_{0})}(\tau_{1})\geq-\int_{\tau_{1}}^{\tau_{2}}\,\left(\mathcal{H}+\frac{n}{2\tau}\right)\,d\tau=-\left((\tau_{2}-\tau_{1})\mathcal{H}+\frac{n}{2}\log\left(\frac{\tau_{2}}{\tau_{1}}\right)\right).\] We complete the proof.

Lemma 4.1 and Proposition 3.9 yield the following (cf. [2, Proposition 5.13]):

**Lemma 4.3**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(\mathcal{H}>0\), we assume \(H(\cdot,t_{0}-\tau)\geq-\mathcal{H}\). Then we have_ \[\int_{M}\tau(|\nabla f|^{2}+H)d\nu_{(x_{0},t_{0});t_{0}-\tau}\leq\frac{n}{2}, \tag{4.5}\] \[\int_{M}\left(f-\mathcal{N}_{(x_{0},t_{0})}(\tau)-\frac{n}{2}\right)^{2}d\nu_{(x_{0},t_{0});t_{0}-\tau}\leq n+2\mathcal{H}\tau.
\tag{4.6}\]

Proof.: By the same calculation as in the proof of [2, Proposition 5.13], and by (4.1), (4.3), \[\int_{M}\tau(|\nabla f|^{2}+H)d\nu_{(x_{0},t_{0});t_{0}-\tau}=\frac{n}{2}+\tau\frac{d}{d\tau}\mathcal{N}_{(x_{0},t_{0})}(\tau)\leq\frac{n}{2},\] which proves (4.5). Proposition 3.9 with \(p=2\) together with \[\int_{M}\left(f-\mathcal{N}_{(x_{0},t_{0})}(\tau)-\frac{n}{2}\right)d\nu_{(x_{0},t_{0});t_{0}-\tau}=0\] and \[2\tau\int_{M}|\nabla f|^{2}d\nu_{(x_{0},t_{0});t_{0}-\tau}\leq n-2\tau\int_{M}H\,d\nu_{(x_{0},t_{0});t_{0}-\tau}\leq n+2\mathcal{H}\tau\] leads us to (4.6) (see also Remark 3.10). We arrive at the desired estimates.

### Derivative estimates

For a fixed \(s\in I\), we define \[\mathcal{N}_{s}(x,t):=\mathcal{N}_{(x,t)}(t-s)\] on \(M\times((s,\infty)\cap I)\). We have the following derivative estimates (cf. [2, Theorem 5.9]):

**Lemma 4.4**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \(s\in I\). For \(\mathcal{H}>0\), we assume \(H(\cdot,s)\geq-\mathcal{H}\). Then on \(M\times((s,\infty)\cap I)\) we have_ \[|\nabla\mathcal{N}_{s}|\leq\left(\frac{n}{2(t-s)}+\mathcal{H}\right)^{1/2}, \tag{4.7}\] \[-\frac{n}{2(t-s)}\leq\square\mathcal{N}_{s}\leq 0. \tag{4.8}\]

Proof.: We may assume \(s=0\). Fix a base point \((x,t)\in M\times((0,\infty)\cap I)\). For \(\mathrm{v}\in T_{x}M\) with \(|\mathrm{v}|_{t}=1\), the same calculation as in the proof of [2, Theorem 5.9] tells us that \[\partial_{\mathrm{v}}\mathcal{N}_{0}(x,t)=\int_{M}\left(\frac{\partial_{\mathrm{v}}G(x,t;y,0)}{G(x,t;y,0)}\right)\left(f(y,0)-\mathcal{N}_{0}(x,t)-\frac{n}{2}\right)d\nu_{(x,t);0}(y)\leq\left(\int_{M}\left(\frac{\partial_{\mathrm{v}}G(x,t;y,0)}{G(x,t;y,0)}\right)^{2}d\nu_{(x,t);0}(y)\right)^{1/2}\left(\int_{M}\left(f-\mathcal{N}_{0}(x,t)-\frac{n}{2}\right)^{2}d\nu_{(x,t);0}(y)\right)^{1/2};\] in particular, (3.5) and (4.6) imply \[|\nabla\mathcal{N}_{0}|^{2}(x,t)\leq\frac{1}{2t}(n+2\mathcal{H}t),\] which is the desired gradient estimate (4.7) (see also Remark 2.2). Similarly, we possess \[\square\mathcal{N}_{0}(x,t)=\int_{M}\left(\frac{|\nabla_{x}G(x,t;y,0)|}{G(x,t;y,0)}\right)^{2}d\nu_{(x,t);0}(y)-\frac{n}{2t},\] and (3.3) with \(p=2\) leads us to (4.8) (see also Remark 3.8). \(\Box\)

We conclude the following (cf. [2, Corollary 5.11]):

**Lemma 4.5**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). For a fixed \(s\in I\), let \((x_{1},t_{1}),(x_{2},t_{2})\in M\times((s,\infty)\cap I)\). For \(\mathcal{H}>0\), we assume \(H(\cdot,s)\geq-\mathcal{H}\). Then for every \(t\in(s,\min\{t_{1},t_{2}\}]\) we have_ \[\mathcal{N}_{s}(x_{1},t_{1})-\mathcal{N}_{s}(x_{2},t_{2})\leq\left(\frac{n}{2(t-s)}+\mathcal{H}\right)^{1/2}W_{t}(\nu_{(x_{1},t_{1});t},\nu_{(x_{2},t_{2});t})+\frac{n}{2}\log\left(\frac{t_{2}-s}{t-s}\right).\]

Proof.: We may assume \(s=0\). For \(i=1,2\), we set \(\nu^{i}:=\nu_{(x_{i},t_{i})}\). In virtue of (4.8), we see \[\mathcal{N}_{0}(x_{i},t_{i})\leq\int_{M}\mathcal{N}_{0}(\cdot,t)d\nu^{i}_{t}\leq\mathcal{N}_{0}(x_{i},t_{i})+\frac{n}{2}\log\left(\frac{t_{i}}{t}\right). \tag{4.9}\] On the other hand, due to (4.7) and (2.7), \[\left|\int_{M}\mathcal{N}_{0}(\cdot,t)d\nu^{2}_{t}-\int_{M}\mathcal{N}_{0}(\cdot,t)d\nu^{1}_{t}\right|\leq\left(\frac{n}{2t}+\mathcal{H}\right)^{1/2}W_{t}(\nu^{1}_{t},\nu^{2}_{t}). \tag{4.10}\] The desired estimate follows from combining (4.9) and (4.10).
\(\Box\) We will use Lemma 4.5 in the following form: **Lemma 4.6**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). For a fixed \(s\in I\), let \((x_{0},t_{0})\in M\times((s,\infty)\cap I)\). For \(t\in(s,t_{0}]\), we assume that \((z,t)\) is a center of \((x_{0},t_{0})\). For \(\mathcal{H}>0\), we further assume \(H(\cdot,s)\geq-\mathcal{H}\). Then on \(M\), we have_ \[-\mathcal{N}_{s}(\cdot,t)\leq-\mathcal{N}_{s}(x_{0},t_{0})+\left(\frac{n}{2(t -s)}+\mathcal{H}\right)^{1/2}\sqrt{\mathcal{C}_{n}(t_{0}-t)}+\left(\frac{n}{2 (t-s)}+\mathcal{H}\right)^{1/2}d_{t}(z,\cdot).\] Proof.: We may assume \(s=0\). Lemmas 2.6 and 4.5 yield \[\mathcal{N}_{0}(x_{0},t_{0})-\mathcal{N}_{0}(z,t) \leq\left(\frac{n}{2t}+\mathcal{H}\right)^{1/2}W_{t}(\delta_{z},\nu_{(x_{0},t_{0});t})\] \[\leq\left(\frac{n}{2t}+\mathcal{H}\right)^{1/2}\sqrt{\operatorname {Var}(\delta_{z},\nu_{(x_{0},t_{0});t})}\leq\left(\frac{n}{2t}+\mathcal{H} \right)^{1/2}\sqrt{\mathcal{C}_{n}(t_{0}-t)}. \tag{4.11}\] Further, (4.7) and (4.11) lead us to \[-\mathcal{N}_{0}(\cdot,t) \leq-\mathcal{N}_{0}(z,t)+\left(\frac{n}{2t}+\mathcal{H}\right)^{ 1/2}d_{t}(z,\cdot)\] \[\leq-\mathcal{N}_{0}(x_{0},t_{0})+\left(\frac{n}{2t}+\mathcal{H} \right)^{1/2}\sqrt{\mathcal{C}_{n}(t_{0}-t)}+\left(\frac{n}{2t}+\mathcal{H} \right)^{1/2}d_{t}(z,\cdot)\] on \(M\). This proves the lemma. \(\Box\) We also use Lemma 4.5 in the following form: **Lemma 4.7**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). For a fixed \(s\in I\), let \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\) with \(t_{0}\leq t_{1}\). For \(r>0,\alpha\in(0,1)\), we assume \([t_{0}-2\alpha^{-1}r^{2},t_{0}]\subset I\) and \(0\leq t_{1}-t_{0}\leq\alpha^{-1}r^{2}\). For \(\mathcal{H}>0\), we assume \(H(\cdot,t_{0}-2\alpha^{-1}r^{2})\geq-\mathcal{H}r^{-2}\). For \(D>0\), we assume \(W_{s_{0}}(\nu_{(x_{0},t_{0});s_{0}},\nu_{(x_{1},t_{1});s_{0}})\leq Dr\) for some \(s_{0}\in[t_{0}-\alpha^{-1}r^{2},t_{0}-\alpha r^{2}]\). Then we have_ \[\mathcal{N}_{(x_{1},t_{1})}(r^{2})\geq\mathcal{N}_{(x_{0},t_{0})}(r^{2})-C_{D, \alpha,\mathcal{H}}.\] Proof.: We may assume \(r=1\). By Lemma 4.2, we have \[\mathcal{N}_{(x_{0},t_{0})}(2\alpha^{-1})\geq\mathcal{N}_{(x_{0},t_{0})}(1)-C_{ \alpha,\mathcal{H}}. \tag{4.12}\] Furthermore, by Lemma 4.5 and Proposition 3.1, \[\mathcal{N}_{(x_{0},t_{0})}(2\alpha^{-1})-\mathcal{N}_{(x_{1},t_{ 1})}(t_{1}-t_{0}+2\alpha^{-1})\] \[=\mathcal{N}_{t_{0}-2\alpha^{-1}}(x_{0},t_{0})-\mathcal{N}_{t_{0} -2\alpha^{-1}}(x_{1},t_{1})\] \[\leq\left(\frac{n}{2\alpha^{-1}}+\mathcal{H}\right)^{1/2}W_{t_{0} -\alpha^{-1}}(\nu_{(x_{0},t_{0});t_{0}-\alpha^{-1}},\nu_{(x_{1},t_{1});t_{0}- \alpha^{-1}})+\frac{n}{2}\log 3\] \[\leq\left(\frac{n}{2\alpha^{-1}}+\mathcal{H}\right)^{1/2}W_{s_{0} }(\nu_{(x_{0},t_{0});s_{0}},\nu_{(x_{1},t_{1});s_{0}})+\frac{n}{2}\log 3 \leq C_{D,\alpha}. \tag{4.13}\] From (4.3), (4.12) and (4.13), we derive \[\mathcal{N}_{(x_{1},t_{1})}(1)\geq\mathcal{N}_{(x_{1},t_{1})}(t_{1}-t_{0}+2 \alpha^{-1})\geq\mathcal{N}_{(x_{0},t_{0})}(2\alpha^{-1})-C_{D,\alpha}\geq \mathcal{N}_{(x_{0},t_{0})}(1)-C_{D,\alpha,\mathcal{H}}.\] This completes the proof. ## 5. Volume and heat kernel estimates In this section, we derive several volume and heat kernel estimates. ### Lower volume estimates We begin with the following (cf. [2, Theorem 6.2]): **Proposition 5.1**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume \([t_{0}-r^{2},t_{0}]\subset I\). 
For \(\mathcal{H}>0\), we assume \(H(\cdot,t_{0}-r^{2})\geq-\mathcal{H}r^{-2}\). Let \((z,t_{0}-r^{2})\in M\times I\) be a center of \((x_{0},t_{0})\). Then we have_ \[m_{t_{0}-r^{2}}\left(B(z,t_{0}-r^{2},\sqrt{2\mathcal{C}_{n}}r)\right)\geq C_{ \mathcal{H}}\,\exp(\mathcal{N}_{(x_{0},t_{0})}(r^{2}))r^{n}.\] Proof.: We may assume \(t_{0}=1\) and \(r=1\). Let \(\nu:=\nu_{(x_{0},1)}\). By virtue of Proposition 3.3, \[\nu_{0}(B)\geq\frac{1}{2}, \tag{5.1}\] where \(B:=B(z,0,\sqrt{2\mathcal{C}_{n}})\). Lemma 4.3 tells us that \[\int_{M}\left|f-\mathcal{N}_{(x_{0},1)}(1)-\frac{n}{2}\right|d\nu_{0}\leq \left(\int_{M}\left(f-\mathcal{N}_{(x_{0},1)}(1)-\frac{n}{2}\right)^{2}d\nu_{ 0}\right)^{1/2}\leq(n+2\mathcal{H})^{1/2}. \tag{5.2}\] Combining (5.1) and (5.2), we conclude \[\frac{1}{\nu_{0}(B)}\int_{B}f\,d\nu_{0} \geq\mathcal{N}_{(x_{0},1)}(1)+\frac{n}{2}-\frac{1}{\nu_{0}(B)} \int_{B}\left|f-\mathcal{N}_{(x_{0},1)}(1)-\frac{n}{2}\right|d\nu_{0}\] \[\geq\mathcal{N}_{(x_{0},1)}(1)+\frac{n}{2}-2(n+2\mathcal{H})^{1/ 2}.\] We set \(u:=(4\pi)^{n/2}e^{-f}/\nu_{0}(B)\). Since \(\int_{B}u\,dm_{0}=1\), (5.1) tells us that \[\int_{B}u\log u\,dm_{0} =-\frac{1}{\nu_{0}(B)}\int_{B}f\,d\nu_{0}-\log\nu_{0}(B)+\frac{n} {2}\log 4\pi\] \[\leq-\mathcal{N}_{(x_{0},1)}(1)-\frac{n}{2}+2(n+2\mathcal{H})^{1/ 2}+\log 2+\frac{n}{2}\log(4\pi).\] By the Jensen inequality, \[\log\left(\frac{1}{m_{0}(B)}\int_{B}u\,dm_{0}\right)\frac{1}{m_{0}(B)}\int_{B }u\,dm_{0}\leq\frac{1}{m_{0}(B)}\int_{B}u\log u\,dm_{0},\] and thus \[-\log m_{0}(B)\leq\int_{B}u\log u\,dm_{0}\leq-\mathcal{N}_{(x_{0},1)}(1)+C+2(n +2\mathcal{H})^{1/2}.\] We complete the proof. \(\Box\) ### Heat kernel estimates We next show the following (cf. [2, Theorem 7.1]): **Proposition 5.2**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\), and let \([t,t_{0}]\subset I\). For \(\mathcal{H}>0\), we assume \(H(\cdot,t)\geq-\mathcal{H}(t_{0}-t)^{-1}\). Then on \(M\), we have_ \[G(x_{0},t_{0};\cdot,t)\leq\frac{C_{\mathcal{H}}}{(t_{0}-t)^{n/2}}\exp(- \mathcal{N}_{(x_{0},t_{0})}(t_{0}-t)).\] Proof.: We may assume \(t=0\) and \(t_{0}=1\). For a fixed \(y\in M\), we set \(u:=G(\cdot,\cdot;y,0)\). By the same argument as in the proof of [2, Theorem 7.1], it suffices to show the following: If \(L\geq\underline{L}_{\mathcal{H}}\), and if for all \((x,s)\in M\times(0,1]\) we have \[u(x,s)\leq\frac{L}{s^{n/2}}\exp(-\mathcal{N}_{0}(x,s)), \tag{5.3}\] then for all \(x\in M\) we have \[u(x,1)\leq\frac{L}{2}\exp(-\mathcal{N}_{0}(x,1)). \tag{5.4}\] First, assuming (5.3), we derive a gradient bound \[|\nabla u|(x,s)\leq C_{\mathcal{H}}L\exp\left(-\mathcal{N}_{0}(x,s)\right) \tag{5.5}\] for all \((x,s)\in M\times[3/4,1]\). We set \[v:=\left(s-\frac{1}{2}\right)|\nabla u|^{2}+u^{2}.\] Due to the Bochner formula, \[\Box v=-|\nabla u|^{2}-2\left(s-\frac{1}{2}\right)|\nabla^{2}u|^{2}-2\left(s- \frac{1}{2}\right)(\operatorname{Ric}-h)(\nabla u,\nabla u)\leq 0\] for every \(s\in[1/2,1]\). Using (5.3), we have \[\left(s-\frac{1}{2}\right)|\nabla u|^{2}(x,s)\leq v(x,s) \leq\int_{M}v(\cdot,1/2)d\nu_{(x,s);1/2}=\int_{M}u^{2}(\cdot,1/2) d\nu_{(x,s);1/2}\] \[\leq 2^{n}L^{2}\int_{M}\exp\left(-2\mathcal{N}_{0}(\cdot,1/2) \right)d\nu_{(x,s);1/2} \tag{5.6}\] for every \((x,s)\in M\times(1/2,1]\). Let \((z,1/2)\) be a center of \((x,s)\) (see Proposition 3.2). In view of Lemma 4.6, we have \[-\mathcal{N}_{0}(\cdot,1/2)\leq-\mathcal{N}_{0}(x,s)+C_{\mathcal{H}}\left(d_{ 1/2}(z,\cdot)+1\right) \tag{5.7}\] on \(M\). 
From (5.7), the co-area formula and Proposition 3.4, we derive \[\int_{M}\exp\left(-2\mathcal{N}_{0}(\cdot,1/2)\right)d\nu_{(x,s);1/2}\] \[\leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x,s))\int_{M}\exp(C_{ \mathcal{H}}d_{1/2}(z,\cdot))d\nu_{(x,s);1/2}\] \[\leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x,s))\left(\int_{0}^{ \infty}e^{C_{\mathcal{H}}r}\,\nu_{(x,s);1/2}(M\setminus B(z,1/2,r))\,dr+1\right)\] \[\leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x,s))\left(\int_{0}^{ \infty}e^{C_{\mathcal{H}}r}\exp\left(-\frac{\left(r-\sqrt{2\mathcal{C}_{n}(t-1 /2)}\right)_{+}^{2}}{8(t-1/2)}\right)dr+1\right)\] \[=C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x,s))\left(\sqrt{t-1/2} \int_{0}^{\infty}e^{C_{\mathcal{H}}\sqrt{t-1/2}\,r}\exp\left(-\frac{\left(r- \sqrt{2\mathcal{C}_{n}}\right)_{+}^{2}}{8}\right)dr+1\right)\] \[\leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x,s)).\] Combining this with (5.6) implies (5.5). Let \((x,1)\in M\times I\), and set \(\nu:=\nu_{(x,1)}\). For \(\xi\in(0,1/2]\), we put \(t_{1}:=1-\xi^{2}\), and let \((z_{1},t_{1})\) be a center of \((x,1)\). We set \(B:=B(z_{1},t_{1},\sqrt{100\mathcal{C}_{n}\xi^{2}})\). By Lemma 4.6, if \(\xi\leq\overline{\xi}_{\mathcal{H}}\), then \[-\mathcal{N}_{0}(\cdot,t_{1})\leq-\mathcal{N}_{0}(x,1)+C_{\mathcal{H}}d_{t_{1 }}(z_{1},\cdot)+C_{\mathcal{H}}\sqrt{\mathcal{C}_{n}\xi^{2}}\leq-\mathcal{N}_{ 0}(x,1)+C_{\mathcal{H}}+\log 2 \tag{5.8}\] on \(B\); in particular, (5.5) tells us that on \(B\), \[|\nabla u|(\cdot,t_{1})\leq C_{\mathcal{H}}L\exp(-\mathcal{N}_{0}(x,1)). \tag{5.9}\] Now, we write \[u(x,1)=\int_{B}u\,d\nu_{t_{1}}+\int_{M\setminus B}u\,d\nu_{t_{1}} \tag{5.10}\] with the help of (2.2). We first estimate the first term in (5.10). Since \[\frac{d}{ds}\int_{M}u\,dm_{s}=-\int_{M}Hu\,dm_{s}\leq\mathcal{H}\int_{M}u\,dm_ {s}\] for all \(s\in(0,1]\), we possess \[\int_{M}u\,dm_{t_{1}}\leq e^{\mathcal{H}}. \tag{5.11}\] By Proposition 5.1 and (4.3) we have \[m_{t_{1}}(B)\geq C_{\mathcal{H}}\exp(\mathcal{N}_{(x,1)}(\xi^{2}))\xi^{n}\geq C _{\mathcal{H}}\exp(\mathcal{N}_{0}(x,1))\xi^{n}. \tag{5.12}\] In virtue of (5.9), for all \(x_{1},x_{2}\in B\) we have \[u(x_{1},t_{1})\leq u(x_{2},t_{1})+C_{\mathcal{H}}L\exp(-\mathcal{N}_{0}(x,1))\xi. \tag{5.13}\] Integrating (5.13) over \(B\) with respect to \(x_{2}\), and using (5.11) and (5.12) imply \[u(x_{1},t_{1})\leq\frac{1}{m_{t_{1}}(B)}\int_{B}u\,dm_{t_{1}}+C_{\mathcal{H}}L \exp(-\mathcal{N}_{0}(x,1))\xi\leq C_{\mathcal{H}}(C_{\mathcal{H}}\xi^{-n}+L \xi)\exp(-\mathcal{N}_{0}(x,1)).\] It follows that \[\int_{B}u\,d\nu_{t_{1}}\leq C_{\mathcal{H}}(C_{\mathcal{H}}\xi^{-n}+L\xi)\exp (-\mathcal{N}_{0}(x,1)). \tag{5.14}\] We next estimate the second term in (5.10). 
By (5.3), (5.8), an inequality \(e^{s}\leq 1+\xi e^{s/\xi}\), Propositions 3.3, 3.4 and the co-area formula, if \(\xi\leq\overline{\xi}_{\mathcal{H}}\), then \[\int_{M\setminus B}u\,d\nu_{t_{1}} \leq 2L\int_{M\setminus B}\exp(-\mathcal{N}_{0}(\cdot,t_{1}))d \nu_{t_{1}}\] \[\leq 4L\exp(-\mathcal{N}_{0}(x,1))\int_{M\setminus B}\exp\left(C _{\mathcal{H}}d_{t_{1}}(z_{1},\cdot)\right)d\nu_{t_{1}}\] \[\leq 4L\exp(-\mathcal{N}_{0}(x,1))\int_{M\setminus B}\left(1+\xi \exp\left(C_{\mathcal{H}}\xi^{-1}d_{t_{1}}(z_{1},\cdot)\right)\right)d\nu_{t_ {1}}\] \[\leq 4L\exp(-\mathcal{N}_{0}(x,1))\left(\frac{1}{100}+\xi\int_{M} \exp\left(C_{\mathcal{H}}\xi^{-1}d_{t_{1}}(z_{1},\cdot)\right)d\nu_{t_{1}}\right)\] \[\leq 4L\exp(-\mathcal{N}_{0}(x,1))\left(\frac{1}{100}+\xi+C_{ \mathcal{H}}\int_{0}^{\infty}e^{C_{\mathcal{H}}\tau/\xi}\nu_{t_{1}}(M\setminus B (z_{1},t_{1},r))\,dr\right)\] \[\leq 4L\exp(-\mathcal{N}_{0}(x,1))\left(\frac{1}{40}+C_{ \mathcal{H}}\int_{0}^{\infty}e^{C_{\mathcal{H}}\tau/\xi}\exp\left(-\frac{(r- \sqrt{2\mathcal{C}_{n}}\xi^{2})_{+}^{2}}{8\xi^{2}}\right)dr\right)\] \[=4L\exp(-\mathcal{N}_{0}(x,1))\left(\frac{1}{40}+C_{\mathcal{H}} \xi\int_{0}^{\infty}e^{C_{\mathcal{H}}\tau}\exp\left(-\frac{(r-\sqrt{2\mathcal{ C}_{n}})_{+}^{2}}{8}\right)dr\right)\] \[\leq\left(\frac{1}{10}+C_{\mathcal{H}}\xi\right)L\exp(-\mathcal{N }_{0}(x,1)). \tag{5.15}\] Combining (5.10), (5.14) and (5.15) yields that if \(\xi\leq\overline{\xi}_{\mathcal{H}}\) and \(L\geq\underline{L}_{\mathcal{H},\xi}\), then \[u(x,1)\leq\left(C_{\mathcal{H}}C_{\mathcal{H}\xi^{2}}\xi^{-n}+\frac{L}{10}+C_ {\mathcal{H}}L\xi\right)\exp(-\mathcal{N}_{0}(x,1))\leq\frac{L}{2}\exp(- \mathcal{N}_{0}(x,1)).\] This proves the desired assertion (5.4). We also have the following gradient estimate (cf. [2, Theorem 7.5]): **Proposition 5.3**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\), and let \([t,t_{0}]\subset I\). For \(\mathcal{H}>0\), we assume \(H(\cdot,t)\geq-\mathcal{H}(t_{0}-t)^{-1}\). Then on \(M\), we have_ \[\frac{|\nabla_{x}G|(x_{0},t_{0};\cdot,t)}{G(x_{0},t_{0};\cdot,t)}\leq\frac{C_ {\mathcal{H}}}{(t_{0}-t)^{1/2}}\sqrt{\log\left(\frac{C_{\mathcal{H}}\exp(- \mathcal{N}_{(x_{0},t_{0})}(t_{0}-t))}{(t_{0}-t)^{n/2}G(x_{0},t_{0};\cdot,t)} \right)}.\] Proof.: We may assume \(t=0\) and \(t_{0}=1\). Set \(\nu:=\nu_{(x_{0},1)}\). Let \((z,1/2)\) be a center of \((x_{0},1)\) (see Proposition 3.2). For a fixed \(y\in M\), we set \(u:=G(\cdot,1/2;y,0)\). By Lemma 4.6, we have \[-\mathcal{N}_{0}(\cdot,1/2)\leq-\mathcal{N}_{0}(x_{0},1)+C_{\mathcal{H}} \left(d_{1/2}(z,\cdot)+1\right).\] From Proposition 5.2, it follows that \[u\leq C_{\mathcal{H}}\exp(-\mathcal{N}_{0}(\cdot,1/2))\leq C_{\mathcal{H}} \exp(-\mathcal{N}_{0}(x_{0},1))\exp\left(C_{\mathcal{H}}d_{1/2}(z,\cdot)\right).\] Using the co-area formula and Proposition 3.4, we obtain \[\int_{M}u^{2}d\nu_{1/2} \leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x_{0},1))\int_{M}\exp \left(C_{\mathcal{H}}d_{1/2}(z,\cdot)\right)d\nu_{1/2}\] \[\leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x_{0},1))\left(\int_{0 }^{\infty}e^{C_{\mathcal{H}}r}\nu_{1/2}(M\setminus B(z,1/2,r))\,dr+1\right)\] \[\leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x_{0},1))\left(\int_{0 }^{\infty}e^{C_{\mathcal{H}}r}\exp\left(-\frac{(r-\sqrt{2\mathcal{C}_{n}})_{+} ^{2}}{8}\right)dr+1\right)\] \[\leq C_{\mathcal{H}}\exp(-2\mathcal{N}_{0}(x_{0},1)). 
\tag{5.16}\] By Proposition 5.2 we have \[G(x_{0},1;y,0)\leq\frac{C_{1,\mathcal{H}}}{2}\exp(-\mathcal{N}_{0}(x_{0},1)).\] Put \[a:=\left(\frac{G(x_{0},1;y,0)}{C_{1,\mathcal{H}}\exp(-\mathcal{N}_{0}(x_{0},1) )}\right)^{2}.\] Let \(b\geq 0\) satisfy the following property (cf. [2, Claim 4.6]): There is \(\Omega\subset M\) with \(\nu_{1}(\Omega)=a\) such that \[\left\{\frac{|\nabla_{x}G|(x_{0},1;\cdot,1/2)}{G(x_{0},1;\cdot,1/2)}>b\right\} \subset\Omega\subset\left\{\frac{|\nabla_{x}G|(x_{0},1;\cdot,1/2)}{G(x_{0},1; \cdot,1/2)}\geq b\right\}.\] By (3.4) in Proposition 3.7 with \(p=1\), \[ab\leq\int_{\Omega}\frac{|\nabla_{x}G|(x_{0},1;\cdot,1/2)}{G(x_{0},1;\cdot,1/ 2)}d\nu_{1/2}\leq C\nu_{1}(\Omega)(-\log\nu_{1}(\Omega))^{1/2}=Ca(-\log a)^{1/2},\] and hence \(b\leq C(-\log a)^{1/2}\). By (2.2), (5.16), and (3.4) with \(p=2\), we conclude \[|\nabla_{x}G|(x_{0},1;y,0) \leq\int_{M}\frac{|\nabla_{x}G|(x_{0},1;\cdot,1/2)}{G(x_{0},1; \cdot,1/2)}u\,d\nu_{1/2}\] \[=\int_{\Omega}\frac{|\nabla_{x}G|(x_{0},1;\cdot,1/2)}{G(x_{0},1; \cdot,1/2)}u\,d\nu_{1/2}+\int_{M\setminus\Omega}\frac{|\nabla_{x}G|(x_{0},1; \cdot,1/2)}{G(x_{0},1;\cdot,1/2)}u\,d\nu_{1/2}\] \[\leq\left(\int_{\Omega}\left(\frac{|\nabla_{x}G|(x_{0},1;\cdot,1/ 2)}{G(x_{0},1;\cdot,1/2)}\right)^{2}d\nu_{1}\right)^{1/2}\left(\int_{\Omega}u^ {2}d\nu_{1/2}\right)^{1/2}+b\int_{M}u\,d\nu_{1/2}\] \[\leq C_{\mathcal{H}}\exp(-\mathcal{N}_{0}(x_{0},1))\left(-a\log a \right)^{1/2}+C(-\log a)^{1/2}G(x_{0},1;y,0)\] \[\leq C_{\mathcal{H}}(-\log a)^{1/2}G(x_{0},1;y,0).\] We arrive at the desired estimate. ### Upper volume estimates We further present the following (cf. [2, Theorem 8.1]): **Proposition 5.4**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume \([t_{0}-r^{2},t_{0}]\subset I\). For \(\mathcal{H}>0\), we assume \(H(\cdot,t_{0}-r^{2})\geq-\mathcal{H}r^{-2}\). Then for every \(R\geq 1\) we have_ \[m_{t_{0}}(B(x_{0},t_{0},Rr))\leq C_{\mathcal{H}}\exp(\mathcal{N}_{(x_{0},t_{0}) }(r^{2}))\exp(C_{\mathcal{H}}R^{2})r^{n}.\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). By (2.3) we see \[\mathcal{N}_{(x_{0},0)}(1)=-\int_{M}\left(\log G(x_{0},0;\cdot,-1)\right)G(x_{0 },0;\cdot,-1)dm_{-1}-\frac{n}{2}\log 4\pi-\frac{n}{2}.\] Hence, there exists \(y\in M\) such that \[\log G(x_{0},0;y,-1)\geq-\mathcal{N}_{(x_{0},0)}(1)-\frac{n}{2}\log 4\pi- \frac{n}{2};\] in particular, \[G(x_{0},0;y,-1)\geq C\exp(-\mathcal{N}_{(x_{0},0)}(1)). \tag{5.17}\] Set \(u:=G(\cdot,0;y;-1)\). By Lemma 4.4 we have \[-\mathcal{N}_{(\cdot,0)}(1)=-\mathcal{N}_{-1}(\cdot,0)\leq-\mathcal{N}_{-1}(x _{0},0)+\left(\frac{n}{2}+\mathcal{H}\right)^{1/2}d_{0}(x_{0},\cdot)\leq- \mathcal{N}_{(x_{0},0)}(1)+C_{1,\mathcal{H}}R \tag{5.18}\] on \(B(x_{0},0,R)\). By Proposition 5.3 and (5.18), \[\frac{|\nabla u|}{u}\leq C_{\mathcal{H}}\sqrt{\log\left(\frac{C_{1,\mathcal{H }}\exp(-\mathcal{N}_{(\cdot,0)}(1))}{u}\right)}\leq C_{\mathcal{H}}\sqrt{\log \left(\frac{C_{1,\mathcal{H}}\exp(-\mathcal{N}_{(x_{0},0)}(1)+C_{1,\mathcal{H} }R)}{u}\right)} \tag{5.19}\] on \(B(x_{0},0,R)\). We define \[v:=\sqrt{\log\left(\frac{C_{1,\mathcal{H}}\exp(-\mathcal{N}_{(x_{0},0)}(1)+C_ {1,\mathcal{H}}R)}{u}\right)}.\] By (5.17) and (5.19), we obtain \[v(x_{0})\leq C_{\mathcal{H}}\sqrt{R},\quad|\nabla v|\leq C_{\mathcal{H}}\] on \(B(x_{0},0,R)\). 
This implies \(v\leq C_{\mathcal{H}}\sqrt{R}+C_{\mathcal{H}}R\leq C_{\mathcal{H}}R\) on \(B(x_{0},0,R)\), and hence \[u\geq C_{\mathcal{H}}\exp(-C_{\mathcal{H}}R^{2})\exp(-\mathcal{N}_{(x_{0},0)}( 1)) \tag{5.20}\] on \(B(x_{0},0,R)\). Since \[\frac{d}{dt}\int_{M}G(\cdot,t;y,-1)dm_{t}=-\int_{M}H\,G(\cdot,t;y,-1)dm_{t} \leq\mathcal{H}\int_{M}G(\cdot,t;y,-1)dm_{t},\] the inequality (5.20) leads us to \[C_{\mathcal{H}}\exp(-C_{\mathcal{H}}R^{2})\exp(-\mathcal{N}_{(x_{0},0)}(1))m_{ 0}(B(x_{0},0,R))\leq\int_{B(x_{0},0,R)}u\,dm_{0}\leq e^{\mathcal{H}}.\] This completes the proof. \(\Box\) ## 6. Non-collapsed case Here, we show some estimates under a non-collapsed condition for the Nash entropy. ### Heat kernel measure comparison We first produce the following heat kernel measure comparison between different base points (cf. [4, Proposition 8.1]): **Proposition 6.1**.: _For \(\mathcal{H}>0\), there is \(\mathfrak{C}_{\mathcal{H}}>0\) such that the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\), and let \(s,t\in I\) satisfy \(s<t\leq\min\{t_{0},t_{1}\}\). For \(\kappa>0\), we assume \(\mathcal{N}_{(x_{0},t_{0})}(t_{0}-s)\geq-\kappa\). For \(D>0\) and \(\theta_{0},\theta_{1}\in(-\infty,1)\) with \(\theta_{0}<\theta_{1}\), we assume_ \[H(\cdot,s)\geq-\mathcal{H}(t-s)^{-1},\quad W_{t}(\nu_{(x_{0},t_{ 0});t},\nu_{(x_{1},t_{1});t})\leq D\sqrt{t-s},\] \[t_{0}-t\leq\mathfrak{C}_{\mathcal{H}}\,\frac{\theta_{1}-\theta_ {0}}{1-\theta_{1}}(t-s),\quad t_{1}-t\leq D^{2}(t-s).\] _Then we have_ \[e^{\theta_{0}f_{0}}\nu_{(x_{0},t_{0});s}\leq C_{\kappa,D,\theta_{0},\theta_{1},\mathcal{H}}\,e^{\theta_{1}f_{1}}\nu_{(x_{1},t_{1});s}.\] Proof.: We may assume \(s=0\) and \(t=1\). For \(i=0,1\), let \(\nu^{i}:=\nu_{(x_{i},t_{i})}\). In view of Lemma 2.1, \[H\geq-\mathcal{H},\quad W_{1}(\nu_{1}^{0},\nu_{1}^{1})\leq D,\quad t_{0}-1\leq \mathfrak{C}_{\mathcal{H}}\frac{\theta_{1}-\theta_{0}}{1-\theta_{1}},\quad t _{1}-1\leq D^{2} \tag{6.1}\] on \(M\times(I\cap[0,\infty))\) (see also Remark 2.2). For a fixed \(y\in M\), we set \(u:=G(\cdot,1;y,0)\). By Propositions 5.2 and 5.3, we possess \[u\leq\frac{C_{1,\mathcal{H}}}{10}\exp(-\mathcal{N}_{0}(\cdot,1)),\quad\frac{| \nabla u|}{u}\leq C_{\mathcal{H}}\sqrt{\log\left(\frac{C_{1,\mathcal{H}}\exp( -\mathcal{N}_{0}(\cdot,1))}{u}\right)}. \tag{6.2}\] We further set \(v:=C_{1,\mathcal{H}}^{-1}u\,\exp(\mathcal{N}_{0}(\cdot,1))\). Note that (4.7) and (6.2) lead to \[\frac{|\nabla v|}{v}\leq\frac{|\nabla u|}{u}+|\nabla\mathcal{N}_{0}(\cdot,1)| \leq C_{\mathcal{H}}\sqrt{-\log v}+C_{\mathcal{H}}\leq C_{\mathcal{H}}\sqrt{- \log v};\] in particular, for \(\varphi:=\sqrt{-\log v}\), we have \[|\nabla\varphi|\leq C_{2,\mathcal{H}}. \tag{6.3}\] We define \(\mathfrak{C}_{\mathcal{H}}:=(4C_{2,\mathcal{H}})^{-2}\). For \(i=0,1\), let \((z_{i},1)\) be a center of \((x_{i},t_{i})\) (Proposition 3.2). Lemma 2.6 and (6.1) imply \[d_{1}(z_{0},z_{1}) \leq W_{1}(\delta_{z_{0}},\nu_{1}^{0})+W_{1}(\nu_{1}^{0},\nu_{1}^ {1})+W_{1}(\nu_{1}^{1},\delta_{z_{1}})\] \[\leq\sqrt{\mathcal{C}_{n}(t_{0}-1)}+D+\sqrt{\mathcal{C}_{n}(t_{1 }-1)}\leq C_{D}. \tag{6.4}\] Also, due to Proposition 3.3, we possess \[\nu_{1}^{1}(B)\geq\frac{1}{2}, \tag{6.5}\] where \(B:=B(z_{1},1,\sqrt{2\mathcal{C}_{n}(t_{1}-1)})\). We now write \(\lambda:=(1-\theta_{1})/(1-\theta_{0})\). 
For a fixed \(x\in B\), (6.3) and (6.4) tell us that \[\varphi(\cdot) \geq\varphi(x)-C_{2,\mathcal{H}}d_{1}(x,\cdot)\] \[\geq\varphi(x)-C_{2,\mathcal{H}}d_{1}(z_{0},\cdot)-C_{2,\mathcal{H }}d_{1}(z_{0},z_{1})-C_{2,\mathcal{H}}\sqrt{2\mathcal{C}_{n}(t_{1}-1)}\] \[\geq\varphi(x)-C_{2,\mathcal{H}}(d_{1}(z_{0},\cdot)+C_{D})-C_{D, \mathcal{H}}\] on \(M\), and we obtain \[\int_{M}v\exp(C_{1,\mathcal{H}}d_{1}(z_{0},\cdot))\,d\nu_{1}^{0}\] \[=\sum_{j=1}^{\infty}\int_{B(z_{0},1,j)\setminus B(z_{0},1,j-1)}( \phi\circ\varphi)\exp(C_{1,\mathcal{H}}d_{1}(z_{0},\cdot))\,d\nu_{1}^{0}\] \[\leq\sum_{j=1}^{\infty}\phi\left((\varphi(x)-C_{2,\mathcal{H}}j-C _{D,\mathcal{H}})_{+}\right)\exp(C_{1,\mathcal{H}}j)\nu_{1}^{0}(M\setminus B (z_{0},1,j-1)), \tag{6.6}\] where \(\phi(\zeta):=\exp(-\zeta^{2})\). By Proposition 3.4, (6.1) and the Young inequality, for every \(r>0\), \[\nu_{1}^{0}\left(M\setminus B(z_{0},1,r)\right) \leq 2\exp\left(-\frac{\left(r-\sqrt{2\mathcal{C}_{n}(t_{0}-1)} \right)_{+}^{2}}{8(t_{0}-1)}\right)\] \[\leq C\exp\left(-\frac{r^{2}}{16(t_{0}-1)}\right)\leq C\exp\left( -\frac{C_{2,\mathcal{H}}^{2}\lambda}{1-\lambda}r^{2}\right). \tag{6.7}\] By (6.6), (6.7), the same calculation as in the proof of [4, Proposition 8.1] tells us that \[\int_{M}v\exp(C_{1,\mathcal{H}}d_{1}(z_{0},\cdot))\,d\nu_{1}^{0}\leq C_{D,\theta_ {0},\theta_{1},\mathcal{H}}\,v(x)^{\lambda}. \tag{6.8}\] Combining (6.5) and (6.8), we obtain \[\int_{M}v\exp(C_{1,\mathcal{H}}d_{1}(z_{0},\cdot))\,d\nu_{1}^{0} \leq\frac{C_{D,\theta_{0},\theta_{1},\mathcal{H}}}{\nu_{1}^{1}(B) }\int_{B}\,v^{\lambda}d\nu_{1}^{1}\] \[\leq C_{D,\theta_{0},\theta_{1},\mathcal{H}}\int_{M}\,v^{\lambda }d\nu_{1}^{1}\leq C_{D,\theta_{0},\theta_{1},\mathcal{H}}\left(\int_{M}\,vd\nu _{1}^{1}\right)^{\lambda}.\] By Lemma 4.6, we see \[0\leq-\mathcal{N}_{0}(\cdot,1) \leq-\mathcal{N}_{0}(x_{0},t_{0})+\left(\frac{n}{2}+\mathcal{H} \right)^{1/2}\sqrt{\mathcal{C}_{n}(t_{0}-1)}+\left(\frac{n}{2}+\mathcal{H} \right)^{1/2}d_{1}(z_{0},\cdot)\] \[\leq C_{\kappa,\theta_{0},\theta_{1},\mathcal{H}}(d_{1}(z_{0}, \cdot)+1).\] It follows that \[\int_{M}v\exp(-\mathcal{N}_{0}(\cdot,1))\,d\nu_{1}^{0}\leq C_{\kappa,D,\theta_ {0},\theta_{1},\mathcal{H}}\left(\int_{M}v\exp(-\mathcal{N}_{0}(\cdot,1))\,d \nu_{1}^{1}\right)^{\lambda};\] in particular, (2.2) leads us to \[\int_{M}u\,d\nu_{1}^{0}\leq C_{\kappa,D,\theta_{0},\theta_{1},\mathcal{H}} \left(\int_{M}u\,d\nu_{1}^{1}\right)^{\lambda},\quad G(x_{0},t_{0};y,0)\leq C _{\kappa,D,\theta_{0},\theta_{1},\mathcal{H}}\left(G(x_{1},t_{1};y,0)\right)^{ \lambda}.\] Thus, we complete the proof. ### Integral estimates In this subsection, we deduce an integral estimate. We first notice that potentials are bounded from below under a non-collapsed condition. **Lemma 6.2**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(\alpha\in(0,1)\) and \(\mathcal{H}>0\), we assume \(H(\cdot,t_{0}-\alpha^{-1}r^{2})\geq-\mathcal{H}r^{-2}\). For \(\kappa>0\), we also assume \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). Then on \(M\times[t_{0}-\alpha^{-1}r^{2},t_{0})\), we have_ \[f\geq-C_{\kappa,\alpha,\mathcal{H}},\quad f^{2}\leq C_{\kappa,\alpha, \mathcal{H}}+C\tau^{2}(|\nabla^{2}f|^{2}+|\nabla f|^{4}+|h|^{2}).\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). In view of Lemma 2.1, we possess \(H(\cdot,t)\geq-(\mathcal{H}\alpha^{-1})\tau^{-1}\) for \(t\in[-\alpha^{-1},0)\). 
Hence, by Proposition 5.2, we have \[G(x_{0},0;\cdot,t)\leq\frac{C_{\alpha,\mathcal{H}}}{\tau^{n/2}}\exp(- \mathcal{N}_{(x_{0},0)}(\tau)).\] Therefore, from (2.3) we conclude \[f(\cdot,t)=-\log G(x_{0},0;\cdot,t)-\frac{n}{2}\log\tau-\frac{n}{2}\log 4\pi \geq-\log C_{\mathcal{H}}+\mathcal{N}_{(x_{0},0)}(\tau)-\frac{n}{2}\log 4\pi.\] By Lemma 4.2 and (4.3), it holds that \(\mathcal{N}_{(x_{0},0)}(\tau)\geq-C_{\kappa,\alpha,\mathcal{H}}\); in particular, the lower bound of \(f\) follows. Due to Theorem 2.5, we obtain \[-C_{\kappa,\alpha,\mathcal{H}}\leq f=w-\tau(2\Delta f-|\nabla f|^{2}+H)+n\leq -\tau(2\Delta f-|\nabla f|^{2}+H)+n,\] where \(w\) is defined as (2.6). Therefore, \[f^{2}\leq C_{\kappa,\alpha,\mathcal{H}}+C\tau^{2}(|\nabla^{2}f|^{2}+|\nabla f |^{4}+H^{2})\leq C_{\kappa,\alpha,\mathcal{H}}+C\tau^{2}(|\nabla^{2}f|^{2}+| \nabla f|^{4}+|h|^{2}).\] We complete the proof. We also verify the following (cf. [4, Proposition 6.5]): **Lemma 6.3**.: _Let \((M,g(t))_{t\in I}\) satisfy \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(\mathcal{H}>0\), we assume \(H\geq-\mathcal{H}\). Then for all \(t\in I\cap(-\infty,t_{0})\) and \(\theta\in[0,1/2]\) we have_ \[\int_{M}e^{\theta f}d\nu_{(x_{0},t_{0});t}\leq e^{(n+\tau\mathcal{H})\theta}.\] Proof.: Theorem 2.5 yields \[\frac{d}{d\theta}\int_{M}e^{\theta f}d\nu_{(x_{0},t_{0});t} =\int_{M}fe^{\theta f}d\nu_{(x_{0},t_{0});t}\] \[\leq\int_{M}\left(\tau(-2\Delta f+|\nabla f|^{2}-H)+n\right)e^{ \theta f}d\nu_{(x_{0},t_{0});t}\] \[\leq\int_{M}\left(\tau(-2\Delta f+|\nabla f|^{2})+n+\tau\mathcal{ H}\right)(4\pi\tau)^{-n/2}e^{-(1-\theta)f}dm_{t}\] \[=\int_{M}\left(\tau(2\theta-1)|\nabla f|^{2}+n+\tau\mathcal{H} \right)(4\pi\tau)^{-n/2}e^{-(1-\theta)f}dm_{t}\] \[\leq(n+\tau\mathcal{H})\int_{M}e^{\theta f}d\nu_{(x_{0},t_{0});t}.\] Integrating this over \(\theta\) implies the lemma. We further see the following: **Lemma 6.4**.: _Let \((x_{0},t_{0})\in M\times I\). Then for all \(t\in I\cap(-\infty,t_{0})\) and \(\theta\in[0,1/4]\) we have_ \[\int_{M}|\nabla f|^{4}e^{\theta f}d\nu_{(x_{0},t_{0});t}\leq C\int_{M}|\nabla ^{2}f|^{2}e^{\theta f}d\nu_{(x_{0},t_{0});t}.\] Proof.: We have \[(1-\theta)\int_{M}|\nabla f|^{4}e^{\theta f}d\nu_{(x_{0},t_{0});t} =(1-\theta)(4\pi\tau)^{-n/2}\int_{M}|\nabla f|^{2}\langle\nabla f,\nabla fe^{-(1-\theta)f}\rangle dm_{t}\] \[=(4\pi\tau)^{-n/2}\int_{M}\left(2\langle\nabla^{2}f,df\otimes df \rangle+|\nabla f|^{2}\Delta f\right)e^{-(1-\theta)f}dm_{t}\] \[\leq C(4\pi\tau)^{-n/2}\int_{M}|\nabla^{2}f||\nabla f|^{2}e^{-(1 -\theta)f}dm_{t}\] \[\leq C\int_{M}|\nabla^{2}f|^{2}e^{\theta f}d\nu_{(x_{0},t_{0});t }+\frac{1}{2}\int_{M}|\nabla f|^{4}e^{\theta f}d\nu_{(x_{0},t_{0});t}.\] This yields the desired one. Based on the above lemmas, we prove the following (cf. [4, Proposition 6.2]): **Proposition 6.5**.: _If \(\theta\in[0,\overline{\theta}]\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(\alpha\in(0,1)\) and \(\mathcal{H}>0\), we assume \(H(\cdot,t_{0}-\alpha^{-1}r^{2})\geq-\mathcal{H}r^{-2}\). For \(\kappa>0\), we assume \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). Then we have_ \[\int_{t_{0}-\alpha^{-1}r^{2}}^{t_{0}-\alpha r^{2}}\int_{M}\left(\tau(|h|^{2}+| \nabla^{2}f|^{2}+|\nabla f|^{4})+\tau^{-1}f^{2}\right)e^{2\theta f}d\nu_{(x_ {0},t_{0});t}dt\leq C_{\kappa,\alpha,\mathcal{H}}.\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). Set \(\nu:=\nu_{(x_{0},0)}\) and \(u:=G(x_{0},0;\cdot,\cdot)\). 
We define \(\Phi\) and \(w\) as (2.6). In virtue of (2.1), (2.4) and Theorem 2.5, it holds that \[\frac{d}{dt}\int_{M}we^{\theta f}d\nu_{t}=\int_{M}\left\{(\square e^{\theta f})wu-e^{\theta f}\square^{*}(wu)\right\}dm_{t}\] \[=\int_{M}\left(2\tau\left|\Phi\right|^{2}+\tau\mathcal{D}(\nabla f)-\theta\left\{\tau^{-1}\left(w-f+\frac{n}{2}\right)+\theta|\nabla f|^{2}\right\}w\right)e^{\theta f}d\nu_{t}\] \[\geq\int_{M}\left(2\tau\left|\Phi\right|^{2}-\theta\tau^{-1}\left(w-f+\frac{n}{2}\right)w\right)e^{\theta f}d\nu_{t}\] \[\geq\int_{M}2\tau\left|\Phi\right|^{2}e^{\theta f}d\nu_{t}-\theta\tau^{-1}\int_{M}\left|w-f+\frac{n}{2}\right|\left|w\right|e^{\theta f}d\nu_{t}\] \[\geq\int_{M}2\tau\left|\Phi\right|^{2}e^{\theta f}d\nu_{t}-C\theta\tau^{-1}\int_{M}(w^{2}+f^{2}+1)e^{\theta f}d\nu_{t}\] \[\geq\int_{M}2\tau\left|\Phi\right|^{2}e^{\theta f}d\nu_{t}-C\theta\tau^{-1}\int_{M}(\tau^{2}((\Delta f)^{2}+|\nabla f|^{4}+H^{2})+f^{2}+1)e^{\theta f}d\nu_{t}\] \[\geq\int_{M}2\tau\left|\Phi\right|^{2}e^{\theta f}d\nu_{t}-C\theta\tau^{-1}\int_{M}(\tau^{2}(|\nabla^{2}f|^{2}+|\nabla f|^{4}+|h|^{2})+f^{2}+1)e^{\theta f}d\nu_{t}.\] Also, if \(\theta\in[0,1]\), then (2.5) and (2.4) yield \[\frac{d}{dt}\int_{M}\tau H\,e^{\theta f}d\nu_{t}=\int_{M}\square(\tau He^{\theta f})\,d\nu_{t}\] \[=\int_{M}(2\tau|h|^{2}+\tau\mathcal{D}(0)-H)e^{\theta f}d\nu_{t}-\theta\int_{M}H\left(w-f+\frac{n}{2}\right)e^{\theta f}d\nu_{t}\] \[\quad+\theta(\theta-2)\tau\int_{M}H|\nabla f|^{2}e^{\theta f}d\nu_{t}+2\theta\tau\int_{M}H\Delta fe^{\theta f}d\nu_{t}\] \[\geq\int_{M}(2\tau|h|^{2}-H)e^{\theta f}d\nu_{t}-C\theta\int_{M}\left\{\tau H^{2}+\tau^{-1}\left(w^{2}+f^{2}+1\right)\right\}e^{\theta f}d\nu_{t}\] \[\quad-C|\theta-2|\theta\tau\int_{M}(H^{2}+|\nabla f|^{4})e^{\theta f}d\nu_{t}-C\theta\tau\int_{M}(H^{2}+(\Delta f)^{2})e^{\theta f}d\nu_{t}\] \[\geq\int_{M}(2\tau|h|^{2}-H)e^{\theta f}d\nu_{t}-C\theta\tau^{-1}\int_{M}(\tau^{2}((\Delta f)^{2}+|\nabla f|^{4}+H^{2})+f^{2}+1)e^{\theta f}d\nu_{t}\] \[\geq\int_{M}(2\tau|h|^{2}-H)e^{\theta f}d\nu_{t}-C\theta\tau^{-1}\int_{M}(\tau^{2}(|\nabla^{2}f|^{2}+|\nabla f|^{4}+|h|^{2})+f^{2}+1)e^{\theta f}d\nu_{t}.\] Therefore, if \(\theta\in[0,1]\), then \[\frac{d}{dt}\int_{M}(w+\tau H)e^{\theta f}d\nu_{t} \geq\int_{M}\left(2\tau\left|\Phi\right|^{2}+2\tau|h|^{2}-H\right)e^{\theta f}d\nu_{t}\] \[\quad-C\theta\tau^{-1}\int_{M}(\tau^{2}(|\nabla^{2}f|^{2}+|\nabla f|^{4}+|h|^{2})+f^{2}+1)e^{\theta f}d\nu_{t}. \tag{6.9}\] Note that \[\left|\nabla^{2}f-\frac{1}{2\tau}g\right|^{2}=|\Phi-h|^{2}\leq 4|\Phi|^{2}+2|h|^{2},\] \[|H|\leq\sqrt{n}|h|\leq\frac{\tau}{2}|h|^{2}+\frac{n}{2\tau},\] \[|\nabla^{2}f|^{2}\leq 2\left|\nabla^{2}f-\frac{1}{2\tau}g\right|^{2}+\frac{n}{2\tau^{2}};\] in particular, \[2\tau\left|\Phi\right|^{2}+2\tau|h|^{2}-H \geq 2\tau\left(\frac{1}{4}\left|\nabla^{2}f-\frac{1}{2\tau}g\right|^{2}-\frac{1}{2}|h|^{2}\right)+2\tau|h|^{2}-\left(\frac{\tau}{2}|h|^{2}+\frac{n}{2\tau}\right)\] \[\geq 2\tau\left(\frac{1}{8}|\nabla^{2}f|^{2}-\frac{1}{2}|h|^{2}-\frac{n}{16\tau^{2}}\right)+2\tau|h|^{2}-\left(\frac{\tau}{2}|h|^{2}+\frac{n}{2\tau}\right)\] \[=\frac{\tau}{4}|\nabla^{2}f|^{2}+\frac{\tau}{2}|h|^{2}-\frac{5n}{8\tau}.
\tag{6.10}\] Combining (6.9), (6.10) and Lemmas 6.2, 6.4 yields that for \(\theta\in[0,1/4]\), \[\frac{d}{dt}\int_{M}(w+\tau H)e^{\theta f}d\nu_{t}\] \[\geq\int_{M}\left\{\tau\left(\frac{1}{4}-C\theta\right)|\nabla^{2}f|^{2}+\tau\left(\frac{1}{2}-C\theta\right)|h|^{2}-\tau^{-1}\left(\frac{5n}{8}+C_{\kappa,\alpha,\mathcal{H}}\theta\right)\right\}e^{\theta f}d\nu_{t}.\] In particular, Lemma 6.3 implies that for \(\theta\in[0,\overline{\theta}]\), \[\frac{d}{dt}\int_{M}(w+\tau H)e^{\theta f}d\nu_{t} \geq\frac{\tau}{8}\int_{M}(|\nabla^{2}f|^{2}+|h|^{2})e^{\theta f}d\nu_{t}-C_{\kappa,\alpha,\mathcal{H}}\tau^{-1}\int_{M}e^{\theta f}d\nu_{t}\] \[\geq\frac{\tau}{8}\int_{M}(|\nabla^{2}f|^{2}+|h|^{2})e^{\theta f}d\nu_{t}-C_{\kappa,\alpha,\mathcal{H}}\tau^{-1}e^{(n+\tau\mathcal{H})\theta}.\] Once we obtain this estimate, we can prove the desired one by the same cutoff argument on time as in the proof of [4, Proposition 6.2] together with Lemmas 6.2 and 6.4. ## 7. Almost selfsimilar points For \(\varepsilon\in(0,1),r>0\), a point \((x_{0},t_{0})\in M\times I\) is called \((\varepsilon,r)\)_-selfsimilar_ if we have: 1. \([t_{0}-\varepsilon^{-1}r^{2},t_{0}]\subset I\); 2. we have \[\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2}}\int_{M}\tau\left|h+\nabla^{2}f-\frac{1}{2\tau}g\right|^{2}d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon; \tag{7.1}\] 3. for all \(t\in[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\), we have \[\int_{M}\left|\tau(2\Delta f-|\nabla f|^{2}+H)+f-n-\mathrm{N}\right|d\nu_{(x_{0},t_{0});t}\leq\varepsilon \tag{7.2}\] for \(\mathrm{N}:=\mathcal{N}_{(x_{0},t_{0})}(r^{2})\); 4. on \(M\times[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\), we have \[H\geq-\varepsilon r^{-2}. \tag{7.3}\] In this section, we investigate various properties of almost selfsimilar points. ### Characterization In this subsection, we prove that the almost selfsimilarity can be characterized by the almost constancy of the Nash entropy. We first prepare the following lemma (cf. [4, Lemma 7.10]): **Lemma 7.1**.: _For \(\kappa>0,\varepsilon\in(0,1)\), if \(\zeta\leq\overline{\zeta}_{\kappa,\varepsilon}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume \([t_{0}-\zeta^{-1}r^{2},t_{0}]\subset I\). We assume \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). If_ \[|\mathcal{W}_{(x_{0},t_{0})}(\tau)-\mathrm{N}|\leq\zeta \tag{7.4}\] _for all \(\tau\in[\zeta r^{2},\zeta^{-1}r^{2}]\), then_ \[\int_{M}|w-\mathrm{N}|\,d\nu_{(x_{0},t_{0});t}\leq\varepsilon\] _for all \(t\in[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\), where \(w\) is defined as (2.6) and \(\mathrm{N}:=\mathcal{N}_{(x_{0},t_{0})}(r^{2})\)._ Proof.: We may assume \(t_{0}=0\) and \(r=1\). We set \(\nu:=\nu_{(x_{0},0)}\) and \(u:=G(x_{0},0;\cdot,\cdot)\). Let \(v\in C^{\infty}(M\times[-\zeta^{-1},0])\) be a solution to the heat equation with \(|v(\cdot,-\zeta^{-1})|\leq 1\). In view of the maximum principle, we see \(|v|\leq 1\) on \(M\times[-\zeta^{-1},0]\).
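In the computation below, the key sign information is \(\square^{*}(wu)\leq 0\), which follows from Theorem 2.5 (cf. the identity \(\square^{*}(wu)=-(2\tau|\Phi|^{2}+\tau\mathcal{D}(\nabla f))u\) appearing in the proof of Proposition 6.5); combined with \(v\geq-1\), it gives the pointwise inequality \[-v\,\square^{*}(wu)\geq\square^{*}(wu).\]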
Due to (2.1) and Theorem 2.5, \[\frac{d}{dt}\int_{M}v(w-\mathrm{N})u\,dm_{t} =-\int_{M}v\Box^{*}(wu)dm_{t}\] \[\geq\int_{M}\Box^{*}(wu)dm_{t}=-\frac{d}{dt}\int_{M}wu\,dm_{t}=\frac{d}{dt}\mathcal{W}_{(x_{0},0)}(|t|).\] From (7.4), it follows that \[\int_{M}v(w-\mathrm{N})\,d\nu_{-\tau_{1}} \leq\int_{M}v(w-\mathrm{N})\,d\nu_{-\tau_{2}}-\mathcal{W}_{(x_{0},0)}(\tau_{2})+\mathcal{W}_{(x_{0},0)}(\tau_{1})\] \[\leq\int_{M}v(w-\mathrm{N})d\nu_{-\tau_{2}}+2\zeta\] for every \(\tau_{1},\tau_{2}\in[\zeta,\zeta^{-1}]\), and hence \[\left|\int_{M}v(w-\mathrm{N})\,d\nu_{-\tau_{1}}-\int_{M}v(w-\mathrm{N})\,d\nu_{-\tau_{2}}\right|\leq 2\zeta. \tag{7.5}\] From Lemma 6.2 and Proposition 6.5, we deduce \[\int_{-2\zeta}^{-\zeta}\int_{M}\left(\tau^{2}(|h|^{2}+|\nabla^{2}f|^{2}+|\nabla f|^{4})+f^{2}\right)d\nu_{t}dt\leq C_{\kappa}\zeta;\] in particular, there exists \(\tau_{0}\in[\zeta,2\zeta]\) such that \[\int_{M}\left(\tau_{0}^{2}(|h|^{2}+|\nabla^{2}f|^{2}+|\nabla f|^{4})+f^{2}\right)d\nu_{-\tau_{0}}\leq C_{\kappa}.\] It follows that \[\int_{M}(w-\mathrm{N})^{2}d\nu_{-\tau_{0}}\leq C_{\kappa}+\int_{M}\left(\tau_{0}^{2}(|\nabla^{2}f|^{2}+|\nabla f|^{4}+H^{2})+f^{2}\right)d\nu_{-\tau_{0}}\leq C_{\kappa}. \tag{7.6}\] Thanks to Proposition 3.9 with \(p=2\) and Theorem 3.6, we also possess \[\int_{M}|v-a|^{2}\,d\nu_{-\tau_{0}}\leq 2\tau_{0}\int_{M}|\nabla v|^{2}d\nu_{-\tau_{0}}\leq C\tau_{0}\leq C\zeta, \tag{7.7}\] where \(a\in[-1,1]\) is defined as \(a:=\int_{M}v\,d\nu_{-\tau_{0}}\). Therefore, (7.7) and (7.6) lead us to \[\left|\int_{M}v(w-\mathrm{N})d\nu_{-\tau_{0}}\right| \leq\left|a\int_{M}(w-\mathrm{N})d\nu_{-\tau_{0}}\right|+\int_{M}|v-a||w-\mathrm{N}|d\nu_{-\tau_{0}}\] \[\leq\left|\int_{M}w\,d\nu_{-\tau_{0}}-\mathrm{N}\right|+\left(\int_{M}|v-a|^{2}d\nu_{-\tau_{0}}\right)^{1/2}\left(\int_{M}|w-\mathrm{N}|^{2}d\nu_{-\tau_{0}}\right)^{1/2}\] \[\leq\left|\mathcal{W}_{(x_{0},0)}(\tau_{0})-\mathrm{N}\right|+C_{\kappa}\,\zeta^{1/2}\leq\zeta+C_{\kappa}\zeta^{1/2}.\] Combining this with (7.5) tells us that if \(\zeta\leq\overline{\zeta}_{\kappa,\varepsilon}\), then for all \(\tau\in[\varepsilon,\varepsilon^{-1}]\) we have \[\left|\int_{M}v(w-\mathrm{N})\,d\nu_{-\tau}\right|\leq\varepsilon.\] Since \(v\) is arbitrary, we complete the proof by the same argument as in [4, Lemma 7.10]. We provide the following characterization (cf. [4, Proposition 7.1]): **Proposition 7.2**.: _For \(\kappa>0,\varepsilon\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,\varepsilon}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume \([t_{0}-\delta^{-1}r^{2},t_{0}]\subset I\) and \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). If_ \[\mathcal{N}_{(x_{0},t_{0})}(\delta^{-1}r^{2})\geq\mathcal{N}_{(x_{0},t_{0})}(\delta r^{2})-\delta, \tag{7.8}\] _then \((x_{0},t_{0})\) is \((\varepsilon,r)\)-selfsimilar. Conversely, if \((x_{0},t_{0})\) is \((\delta,r)\)-selfsimilar, then for all \(\tau_{1},\tau_{2}\in[\varepsilon r^{2},\varepsilon^{-1}r^{2}]\) we have_ \[\left|\mathcal{N}_{(x_{0},t_{0})}(\tau_{1})-\mathcal{N}_{(x_{0},t_{0})}(\tau_{2})\right|\leq\varepsilon. \tag{7.9}\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). Set \(\mathrm{N}:=\mathcal{N}_{(x_{0},0)}(1)\). We first assume (7.8).
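Recall that \(\tau\mapsto\mathcal{N}_{(x_{0},0)}(\tau)\) is non-increasing; together with (7.8) this gives, for every \(\tau\in[\delta,\delta^{-1}]\), \[\mathrm{N}-\delta\leq\mathcal{N}_{(x_{0},0)}(\delta^{-1})\leq\mathcal{N}_{(x_{0},0)}(\tau)\leq\mathcal{N}_{(x_{0},0)}(\delta)\leq\mathrm{N}+\delta,\] which is the first half of the next display.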
By (4.3), (4.4) and (7.8), \[\left|\mathcal{N}_{(x_{0},0)}(\tau)-\mathrm{N}\right|\leq\delta,\quad\mathcal{ W}_{(x_{0},0)}(\tau)\leq\mathcal{N}_{(x_{0},0)}(\tau)\leq\mathrm{N}+\delta \tag{7.10}\] for all \(\tau\in[\delta,\delta^{-1}]\). Let \(\zeta\in(\delta,1)\). By (4.2), (4.1) and (7.10), if \(\delta\leq\overline{\delta}_{\kappa,\zeta}\), then \[\mathcal{W}_{(x_{0},0)}(\tau) \geq\frac{1}{\delta^{-1}-\tau}\int_{\tau}^{\delta^{-1}}\mathcal{W }_{(x_{0},0)}(\sigma)d\sigma=\frac{1}{\delta^{-1}-\tau}\left(\delta^{-1} \mathcal{N}_{(x_{0},0)}(\delta^{-1})-\tau\mathcal{N}_{(x_{0},0)}(\tau)\right)\] \[\geq\frac{\delta^{-1}}{\delta^{-1}-\tau}\mathcal{N}_{(x_{0},0)}( \delta^{-1})\geq\frac{\delta^{-1}}{\delta^{-1}-\zeta^{-1}}(\mathrm{N}-\delta) \geq\mathrm{N}-\zeta \tag{7.11}\] for all \(\tau\in[\zeta,\zeta^{-1}]\); in particular, \(\left|\mathcal{W}_{(x_{0},0)}(\tau)-\mathrm{N}\right|\leq\zeta\) for all \(\tau\in[\zeta,\zeta^{-1}]\). By Lemma 7.1, if \(\zeta\leq\overline{\zeta}_{\kappa,\varepsilon}\), then we arrive at (7.2). Secondly, (4.2) yields \[\int_{-\varepsilon^{-1}}^{-\varepsilon}\int_{M}\left(2\tau\left|\Phi\right|^{2 }+\tau\mathcal{D}(\nabla f)\right)d\nu_{(x_{0},0);t}dt=\mathcal{W}_{(x_{0},0)} (\varepsilon)-\mathcal{W}_{(x_{0},0)}(\varepsilon^{-1})\leq\varepsilon,\] where \(\Phi\) is defined as (2.6). Hence, (7.1). Finally, (7.3) is a consequence of Lemma 2.1. We next show (7.9). Assume that \((x_{0},0)\) is \((\delta,1)\)-selfsimilar. Integrating (7.2), we have \[\left|\mathcal{W}_{(x_{0},0)}(\tau)-\mathrm{N}\right|\leq\delta \tag{7.12}\] for all \(\tau\in[\delta,\delta^{-1}]\). By (4.4) and (7.12), we see \[\mathcal{N}_{(x_{0},0)}(\tau)\geq\mathcal{W}_{(x_{0},0)}(\tau)\geq\mathrm{N}-\delta\] for all \(\tau\in[\delta,\delta^{-1}]\). Furthermore, (4.2), (4.1) and (7.12), \[\mathrm{N}+\delta\geq\mathcal{W}_{(x_{0},0)}(\delta) \geq\frac{1}{\tau-\delta}\int_{\delta}^{\tau}\mathcal{W}_{(x_{0},0 )}(\sigma)d\sigma\] \[=\frac{1}{\tau-\delta}\left(\tau\mathcal{N}_{(x_{0},0)}(\tau)- \delta\mathcal{N}_{(x_{0},0)}(\delta)\right)\geq\frac{\varepsilon}{\varepsilon- \delta}\mathcal{N}_{(x_{0},0)}(\tau)\] for all \(\tau\in[\varepsilon,\varepsilon^{-1}]\). Therefore, if \(\delta\leq\overline{\delta}_{\kappa,\varepsilon}\), then \(|\mathcal{N}_{(x_{0},0)}(\tau)-\mathrm{N}|\leq\varepsilon/2\) for all \(\tau\in[\varepsilon,\varepsilon^{-1}]\), and this proves (7.9). ### Improved selfsimilarity We have the following (cf. [4, Proposition 7.3]): **Proposition 7.3**.: _For \(\kappa>0,\varepsilon\in(0,1)\), if \(\theta\in[0,\overline{\theta}],\delta\leq\overline{\delta}_{\kappa,\varepsilon}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume that \((x_{0},t_{0})\) is \((\delta,r)\)-selfsimilar, and \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). 
Then we have_ \[\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2}}\int_ {M}\tau\left|h+\nabla^{2}f-\frac{1}{2\tau}g\right|^{2}e^{\theta f}d\nu_{(x_{0 },t_{0});t}dt\leq\varepsilon,\] \[r^{-2}\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2 }}\int_{M}\left|\tau(-|\nabla f|^{2}+\Delta f)+f-\frac{n}{2}-\mathrm{N} \right|e^{\theta f}d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon,\] \[r^{-2}\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2 }}\int_{M}\left|\square(\tau f)+\frac{n}{2}+\mathrm{N}\right|e^{\theta f}d\nu_ {(x_{0},t_{0});t}dt\leq\varepsilon,\] \[r^{-2}\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2 }}\int_{M}\left|-\tau(|\nabla f|^{2}+H)+f-\mathrm{N}\right|e^{\theta f}d\nu_ {(x_{0},t_{0});t}dt\leq\varepsilon,\] _where \(\mathrm{N}:=\mathcal{N}_{(x_{0},t_{0})}(r^{2})\)._ Proof.: We can prove this assertion by using the calculation technique stated in the proof of [4, Proposition 7.3] together with Proposition 6.5. ### Almost monotonicity In this subsection, we prove an almost monotonicity property for an integral quantity. To do so, let us show the following formula: **Lemma 7.4**.: _Let \((x_{0},t_{0})\in M\times I\). Then for all \(t\in I\cap(-\infty,t_{0})\) we have_ \[\frac{d}{dt}\int_{M}\tau H\,d\nu_{(x_{0},t_{0});t} =2\int_{M}\tau\left\langle\Phi,h+\nabla^{2}f-df\otimes df\right\rangle d \nu_{(x_{0},t_{0});t}\] \[\quad+2\int_{M}\tau(\mathrm{tr}\,\Phi)(|\nabla f|^{2}-\Delta f)d \nu_{(x_{0},t_{0});t}+\int_{M}\tau\mathcal{D}(\nabla f)d\nu_{(x_{0},t_{0});t},\] _where \(\Phi\) is defined as (2.6)._ Proof.: We set \(\nu:=\nu_{(x_{0},t_{0})}\). From (2.5) we deduce \[\frac{d}{dt}\int_{M}\tau H\,d\nu_{t} =\int_{M}\square(\tau H)\,d\nu_{t}=\int_{M}\tau\square H-H\,d\nu_{t}\] \[=\int_{M}(2\tau|h|^{2}+\tau\mathcal{D}(0)-H)d\nu_{t}\] \[=\int_{M}\left(2\tau\left\langle\Phi,h\right\rangle-2\tau\langle h,\nabla^{2}f\rangle+\tau\mathcal{D}(0)\right)d\nu_{t}. \tag{7.13}\] By direct calculations, we see \[\mathrm{div}(h(\nabla f))=(\mathrm{div}\,h)(\nabla f)+\langle h,\nabla^{2}f \rangle,\quad\mathrm{div}(\Phi(\nabla f))=(\mathrm{div}\,\Phi)(\nabla f)+\langle \Phi,\nabla^{2}f\rangle. \tag{7.14}\] The first one in (7.14) and (2.5) yield \[\int_{M}\langle h,\nabla^{2}f\rangle\,d\nu_{t} =\int_{M}\left(h(\nabla f,\nabla f)-(\operatorname{div}h)(\nabla f )\right)d\nu_{t}\] \[=\int_{M}\left(h(\nabla f,\nabla f)-\frac{1}{2}\langle\nabla H, \nabla f\rangle\right)d\nu_{t}\] \[\quad+\int_{M}\left(\frac{1}{2}(\operatorname{Ric}-h)(\nabla f, \nabla f)-\frac{1}{4}\mathcal{D}(\nabla f)+\frac{1}{4}\mathcal{D}(0)\right)d \nu_{t}. \tag{7.15}\] Also, the second one in (7.14), (2.5) and the Ricci identity imply \[\int_{M}\langle\Phi,\nabla^{2}f\rangle\,d\nu_{t} =\int_{M}\left(\Phi(\nabla f,\nabla f)-(\operatorname{div}\Phi)( \nabla f)\right)d\nu_{t}\] \[=\int_{M}\left(\langle\Phi,df\otimes df\rangle-(\operatorname{ div}h)(\nabla f)-(\operatorname{div}\nabla^{2}f)(\nabla f)\right)d\nu_{t}\] \[=\int_{M}\langle\Phi,df\otimes df\rangle d\nu_{t}\] \[\quad+\int_{M}\left(-\frac{1}{2}\langle\nabla H,\nabla f\rangle+ \frac{1}{2}(\operatorname{Ric}-h)(\nabla f,\nabla f)-\frac{1}{4}\mathcal{D}( \nabla f)+\frac{1}{4}\mathcal{D}(0)\right)d\nu_{t}\] \[\quad+\int_{M}\left(-\langle\nabla\Delta f,\nabla f\rangle- \operatorname{Ric}(\nabla f,\nabla f)\right)d\nu_{t}. 
\tag{7.16}\] Combining (7.15) and (7.16) tells us that \[\int_{M}\left(\langle h,\nabla^{2}f\rangle+\langle\Phi,\nabla^{2} f\rangle\right)\,d\nu_{t}\] \[=\int_{M}\left(\langle\Phi,df\otimes df\rangle-\langle\nabla( \operatorname{tr}\Phi),\nabla f\rangle-\frac{1}{2}\mathcal{D}(\nabla f)+ \frac{1}{2}\mathcal{D}(0)\right)d\nu_{t}. \tag{7.17}\] By substituting (7.17) into (7.13), we obtain \[\frac{d}{dt}\int_{M}\tau H\,d\nu_{t} =2\tau\int_{M}\left\langle\Phi,h+\nabla^{2}f-df\otimes df\right\rangle d \nu_{t}+2\tau\int_{M}\langle\nabla(\operatorname{tr}\Phi),\nabla f\rangle d\nu _{t}\] \[\quad+\tau\int_{M}\mathcal{D}(\nabla f)d\nu_{t}.\] From integration by parts (2.5), we conclude the desired equation. We are now in a position to conclude the following (cf. [4, Proposition 7.9]): **Proposition 7.5**.: _For \(\kappa>0,\varepsilon\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,\varepsilon}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume that \((x_{0},t_{0})\) is \((\delta,r)\)-selfsimilar, and \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). Then for all \(t_{1},t_{2}\in[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\) with \(t_{1}\leq t_{2}\), we have_ \[\int_{M}\tau H\,d\nu_{(x_{0},t_{0});t_{1}}\leq\int_{M}\tau H\,d\nu_{(x_{0},t_{0 });t_{2}}+\varepsilon.\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). We set \(\nu:=\nu_{(x_{0},0)}\). By Lemma 7.4 and \(\mathcal{D}\geq 0\), \[\frac{d}{dt}\int_{M}\tau H\,d\nu_{t}\geq 2\int_{M}\tau\left\langle\Phi,h+ \nabla^{2}f-df\otimes df\right\rangle d\nu_{t}+2\int_{M}\tau(\operatorname{ tr}\Phi)(|\nabla f|^{2}-\Delta f)d\nu_{t}. \tag{7.18}\] It follows that \[\int_{M}\tau Hd\nu_{t_{2}}-\int_{M}\tau Hd\nu_{t_{1}} \geq 2\int_{t_{1}}^{t_{2}}\int_{M}\tau\left\langle\Phi,h+\nabla^{2}f- df\otimes df\right\rangle d\nu_{t}dt\] \[+2\int_{t_{1}}^{t_{2}}\int_{M}\tau(\operatorname{tr}\Phi)(| \nabla f|^{2}-\Delta f)\,d\nu_{t}dt.\] By the Cauchy-Schwarz inequality, we have \[\left|\int_{t_{1}}^{t_{2}}\int_{M}\tau\left\langle\Phi,h+\nabla^{ 2}f-df\otimes df\right\rangle d\nu_{t}dt\right|\] \[\leq\left(\int_{t_{1}}^{t_{2}}\int_{M}\tau|\Phi|^{2}\,d\nu_{t}dt \right)^{1/2}\left(C\int_{t_{1}}^{t_{2}}\int_{M}\tau(|h|^{2}+|\nabla^{2}f|^{2} +|\nabla f|^{4})\,d\nu_{t}dt\right)^{1/2}\leq C_{\kappa,\varepsilon}\,\delta^ {1/2}.\] In the same manner, we obtain \[\left|\int_{t_{1}}^{t_{2}}\int_{M}\tau(\operatorname{tr}\Phi)(| \nabla f|^{2}-\Delta f)\,d\nu_{t}dt\right|\] \[\leq\left(\int_{t_{1}}^{t_{2}}\int_{M}\tau(\operatorname{tr}\Phi) ^{2}\,d\nu_{t}dt\right)^{1/2}\left(\int_{t_{1}}^{t_{2}}\int_{M}\tau(|\nabla f |^{2}-\Delta f)^{2}\,d\nu_{t}dt\right)^{1/2}\] \[\leq\left(C\int_{t_{1}}^{t_{2}}\int_{M}\tau|\Phi|^{2}\,d\nu_{t}dt \right)^{1/2}\left(C\int_{t_{1}}^{t_{2}}\int_{M}\tau(|\nabla f|^{4}+|\nabla^{2 }f|^{2})\,d\nu_{t}dt\right)^{1/2}\leq C_{\kappa,\varepsilon}\,\delta^{1/2}.\] Hence, we arrive at \[\int_{M}\tau Hd\nu_{t_{2}}-\int_{M}\tau Hd\nu_{t_{1}}\geq-C_{\kappa, \varepsilon}\,\delta^{1/2}.\] This completes the proof. As a corollary, we obtain the following almost constancy property for Ricci flow: **Corollary 7.6**.: _For \(\kappa>0,\varepsilon\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,\varepsilon}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a Ricci flow. Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume that \((x_{0},t_{0})\) is \((\delta,r)\)-selfsimilar, and \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). 
Then for all \(t_{1},t_{2}\in[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\), we have_ \[\left|\int_{M}\tau H\,d\nu_{(x_{0},t_{0});t_{1}}-\int_{M}\tau H\,d\nu_{(x_{0},t_{0});t_{2}}\right|\leq\varepsilon.\] Proof.: In the case of the Ricci flow, the Müller quantity vanishes in virtue of the evolution formula for the scalar curvature and the contracted second Bianchi identity; in particular, the equality holds in (7.18). Therefore, the same calculation as in the proof of Proposition 7.5 leads us to the desired claim. ### Distance expansion estimate We close this section with the following distance expansion estimate (cf. [4, Proposition 9.1]): **Proposition 7.7**.: _For \(\kappa,D>0,\alpha\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,D,\alpha}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\). For \(r>0\), we assume that \((x_{0},t_{0})\) is \((\delta,r)\)-selfsimilar, \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\) and \(0\leq t_{1}-t_{0}\leq\alpha^{-1}r^{2}\). Assume_ \[W_{s_{0}}(\nu_{(x_{0},t_{0});s_{0}},\nu_{(x_{1},t_{1});s_{0}})\leq Dr \tag{7.19}\] _for some \(s_{0}\in[t_{0}-\alpha^{-1}r^{2},t_{0}-\alpha r^{2}]\). Then we have_ \[W_{s_{0}+\alpha r^{2}/4}(\nu_{(x_{0},t_{0});s_{0}+\alpha r^{2}/4},\nu_{(x_{1},t_{1});s_{0}+\alpha r^{2}/4})\leq C_{\kappa,D,\alpha}r.\] Proof.: We may assume \(r=1\). For \(i=0,1\), we set \(\nu^{i}:=\nu_{(x_{i},t_{i})}\). Let \(\delta\leq\alpha/2\). By Lemma 4.7 we have \(\mathcal{N}_{(x_{1},t_{1})}(1)\geq-C_{\kappa,D,\alpha}\). From Lemma 6.2, we conclude \[f,f_{1}\geq-C_{\kappa,D,\alpha} \tag{7.20}\] on \([t_{0}-\alpha^{-1},t_{0})\), where \(f,f_{1}\) are the potentials for \((x_{0},t_{0}),(x_{1},t_{1})\), respectively. Moreover, we will denote by \(\tau,\tau_{1}\) the parameters for \((x_{0},t_{0}),(x_{1},t_{1})\), respectively. Set \(s_{1}:=s_{0}-\alpha/4\). We first prove that there exists \(\Omega\subset M\) such that \[f(\cdot,s_{1}),f_{1}(\cdot,s_{1})\leq C_{\kappa,D,\alpha},\quad\nu^{1}_{s_{1}}(\Omega)\geq C_{\kappa,D,\alpha} \tag{7.21}\] on \(\Omega\). Let \((z,s_{1})\) be a center of \((x_{0},t_{0})\) (see Proposition 3.2). By Propositions 3.3 and 5.4, \[\nu^{0}_{s_{1}}(B)\geq\frac{1}{2},\quad m_{s_{1}}(B)\leq C_{\alpha}(t_{0}-s_{1})^{n/2}, \tag{7.22}\] where \(B:=B(z,s_{1},\sqrt{2\mathcal{C}_{n}(t_{0}-s_{1})})\). For \(a>0\), we define \(\Omega_{0}:=\{f(\cdot,s_{1})\leq a\}\cap B\). In virtue of (7.22), we possess \[\nu^{0}_{s_{1}}(\Omega_{0})\geq\frac{1}{2}-\int_{B\setminus\Omega_{0}}(4\pi(t_{0}-s_{1}))^{-n/2}e^{-f}dm_{s_{1}}\geq\frac{1}{2}-Ce^{-a}(t_{0}-s_{1})^{-n/2}m_{s_{1}}(B)\geq\frac{1}{2}-C_{\alpha}e^{-a};\] in particular, if \(a\geq\underline{a}_{\alpha}\), then (7.20) implies \[\nu^{0}_{s_{1}}(\Omega_{0})\geq\frac{1}{4}. \tag{7.23}\] Let us verify \[\nu^{1}_{s_{1}}(\Omega_{0})\geq C_{\kappa,D,\alpha}. \tag{7.24}\] Define a function \(\phi:M\times[s_{1},s_{0}]\to[0,1]\) by \(\phi(y,s):=\nu_{(y,s);s_{1}}(\Omega_{0})\), and set \(\psi:=\phi(\cdot,s_{0})\). Now, (2.2) and (7.23) imply \[\int_{M}\psi\,d\nu^{0}_{s_{0}}=\nu^{0}_{s_{1}}(\Omega_{0})\geq\frac{1}{4}. \tag{7.25}\] Let \((y_{0},s_{0}),(y_{1},s_{0})\) be centers of \((x_{0},t_{0}),(x_{1},t_{1})\), respectively.
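By Proposition 3.2, these centers satisfy \(W_{s_{0}}(\delta_{y_{0}},\nu^{0}_{s_{0}})\leq\sqrt{\mathcal{C}_{n}(t_{0}-s_{0})}\) and \(W_{s_{0}}(\delta_{y_{1}},\nu^{1}_{s_{0}})\leq\sqrt{\mathcal{C}_{n}(t_{1}-s_{0})}\), which we combine with the triangle inequality for \(W_{s_{0}}\) in the next display.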
Lemma 2.6 implies \[d_{s_{0}}(y_{0},y_{1}) \leq W_{s_{0}}(\delta_{y_{0}},\nu^{0}_{s_{0}})+W_{s_{0}}(\nu^{0}_ {s_{0}},\nu^{1}_{s_{0}})+W_{s_{0}}(\nu^{1}_{s_{0}},\delta_{y_{1}})\] \[\leq\sqrt{\mathcal{C}_{n}(t_{0}-s_{0})}+D+\sqrt{\mathcal{C}_{n}( t_{1}-s_{0})}\leq C_{D,\alpha}. \tag{7.26}\] Furthermore, by Proposition 3.3, \[\nu^{0}_{s_{0}}\left(M\setminus B(y_{0},s_{0},\sqrt{8\mathcal{C}_{n}(t_{0}-s_{ 0})})\right)\leq\frac{1}{8},\quad\nu^{1}_{s_{0}}\left(B(y_{1},s_{0},\sqrt{2 \mathcal{C}_{n}(t_{1}-s_{0})})\right)\geq\frac{1}{2}. \tag{7.27}\] By (7.25), (7.27) and \(\psi\leq 1\), we see \[\int_{B(y_{0},s_{0},\sqrt{8\mathcal{C}_{n}(t_{0}-s_{0})})}\psi\,d\nu^{0}_{s_{ 0}}\geq\frac{1}{4}-\nu^{0}_{s_{0}}\left(M\setminus B(y_{0},s_{0},\sqrt{8 \mathcal{C}_{n}(t_{0}-s_{0})})\right)\geq\frac{1}{8};\] in particular, \(\psi\geq 1/8\) at a point in \(B(y_{0},s_{0},\sqrt{8\mathcal{C}_{n}(t_{0}-s_{0})})\). By Theorem 3.6, for \(T\geq 0\), the function \(\Psi^{-1}_{T+\alpha/4}\circ\psi\) is \(1\)-Lipschitz, here \(\Psi\) is defined as (3.2); in particular, \(\psi\) is \(C(T+\alpha/4)^{-1/2}\)-Lipschitz. Hence, if \(T\geq\underline{T}_{\kappa,D,\alpha}\), then (7.26) yields \(\psi\geq C_{\kappa,D,\alpha}\) on \(B(y_{1},s_{0},\sqrt{2\mathcal{C}_{n}(t_{1}-s_{0})})\). Now, (2.2) and (7.27) lead us to \[\nu^{1}_{s_{1}}(\Omega_{0})=\int_{M}\psi\,d\nu^{1}_{s_{0}}\geq\int_{B(y_{1},s_ {0},\sqrt{2\mathcal{C}_{n}(t_{1}-s_{0})})}\psi\,d\nu^{1}_{s_{0}}\geq C_{\kappa, D,\alpha}.\] This proves (7.24). We define \(\Omega:=\{f_{1}(\cdot,s_{1})\leq a\}\cap\Omega_{0}\). With the help of (7.24) and (7.22), \[\nu^{1}_{s_{1}}(\Omega) \geq C_{\kappa,D,\alpha}-\int_{\Omega_{0}\setminus\Omega}(4\pi(t_{ 1}-s_{1}))^{-n/2}e^{-f_{1}}dm_{s_{1}}\] \[\geq C_{\kappa,D,\alpha}-Ce^{-a}(t_{0}-s_{1})^{-n/2}m_{s_{1}}(B) \geq C_{\kappa,D,\alpha}-C_{\alpha}e^{-a};\] in particular, if \(a\geq\underline{a}_{\kappa,D,\alpha}\), then we conclude (7.21). For \(s:=s_{0}+\alpha/4\) and \(s_{2}:=s_{1}-\alpha/4\), we define \(u\in C^{\infty}(M\times[s_{1},s])\) by \[u:=\frac{1}{(4\pi(t-s_{2}))^{n/2}}\exp\left(-\frac{\tau(f-\mathrm{N})}{t-s_{2 }}\right),\] where \(\mathrm{N}:=\mathcal{N}_{(x_{0},t_{0})}(1)\). Note that for every \(t\in[s_{1},s]\), we have \(\tau\in[3\alpha/4,\alpha^{-1}+\alpha/4],t-s_{2}\in[\alpha/4,3\alpha/4]\), and \[1\leq\frac{\tau}{t-s_{2}}\leq\frac{\alpha^{-1}+\alpha/4}{\alpha/4}.\] By (7.20) and (7.21), we also see \[\int_{M}u\,d\nu^{1}_{s_{1}}\geq\int_{\Omega}u\,d\nu^{1}_{s_{1}}\geq C_{\kappa,D,\alpha},\quad u\leq C_{\kappa,D,\alpha}(4\pi\tau)^{-n/2}e^{-f}. 
\tag{7.28}\] By direct calculations, \[\begin{split}\Box u&=-\left\{\frac{1}{t-s_{2}} \left(\Box(\tau f)+\frac{n}{2}+\mathrm{N}\right)-\frac{\tau}{(t-s_{2})^{2}} \left(-\tau(|\nabla f|^{2}+H)+f-\mathrm{N}\right)-\frac{\tau^{2}}{(t-s_{2})^{2 }}H\right\}u\\ &\geq-C_{\kappa,D,\alpha}\left(\left|\Box(\tau f)+\frac{n}{2}+ \mathrm{N}\right|+\tau\left|-\tau(|\nabla f|^{2}+H)+f-\mathrm{N}\right|+ \delta\right)u.\end{split}\] This together with (2.5), (7.20) and (7.28) implies \[\begin{split}&\frac{d}{dt}\int_{M}u\,d\nu^{1}_{t}=\int_{M}\Box u \,d\nu^{1}_{t}\\ &\geq-C_{\kappa,D,\alpha}\int_{M}\left(\left|\Box(\tau f)+\frac {n}{2}+\mathrm{N}\right|+\tau\left|-\tau(|\nabla f|^{2}+H)+f-\mathrm{N}\right|+ \delta\right)(4\pi\tau)^{-n/2}e^{-f}d\nu^{1}_{t}\\ &=-C_{\kappa,D,\alpha}\int_{M}\left(\left|\Box(\tau f)+\frac{n}{ 2}+\mathrm{N}\right|+\tau\left|-\tau(|\nabla f|^{2}+H)+f-\mathrm{N}\right|+ \delta\right)(4\pi\tau_{1})^{-n/2}e^{-f_{1}}d\nu^{0}_{t}\\ &\geq-C_{\kappa,D,\alpha}\int_{M}\left(\left|\Box(\tau f)+\frac {n}{2}+\mathrm{N}\right|+\tau\left|-\tau(|\nabla f|^{2}+H)+f-\mathrm{N}\right|+ \delta\right)d\nu^{0}_{t}.\end{split}\] Fix \(\eta\in(0,1)\). By Proposition 7.3 and (7.28), if \(\delta\leq\overline{\delta}_{\kappa,D,\alpha,\eta}\) and \(\eta\leq\overline{\eta}_{\kappa,D,\alpha}\), then \[\int_{M}u\,d\nu^{1}_{s}\geq\int_{M}u\,d\nu^{1}_{s_{1}}-\eta\geq C_{\kappa,D, \alpha}. \tag{7.29}\] From (7.28) and (7.29), it follows that \[\int_{M}(4\pi)^{-n}(\tau\tau_{1})^{-n/2}e^{-f-f_{1}}dm_{s}=\int_{M}(4\pi\tau) ^{-n/2}e^{-f}d\nu^{1}_{s}\geq C_{\kappa,D,\alpha}\int_{M}u\,d\nu^{1}_{s}\geq C _{\kappa,D,\alpha}. \tag{7.30}\] Let \((z_{0},s),(z_{1},s)\) be centers of \((x_{0},t_{0}),(x_{1},t_{1})\), respectively. We put \(d:=d_{s}(z_{0},z_{1})\). From (7.20) and (7.30), we deduce \[\begin{split}&\nu^{0}_{s}(M\setminus B(z_{0},s,d/2))+\nu^{1}_{s}(M \setminus B(z_{1},s,d/2))\\ &\geq C_{\kappa,D,\alpha}\left(\int_{M\setminus B(z_{0},s,d/2)}( 4\pi)^{-n}(\tau\tau_{1})^{-n/2}e^{-f-f_{1}}dm_{s}+\int_{M\setminus B(z_{1},s,d/2)}(4\pi)^{-n}(\tau\tau_{1})^{-n/2}e^{-f-f_{1}}dm_{s}\right)\\ &\geq C_{\kappa,D,\alpha}\int_{M}(4\pi)^{-n}(\tau\tau_{1})^{-n/2 }e^{-f-f_{1}}dm_{s}\geq C_{\kappa,D,\alpha}.\end{split}\] This and (3.1) tell us that \[C_{\kappa,D,\alpha}\leq\nu_{s}^{0}(M\setminus B(z_{0},s,d/2))+\nu_{s}^{1}(M \setminus B(z_{1},s,d/2))\leq\frac{\mathcal{C}_{n}(t_{0}-s)+\mathcal{C}_{n}(t_ {1}-s)}{(d/2)^{2}}\leq\frac{C_{\alpha}}{d^{2}};\] in particular, \(d\leq C_{\kappa,D,\alpha}\). Lemma 2.6 implies \[W_{s}(\nu_{s}^{0},\nu_{s}^{1})\leq W_{s}(\nu_{s}^{0},\delta_{z_{0}})+d+W_{s}( \delta_{z_{1}},\nu_{s}^{1})\leq\sqrt{\mathcal{C}_{n}(t_{0}-s)}+d+\sqrt{ \mathcal{C}_{n}(t_{1}-s)}\leq C_{\kappa,D,\alpha}.\] Thus, we complete the proof. ## 8. Almost static points For \(\varepsilon\in(0,1),r>0\), a point \((x_{0},t_{0})\in M\times I\) is called \((\varepsilon,r)\)_-static_ if the following holds: 1. \([t_{0}-\varepsilon^{-1}r^{2},t_{0}]\subset I\); 2. we have \[r^{2}\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2}}\int_{M}|h|^ {2}d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon;\] 3. for all \(t\in[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\), we have \[r^{2}\int_{M}H\,d\nu_{(x_{0},t_{0});t}\leq\varepsilon;\] 4. \(H\geq-\varepsilon r^{-2}\) on \(M\times[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\). Our first main result is the following almost static cone splitting theorem, which has been formulated by Bamler [4] for Ricci flow (cf. 
[4, Proposition 10.1]): **Theorem 8.1**.: _For \(\kappa,D>0,\alpha,\varepsilon\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,D,\alpha,\varepsilon}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\). For \(r>0\), we assume that \((x_{0},t_{0}),(x_{1},t_{1})\) are \((\delta,r)\)-selfsimilar, \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\) and \(\alpha r^{2}\leq t_{1}-t_{0}\leq\alpha^{-1}r^{2}\). If there exists \(s_{0}\in[t_{0}-\alpha^{-1}r^{2},t_{0}-\alpha r^{2}]\) such that_ \[W_{s_{0}}(\nu_{(x_{0},t_{0});s_{0}},\nu_{(x_{1},t_{1});s_{0}})\leq Dr,\] _then \((x_{0},t_{0})\) is \((\varepsilon,r)\)-static._ Proof.: We may assume \(t_{1}=0\) and \(r=1\). For \(i=0,1\), set \(\nu^{i}:=\nu_{(x_{i},t_{i})}\). We also set \(\mathrm{N}_{1}:=\mathcal{N}_{(x_{1},t_{1})}(1)\). Lemma 4.7 implies \(\mathrm{N}_{1}\geq-C_{\kappa,D,\alpha}\). Let \(f,f_{1}\) be the potentials for \((x_{0},t_{0}),(x_{1},t_{1})\), respectively. Moreover, let \(\tau,\tau_{1}\) be the parameters for \((x_{0},t_{0}),(x_{1},t_{1})\), respectively. Let \(\theta\in(0,\overline{\theta}]\) be a constant obtained in Proposition 7.3. We fix \(\zeta\in(0,1)\). By iterating Propositions 3.1 and 7.7, if \(\delta\leq\overline{\delta}_{\kappa,D,\alpha,\zeta}\), then it holds that \(W_{t_{0}-\zeta}(\nu_{t_{0}-\zeta}^{0},\nu_{t_{0}-\zeta}^{1})\leq C_{\kappa,D,\alpha,\zeta}\). We also fix \(\xi\in(0,1)\). If \(\zeta\leq\min\{\xi/2,(\mathfrak{C}\theta(1-\theta)^{-1}\xi)/2\}\), then for every \(t\in[t_{0}-1,t_{0}-\xi]\), we have \((t_{0}-\zeta)-t\geq\xi/2\) and \[H(\cdot,t)\geq-\delta\geq-\frac{\xi}{2}\geq-((t_{0}-\zeta)-t)^{-1},\] \[W_{t_{0}-\zeta}(\nu_{(x_{0},t_{0});t_{0}-\zeta},\nu_{(x_{1},t_{1});t_{0}-\zeta})\leq\frac{\sqrt{2}C_{\kappa,D,\alpha,\zeta}}{\sqrt{\xi}}\sqrt{(t_{0}-\zeta)-t}\leq C_{1,\kappa,D,\alpha,\zeta,\xi}\sqrt{(t_{0}-\zeta)-t},\] \[t_{0}-(t_{0}-\zeta)=\zeta\leq\frac{2\zeta}{\xi}((t_{0}-\zeta)-t)\leq\mathfrak{C}\frac{\theta}{1-\theta}((t_{0}-\zeta)-t),\] \[-(t_{0}-\zeta)\leq\alpha^{-1}+1\leq\frac{2(\alpha^{-1}+1)}{\xi}((t_{0}-\zeta)-t)\leq C_{1,\kappa,D,\alpha,\zeta,\xi}^{2}((t_{0}-\zeta)-t),\] where \(\mathfrak{C}\) is a constant obtained in Proposition 6.1. By Proposition 6.1, if \(\zeta\leq\overline{\zeta}_{\xi}\), then \[\nu_{t}^{0}\leq C_{\kappa,D,\alpha,\zeta,\xi}e^{\theta f_{1}}\nu_{t}^{1},\quad f_{1}(\cdot,t_{0}-1)\leq C_{\kappa,D,\alpha}+(1-\theta)^{-1}f(\cdot,t_{0}-1) \tag{8.1}\] for every \(t\in[t_{0}-1,t_{0}-\xi]\). Fix \(\eta\in(0,1)\). By Proposition 7.3, if \(\delta\leq\overline{\delta}_{\kappa,D,\eta}\), then \[\int_{t_{1}-\eta^{-1}}^{t_{1}-\eta}\int_{M}\left|\square(\tau_{1}f_{1})+\frac{n}{2}+\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\leq\eta, \tag{8.2}\] \[\int_{t_{1}-\eta^{-1}}^{t_{1}-\eta}\int_{M}\left|-\tau_{1}(H+|\nabla f_{1}|^{2})+f_{1}-\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\leq\eta.
\tag{8.3}\] By (8.1) and (8.3), if \(\eta\leq\overline{\eta}_{\kappa,D,\alpha,\zeta,\xi}\), then \[\int_{t_{0}-2\xi}^{t_{0}-\xi}\int_{M}\left|-\tau_{1}(H+|\nabla f_{1}|^{2})+f_{1}-\mathrm{N}_{1}\right|d\nu_{t}^{0}dt\] \[\leq C_{\kappa,D,\alpha,\zeta,\xi}\int_{t_{0}-2\xi}^{t_{0}-\xi}\int_{M}\left|-\tau_{1}(H+|\nabla f_{1}|^{2})+f_{1}-\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\] \[\leq C_{\kappa,D,\alpha,\zeta,\xi}\int_{t_{1}-\eta^{-1}}^{t_{1}-\eta}\int_{M}\left|-\tau_{1}(H+|\nabla f_{1}|^{2})+f_{1}-\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\leq C_{\kappa,D,\alpha,\zeta,\xi}\eta\leq\xi;\] in particular, \[\int_{t_{0}-2\xi}^{t_{0}-\xi}\int_{M}\tau_{1}H\,d\nu_{t}^{0}dt \leq\xi-\int_{t_{0}-2\xi}^{t_{0}-\xi}\int_{M}(\tau_{1}|\nabla f_{1}|^{2}-f_{1}+\mathrm{N}_{1})\,d\nu_{t}^{0}dt\] \[\leq C_{\kappa,D,\alpha,\zeta}\,\xi+\int_{t_{0}-2\xi}^{t_{0}-\xi}\int_{M}f_{1}\,d\nu_{t}^{0}dt. \tag{8.4}\] Fix \(s\in[t_{0}-2\xi,t_{0}-\xi]\). By (8.1) and (8.2), if \(\eta\leq\overline{\eta}_{\kappa,D,\alpha,\zeta,\xi}\), then \[\int_{t_{0}-1}^{s}\int_{M}\left|\square(\tau_{1}f_{1})+\frac{n}{2}+\mathrm{N}_{1}\right|d\nu_{t}^{0}dt\] \[\leq C_{\kappa,D,\alpha,\zeta,\xi}\int_{t_{0}-1}^{s}\int_{M}\left|\square(\tau_{1}f_{1})+\frac{n}{2}+\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\] \[\leq C_{\kappa,D,\alpha,\zeta,\xi}\int_{t_{1}-\eta^{-1}}^{t_{1}-\eta}\int_{M}\left|\square(\tau_{1}f_{1})+\frac{n}{2}+\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\leq C_{\kappa,D,\alpha,\zeta,\xi}\,\eta\leq\xi. \tag{8.5}\] Hence, (8.1) and (8.5) together with (2.5) imply \[\int_{M}\tau_{1}f_{1}d\nu_{s}^{0} =\int_{M}\tau_{1}f_{1}d\nu_{t_{0}-1}^{0}+\int_{t_{0}-1}^{s}\int_{M}\square(\tau_{1}f_{1})d\nu_{t}^{0}dt\] \[\leq(1-t_{0})\left(C_{\kappa,D,\alpha}+(1-\theta)^{-1}\int_{M}fd\nu_{t_{0}-1}^{0}\right)+\int_{t_{0}-1}^{s}\int_{M}\square(\tau_{1}f_{1})d\nu_{t}^{0}dt\] \[\leq(1+\alpha^{-1})\left(C_{\kappa,D,\alpha}+(1-\theta)^{-1}\frac{n}{2}\right)+\xi-\int_{t_{0}-1}^{s}\int_{M}\left(\frac{n}{2}+\mathrm{N}_{1}\right)d\nu_{t}^{0}dt\] \[\leq(1+\alpha^{-1})\left(C_{\kappa,D,\alpha}+(1-\theta)^{-1}\frac{n}{2}\right)+\xi+C_{\kappa,D,\alpha}(s-t_{0}+1)\leq C_{\kappa,D,\alpha},\] which leads us to \[\int_{M}f_{1}d\nu_{s}^{0}\leq\frac{C_{\kappa,D,\alpha}}{\tau_{1}}\leq C_{\kappa,D,\alpha}. \tag{8.6}\] From (8.4) and (8.6), we conclude \[\int_{t_{0}-2\xi}^{t_{0}-\xi}\int_{M}\tau_{1}H\,d\nu_{t}^{0}dt\leq C_{\kappa,D,\alpha}\,\xi;\] in particular, there exists \(t_{2}\in[t_{0}-2\xi,t_{0}-\xi]\) such that \[\int_{M}\tau_{1}H\,d\nu^{0}_{t_{2}}\leq C_{\kappa,D,\alpha}.\] Note that \(\tau\leq 2\xi\) and \(\tau_{1}\geq\alpha\) at \(t=t_{2}\). By Proposition 7.5, if \(\delta\leq\overline{\delta}_{\kappa,\varepsilon,\xi}\), then \[\int_{M}\tau H\,d\nu^{0}_{t}\leq\int_{M}\tau H\,d\nu^{0}_{t_{2}}+\xi=\frac{\tau}{\tau_{1}}\int_{M}\tau_{1}H\,d\nu^{0}_{t_{2}}+\xi\leq C_{\kappa,D,\alpha}\,\xi\] for all \(t\in[t_{0}-\varepsilon^{-1},t_{2}]\). Using this bound for \(t=t_{0}-\varepsilon\) and \(H\geq-\delta\) leads us to \[2\int_{t_{0}-\varepsilon^{-1}}^{t_{0}-\varepsilon}\int_{M}|h|^{2}d\nu^{0}_{t}dt=\int_{M}H\,d\nu^{0}_{t_{0}-\varepsilon}-\int_{M}H\,d\nu^{0}_{t_{0}-\varepsilon^{-1}}-\int_{t_{0}-\varepsilon^{-1}}^{t_{0}-\varepsilon}\int_{M}\mathcal{D}(0)d\nu^{0}_{t}dt\leq\frac{C_{\kappa,D,\alpha}\,\xi}{\varepsilon}+\delta.\] If \(\xi\leq\overline{\xi}_{\kappa,D,\alpha,\varepsilon}\) and \(\delta\leq\overline{\delta}_{\varepsilon}\), then we complete the proof. ## 9.
Almost splitting For \(\varepsilon\in(0,1),r>0\), a \((k,\varepsilon,r)\)_-splitting map at \((x_{0},t_{0})\in M\times I\)_ is a map \(\vec{y}=(y_{1},\ldots,y_{k}):M\times[t_{0}-\varepsilon^{-1}r^{2},t_{0}-\varepsilon r^{2}]\to\mathbb{R}^{k}\) with the following properties for all \(i,j=1,\ldots,k\): 1. \([t_{0}-\varepsilon^{-1}r^{2},t_{0}]\subset I\); 2. we have \[r^{-1}\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2}}\int_{M}|\square y_{i}|d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon;\] 3. we have \[r^{-2}\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\varepsilon r^{2}}\int_{M}|\langle\nabla y_{i},\nabla y_{j}\rangle-\delta_{ij}|\,d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon.\] We say that \((x_{0},t_{0})\) is \((k,\varepsilon,r)\)_-split_ if there is a \((k,\varepsilon,r)\)-splitting map. The aim of this section is to prove the following almost splitting theorem, which has been obtained by Bamler [4] for Ricci flow (cf. [4, Proposition 10.8]): **Theorem 9.1**.: _For \(\kappa,D>0,\varepsilon,\xi\in(0,1)\), if \(\beta\leq\overline{\beta},\mathfrak{D}\geq\underline{\mathfrak{D}}_{\kappa,D},\mathfrak{N}\geq\underline{\mathfrak{N}}_{\kappa,D},\delta\leq\overline{\delta}_{\kappa,D,\varepsilon,\xi}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \(\{(x_{i},t_{i})\}_{i=0}^{N-1}\subset M\times I\) with \(t_{0}\leq\cdots\leq t_{N-1}\) and \(N\geq\mathfrak{N}\,\xi^{-k}\). For \(r>0\), we assume \(\{(x_{i},t_{i})\}_{i=0}^{N-1}\) are \((\delta,r)\)-selfsimilar, \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\) and \(0\leq t_{i}-t_{0}\leq\beta\,\xi^{2}r^{2}\) for all \(i\). Assume the following:_ 1. \(W_{t_{0}-r^{2}}(\nu_{(x_{0},t_{0});t_{0}-r^{2}},\nu_{(x_{i},t_{i});t_{0}-r^{2}})\leq Dr\) _for all_ \(i\)_;_ 2. \(W_{t_{0}-2\xi^{2}r^{2}}(\nu_{(x_{i},t_{i});t_{0}-2\xi^{2}r^{2}},\nu_{(x_{j},t_{j});t_{0}-2\xi^{2}r^{2}})\geq\mathfrak{D}\xi r\) _for all_ \(i\neq j\)_._ _Then \((x_{0},t_{0})\) is \((k+1,\varepsilon,r)\)-split._ ### Construction of coordinate functions We begin with the following: **Lemma 9.2**.: _For \(\kappa,D>0,\varepsilon,\xi\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,D,\varepsilon,\xi}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\). For \(r>0\), we assume that \((x_{0},t_{0}),(x_{1},t_{1})\) are \((\delta,r)\)-selfsimilar, \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\) and \(0\leq t_{1}-t_{0}\leq\xi^{2}r^{2}\). We assume_ \[W_{t_{0}-r^{2}}(\nu_{(x_{0},t_{0});t_{0}-r^{2}},\nu_{(x_{1},t_{1});t_{0}-r^{2}})\leq Dr. \tag{9.1}\] _Let \(\tau,\tau_{1}\) be the parameters for \((x_{0},t_{0}),(x_{1},t_{1})\), respectively. Moreover, let \(f,f_{1}\) be the potentials for \((x_{0},t_{0}),(x_{1},t_{1})\), respectively. We define a function_ \[u:=r^{-1}(\tau_{1}f_{1}-\tau f-(\mathrm{N}_{1}-\mathrm{N})\tau), \tag{9.2}\] _where \(\mathrm{N}:=\mathcal{N}_{(x_{0},t_{0})}(r^{2})\) and \(\mathrm{N}_{1}:=\mathcal{N}_{(x_{1},t_{1})}(r^{2})\). Then we have_ \[\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\xi^{2}r^{2}}\int_{M}\tau^{-1/2}|\square u|d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon,\quad\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\xi^{2}r^{2}}\int_{M}|\nabla^{2}u|^{2}d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon. \tag{9.3}\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). For \(i=0,1\), we set \(\nu^{i}:=\nu_{(x_{i},t_{i})}\). Let \(\theta\in(0,\overline{\theta}]\) be a constant obtained in Proposition 7.3. Fix \(\zeta\in(0,1)\).
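Before starting the estimates, we record the elementary identities behind (9.3): since \(\square\tau=\square\tau_{1}=-1\), \[\square u=\left(\square(\tau_{1}f_{1})+\frac{n}{2}+\mathrm{N}_{1}\right)-\left(\square(\tau f)+\frac{n}{2}+\mathrm{N}\right),\qquad\nabla^{2}u=\tau_{1}\nabla^{2}f_{1}-\tau\nabla^{2}f,\] so both quantities in (9.3) are controlled by the improved selfsimilarity of Proposition 7.3 at the two basepoints.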
By iterating Propositions 3.1 and 7.7, if \(\delta\leq\overline{\delta}_{\kappa,D,\zeta}\), then \(W_{-\zeta}(\nu_{-\zeta}^{0},\nu_{-\zeta}^{1})\leq C_{\kappa,D,\zeta}\). If \(\zeta\leq\overline{\zeta}_{\xi}\), then for every \(t\in[-\varepsilon^{-1},-\xi^{2}]\) we see \(-\zeta-t\geq\xi^{2}-\zeta\), and \[H(\cdot,-\zeta)\geq-\delta\geq-(\varepsilon^{-1}-\zeta)^{-1}\geq-(-\zeta-t)^{-1},\] \[W_{-\zeta}(\nu_{-\zeta}^{0},\nu_{-\zeta}^{1})\leq\frac{C_{\kappa,D,\zeta}}{\sqrt{\xi^{2}-\zeta}}\sqrt{-\zeta-t}\leq C_{1,\kappa,D,\xi,\zeta}\sqrt{-\zeta-t},\] \[\zeta\leq\frac{\zeta}{\xi^{2}-\zeta}(-\zeta-t)\leq\mathfrak{C}\frac{\theta}{1-\theta}(-\zeta-t),\] \[t_{1}-(-\zeta)\leq\xi^{2}+\zeta\leq\frac{\xi^{2}+\zeta}{\xi^{2}-\zeta}(-\zeta-t)\leq C_{1,\kappa,D,\xi,\zeta}^{2}(-\zeta-t).\] Therefore, by Proposition 6.1, if \(\zeta\leq\overline{\zeta}_{\xi}\), then we have \[\nu_{t}^{0}\leq C_{\kappa,D,\xi,\zeta}e^{\theta f_{1}}\nu_{t}^{1} \tag{9.4}\] for every \(t\in[-\varepsilon^{-1},-\xi^{2}]\). We fix \(\eta\in(0,1)\). By Proposition 7.3, if \(\delta\leq\overline{\delta}_{\kappa,D,\eta}\), then \[\int_{-\eta^{-1}}^{-\eta}\int_{M}\tau\left|h+\nabla^{2}f-\frac{1}{2\tau}g\right|^{2}d\nu_{t}^{0}dt\leq\eta, \tag{9.5}\] \[\int_{-\eta^{-1}}^{-\eta}\int_{M}\left|\square(\tau f)+\frac{n}{2}+\mathrm{N}\right|d\nu_{t}^{0}dt\leq\eta, \tag{9.6}\] \[\int_{t_{1}-\eta^{-1}}^{t_{1}-\eta}\int_{M}\tau_{1}\left|h+\nabla^{2}f_{1}-\frac{1}{2\tau_{1}}g\right|^{2}e^{\theta f_{1}}d\nu_{t}^{1}dt\leq\eta, \tag{9.7}\] \[\int_{t_{1}-\eta^{-1}}^{t_{1}-\eta}\int_{M}\left|\square(\tau_{1}f_{1})+\frac{n}{2}+\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\leq\eta. \tag{9.8}\] By (9.4), (9.6), (9.8), if \(\eta\leq\overline{\eta}_{\kappa,D,\varepsilon,\xi,\zeta}\), then we obtain \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1/2}|\square u|d\nu_{t}^{0}dt \leq C_{\kappa,D,\xi,\zeta}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1/2}\left|\square(\tau_{1}f_{1})+\frac{n}{2}+\mathrm{N}_{1}\right|e^{\theta f_{1}}d\nu_{t}^{1}dt\] \[\quad+\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1/2}\left|\square(\tau f)+\frac{n}{2}+\mathrm{N}\right|d\nu_{t}^{0}dt\leq C_{\kappa,D,\xi,\zeta}\,\eta.\] This proves the first estimate in (9.3). We next show the second estimate in (9.3). If \(\eta\leq\overline{\eta}_{\xi}\), then \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}|\nabla^{2}u|^{2}d\nu_{t}^{0}dt\] \[=\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\left|\tau_{1}\left(h+\nabla^{2}f_{1}-\frac{1}{2\tau_{1}}g\right)-\tau\left(h+\nabla^{2}f-\frac{1}{2\tau}g\right)+(\tau-\tau_{1})h\right|^{2}d\nu_{t}^{0}dt\] \[\leq C_{\kappa,D,\xi,\zeta}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau_{1}^{2}\left|h+\nabla^{2}f_{1}-\frac{1}{2\tau_{1}}g\right|^{2}e^{\theta f_{1}}d\nu_{t}^{1}dt\] \[\quad+C\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{2}\left|h+\nabla^{2}f-\frac{1}{2\tau}g\right|^{2}d\nu_{t}^{0}dt+Ct_{1}^{2}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}|h|^{2}d\nu_{t}^{0}dt\] \[\leq C_{\kappa,D,\xi,\zeta}\,\eta+Ct_{1}^{2}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}|h|^{2}d\nu_{t}^{0}dt.\] For a fixed \(\alpha\in(0,1)\), we first consider the case of \(t_{1}\in[0,\alpha]\).
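The case distinction below is needed only for the error term carrying the factor \(t_{1}^{2}\): when \(t_{1}\leq\alpha\) the prefactor \(t_{1}^{2}\leq\alpha^{2}\) is already small, whereas when \(t_{1}\geq\alpha\) the almost static estimate of Theorem 8.1 makes \(\int\int|h|^{2}d\nu_{t}^{0}dt\) itself small.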
Proposition 6.5 tells us that if \(\alpha\leq\overline{\alpha}_{\kappa,\varepsilon,\xi}\) and \(\eta\leq\overline{\eta}_{\kappa,D,\varepsilon,\xi,\zeta}\), then \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}|\nabla^{2}u|^{2}d\nu_{t}^{0}dt\leq C _{\kappa,D,\xi,\zeta}\,\eta+C_{\kappa,\xi}\alpha^{2}\leq\varepsilon.\] On the other hand, in the case of \(t_{1}\geq\alpha\), Theorem 8.1 tells us that if \(\delta\leq\overline{\delta}_{\kappa,D,\alpha,\eta}\), then \[\int_{-\eta^{-1}}^{-\eta}\int_{M}|h|^{2}d\nu_{t}^{0}dt\leq\eta.\] Therefore, if \(\eta\leq\overline{\eta}_{\kappa,D,\varepsilon,\xi,\zeta}\), then \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}|\nabla^{2}u|^{2}d\nu_{t}^{0}dt\leq C _{\kappa,D,\xi,\zeta}\,\eta\leq\varepsilon.\] This completes the proof of the second estimate in (9.3). For functions in Lemma 9.2, we further see the following (cf. [4, Claim 10.39]): **Lemma 9.3**.: _For \(\kappa,D>0,\xi\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,D}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\). For \(r>0\), we assume that \((x_{0},t_{0}),(x_{1},t_{1})\) are \((\delta,r)\)-selfsimilar, \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\) and \(0\leq t_{1}-t_{0}\leq\xi^{2}r^{2}\). We further assume (9.1). Let \(u\) be a function defined as (9.2). Then we have_ \[\int_{t_{0}-2\xi^{2}r^{2}}^{t_{0}-\xi^{2}r^{2}}\int_{M}\tau^{-1}|\nabla u|^{2} d\nu_{(x_{0},t_{0});t}dt\leq C_{\kappa,D}.\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). Set \(\nu^{i}:=\nu_{(x_{i},t_{i})}\) for \(i=0,1\). Let \(\theta\in(0,\overline{\theta}]\) be a constant obtained in Proposition 6.5. Fix \(\zeta\in(0,1)\). Using Proposition 7.7, if \(\delta\leq\overline{\delta}_{\kappa,D,\zeta}\), then \(W_{-\zeta^{2}}(\nu_{-\zeta^{2}}^{0},\nu_{-\zeta^{2}}^{1})\leq C_{\kappa,D, \zeta}\xi\). By Proposition 6.1, if \(\zeta\leq\overline{\zeta}\), then \(\nu_{t}^{0}\leq C_{\kappa,D,\zeta}e^{\theta f_{1}}\nu_{t}^{1}\) for every \(t\in[-2\xi^{2},-\xi^{2}]\). Proposition 6.5 yields \[\int_{-2\xi^{2}}^{-\xi^{2}}\int_{M}\tau^{-1}|\nabla u|^{2}d\nu_{t }^{0}dt \leq C\int_{-2\xi^{2}}^{-\xi^{2}}\int_{M}\tau^{-1}\tau_{1}^{2}| \nabla f_{1}|^{2}d\nu_{t}^{0}dt+C\int_{-2\xi^{2}}^{-\xi^{2}}\int_{M}\tau\,| \nabla f|^{2}d\nu_{t}^{0}dt\] \[\leq C_{\kappa,D,\zeta}\int_{-2\xi^{2}}^{-\xi^{2}}\int_{M}| \nabla f_{1}|^{2}e^{\theta f_{1}}d\nu_{t}^{1}dt+C\int_{-2\xi^{2}}^{-\xi^{2}} \int_{M}|\nabla f|^{2}d\nu_{t}^{0}dt\leq C_{\kappa,D,\zeta}.\] This completes the proof. ### Proof of Theorem 9.1 We have the following (cf. [4, Claim 10.31]): **Lemma 9.4**.: _For \(\kappa,D>0,\varepsilon,\xi\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,D,\varepsilon,\xi}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0}),(x_{1},t_{1}),(x_{2},t_{2})\in M\times I\) with \(t_{0}\leq t_{1}\leq t_{2}\). For \(r>0\), we assume that \((x_{0},t_{0}),(x_{1},t_{1}),(x_{2},t_{2})\) are \((\delta,r)\)-selfsimilar, \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\) and \(0\leq t_{i}-t_{0}\leq\xi^{2}r^{2}\) for \(i=1,2\). For \(i=1,2\), we further assume_ \[W_{t_{0}-r^{2}}(\nu_{(x_{0},t_{0});t_{0}-r^{2}},\nu_{(x_{i},t_{i});t_{0}-r^{2} })\leq Dr.\] _Let \(\tau,\tau_{1},\tau_{2}\) be the parameters for \((x_{0},t_{0}),(x_{1},t_{1}),(x_{2},t_{2})\), respectively. Moreover, let \(f,f_{1},f_{2}\) be the potentials for \((x_{0},t_{0}),(x_{1},t_{1}),(x_{2},t_{2})\), respectively. 
For \(i=1,2\), we define_ \[u_{i}:=r^{-1}(\tau_{i}f_{i}-\tau f-(\mathrm{N}_{i}-\mathrm{N})\tau), \tag{9.9}\] _where \(\mathrm{N}:=\mathcal{N}_{(x_{0},t_{0})}(r^{2})\), \(\mathrm{N}_{i}:=\mathcal{N}_{(x_{i},t_{i})}(r^{2})\). Then there are constants \(\mathrm{c}_{1},\mathrm{c}_{2},\mathrm{c}_{3}\in\mathbb{R}\) such that_ \[\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\xi^{2}r^{2}}\int_{M}\tau^{-1}\left|\left|\nabla(u_{1}+u_{2})\right|^{2}-\mathrm{c}_{1}\right|d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon, \tag{9.10}\] \[\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\xi^{2}r^{2}}\int_{M}\tau^{-1}\left|\left|\nabla(u_{1}-u_{2})\right|^{2}-\mathrm{c}_{2}\right|d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon, \tag{9.11}\] \[\int_{t_{0}-\varepsilon^{-1}r^{2}}^{t_{0}-\xi^{2}r^{2}}\int_{M}\tau^{-1}\left|\left\langle\nabla u_{1},\nabla u_{2}\right\rangle-\mathrm{c}_{3}\right|d\nu_{(x_{0},t_{0});t}dt\leq\varepsilon. \tag{9.12}\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). We set \(\nu:=\nu_{(x_{0},0)}\). Let us prove (9.10). We define \(u:=u_{1}+u_{2}\) and fix \(\eta\in(0,1)\). By Propositions 7.2, 7.3 and Lemma 9.2, if \(\delta\leq\overline{\delta}_{\kappa,D,\eta,\xi}\), then \[\int_{-\eta^{-1}}^{-\xi^{2}}\int_{M}\left|\tau(2\Delta f-|\nabla f|^{2}+H)+f-n-\mathrm{N}\right|\,d\nu_{t}dt\leq\eta, \tag{9.13}\] \[\int_{-\eta^{-1}}^{-\xi^{2}}\int_{M}\tau\left|H+\Delta f-\frac{n}{2\tau}\right|\,d\nu_{t}dt\leq\eta, \tag{9.14}\] \[\int_{-\eta^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1/2}|\square u|\,d\nu_{t}dt\leq\eta, \tag{9.15}\] \[\int_{-\eta^{-1}}^{-\xi^{2}}\int_{M}|\nabla^{2}u|^{2}d\nu_{t}dt\leq\eta, \tag{9.16}\] \[\left|\mathcal{N}_{(x_{0},0)}(\tau)-\mathrm{N}\right|\leq\eta \tag{9.17}\] for all \(t\in[-\eta^{-1},-\eta]\). We first prove that there is a constant \(\mathrm{c}\in\mathbb{R}\) such that for all \(t\in[-\varepsilon^{-1},-\xi^{2}]\), \[\left|\int_{M}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N}\right)d\nu_{t}-\mathrm{c}\right|\leq C_{\kappa,\varepsilon,\xi}\,\eta^{1/4}. \tag{9.18}\] Using (2.5) and (2.4), we obtain the following (cf.
[4, Lemma 10.10]): \[\frac{d}{dt}\int_{M}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N} \right)d\nu_{t} =\int_{M}\tau^{-2}u^{2}\left(\tau(2\Delta f-|\nabla f|^{2}+H)+f-n- \mathrm{N}\right)d\nu_{t}\] \[\quad-2\int_{M}\tau^{-1}u^{2}\left(H+\Delta f-\frac{n}{2\tau} \right)d\nu_{t}\] \[\quad+2\int_{M}\tau^{-1}u\square u\left(f-\frac{n}{2}-\mathrm{N} \right)d\nu_{t}\] \[\quad-2\int_{M}\tau^{-1}\left(|\nabla u|^{2}-\int_{M}|\nabla u|^{ 2}d\nu_{t}\right)\left(f-\frac{n}{2}-\mathrm{N}\right)d\nu_{t}\] \[\quad-2\tau^{-1}\left(\int_{M}|\nabla u|^{2}d\nu_{t}\right)\int_ {M}\left(f-\frac{n}{2}-\mathrm{N}\right)d\nu_{t}.\] For \([s_{1},s_{2}]\subset[-\varepsilon^{-1},-\xi^{2}]\), it holds that \[\quad\left|\int_{M}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N} \right)d\nu_{s_{2}}-\int_{M}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N} \right)d\nu_{s_{1}}\right|\] \[\leq\xi^{-4}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{2} \left|\tau(2\Delta f-|\nabla f|^{2}+H)+f-n-\mathrm{N}\right|d\nu_{t}dt\] \[\quad+2\xi^{-4}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{2} \tau\left|H+\Delta f-\frac{n}{2\tau}\right|d\nu_{t}dt\] \[\quad+2\xi^{-1}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{- 1/2}|\square u||u|\left|f-\frac{n}{2}-\mathrm{N}\right|d\nu_{t}dt\] \[\quad+2\xi^{-2}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\left| \nabla u|^{2}-\int_{M}|\nabla u|^{2}d\nu_{t}\right|\left|f-\frac{n}{2}-\mathrm{ N}\right|d\nu_{t}dt\] \[\quad+2\xi^{-2}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\left(\int_{M}| \nabla u|^{2}d\nu_{t}\right)\left|\mathcal{N}_{(x_{0},0)}(\tau)-\mathrm{N} \right|dt. \tag{9.19}\] Let us estimate each term in the right hand side of (9.19). Let \(a>0\). For the first term, (9.13) and Proposition 6.5 imply \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{2}\left|\tau(2 \Delta f-|\nabla f|^{2}+H)+f-n-\mathrm{N}\right|\,d\nu_{t}\] \[\leq a^{-1}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\left|\tau( 2\Delta f-|\nabla f|^{2}+H)+f-n-\mathrm{N}\right|\,d\nu_{t}dt\] \[\quad+a\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\left(\tau(2 \Delta f-|\nabla f|^{2}+H)+f-n-\mathrm{N}\right)^{2}\,d\nu_{t}dt+a\int_{- \varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{8}\,d\nu_{t}dt\] \[\leq a^{-1}\eta+a\,C_{\kappa,\varepsilon,\xi}. \tag{9.20}\] For the second term, (9.14) and Proposition 6.5 lead us to \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{2}\tau\left|H+\Delta f- \frac{n}{2\tau}\right|\,d\nu_{t} \leq a^{-1}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau\left|H+ \Delta f-\frac{n}{2\tau}\right|\,d\nu_{t}dt\] \[\quad+a\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{2}\left(H +\Delta f-\frac{n}{2\tau}\right)^{2}\,d\nu_{t}dt\] \[\quad+a\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{8}\,d\nu_{t }dt\] \[\leq a^{-1}\eta+a\,C_{\kappa,\varepsilon,\xi}. 
\tag{9.21}\] For the third term, (9.15) and Proposition 6.5 yield \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1/2}|\square u|| u|\left|f-\frac{n}{2}-\mathrm{N}\right|\,d\nu_{t} \leq a^{-1}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1/2 }|\square u|\,d\nu_{t}dt\] \[\quad+a\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{2}\left(f- \frac{n}{2}-\mathrm{N}\right)^{2}\,d\nu_{t}dt\] \[\quad+a\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{4}\left(f- \frac{n}{2}-\mathrm{N}\right)^{4}d\nu_{t}dt\] \[\leq a^{-1}\eta+a\,C_{\kappa,\varepsilon,\xi}.\] For the fourth term, (9.16), the Kato inequality, Propositions 6.5 and 3.9 with \(p=1\) imply \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\left|\nabla u|^{2}- \int_{M}|\nabla u|^{2}d\nu_{t}\right|\left|f-\frac{n}{2}-\mathrm{N}\right|d \nu_{t}dt\] \[\leq a^{-1}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\left| \nabla u|^{2}-\int_{M}|\nabla u|^{2}d\nu_{t}\right|d\nu_{t}dt\] \[\quad+a\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\left(|\nabla u |^{2}-\int_{M}|\nabla u|^{2}d\nu_{t}\right)^{2}d\nu_{t}dt+a\int_{-\varepsilon ^{-1}}^{-\xi^{2}}\int_{M}\left(f-\frac{n}{2}-\mathrm{N}\right)^{4}d\nu_{t}dt\] \[\leq Ca^{-1}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}|\nabla| \nabla u|^{2}|d\nu_{t}dt+a\,C_{\kappa,\varepsilon,\xi}\] \[\leq Ca^{-1}\left(\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}| \nabla u|^{2}d\nu_{t}dt\right)^{1/2}\left(\int_{-\varepsilon^{-1}}^{-\xi^{2}} \int_{M}|\nabla^{2}u|^{2}d\nu_{t}dt\right)^{1/2}+a\,C_{\kappa,\varepsilon,\xi}\] \[\leq a^{-1}C_{\kappa,\varepsilon,\xi}\eta^{1/2}+a\,C_{\kappa, \varepsilon,\xi}.\] For the last term, (9.17) and Proposition 6.5 tell us that \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\left(\int_{M}|\nabla u|^{2}d\nu_{t} \right)\left|\mathcal{N}_{(x_{0},0)}(\tau)-\mathrm{N}\right|dt\leq C_{\kappa, \varepsilon,\xi}\,\eta.\] Combining them with (9.19), and choosing \(a=\eta^{1/4}\), we obtain \[\left|\int_{M}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N}\right)d \nu_{s_{2}}-\int_{M}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N}\right)d\nu_{s _{1}}\right|\] \[\leq C_{\kappa,\varepsilon,\xi}\left(a^{-1}\eta+a^{-1}\eta^{1/2}+ a+\eta\right)\leq C_{\kappa,\varepsilon,\xi}\,\eta^{1/4}.\] We arrive at (9.18). Using (2.5), for every \(t\in[-\varepsilon^{-1},-\xi^{2}]\) we also see the following (cf. [4, Lemma 10.10]): \[\int_{M}\left\{|\nabla u|^{2}-\frac{1}{2}\tau^{-1}u^{2}\left(f- \frac{n}{2}-\mathrm{N}\right)\right\}d\nu_{t}\] \[=-\frac{1}{2}\int_{M}\tau^{-1}u^{2}\left\{\tau(2\Delta f-|\nabla f |^{2}+H)+f-n-\mathrm{N}\right\}d\nu_{t}\] \[\quad+\frac{1}{2}\int_{M}u^{2}\left(H+\Delta f-\frac{n}{2\tau} \right)d\nu_{t}-\int_{M}u\Delta u\,d\nu_{t}.\] Let \(b>0\). By (9.20), (9.21) and (9.16), we have \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\left|\int_{M}\left\{|\nabla u |^{2}-\frac{1}{2}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N}\right)\right\}d \nu_{t}\right|\,dt\] \[\leq C\xi^{-2}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{2} \left|\tau(2\Delta f-|\nabla f|^{2}+H)+f-n-\mathrm{N}\right|d\nu_{t}\,dt\] \[\leq C_{\kappa,\varepsilon,\xi}\left(b^{-1}\eta+b\right)+C\left( \int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}u^{2}\,d\nu_{t}\,dt\right)^{1/2} \left(\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}|\nabla^{2}u|^{2}\,d\nu_{t}\, dt\right)^{1/2}\] \[\leq C_{\kappa,\varepsilon,\xi}\left(b^{-1}\eta+b+\eta^{1/2} \right)\leq C_{\kappa,\varepsilon,\xi}\,\eta^{1/2},\] where we choose \(b=\eta^{1/2}\). 
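Together with (9.18), the last two displays show that \(\int_{M}|\nabla u|^{2}d\nu_{t}\) deviates from the constant \(\mathrm{c}/2\) by at most \(C_{\kappa,\varepsilon,\xi}\,\eta^{1/4}\) in the integrated sense.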
It follows that \[\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1}\left|\left| \nabla u\right|^{2}-\frac{\mathrm{c}}{2}\right|d\nu_{t}dt \leq\int_{-\varepsilon^{-1}}^{-\xi^{2}}\int_{M}\tau^{-1}\left| \left|\nabla u\right|^{2}-\int_{M}\left|\nabla u\right|^{2}d\nu_{t}\right|d \nu_{t}dt\] \[\quad+\int_{-\varepsilon^{-1}}^{-\xi^{2}}\tau^{-1}\left|\int_{M} \left\{|\nabla u|^{2}-\frac{1}{2}\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N }\right)\right\}d\nu_{t}\right|\,dt\] \[\quad+\frac{1}{2}\int_{-\varepsilon^{-1}}^{-\xi^{2}}\tau^{-1} \left|\int_{M}\left\{\tau^{-1}u^{2}\left(f-\frac{n}{2}-\mathrm{N}\right)- \mathrm{c}\right\}d\nu_{t}\right|\,dt\] \[\leq C_{\kappa,\varepsilon,\xi}\,\eta^{1/4}.\] This proves (9.10). Once we obtain (9.10), the estimate (9.11) can be derived from the same calculation. Moreover, (9.12) follows from (9.10) and (9.11). We complete the proof. \(\Box\) We also prove the following (cf. [4, Claim 10.32]): **Lemma 9.5**.: _For \(\kappa,D>0,\xi\in(0,1)\), if \(\beta\leq\overline{\beta},\mathfrak{D}\geq\mathfrak{D}_{\kappa,D},\delta\leq \overline{\delta}_{\kappa,D,\xi}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0}),(x_{1},t_{1}),(x_{2},t_{2})\in M\times I\) with \(t_{0}\leq t_{1}\leq t_{2}\). For \(r>0\), we assume that \((x_{0},t_{0}),(x_{1},t_{1}),(x_{2},t_{2})\) are \((\delta,r)\)-selfsimilar, \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\) and \(0\leq t_{i}-t_{0}\leq\beta\xi^{2}r^{2}\) for \(i=1,2\). Suppose the following:_ 1. \(W_{t_{0}-r^{2}}(\nu_{(x_{0},t_{0});t_{0}-r^{2}},\nu_{(x_{i},t_{i});t_{0}-r^{2} })\leq Dr\) _for_ \(i=1,2\)_;_ 2. \(W_{t_{0}-2\xi^{2}r^{2}}(\nu_{(x_{1},t_{1});t_{0}-2\xi^{2}r^{2}},\nu_{(x_{2},t_ {2});t_{0}-2\xi^{2}r^{2}})\geq\mathfrak{D}\xi r\)_._ _For \(i=1,2\), let \(u_{i}\) be a function defined as (9.9). Then we have_ \[\int_{t_{0}-2\xi^{2}r^{2}}^{t_{0}-\xi^{2}r^{2}}\int_{M}\tau^{-1}|\nabla(u_{1}- u_{2})|^{2}d\nu_{(x_{0},t_{0});t}dt\geq\xi^{2}.\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). Set \(\nu:=\nu_{(x_{0},0)}\) and \(\nu^{i}:=\nu_{(x_{i},t_{i})}\) for \(i=1,2\). Fix \(\eta\in(0,1)\). By Proposition 7.3 and Lemma 9.4, if \(\delta\leq\overline{\delta}_{\kappa,D,\eta,\xi}\), then there exists a constant \(\mathrm{c}\in\mathbb{R}\) such that \[\int_{-2\xi^{2}}^{-\xi^{2}}\left\{\int_{M}\tau^{-1}\left|\left|\nabla(u_{1}-u_{ 2})\right|^{2}-\mathrm{c}\right|d\nu_{t}+\sum_{i=1}^{2}\int_{M}\left|-\tau_{i}( \left|\nabla f_{i}\right|^{2}+H)+f_{i}-\mathrm{N}_{i}\right|d\nu_{t}^{i}\right\} dt\leq 3\eta;\] in particular, there exists \(s\in[-2\xi^{2},-\xi^{2}]\) such that \[\int_{M}\tau^{-1}\left|\left|\nabla(\tau_{1}f_{1}-\tau_{2}f_{2}) \right|^{2}-\mathrm{c}\right|d\nu_{s}+\sum_{i=1}^{2}\int_{M}\left|-\tau_{i}( \left|\nabla f_{i}\right|^{2}+H)+f_{i}-\mathrm{N}_{i}\right|d\nu_{s}^{i}\] \[=\int_{M}\tau^{-1}\left|\left|\nabla(u_{1}-u_{2})\right|^{2}- \mathrm{c}\right|d\nu_{s}+\sum_{i=1}^{2}\int_{M}\left|-\tau_{i}(\left|\nabla f _{i}\right|^{2}+H)+f_{i}-\mathrm{N}_{i}\right|d\nu_{s}^{i}\leq C\eta\,\xi^{-2}. \tag{9.22}\] Let \(\zeta\in(0,1)\). Due to Propositions 3.1 and 7.7, if \(\delta\leq\overline{\delta}_{\kappa,D,\xi,\zeta}\), then \(W_{-\zeta\xi^{2}}(\nu_{-\zeta\xi^{2}}^{j},\nu_{-\zeta\xi^{2}}^{2})\leq C_{ \kappa,D,\xi,\zeta}\) for \(j=0,1\). 
If \(\beta\leq(\mathfrak{C}\theta)/2\) and \(\zeta\leq\overline{\zeta}_{\beta}\), then \[H(\cdot,s)\geq-\delta\geq-(1-\zeta)^{-1}\xi^{-2}\geq-(-\zeta\xi^ {2}-s)^{-1},\] \[W_{-\zeta\xi^{2}}(\nu_{-\zeta\xi^{2}}^{j},\nu_{-\zeta\xi^{2}}^{2 })\leq C_{1,\kappa,D,\xi,\zeta}\sqrt{-\zeta\xi^{2}-s},\] \[t_{2}+\zeta\xi^{2}\leq(\beta+\zeta)\xi^{2}\leq\mathfrak{C}\theta (1-\zeta)\xi^{2}\leq\mathfrak{C}\theta(-\zeta\xi^{2}-s),\] \[\zeta\xi^{2}\leq t_{1}+\zeta\xi^{2}\leq(\beta+\zeta)\xi^{2}\leq C _{1,\kappa,D,\xi,\zeta}^{2}(1-\zeta)\xi^{2},\] where \(\theta\in(0,\overline{\theta}]\) and \(\mathfrak{C}\) are constants obtained in Propositions 7.3 and 6.1, respectively. Proposition 6.1 implies \(e^{-\theta f_{2}}\nu_{s}^{2}\leq C_{\kappa,D,\xi}\,\nu_{s}^{j}\) for \(j=0,1\); in particular, (9.22) yields \[\int_{M}\left(\tau^{-1}\left|\left|\nabla(\tau_{1}f_{1}-\tau_{2}f_{2})\right| ^{2}-\mathrm{c}\right|+\sum_{i=1}^{2}\left|-\tau_{i}(\left|\nabla f_{i} \right|^{2}+H)+f_{i}-\mathrm{N}_{i}\right|\right)\,e^{-\theta f_{2}}\,d\nu_{s} ^{2}\leq C_{\kappa,D,\xi}\,\eta. \tag{9.23}\] For each \(i=1,2\), let \((y_{i},s)\) be a center of \((x_{i},t_{i})\) (see Proposition 3.2). By Lemma 2.6 and Proposition 3.1, if \(\mathfrak{D}\geq\mathfrak{D}\), then \[d_{s}(y_{1},y_{2}) \geq W_{-2\xi^{2}}(\nu_{-2\xi^{2}}^{1},\nu_{-2\xi^{2}}^{2})- \sqrt{\mathrm{Var}_{s}(\delta_{y_{1}},\nu_{s}^{1})}-\sqrt{\mathrm{Var}_{s}( \delta_{y_{2}},\nu_{s}^{2})}\] \[\geq\left(\mathfrak{D}-2\sqrt{2\mathcal{C}_{n}}\right)\xi\geq\frac {1}{2}\mathfrak{D}\xi. \tag{9.24}\] From (9.24), Proposition 3.3 and (3.1), it follows that \[\nu_{s}^{2}(B)\geq\frac{1}{2},\quad\nu_{s}^{1}(B)\leq\nu_{s}^{1}\left(M\setminus B (y_{1},s,\mathfrak{D}\xi/4)\right)\leq\frac{\mathrm{Var}_{s}(\nu_{s}^{1}, \delta_{y_{1}})}{(\mathfrak{D}\xi/4)^{2}}\leq\frac{32\mathcal{C}_{n}}{\mathfrak{ D}^{2}}, \tag{9.25}\] where \(B:=B(y_{2},s,2\sqrt{\mathcal{C}_{n}}\xi)\). For \(a>0\), we set \(\Omega_{0}:=\{f_{2}(\cdot,s)\leq a\}\cap B\). By Proposition 5.4, if \(a\geq\underline{a}\), then \[\nu_{s}^{2}(B\setminus\Omega_{0})\leq C\xi^{-n}e^{-a}m_{s}(B)\leq Ce^{-a}\leq \frac{1}{8}. \tag{9.26}\] By (9.23), if \(\eta\leq\overline{\eta}_{\kappa,D,\xi}\), then there is \(\Omega\subset\Omega_{0}\) with \(\nu_{s}^{2}\left(\Omega_{0}\setminus\Omega\right)\leq 1/8\) such that for \(i=1,2\), \[\tau^{-1}\left|\left|\nabla(\tau_{1}f_{1}-\tau_{2}f_{2})\right|^{2}-\mathrm{c} \right|+\sum_{i=1}^{2}\left|-\tau_{i}(\left|\nabla f_{i}\right|^{2}+H)+f_{i}- \mathrm{N}_{i}\right|\leq 1\] on \(\Omega\) at time \(s\). Note that \(\tau\in[\xi^{2},2\xi^{2}],\tau_{i}\in[\xi^{2},3\xi^{2}]\). We also notice that if \(\mathrm{c}<3\xi^{2}\), then \[\tau_{2}|\nabla f_{2}|^{2}\leq\left|\tau_{2}(|\nabla f_{2}|^{2}+H) -f_{2}+\mathrm{N}_{2}\right|-\tau_{2}H+f_{2}-\mathrm{N}_{2}\leq C_{\kappa,D,a},\] \[\tau_{1}|\nabla f_{1}|^{2}\leq 2\xi^{-2}\left(|\nabla(\tau_{1}f_{1}- \tau_{2}f_{2})|^{2}+\tau_{2}^{2}|\nabla f_{2}|^{2}\right)\leq 2\xi^{-2} \left(\mathrm{c}+\tau+C_{\kappa,D,a}\xi^{2}\right)\leq C_{\kappa,D,a},\] \[|\tau_{1}|\nabla f_{1}|-\tau_{2}|\nabla f_{2}||\leq(\mathrm{c}+ \tau)^{1/2}\leq C\xi \tag{9.27}\] on \(\Omega\). In view of (9.25) and (9.27), we also possess \(\nu_{s}^{2}(\Omega)\geq 1/4\). Let us show \(\mathrm{c}\geq 3\xi^{2}\). We suppose \(\mathrm{c}<3\xi^{2}\). 
From (9.27), we derive \[\tau_{2}f_{2}-\tau_{1}f_{1}\geq \ (\tau_{2}^{2}-\tau_{1}^{2})H-\sqrt{2}\left|\tau_{1}|\nabla f_{1} |-\tau_{2}|\nabla f_{2}||\left(\sum_{i=1}^{2}\tau_{i}^{2}|\nabla f_{i}|^{2} \right)^{1/2}-\sum_{i=1}^{2}\tau_{i}|\mathrm{N}_{i}|\] \[-\sum_{i=1}^{2}\tau_{i}\left|-\tau_{i}(|\nabla f_{i}|^{2}+H)+f_{i} -\mathrm{N}_{i}\right|\geq-C_{\kappa,D,a}\,\xi^{2}\] on \(\Omega\). It follows that \(f_{1}\leq C_{\kappa,D,a}\) on \(\Omega\). Therefore, (9.25) leads us to \[\frac{1}{4}\leq\nu_{s}^{2}(\Omega)=\int_{\Omega}(4\pi\tau_{2})^{-n/2}e^{-f_{2} }dm_{s}\leq C_{\kappa,D,a}\int_{\Omega}(4\pi\tau_{1})^{-n/2}e^{-f_{1}}dm_{s}= C_{\kappa,D,a}\,\nu_{s}^{1}(B)\leq\frac{C_{\kappa,D,a}}{\mathfrak{D}^{2}}.\] If \(\mathfrak{D}\geq\underline{\mathfrak{D}}_{\kappa,D,a}\), then this yields the contradiction, and hence \(\mathrm{c}\geq 3\xi^{2}\). We conclude \[\int_{-2\xi^{2}}^{-\xi^{2}}\int_{M}\tau^{-1}|\nabla(u_{1}-u_{2})|^{2}d\nu_{t} dt\geq\frac{\mathrm{c}}{2}-\int_{-2\xi^{2}}^{-\xi^{2}}\int_{M}\tau^{-1}\left| \left|\nabla(u_{1}-u_{2})\right|^{2}-\mathrm{c}\right|d\nu_{t}dt\geq\xi^{2}\] if \(\eta\leq\xi^{2}/2\). We complete the proof. We are now in a position to conclude Theorem 9.1: Proof of Theorem 9.1.: Once we obtain Lemmas 9.2, 9.3, 9.4, 9.5, Theorem 9.1 follows from the same argument as in the proof of [4, Proposition 10.8] together with [4, Lemma 10.23]. Thus, we complete the proof. ## 10. Quantitative stratification ### Parabolic balls For \((x_{0},t_{0})\in M\times I\) and \(r>0\), the _parabolic ball_ is defined by \[P(x_{0},t_{0};r):=\left\{\,(x,t)\in M\times I\,\mid\,t\in[t_{0}-r^{2},t_{0}+r ^{2}],\;W_{t_{0}-r^{2}}(\nu_{(x_{0},t_{0});t_{0}-r^{2}},\nu_{(x,t);t_{0}-r^{2 }})<r\,\right\}.\] We recall the following basic property (see [2, Proposition 9.4], [4, Proposition 4.25]): **Lemma 10.1** ([2]).: _For \((x_{0},t_{0}),(x_{1},t_{1})\in M\times I\) and \(r>0\), if \(P(x_{0},t_{0};r)\cap P(x_{1},t_{1};r)\neq\emptyset\), then \(P(x_{0},t_{0};r)\subset P(x_{1},t_{1};3r)\)._ We first show the following volume estimate for time-slices (cf. [2, Theorem 9.8]): **Lemma 10.2**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r,\rho,R>0\), we assume \([t_{0}-(R^{2}+\rho)r^{2},t_{0}]\subset I\). Then for every \(t\in[t_{0}-R^{2}r^{2},t_{0}+R^{2}r^{2}]\),_ \[m_{t}(P(x_{0},t_{0};Rr)\cap(M\times\{t\}))\leq C_{\rho,R}\exp(\mathcal{N}_{(x_ {0},t_{0})}(\rho r^{2}/2))r^{n}.\] Proof.: We may assume \(t_{0}=R^{2}\) and \(r=1\). Lemma 2.1 implies \(H(\cdot,-\rho/2)\geq-n/\rho\) (see Remark 2.2). Let \((z_{0},0)\) be a center of \((x_{0},R^{2})\) (see Proposition 3.2). For \(t\in[0,2R^{2}]\) and \(x\in S_{t}\), let \((z,0)\) be a center of \((x,t)\), where \(S_{t}:=P(x_{0},t_{0};R)\cap(M\times\{t\})\). By Lemma 2.6, \[d_{0}(z_{0},z) \leq W_{0}(\delta_{z_{0}},\nu_{(x_{0},R^{2});0})+W_{0}(\nu_{(x_{0},R^{2});0},\nu_{(x,t);0})+W_{0}(\nu_{(x,t);0},\delta_{z})\] \[\leq\sqrt{\mathcal{C}_{n}}R+R+\sqrt{\mathcal{C}_{n}t}\leq C_{1}R\] for some \(C_{1}>1\); in particular, \(B(z,0,\sqrt{2\mathcal{C}_{n}t})\) is contained in \(B:=B(z_{0},0,C_{1}R)\). Proposition 3.4 tells us that \[\nu_{(x,t);0}(B)\geq\nu_{(x,t);0}\left(B(z,0,\sqrt{2\mathcal{C}_{n}t})\right) \geq\frac{1}{2}. \tag{10.1}\] Let \(u\in C^{0}(M\times[0,2R^{2}])\cap C^{\infty}(M\times(0,2R^{2}])\) be the solution to the heat equation with initial condition \(u(\cdot,0)=\chi_{B}\). 
By Lemma 2.1, we have \[\frac{d}{dt}\int_{M}u\,dm_{t}=-\int_{M}uH\,dm_{t}\leq\frac{n}{\rho}\int_{M}u\, dm_{t}.\] In virtue of (10.1), we possess \(u\geq 1/2\) on \(S_{t}\), and hence \[\frac{1}{2}m_{t}(S_{t})\leq\int_{M}u(\cdot,t)dm_{t}\leq e^{\frac{nt}{\rho}}m_{ 0}(B)\leq C_{\rho}m_{0}(B). \tag{10.2}\] Using Proposition 5.4, we obtain \[m_{0}(B)\leq C_{\rho,R}\exp(\mathcal{N}_{(z_{0},0)}(\rho/2)). \tag{10.3}\] Also, by (4.3) and Lemmas 4.5 and 2.6, \[\mathcal{N}_{(z_{0},0)}(\rho/2)-\mathcal{N}_{(x_{0},R^{2})}(\rho/2) \leq\mathcal{N}_{(z_{0},0)}(\rho/2)-\mathcal{N}_{(x_{0},R^{2})}( \rho/2+R^{2})\] \[=\mathcal{N}_{-\rho/2}(z_{0},0)-\mathcal{N}_{-\rho/2}(x_{0},R^{2})\] \[\leq C_{\rho,R}(W_{0}(\delta_{z_{0}},\nu_{(x_{0},R^{2});0})+1) \leq C_{\rho,R}. \tag{10.4}\] From (10.2), (10.3) and (10.4), we conclude the desired estimate. Based on Lemma 10.2, we present the following (cf. [2, Theorem 9.11]): **Proposition 10.3**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r,\rho,R>0\), we assume \([t_{0}-2(R^{2}+\rho)r^{2},t_{0}]\subset I\). Let \(S\subset P(x_{0},t_{0};Rr)\) and \(\xi\in(0,\sqrt{\rho}]\). Then there exists \(\{(x_{i},t_{i})\}_{i=1}^{N}\subset S\) such that_ \[S\subset\bigcup_{i=1}^{N}P(x_{i},t_{i};\xi r),\quad N\leq C_{\rho,R}\,\xi^{-n -2}. \tag{10.5}\] Proof.: We may assume \(t_{0}=0\) and \(r=1\). Let \(\{(x_{i},t_{i})\}_{i=1}^{N}\subset S\) be a maximal collection of points such that \(\{P(x_{i},t_{i};\xi/3)\}_{i=1}^{N}\) are pairwise disjoint. Let \((x,t)\in S\). By the maximality we see \(P(x,t;\xi/3)\cap P(x_{i},t_{i};\xi/3)\neq\emptyset\) for some \(i\). From Lemma 10.1, we deduce \((x,t)\in P(x_{i},t_{i};\xi)\), which proves the inclusion in (10.5). Let us derive an upper bound on \(N\) in (10.5). Fix \(\zeta\in(0,1/2]\). There exist \(s\in[-R^{2}-2\zeta\xi^{2},R^{2}-\zeta\xi^{2}]\) and \(\mathcal{I}\subset\{1,\ldots,N\}\) such that \[|\mathcal{I}|\geq\lfloor C_{\zeta}\xi^{2}N\rfloor,\quad s\in[t_{i}-2\zeta\xi^ {2},t_{i}-\zeta\xi^{2}] \tag{10.6}\] for all \(i\in\mathcal{I}\). For each \(i\in\mathcal{I}\), let \((z_{i},s)\) be a center of \((x_{i},t_{i})\) (see Proposition 3.2). By Lemma 2.6 and Proposition 3.1, if \(\zeta\leq\overline{\zeta}\), then for all \(i,j\in\mathcal{I}\) with \(i\neq j\), it holds that \[d_{s}(z_{i},z_{j}) \geq W_{s}(\nu_{(x_{i},t_{i});s},\nu_{(x_{j},t_{j});s})-W_{s}( \delta_{z_{i}},\nu_{(x_{i},t_{i});s})-W_{s}(\delta_{z_{j}},\nu_{(x_{j},t_{j});s})\] \[\geq W_{t_{i}-(\xi/3)^{2}}(\nu_{(x_{i},t_{i});t_{i}-(\xi/3)^{2}},\nu_{(x_{j},t_{j});t_{i}-(\xi/3)^{2}})-2\sqrt{2\mathcal{C}_{n}\zeta}\xi\geq \frac{\xi}{3}-2\sqrt{2\mathcal{C}_{n}\zeta}\xi\geq\frac{\xi}{4};\] in particular, \(\{B(z_{i},s,\xi/8)\}_{i\in\mathcal{I}}\) are pairwise disjoint. For each \(i\in\mathcal{I}\), Lemma 2.6 and Proposition 3.1 lead us to \[\begin{split}&\quad W_{-R^{2}-2\zeta\xi^{2}}(\nu_{(x_{0},0);-R^{2}-2 \zeta\xi^{2}},\nu_{(z_{i},s);-R^{2}-2\zeta\xi^{2}})\\ &\leq W_{-R^{2}-2\zeta\xi^{2}}(\nu_{(x_{0},0);-R^{2}-2\zeta\xi^{2 }},\nu_{(x_{i},t_{i});-R^{2}-2\zeta\xi^{2}})+W_{-R^{2}-2\zeta\xi^{2}}(\nu_{(x_{ i},t_{i});-R^{2}-2\zeta\xi^{2}},\,\nu_{(z_{i},s);-R^{2}-2\zeta\xi^{2}})\\ &\leq W_{-R^{2}}(\nu_{(x_{0},0);-R^{2}},\nu_{(x_{i},t_{i});-R^{2}} )+W_{s}(\nu_{(x_{i},t_{i});s},\delta_{z_{i}})\\ &\leq R+\sqrt{\mathcal{C}_{n}(t_{i}-s)}\leq R+\sqrt{2\mathcal{C}_ {n}}\zeta\xi<R+\frac{\xi}{8}\end{split}\] if \(\zeta\leq\overline{\zeta}\), and hence \((z_{i},s)\in P(x_{0},0;R+\xi/8)\cap(M\times\{s\})\). 
For every \(y\in B(z_{i},s,\xi/8)\), \[\begin{split}&\quad W_{-(R+\xi/8)^{2}}(\nu_{(x_{0},0);-(R+\xi/8)^{2}},\nu_{(y,s);-(R+\xi/8)^{2}})\\ &\leq W_{-(R+\xi/8)^{2}}(\nu_{(x_{0},0);-(R+\xi/8)^{2}},\nu_{(z_{i},s);-(R+\xi/8)^{2}})+d_{s}(y,z_{i})\leq R+\frac{\xi}{4},\end{split}\] and thus \(B(z_{i},s,\xi/8)\times\{s\}\) is contained in \(P(x_{0},0;R+\xi/4)\cap(M\times\{s\})\). Summarizing the above observations, from Lemma 10.2 and (4.3), we derive \[\begin{split}|\mathcal{I}|\,\inf_{i\in\mathcal{I}}m_{s}(B(z_{i},s,\xi/8))&\leq\sum_{i\in\mathcal{I}}m_{s}(B(z_{i},s,\xi/8))\\ &\leq m_{s}(P(x_{0},0;R+\xi/4)\cap(M\times\{s\}))\\ &\leq C_{\rho,R}\exp(\mathcal{N}_{(x_{0},0)}(\rho/2)).\end{split} \tag{10.7}\] On the other hand, by Lemma 2.1 and Proposition 5.1, if \(\zeta\leq\overline{\zeta}\), then \[\begin{split}m_{s}(B(z_{i},s,\xi/8))&\geq m_{s}(B(z_{i},s,2\sqrt{\mathcal{C}_{n}}\zeta\xi))\geq m_{s}(B(z_{i},s,\sqrt{2\mathcal{C}_{n}(t_{i}-s)}))\\ &\geq C_{\rho,R}\exp(\mathcal{N}_{(x_{i},t_{i})}(t_{i}-s))(t_{i}-s)^{n/2}\geq C_{\rho,R}\exp(\mathcal{N}_{(x_{i},t_{i})}(\rho/2))\xi^{n}.\end{split} \tag{10.8}\] Moreover, (4.3), Lemmas 4.5 and 4.2 yield \[\mathcal{N}_{(x_{i},t_{i})}(\rho/2)\geq\mathcal{N}_{(x_{i},t_{i})}(t_{i}+R^{2}+\rho/2)\geq\mathcal{N}_{(x_{0},0)}(R^{2}+\rho/2)-C_{\rho,R}\geq\mathcal{N}_{(x_{0},0)}(\rho/2)-C_{\rho,R}. \tag{10.9}\] Here, we used \(H(\cdot,-R^{2}-\rho/2)\geq-n/\rho\) by Lemma 2.1 (see Remark 2.2). Combining (10.7), (10.8) and (10.9) implies \(|\mathcal{I}|\leq C_{\rho,R}\xi^{-n}\). This together with (10.6) proves the claim.

### Effective strata

Due to Theorem 9.1, we have the following (cf. [4, Lemma 11.3]):

**Proposition 10.4**.: _For \(\kappa>0,\varepsilon,\xi\in(0,1)\), if \(\delta\leq\overline{\delta}_{\kappa,\varepsilon,\xi}\), then the following holds: Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r>0\), we assume \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). For a subset \(S\subset M\times I\), we assume that every point in \(S\) is \((\delta,r)\)-selfsimilar, and none of the following two properties hold:_

1. \((k+1,\varepsilon,r)\)_-split;_
2. \((k-1,\varepsilon,r)\)_-split and_ \((\varepsilon,r)\)_-static._

_Then there exists \(\{(x_{i},t_{i})\}_{i=1}^{N}\subset S\cap P(x_{0},t_{0};r/2)\) such that_ \[S\cap P(x_{0},t_{0};r/2)\subset\bigcup_{i=1}^{N}P(x_{i},t_{i};\xi r),\quad N\leq C_{\kappa}\,\xi^{-k}. \tag{10.10}\]

Proof.: We may assume \(t_{0}=0\) and \(r=1\). We first notice that by (4.3), Lemmas 4.5 and 4.2, for every \((x,t)\in P(x_{0},0;1/2)\) we have \[\mathcal{N}_{(x,t)}(1)\geq\mathcal{N}_{(x,t)}(t+2)\geq\mathcal{N}_{(x_{0},0)}(2)-C\geq\mathcal{N}_{(x_{0},0)}(1)-C\geq-C_{\kappa}.\] Let \(\{(x_{i},t_{i})\}_{i=1}^{N}\subset S\cap P(x_{0},0;1/2)\) be a maximal collection of points such that \(\{P(x_{i},t_{i};\xi/10)\}_{i=1}^{N}\) are pairwise disjoint. By the maximality and Lemma 10.1, every point in \(S\cap P(x_{0},0;1/2)\) belongs to \(P(x_{i},t_{i};3\xi/10)\) for some \(i\), which shows the inclusion in (10.10). We derive an upper bound on \(N\). We may assume \(t_{1}\leq\ldots\leq t_{N}\).
We verify the following: If \(\delta\leq\overline{\delta}_{\kappa,\varepsilon,\xi}\), \(N\geq\mathfrak{N}(100\mathfrak{D})^{l}\xi^{-l}\), and if \(0\leq t_{i}-t_{1}\leq\beta(100\mathfrak{D})^{-2}\xi^{2}\) for all \(i\), then \((x_{1},t_{1})\) is \((l+1,\varepsilon,1)\)-split, where \(\beta\leq\overline{\beta},\mathfrak{D}\geq\underline{\mathfrak{D}}_{\kappa}\) and \(\mathfrak{N}\geq\underline{\mathfrak{N}}_{\kappa}\) are constants obtained in Theorem 9.1. Indeed, for every \(i\), Proposition 3.1 yields \[W_{t_{1}-1}(\nu_{(x_{1},t_{1});t_{1}-1},\nu_{(x_{i},t_{i});t_{1}-1})\leq W_{-1/4}(\nu_{(x_{1},t_{1});-1/4},\nu_{(x_{0},0);-1/4})+W_{-1/4}(\nu_{(x_{0},0);-1/4},\nu_{(x_{i},t_{i});-1/4})\leq 1.\] Furthermore, for \(i\neq j\), the fact that \((x_{j},t_{j})\not\in P(x_{i},t_{i};\xi/10)\) and Proposition 3.1 imply \[W_{t_{1}-2(\xi/100\mathfrak{D})^{2}}\big{(}\nu_{(x_{i},t_{i});t_{1}-2(\xi/100\mathfrak{D})^{2}},\nu_{(x_{j},t_{j});t_{1}-2(\xi/100\mathfrak{D})^{2}}\big{)}\geq\frac{\xi}{10}\geq\mathfrak{D}\cdot\frac{\xi}{100\mathfrak{D}}\] if \(\mathfrak{D}\geq\underline{\mathfrak{D}}\). We apply Theorem 9.1 at \((x_{1},t_{1})\) with \(\xi\) replaced with \(\xi/100\mathfrak{D}\) and \(D=1\), and it follows that \((x_{1},t_{1})\) is \((l+1,\varepsilon,1)\)-split.

We set \(C_{1,\kappa}:=2\mathfrak{N}(100\mathfrak{D})^{2n}\) and \(C_{2,\kappa}:=\beta(100\mathfrak{D})^{-2}\). We suppose \(N\geq C_{1,\kappa}\xi^{-k}\). By the claim in the above paragraph, there is no \(\mathcal{I}\subset\{1,\ldots,N\}\) with \(|\mathcal{I}|\geq N/2\) such that \(|t_{i}-t_{j}|\leq C_{2,\kappa}\xi^{2}\) for all \(i,j\in\mathcal{I}\). Hence, for all \(i,j=1,\ldots,N\) with \(j-i\geq N/2\), we see \(|t_{i}-t_{j}|\geq C_{2,\kappa}\xi^{2}\). By Theorem 8.1, if \(\delta\leq\overline{\delta}_{\kappa,\varepsilon,\xi}\), then \(\{(x_{i},t_{i})\}_{i=1}^{\lfloor N/2\rfloor}\) are \((\varepsilon,1)\)-static; hence, by the assumption on \(S\), they are not \((k-1,\varepsilon,1)\)-split. Let \(\mathcal{J}\subset\{1,\ldots,\lfloor N/2\rfloor\}\) be a subset such that \(|t_{i}-t_{j}|\leq C_{2,\kappa}\xi^{2}\) for all \(i,j\in\mathcal{J}\), and \[|\mathcal{J}|\geq C_{\kappa}\xi^{2}\lfloor N/2\rfloor\geq C_{\kappa}\,\xi^{-(k-2)}.\] Applying the above claim to \(\{(x_{i},t_{i})\}_{i\in\mathcal{J}}\) for \(l=k-2\), we arrive at a contradiction.

For \(\varepsilon\in(0,1)\) and \(r_{1},r_{2}>0\) with \(r_{1}<r_{2}\), the _effective strata_ \(\mathcal{S}_{r_{1},r_{2}}^{\varepsilon,k}\) is defined as the set of all points in \(M\times I\) such that for all \(r\in(r_{1},r_{2})\) none of the following two properties hold:

1. \((k+1,\varepsilon,r)\)-split;
2. \((k-1,\varepsilon,r)\)-split and \((\varepsilon,r)\)-static.

We conclude the following quantitative stratification result, which has been established by Bamler [4] for Ricci flow (cf. [4, Proposition 11.2]):

**Theorem 10.5**.: _Let \((M,g(t))_{t\in I}\) be a super Ricci flow with \(\mathcal{D}\geq 0\). Let \((x_{0},t_{0})\in M\times I\). For \(r,\kappa>0\) we assume \([t_{0}-2r^{2},t_{0}]\subset I\) and \(\mathcal{N}_{(x_{0},t_{0})}(r^{2})\geq-\kappa\). Then for all \(\varepsilon\in(0,1)\) and \(\sigma\in(0,\varepsilon)\), there exists \(\{(x_{i},t_{i})\}_{i=1}^{N}\subset\mathcal{S}_{\sigma r,\varepsilon r}^{\varepsilon,k}\cap P(x_{0},t_{0};r)\) such that_ \[\mathcal{S}_{\sigma r,\varepsilon r}^{\varepsilon,k}\cap P(x_{0},t_{0};r)\subset\bigcup_{i=1}^{N}P(x_{i},t_{i};\sigma r),\quad N\leq C_{\kappa,\varepsilon}\,\sigma^{-k-\varepsilon}.\]

Proof.: We may assume \(t_{0}=0\) and \(r=1\).
For \(\xi\in(0,1)\), let \(m\in\mathbb{N}\) be an integer determined by \(\sigma\in[\xi^{m},\xi^{m-1}]\). In virtue of Proposition 7.2, for every \((x,t)\in P(x_{0},0;1)\), the number of \(j\) such that \((x,t)\) is not \((\delta,\xi^{j})\)-selfsimilar is bounded from above by \(C_{2}=C_{2,\kappa,\xi,\delta}\), where \(\delta\leq\overline{\delta}_{\kappa,\varepsilon,\xi}\) is a constant obtained in Proposition 10.4. For \(\{o_{j}\}_{j=0}^{m-1}\subset\{0,1\}\), we set \[S\left(\{o_{j}\}_{j=0}^{m-1}\right):=\left\{(x,t)\in P(x_{0},0;1)\ |\ o_{j}=1\ \text{if and only if}\ (x,t)\ \text{is not}\ (\delta,\xi^{j})\text{-selfsimilar}\right\}.\] Then \(P(x_{0},0;1)\) can be written as the union of at most \(m^{C_{2}}\) many non-empty subsets of the form \(S\left(\{o_{j}\}_{j=0}^{m-1}\right)\) since \(S\left(\{o_{j}\}_{j=0}^{m-1}\right)=\emptyset\) if \(\sum_{j}o_{j}>C_{2}\). Fix \(l=0,\ldots,m-1\) and \((y,s)\in P(x_{0},0;1)\), where let \((y,s):=(x_{0},0)\) if \(l=0\). Due to Propositions 10.3 and 10.4, there is \(\{(y_{i},s_{i})\}_{i=1}^{N}\subset\mathcal{S}_{\sigma,\varepsilon}^{\varepsilon,k} \cap S\left(\{o_{j}\}_{j=0}^{m-1}\right)\cap P(y,s;\xi^{l})\) such that \[\mathcal{S}_{\sigma,\varepsilon}^{\varepsilon,k}\cap S\left(\{o_{j}\}_{j=0}^{m-1 }\right)\cap P(y,s;\xi^{l})\subset\bigcup_{i=1}^{N}P(y_{i},s_{i};\xi^{l+1}), \quad N\leq\begin{cases}C_{3,\xi}&\text{if $o_{l}=1$ or $l=0$,}\\ C_{1,\kappa}\,\xi^{-k}&\text{if $o_{l}=0$ and $l\geq 1$,}\end{cases}\] where the assertion for \(l=0\) or \(o_{l}=1\), and that for \(l\geq 1\) and \(o_{l}=0\) are consequences of Propositions 10.3 and 10.4, respectively. By induction, \(\mathcal{S}_{\sigma,\varepsilon}^{\varepsilon,k}\cap S\left(\{o_{j}\}_{j=0}^{m-1 }\right)\cap P(x_{0},0;1)\) can be covered by at most \[C_{3,\xi}^{1+\sum_{j}o_{j}}(C_{1,\kappa}\,\xi^{-k})^{m-1-\sum_{j}o_{j}}\] many parabolic balls of the form \(P(y,s;\sigma)\). It follows that \(\mathcal{S}_{\sigma,\varepsilon}^{\varepsilon,k}\cap P(x_{0},0;1)\) can be covered by at most \[m^{C_{2}}\,C_{3,\xi}^{1+C_{2}}(C_{1,\kappa}\xi^{-k})^{m}\leq C_{\kappa,\xi, \delta}\,m^{C_{2}}C_{1,\kappa}^{m}\,\xi^{-mk}\] many such parabolic balls. If \(\xi\leq\overline{\xi}_{\kappa,\varepsilon}\) such that \(\xi^{-\varepsilon/2}\geq C_{1,\kappa}\), then \[C_{\kappa,\xi,\delta}\,m^{C_{2}}\,\xi^{-mk-m\varepsilon/2}\leq C_{\kappa,\xi, \delta}\,\xi^{-mk-m\varepsilon}\leq C_{\kappa,\xi,\delta}\,\sigma^{-k- \varepsilon}.\] Thus, we complete the proof. Acknowledgements.: The first author was supported by JSPS KAKENHI (JP23K03105). The second author was supported by JSPS KAKENHI (JP23K12967).
2309.16135
A dual-branch model with inter- and intra-branch contrastive loss for long-tailed recognition
Real-world data often exhibits a long-tailed distribution, in which head classes occupy most of the data, while tail classes only have very few samples. Models trained on long-tailed datasets have poor adaptability to tail classes and the decision boundaries are ambiguous. Therefore, in this paper, we propose a simple yet effective model, named Dual-Branch Long-Tailed Recognition (DB-LTR), which includes an imbalanced learning branch and a Contrastive Learning Branch (CoLB). The imbalanced learning branch, which consists of a shared backbone and a linear classifier, leverages common imbalanced learning approaches to tackle the data imbalance issue. In CoLB, we learn a prototype for each tail class, and calculate an inter-branch contrastive loss, an intra-branch contrastive loss and a metric loss. CoLB can improve the capability of the model in adapting to tail classes and assist the imbalanced learning branch to learn a well-represented feature space and discriminative decision boundary. Extensive experiments on three long-tailed benchmark datasets, i.e., CIFAR100-LT, ImageNet-LT and Places-LT, show that our DB-LTR is competitive and superior to the comparative methods.
Qiong Chen, Tianlin Huang, Geren Zhu, Enlu Lin
2023-09-28T03:31:11Z
http://arxiv.org/abs/2309.16135v1
# A Dual-Branch Model with Inter- and Intra-branch Contrastive Loss for Long-tailed Recognition

###### Abstract

Real-world data often exhibits a long-tailed distribution, in which head classes occupy most of the data, while tail classes only have very few samples. Models trained on long-tailed datasets have poor adaptability to tail classes and the decision boundaries are ambiguous. Therefore, in this paper, we propose a simple yet effective model, named Dual-Branch Long-Tailed Recognition (DB-LTR), which includes an imbalanced learning branch and a Contrastive Learning Branch (CoLB). The imbalanced learning branch, which consists of a shared backbone and a linear classifier, leverages common imbalanced learning approaches to tackle the data imbalance issue. In CoLB, we learn a prototype for each tail class, and calculate an inter-branch contrastive loss, an intra-branch contrastive loss and a metric loss. CoLB can improve the capability of the model in adapting to tail classes and assist the imbalanced learning branch to learn a well-represented feature space and discriminative decision boundary. Extensive experiments on three long-tailed benchmark datasets, i.e., CIFAR100-LT, ImageNet-LT and Places-LT, show that our DB-LTR is competitive and superior to the comparative methods.

keywords: Neural network, long-tailed recognition, imbalanced learning, contrastive learning

## 1 Introduction

In recent years, with various large-scale and high-quality image datasets being easily accessible, deep Convolutional Neural Networks (CNNs) have achieved great success in visual recognition (He et al., 2016; Zhang et al., 2018; Liang et al., 2020). These datasets are usually artificially balanced; however, real-world data often exhibits a long-tailed distribution, where a few classes (majority or head classes) occupy most of the data, while other classes (minority or tail classes) only have very few samples. Due to the extreme class imbalance and the limited tail samples of long-tailed datasets, the feature representation of tail classes learned by the model is insufficient. Consequently, the model has poor adaptability to tail classes and the decision boundaries are ambiguous. Several kinds of methods have been proposed to improve the classification of tail classes, including class re-balancing, data augmentation and ensemble models. Class re-balancing can be roughly divided into re-sampling (Buda et al., 2018; Pouyanfar et al., 2018; Chawla et al., 2002; Drummond et al., 2003) and re-weighting (Wang et al., 2017; Park et al., 2021; Shu et al., 2019); both share the goal of rectifying the skewed training data to improve classifier learning. The data augmentation strategy (Zhang et al., 2017; Chou et al., 2020; Li et al., 2021; Wang et al., 2021) generates new information for tail classes to improve the recognition of tail samples. The ensemble model (Cai et al., 2021; Wang et al., 2020; Cui et al., 2022; Xiang et al., 2020) splits the long-tailed dataset into several relatively balanced subsets and trains a model with multiple classifiers, embodying the design philosophy of divide-and-conquer. These approaches reshape the decision boundary and improve the generalization ability of the model by strengthening the learning of tail classes. However, class re-balancing either takes the risk of overfitting to tail classes or under-learns head classes (Wang et al., 2021; Yang et al., 2022).
The data augmentation strategy lacks theoretical guidance and cannot introduce new informative samples (Yang et al., 2022). And the great success of the ensemble model is underpinned by powerful computational resources. In this paper, we improve a long-tailed model's adaptability to tail classes and its decision boundaries with a Dual-Branch Long-Tailed Recognition (DB-LTR) model, which is built upon an imbalanced learning branch and a contrastive learning branch. The imbalanced learning branch aims to cope with the data imbalance and model bias. In the Contrastive Learning Branch (CoLB), tail samples are used to supervise the model training under the \(N\)-way \(N_{sup}\)-shot setting (Snell et al., 2017). CoLB only utilizes a small amount of tail data, which avoids overfitting and does not add much computational overhead to our method. In DB-LTR, the imbalanced learning branch performs common imbalanced learning methods, e.g., LDAM (Cao et al., 2019), Remix (Chou et al., 2020), etc. The key idea underlying this branch is to train an unbiased or less biased classifier. Nevertheless, the issue of insufficient feature learning of tail classes remains untouched. Therefore, we introduce CoLB to tackle this problem. To be specific, we calculate a prototype-based (Snell et al., 2017) metric loss to promote the model's adaptation to tail classes. In order to separate head and tail features and to make the class boundaries of tail classes distinct in the embedding space, we borrow contrastive learning to calculate an inter-branch Contrastive Loss (inter-CL) and an intra-branch Contrastive Loss (intra-CL), respectively. Benefiting from these two contrastive losses, CoLB can learn a decision boundary that stays far away from head and tail classes simultaneously. We have validated the effectiveness of DB-LTR on three long-tailed datasets. The main contributions of this paper are summarized as follows:

1. We propose a Dual-Branch Long-Tailed Recognition model, which is composed of an imbalanced learning branch and a contrastive learning branch. Our model can effectively enhance the distinguishability of the decision boundary and the adaptability of the model to tail classes.
2. The Contrastive Learning Branch (CoLB) can be seamlessly plugged into other imbalanced learning methods to further improve their recognition performance.
3. Experimental results on three long-tailed benchmarks, i.e., CIFAR100-LT, ImageNet-LT and Places-LT, show that our method achieves promising performance and surpasses the comparative methods.

## 2 Related work

### Long-tailed recognition

Existing methods for addressing long-tailed problems include class re-balancing, data augmentation, decoupled learning and ensemble models (Cai et al., 2021; Wang et al., 2020; Cui et al., 2022; Xiang et al., 2020).

**Class re-balancing.** Traditional class re-balancing methods are mainly divided into re-sampling and re-weighting. Re-sampling reduces the overwhelming effect of head classes on the model during training via under-sampling head classes (Drummond et al., 2003; Buda et al., 2018) or over-sampling tail classes (Buda et al., 2018; Chawla et al., 2002; More, 2016). Some recent works attempt to explore more effective sampling strategies (Pouyanfar et al., 2018; Kang et al., 2020). Re-weighting assigns different loss contributions (Lin et al., 2017; Wang et al., 2017) or decision margins (Cao et al., 2019) to different classes according to the frequencies or difficulties of samples.
CB Loss (Cui et al., 2019) decides the weighting coefficients based on the effective number of samples. These approaches adjust the decision boundary in an indirect way. Different from them, Park et al. (2021) re-weight each sample according to its influence on the decision surface, aiming to learn a generalizable decision boundary. Since class re-balancing methods enlarge the importance of tail classes during model optimization, they can effectively promote the generalization and classification of the under-performing minority classes.

**Data augmentation.** Unlike re-sampling, the data augmentation strategy (Chou et al., 2020; Wang et al., 2021) aims to eliminate the ambiguity of the decision boundary by generating additional information for tail classes. Remix (Chou et al., 2020) decouples the interpolation of samples from that of labels, enlarging the importance of tail classes when interpolating labels, while the sample interpolation employs standard mixup (Zhang et al., 2017). Wang et al. (2021) observe that the scarcity of tail samples forces the learned feature space of the rare classes to be too small. Hence, they design a generator dedicated to augmenting tail features so as to alleviate the misclassification of tail data. ISDA (Wang et al., 2019) brings inferior performance in long-tailed scenarios because of the limited samples in tail classes. MetaSAug (Li et al., 2021) overcomes this shortcoming. It learns transformed semantic directions via meta-learning to perform meaningful semantic augmentation for tail classes, resulting in improved classifier learning.

**Decoupled learning.** Decoupled learning (Kang et al., 2020; Wang et al., 2022) optimizes the feature backbone and the classifier using different training strategies. Recent works (Zhang et al., 2021; Zhong et al., 2021) aim to make the decision surfaces more recognizable by retraining the classifier. Zhang et al. (2021) observe that the performance bottleneck of the long-tailed model is the biased decision boundary. Thus, they propose an adaptive calibration function and a distribution alignment strategy to re-balance the classifier. The same viewpoint of decoupling can be found in (Alshammari et al., 2022), which balances the classifier weight norms by tuning the weight decay properly. The ambiguity of the decision boundary implies bias residing in the classifier, which leads to miscalibration and over-confidence. Zhong et al. (2021) tackle these issues via label-aware smoothing and shifted batch normalization.

### Contrastive learning

Class re-balancing tends to yield sub-optimal feature representation (Kang et al., 2020; Zhou et al., 2020; Wang et al., 2022). In contrast, contrastive learning is a promising approach to unsupervised representation learning (Chen et al., 2020; He et al., 2020; Kang et al., 2020). The underlying insight is that contrastive learning learns a compact and separable feature space by pulling together samples belonging to the same class while pushing away samples of different categories. Recently, Supervised Contrastive Learning (SCL) has shown considerable success in long-tailed recognition (Wang et al., 2021; Cui et al., 2021; Li et al., 2022). The basic idea is that better features make a better classifier. So Wang et al. (2021) introduce a supervised contrastive loss to improve image representation and use the cross-entropy loss to train an unbiased classifier. Kang et al.
(2020) claim that the feature space learned by self-supervised contrastive learning is more balanced than that of SCL yet lacks semantic discriminativeness. So they devise KCL (Kang et al., 2020) to address this problem. The proposed Hybrid-PSC (Wang et al., 2021) and BCL (Zhu et al., 2022) are the most relevant to our method. Hybrid-PSC contrasts each sample against the prototypes of all other categories, and BCL aims to learn an ideal geometry for representation learning by class-averaging and class-complement. Although similar, our DB-LTR differs from Hybrid-PSC and BCL in that we introduce CoLB, which is dedicated to dealing with the poor adaptability of the model to tail classes and the ambiguity of the decision boundary.

## 3 Methodology

### Overall framework

The architecture of our DB-LTR is shown in Figure 1; it mainly consists of an imbalanced learning branch and a contrastive learning branch.

Figure 1: The architecture of our Dual-Branch Long-Tailed Recognition (DB-LTR) model. The uniform sampler utilizes instance-balanced sampling on the whole long-tailed dataset, while the tail sampler only samples tail data randomly. The head instances from the imbalanced learning branch are used to compute the inter-CL (\(L_{inter}\)). For intra-CL (\(L_{intra}\)), it contrasts the support and query samples inside the contrastive learning branch. The prototype-based metric loss (\(L_{m}\)) is calculated using a non-parametric metric.

Therein, the imbalanced learning branch includes a shared backbone \(f_{\theta}\) and a linear classifier \(f_{\varphi}\). This channel acts as the main branch and integrates common imbalanced learning methods, e.g., LDAM (Cao et al., 2019), Remix (Chou et al., 2020), etc. The Contrastive Learning Branch (CoLB) is an auxiliary branch, which is dedicated to enhancing the model's adaptation ability and to learning a compact and separable feature space. Specifically, to compensate for the under-learning of tail classes, CoLB samples tail data to supervise the model learning through calculating an inter-branch contrastive loss (inter-CL), an intra-branch contrastive loss (intra-CL) and a metric loss. The two branches share the same backbone structure and weights. In the training phase, the total loss is defined as the weighted loss of these two branches and is used to update the network. During the inference phase, the shared backbone and the linear classifier are used to make predictions.

### Imbalanced learning branch

The imbalanced learning branch is used to mitigate the influence of data imbalance and the model's preference for head classes. We integrate LDAM (Cao et al., 2019) into this branch, which encourages the model to learn different margins for different classes. That is, if a class has a large number of samples, it will be assigned a relatively small margin; otherwise, a large margin will be allocated. Let \(z_{i}=f_{\varphi}(f_{\theta}(x_{i}^{imb}))\) denote the logit for a training sample \(x_{i}^{imb}\). According to LDAM, the probability of sample \(x_{i}^{imb}\) being predicted as class \(j\) is: \[p_{i,j}^{imb}=\frac{\exp\left(z_{j}-\Delta_{j}\right)}{\exp\left(z_{j}-\Delta_{j}\right)+\sum_{k\neq j}\exp\left(z_{k}\right)} \tag{1}\] where \(\Delta_{j}=H/\left(n_{j}^{1/4}\right)\) denotes the margin for class \(j\), \(n_{j}\) is the number of samples in the \(j\)-th class, and \(H\) is an adjustable hyper-parameter. The more samples there are in a class, the smaller the margin the class obtains.
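For concreteness, the margin adjustment behind Eq. (1) can be written in a few lines of PyTorch. This is a minimal sketch under our own naming; `ldam_adjusted_logits`, `class_counts` and the default value of \(H\) are illustrative assumptions, not the authors' released code:

```python
import torch

def ldam_adjusted_logits(logits, targets, class_counts, H=0.5):
    """Subtract the class-dependent margin Delta_j = H / n_j^(1/4)
    from the ground-truth logit only, so that rare classes must be
    predicted with a larger margin."""
    margins = H / class_counts.float() ** 0.25          # Delta_j for every class j
    adjusted = logits.clone()
    rows = torch.arange(logits.size(0), device=logits.device)
    adjusted[rows, targets] -= margins[targets]         # z_y - Delta_y
    return adjusted
```

Since only the ground-truth entry of each row is shifted, the softmax of the adjusted logits at that entry coincides with \(p_{i,j}^{imb}\) of Eq. (1), so the branch loss defined next reduces to standard cross-entropy on the adjusted logits.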
Cross-entropy loss is used to calculate the loss of the imbalanced learning branch: \[L_{imb}=-\frac{1}{N_{imb}}\sum_{i=1}^{N_{imb}}L_{CE}\left(p_{i}^{imb},y_{i}^{imb }\right) \tag{2}\] where \(N_{imb}\) is the number of samples in the imbalance learning branch, \(y_{i}^{imb}\) is the label of sample \(x_{i}^{imb}\). Other imbalanced learning methods can also be integrated into this branch. When collaborating with CoLB, they achieve better performance. More details are described in Section 4.3. ### Contrastive learning branch In order to make the model better adapt to tail classes and learn generic features, a Contrastive Learning Branch (CoLB) is designed. In CoLB, we learn a prototype representation (Snell et al., 2017) for each tail class, and leverage tail samples to calculate a prototype-based metric loss, aiming to improve the model's ability to recognize tail classes. In long-tailed datasets, due to the small number of tail samples, they are overwhelmed by the dominant head instances, which leads to insufficient and sub-optimal feature learning. Therefore, we further calculate an inter-branch Contrastive Loss (inter-CL) and an intra-branch Contrastive Loss (intra-CL), aiming to learn a feature space with the property of intra-class compactness and inter-class separability. To alleviate the over-suppression by head classes and to balance the model learning, the CoLB takes tail samples as inputs. To this end, we devise a tail sampler to sample training data for CoLB. In a mini-batch, firstly the tail sampler randomly selects \(N\) categories on tail classes, and then randomly samples \(\left(N_{sup}+N_{qry}\right)\) instances for each category, where \(N_{sup}\) instances and \(N_{qry}\) ones are used as support set and query set, respectively. We denote the support set as \(S=\left\{\left(x_{i}^{sup},y_{i}^{sup}\right)\right\}_{i=1}^{N_{sup}}\) and the query set \(Q=\left\{\left(x_{i}^{qry},y_{i}^{qry}\right)\right\}_{i=1}^{N_{qry}}\), where \(x_{i}\) represents a support/query sample and \(y_{i}\) is its corresponding label. Without loss of generality, the support and query samples can be uniformly expressed as \(\left(x_{i}^{con},y_{i}^{con}\right)\). In CoLB, the samples are predicted via non-parametric metric. Specifically, we calculate the semantic similarity of samples with respect to the prototypes to obtain their prediction probability. Let \(S_{j}\) denote the set of samples in class \(j\), and \(c_{j}\) denote the prototype of class \(j\). For any sample \(x_{i}^{qry}\) in the query set, we can get its prediction probability by calculating the similarity of its feature and the prototype of class \(j\): \[p_{i,j}^{met}=\frac{\exp\left(-d\left(f_{\theta}\left(x_{i}^{qry}\right),c_{j} \right)\right)}{\sum_{k}\exp\left(-d\left(f_{\theta}\left(x_{i}^{qry}\right),c _{k}\right)\right)} \tag{3}\] where \(d\) is the Euclidean distance, \(p_{i,j}^{met}\) can be interpreted as the probability that the sample \(x_{i}^{qry}\) is predicted as class \(j\). We utilize cross-entropy loss to calculate the metric loss: \[L_{m}=-\frac{1}{N\times N_{qry}}\sum_{n=1}^{N}\sum_{i=1}^{N_{qry}}L_{CE}\left( p_{i}^{met},y_{i}^{qry}\right) \tag{4}\] In order to make the class boundaries of tail classes distinguishable, intra-CL computes the contrastive loss between query samples and the prototypes of the support set in CoLB. Let \(c_{j}^{\prime}\) denote the prototype of support set \(S_{j}\). For a query sample \(x_{i}^{qry}\), supposing it belongs to class \(j\). 
Therefore, the positive of \(x_{i}^{qry}\) is the prototype \(c_{j}^{\prime}\), while except for category \(j\), the prototypes of all classes are negatives. Then we calculate the cosine similarity \(s_{i,j}\) between \(x_{i}^{qry}\) and \(c_{j}^{\prime}\), as done in (Chen et al., 2020). The contrastive loss of the positive pair \(\left(x_{i}^{qry},c_{j}^{\prime}\right)\) is defined as: \[s_{i,j} =\frac{g\left(f_{\theta}\left(x_{i}^{qry}\right)\right)\cdot c_{j }^{\prime}}{\left\|g\left(f_{\theta}\left(x_{i}^{qry}\right)\right)\right\| \times\left\|c_{j}^{\prime}\right\|} \tag{5}\] \[l_{intra}\left(x_{i}^{qry},c_{j}^{\prime}\right) =-\log\frac{\exp\left(s_{i,j}/\tau\right)}{\sum_{n=1}^{N}\mathbb{ I}\left(n\neq j\right)\exp\left(s_{i,n}/\tau\right)} \tag{6}\] where \(g\left(\cdot\right)\) is a projection head and proven to be more appropriate for contrastive loss (Chen et al., 2020), \(\left\|\cdot\right\|\) denotes the L2 norm, \(\mathbb{I}\left(\cdot\right)\) is an indicator function, \(\tau\) is a temperature hyper-parameter. In practical implementation, \(g\left(\cdot\right)\) is an MLP with one hidden layer followed by a ReLU function. The intra-CL can be written as: \[L_{intra}=\frac{1}{N\times N_{qry}}\sum_{j=1}^{N}\sum_{i=1}^{N_{qry}}l_{intra} \left(x_{i}^{qry},c_{j}^{\prime}\right) \tag{7}\] Inter-CL contrasts query samples against the head instances from the imbalanced learning branch to distinguish the head- and tail features. For inter-CL, the positive samples are the same as intra-CL, for simplicity of notation, we still leverage \(c_{j}^{\prime}\) to stand for them. But negative samples are the head instances of the imbalanced learning branch. Let \(x_{i}^{h}\) represent a head instance and we have \(N_{head}\) such instances. Similarly, inter-CL is defined as: \[s_{i,h}^{\prime}=\frac{g\left(f_{\theta}\left(x_{i}^{qry}\right)\right)\cdot g \left(f_{\theta}\left(x_{i}^{h}\right)\right)}{\left\|g\left(f_{\theta}\left( x_{i}^{qry}\right)\right)\right\|\times\left\|g\left(f_{\theta}\left(x_{i}^{h} \right)\right)\right\|} \tag{8}\] \[l_{inter}\left(x_{i}^{qry},c_{j}^{\prime}\right)=-\log\frac{\exp\left(s_{i,j} /\tau\right)}{\sum_{h=1}^{N_{head}}\exp\left(s_{i,h}^{\prime}/\tau\right)} \tag{9}\] \[L_{inter}=\frac{1}{N\times N_{qry}}\sum_{j=1}^{N}\sum_{i=1}^{N_{qry}}l_{inter} \left(x_{i}^{qry},c_{j}^{\prime}\right) \tag{10}\] where \(s_{i,h}^{\prime}\) is the cosine similarity of \(x_{i}^{qry}\) and \(x_{i}^{h}\), and \(s_{i,j}\) is that of \(x_{i}^{qry}\) and \(c_{j}^{\prime}\). Finally, the total loss of CoLB is: \[L_{con}=L_{m}+L_{intra}+L_{inter}\times\lambda \tag{11}\] where \(\lambda\) is a hyper-parameter that controls the loss contribution of inter-CL. For the metric loss and intra-CL, they play their part inside the CoLB, and we equally set their weights as 1. ### The objective function With the above two branch losses, we finally define our training objective as: \[L=\alpha\times L_{imb}+(1-\alpha)\times L_{con} \tag{12}\] where \(\alpha\) controls the importance of the two branches during the training phase. The larger \(\alpha\) is, the more attention the model pays to the imbalanced learning branch. Since the imbalanced learning branch plays the most critical role, we focus on it first, and then gradually turn the focus towards CoLB. To meet this expectation, \(\alpha\) should decrease as the training process progresses. So the parabolic decay function is chosen among all the elementary functions (Zhou et al., 2020). 
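Before specifying the schedule of \(\alpha\), the CoLB terms of Eqs. (3)–(11) can be collected in a single sketch. The following PyTorch code is a minimal illustration under our own assumptions: prototypes are mean features in the spirit of prototypical networks (Snell et al., 2017), and `backbone`, `proj`, `head_x` and the remaining names are placeholders rather than the paper's implementation.

```python
import torch
import torch.nn.functional as F

def colb_losses(backbone, proj, sup_x, sup_y, qry_x, qry_y, head_x,
                tau=0.6, lam=0.3):
    """One CoLB episode: L_con = L_m + L_intra + lam * L_inter (Eq. 11)."""
    f_sup, f_qry = backbone(sup_x), backbone(qry_x)     # shared backbone features
    classes = sup_y.unique(sorted=True)                 # the N episode classes
    # Prototype c_j: mean backbone feature of the support samples of class j.
    protos = torch.stack([f_sup[sup_y == c].mean(dim=0) for c in classes])
    # Episode-local labels 0..N-1 for the query samples.
    local_y = (qry_y.unsqueeze(1) == classes.unsqueeze(0)).float().argmax(dim=1)

    # Eqs. (3)-(4): softmax over negative Euclidean distances to the prototypes.
    metric_loss = F.cross_entropy(-torch.cdist(f_qry, protos), local_y)

    # Prototypes c'_j and query embeddings in the projection space of g(.).
    z_qry = F.normalize(proj(f_qry), dim=1)
    z_sup = F.normalize(proj(f_sup), dim=1)
    protos_p = F.normalize(
        torch.stack([z_sup[sup_y == c].mean(dim=0) for c in classes]), dim=1)

    sim = z_qry @ protos_p.t() / tau                    # s_{i,j} of Eq. (5)
    pos = sim.gather(1, local_y.unsqueeze(1)).squeeze(1)

    # Eqs. (6)-(7): intra-CL; the positive prototype is excluded from the denominator.
    pos_mask = F.one_hot(local_y, len(classes)).bool()
    intra = (-pos + torch.logsumexp(sim.masked_fill(pos_mask, -1e9), dim=1)).mean()

    # Eqs. (8)-(10): inter-CL; head instances of the imbalanced branch are negatives.
    z_head = F.normalize(proj(backbone(head_x)), dim=1)
    inter = (-pos + torch.logsumexp(z_qry @ z_head.t() / tau, dim=1)).mean()

    return metric_loss + intra + lam * inter            # Eq. (11)
```

Eq. (12) then weighs this value against the imbalanced-branch loss through \(\alpha\), whose parabolic schedule is specified next.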
Let \(T_{max}\) denote the total training epoch, and \(T\) denote the current training epoch, \(\alpha\) is defined as \(\alpha=1-\left(T/T_{max}\right)^{2}\). The training process of DB-LTR is provided in Algorithm 1. ``` 0: training set \(\mathbb{D}\), the number of epochs \(T_{max}\). 0: feature backbone \(f_{\theta}\), classifier \(f_{\varphi}\), projection head \(g\left(\cdot\right)\). 1:for\(T=1\) to \(T_{max}\)do 2: Sample for imbalanced learning branch using the uniform sampler 3: Sample for contrastive learning branch using the tail sampler 4: Extract features using the feature backbone 5: Predict the features of imbalanced learning branch according to Equation 1 6: Compute the \(L_{imb}\) according to Equation 2 7: Calculate prototypes \(c_{j}\) of the support set 8: Predict the features of the query set according to Equation 3 9: Compute the \(L_{m}\) according to Equation 4 10: Feed the extracted features into \(g\left(\cdot\right)\) 11: Calculate prototypes \(c_{j}^{\prime}\) using the features that are mapped by \(g\left(\cdot\right)\) 12: Calculate the cosine similarity \(s_{i,j}\) according to Equation 5 13: Compute the \(L_{intra}\) according to Equation 7 14: Calculate the cosine similarity \(s_{i,h}^{\prime}\) according to Equation 8 15: Compute the \(L_{inter}\) according to Equation 10 16: Compute the \(L_{con}\) according to Equation 11 17:\(\alpha=1-\left(\frac{T}{T_{max}}\right)^{2}\) 18: Compute the training objective \(L\) according Equation 12 19: Update \(f_{\theta}\), \(f_{\varphi}\) and \(g\left(\cdot\right)\) using \(L\) 20:endfor ``` **Algorithm 1** The training process of DB-LTR. ## 4 Experiments ### Experimental settings **Datasets.** We evaluate our method by performing experiments on three long-tailed datasets: CIFAR100-LT (Cui et al., 2019), ImageNet-LT (Liu et al., 2019) and Places-LT (Liu et al., 2019). For CIFAR100-LT, the training set contains 100 categories, and the number of training samples per category is \(n=n_{i}/\mu^{i/100}\), where \(i\) is the class index, \(n_{i}\) is the number of original training samples in class \(i\), \(\mu\) is an imbalance factor. In CIFAR100-LT, each image is of size \(32\times 32\). For ImageNet-LT, the training set is sampled from the original ImageNet-2012 (Deng et al., 2009) following the Pareto distribution with the power value of 6, while the validation set and test set remain unchanged. There are 115.8K images with size \(224\times 224\) from 1000 categories in the training set, in which the number of samples per category is between 5-1280, so the imbalance factor is 256. For Places-LT, it is constructed similarly to ImageNet-LT, containing 62.5K images from 365 categories with a maximum of 4980 images and a minimum of 5 images, i.e., a imbalance factor of 996. This dataset contains all kinds of scenes and the image size is \(256\times 256\). **Evaluation metrics.** We report the average top-1 classification accuracy (%) and standard deviation over 5 different runs to evaluate our model over all classes. And we further create three splits of datasets following (Liu et al., 2019) and report accuracy over them. They are Many-shot (more than 100 images), Medium-shot (20\(\sim\)100 images) and Few-shot (less than 20 images), respectively. 
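As a concrete reference for this protocol, the split-wise evaluation can be sketched as follows; the helper name and array layout are our assumptions, while the thresholds are the ones stated above:

```python
import numpy as np

def split_accuracies(preds, labels, train_class_counts):
    """Top-1 accuracy overall and over the Many/Medium/Few splits."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    counts = np.asarray(train_class_counts)
    splits = {
        "Many": counts > 100,                        # more than 100 training images
        "Medium": (counts >= 20) & (counts <= 100),  # 20~100 training images
        "Few": counts < 20,                          # fewer than 20 training images
    }
    correct = preds == labels
    acc = {"Overall": correct.mean()}
    for name, class_mask in splits.items():
        in_split = class_mask[labels]   # test samples whose class falls in the split
        acc[name] = correct[in_split].mean()
    return acc
```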
**Implementation.** For fair comparisons, we follow the network setting in (Cui et al., 2019; Liu et al., 2019) and use ResNet-32 (He et al., 2016) for CIFAR100-LT, ResNet-10 (He et al., 2016) for ImageNet-LT and ResNet-152 pre-trained on the full ImageNet dataset (Kang et al., 2020) for Places-LT as the model network, respectively. We train the network with 200 epochs on CIFAR100-LT and warm up the learning rate to 0.1 in the first 5 epochs and decay it at the 160th epoch and 180th epoch by 0.1. For ImageNet-LT, the network is optimized with 90 epochs. Likewise, a learning rate warming up to 0.1 in the first 5 epochs is used and decayed by 0.1 at the 30th epoch and 60th epoch. 30 epochs and an initial learning rate of 0.01 with cosine learning rate schedule (Loshchilov and Hutter, 2016) are used to update the model on Places-LT. The models are trained by SGD algorithm with fixed momentum of 0.9 and weight decay of 0.0005 for all datasets. In all experiments, \(\tau\) and \(\lambda\) are set to 0.6 and 0.3, respectively. Unless otherwise stated, the batch size is set to 128 for the imbalanced learning branch. For CoLB, we empirically find that the tail sampler sampling data on the Medium- and Few-shot subsets poses the best performance. For \(N\) and \(N_{sup}\), they are set to 5 and 4 by default, respectively. The ablative analysis and experimental results are shown in Section 4.3. Therefore, the tail sampler randomly samples 5 classes and 5 samples (4 support samples and 1 query sample) for each class in a mini-batch. **Comparison methods.** We compare DB-LTR with six categories of methods: (a)CE: A traditional deep CNN classification model. (b)Re-weighting: Focal Loss (Lin et al., 2017), CB Loss (Cui et al., 2019), CFL (Smith, 2022), LDAM-DRW (Cao et al., 2019). (c)Data augmentation: Remix-DRW (Chou et al., 2020), Bag of Tricks (Zhang et al., 2021). (d)Calibration. LA (Menon et al., 2021), CDA-LS (Islam et al., 2021), LADE (Hong et al., 2021). (e)Decouple method: cRT(Kang et al., 2020), LWS(Kang et al., 2020), DaLS (Wang et al., 2022), MiSLAS(Zhong et al., 2021). (f) Others: OLTR (Liu et al., 2019), BBN (Zhou et al., 2020), Hybrid-PSC (Wang et al., 2021), TSC (Li et al., 2022). ### Experimental results **Experimental results on CIFAR100-LT.** Table 1 displays the overall top-1 accuracy of different methods on CIFAR100-LT with different imbalance factors of 100, 50 and 10. It can be observed that DB-LTR delivers the best results in all situations. These results validate the effectiveness of our method. We note that the performance gap between DB-LTR and Hybrid-PSC (Wang et al., 2021) declines as the extent of data imbalance decreases. This result is mainly attributed to the fact that the model poses poorer adaptability to tail classes when imbalanced problem is more severe, and our CoLB can make the model more adaptive and bring more performance gains. **Experimental results on ImageNet-LT.** Table 2 presents the detailed results of our proposal and other methods on ImageNet-LT. Our DB-LTR still consistently achieves the best performance on all occasions. From the table we can see, BBN (Zhou et al., 2020) improves the classification on Medium- and Few-shot subsets, while sacrificing negligible accuracy on Many-shot classes. This verifies the effectiveness of reversed sampling strategy, which is used to impel the decision boundary to be away from tail classes. 
Different from BBN that utilizes cross-entropy loss to train the feature backbone, we introduce inter-CL and intra-CL to learn better feature representation. Therefore, DB-LTR outperforms BBN on all occasions. Note that the overall accuracy of Bag of Tricks (Zhang et al., 2021) is exceedingly close to that of our DB-LTR. Such merit of Bag of Tricks may account for the scientific combination of many efficacious training tricks. However, its training procedure is intractable due to the cumbersome combination of tricks. In comparison, DB-LTR is simple and easy to implement, enabling us to tackle long-tailed recognition effectively. **Experimental results on Places-LT.** To further verify the effectiveness of our method, we conduct comparative experiments on the large-scale scene dataset, i.e., Places-LT, and report the results of Many-shot, Medium-shot and Few-shot subsets. The experimental results are shown in Table 3. The imbalanced learning branch of the DB-LTR deploys LDAM-DRW (Cao et al., 2019) algorithm, whose core idea is to address the data imbalance and to mitigate the bias towards the dominant majority classes. Nevertheless, the insufficiency of feature representation for tail classes has not been \begin{table} \begin{tabular}{c|c|c c c} \hline \hline \multirow{2}{*}{Method} & \multirow{2}{*}{MFLOPs} & \multicolumn{4}{c}{Imbalance factor} \\ & & 100 & 50 & 10 \\ \hline CE & 69.76 & 38.32 & 43.85 & 55.71 \\ \hline Focal Loss(Lin et al., 2017) & 69.76 & 38.41 & 44.32 & 55.78 \\ CB Loss(Cui et al., 2019) & 69.76 & 39.60 & 45.32 & 57.99 \\ CFL(Smith, 2022) \(\dagger\) & 69.76 & 42.71 & 48.66 & 60.87 \\ LDAM-DRW(Cao et al., 2019) & 69.76 & 42.04 & 47.30 & 58.71 \\ \hline Remix-DRWChou et al. (2020) & 69.76 & 46.77 & - & 61.23 \\ Bag of Tricks(Zhang et al., 2021) & 69.76 & 47.73 & 51.69 & - \\ \hline CDA-LS(Islam et al., 2021) \(\dagger\) & 69.76 & 41.42 & 46.22 & 59.86 \\ LA(Menon et al., 2021) & 69.76 & 43.89 & - & - \\ LADE(Hong et al., 2021) & 74.22 & 45.4 & 50.5 & 61.7 \\ \hline cRT(Kang et al., 2020) & 69.76 & 43.30 & 47.37 & 57.86 \\ LWS(Kang et al., 2020) & 69.76 & 42.97 & 47.40 & 58.08 \\ DaLS(Wang et al., 2022) & 69.76 & 47.68 & 51.90 & 61.34 \\ MiSLAS(Zhong et al., 2021) & 69.76 & 47.0 & 52.3 & 63.2 \\ \hline OLTR(Liu et al., 2019) & 71.2 & 41.4 & 48.1 & 58.3 \\ BBN(Zhou et al., 2020) & 74.22 & 42.56 & 47.02 & 59.12 \\ Hybrid-PSC(Wang et al., 2021) & 69.76 & 44.97 & 48.93 & 62.37 \\ TSC(Li et al., 2022) & - & 43.8 & 47.4 & 59.0 \\ \hline DB-LTR (ours) & 69.76 & \(\mathbf{48.83_{\pm 0.06}}\) & \(\mathbf{53.67_{\pm 0.12}}\) & \(\mathbf{63.89_{\pm 0.10}}\) \\ \hline \hline \end{tabular} \end{table} Table 1: Top-1 accuracy (%) on CIFAR100-LT with ResNet-32. \(\dagger\) denotes results are our reproduction with released code. We report the mean and standard deviation of our method over five different runs. considered and addressed. The proposed DB-LTR tackles these problems simultaneously. Specifically, in CoLB, we recurrently train the model with tail data to enhance the adaptability of the model. Besides, two contrastive losses are calculated to reduce the intra-class variances and to distinguish head- and tail classes. Therefore, our method surpasses LDAM-DRW by \(16.40\%\), and the well-learned representation boost the overall performance. LWS (Kang et al., 2020b) shows the best performance on Medium- and Few-shot subsets as it is equipped with class-balanced sampling to adjust the tail classifier with small weight norm. 
However, LWS under-learns head classes and achieves unsatisfactory results on the Many-shot classes. In comparison, our method improves the classification of tail classes and the feature representation concurrently, so it is still superior to LWS on the Many-shot and overall accuracy. As shown in Tables 1 to 3, the FLOPs of our method are close to those of previous methods. The additional computational overhead of DB-LTR mainly lies in the second branch, where DB-LTR applies a projection head \(g\left(\cdot\right)\) to calculate the inter-CL and intra-CL. However, compared to the entire network structure, the computational cost of the projection head can be ignored. \begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & GFLOPs & Many & Medium & Few & Overall \\ \hline CE & 0.89 & 49.4 & 13.7 & 2.4 & 23.9 \\ \hline Focal Loss(Lin et al., 2017) & 0.89 & 36.4 & 29.9 & 16.0 & 30.5 \\ CB Loss(Cui et al., 2019) & 0.89 & 43.1 & 32.9 & 24.0 & 35.8 \\ CFL(Smith, 2022) \(\dagger\) & 0.89 & 46.73 & 23.18 & 13.51 & 32.46 \\ LDAM-DRW(Cao et al., 2019) & 0.89 & 45.3 & 34.1 & 19.3 & 36.3 \\ \hline CDA-LS(Islam et al., 2021) & 0.89 & - & - & - & 35.68 \\ LA (Menon et al., 2021) \(\dagger\) & 0.89 & 51.64 & 38.96 & 22.63 & 41.54 \\ \hline Bag of Tricks(Zhang et al., 2021b) & 0.89 & - & - & - & 43.13 \\ \hline cRT(Kang et al., 2020b) & 0.89 & - & - & - & 41.8 \\ LWS(Kang et al., 2020b) & 0.89 & - & - & - & 41.4 \\ DaLS(Wang et al., 2022) & - & - & - & - & 42.43 \\ \hline OLTR(Liu et al., 2019) & 0.91 & 43.2 & 35.1 & 18.5 & 35.6 \\ BBN(Zhou et al., 2020) & - & 49.1 & 37.1 & 20.4 & 37.7 \\ \hline DB-LTR (ours) & 0.89 & \(\mathbf{53.44_{\pm 0.04}}\)\(\mathbf{39.59_{\pm 0.03}}\)\(\mathbf{27.60_{\pm 0.07}}\)\(\mathbf{43.28_{\pm 0.03}}\) \\ \hline \hline \end{tabular} \end{table} Table 2: Top-1 accuracy (%) on ImageNet-LT with ResNet-10. \(\dagger\) denotes results are our reproduction with released code. We report the mean and standard deviation of our method over five different runs. ### Ablation study **CoLB plugs in previous methods.** As mentioned before, our CoLB can be treated as a plug-and-play module and seamlessly combined with previous methods to further improve their performance. Experiments on CIFAR100-LT with imbalance factor of 100 for validating this are shown in Table 4. As shown in this table, CoLB improves the recognition performance of previous methods. Since CoLB learns a prototype (Snell et al., 2017) for each tail class and calculates a prototype-based metric loss, the model is more adaptive to tail classes, which results in noticeable performance improvement of the minority. On the other hand, the proposed CoLB selects positive pairs from tail classes, aligning them and repulsing head instances in the embedding space, to learn a discriminative feature space that possesses the property of intra-class compactness and inter-class separability. Hence, the distinguishability of the decision boundary gets upgraded. The approach MiSLAS (Zhong et al., 2021) utilizes mixup (Zhang et al., 2017) to find better representation, when combined with CoLB, it still gets 1.06 points performance gains. **CoLB with different losses.** Intending to investigate the effect of the three losses in CoLB, i.e., inter-CL, intra-CL and metric loss, we carry on experiments on CIFAR100-LT with imbalance factor of 100 and ImageNet-LT. The results are shown in Table 5. 
The first line of the table shows the result of LDAM-DRW (Cao et al., 2019), i.e., only using the imbalanced learning branch in DB-LTR.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Metric loss & Intra-CL & Inter-CL & Accuracy on & Accuracy on \\ & & & CIFAR100-LT & ImageNet-LT \\ \hline & & & 42.04 & 36.3 \\ \(\surd\) & & & 46.31 & 40.15 \\ \(\surd\) & \(\surd\) & & 47.48 & 41.48 \\ \(\surd\) & & \(\surd\) & 47.61 & 41.67 \\ & \(\surd\) & \(\surd\) & 48.21 & 42.29 \\ \(\surd\) & \(\surd\) & \(\surd\) & **48.88** & **43.23** \\ \hline \hline \end{tabular} \end{table} Table 5: Evaluation of CoLB with different losses. \(\surd\) denotes using the corresponding loss in experiments. Top-1 accuracy (%) is reported on CIFAR100-LT with an imbalance factor of 100 and ImageNet-LT.

It can be observed that the metric loss respectively delivers 10.15% and 10.60% performance gains on CIFAR100-LT and ImageNet-LT compared with LDAM-DRW, which is attributed to the better adaptation to tail classes. Based on the metric loss, the performance obtains further improvement when integrating intra-CL or inter-CL into CoLB. It is noteworthy that compared with the combination of metric loss and intra-CL, that of metric loss and inter-CL achieves better recognition performance. Due to the imbalance of class labels and the scarcity of tail samples, it is difficult for the model to depict the real feature distribution of tail classes, resulting in mixed head and tail features. Inter-CL can separate head and tail classes from each other, while intra-CL makes the class boundaries of minority classes distinguishable. Hence, when cooperating with the metric loss, the accuracy gains brought by inter-CL are greater than those of intra-CL. The combination of inter-CL and intra-CL is also significantly better than LDAM-DRW, corroborating that better feature representation is conducive to model performance. When the metric loss is further added to the combination of inter-CL and intra-CL, the model achieves the best performance.

\begin{table} \begin{tabular}{c c c c c c} \hline \hline Method & GFLOPs & Many & Medium & Few & Overall \\ \hline CE & 11.55 & 45.9 & 22.4 & 0.36 & 27.2 \\ \hline Focal Loss (Lin et al., 2017) \(\ddagger\) & 11.55 & 41.1 & 34.8 & 22.4 & 34.6 \\ CB Loss (Cui et al., 2019) \(\dagger\) & 11.55 & 43.44 & 33.71 & 13.37 & 32.94 \\ CFL (Smith, 2022) \(\dagger\) & 11.55 & 45.07 & 25.76 & 8.50 & 29.11 \\ LDAM-DRW (Cao et al., 2019) \(\dagger\) & 11.55 & 43.25 & 31.76 & 16.68 & 32.74 \\ \hline CDA-LS (Islam et al., 2021) \(\dagger\) & 11.55 & 42.34 & 33.13 & 18.42 & 33.61 \\ LA (Menon et al., 2021) \(\dagger\) & 11.55 & 43.10 & 37.43 & 22.40 & 36.31 \\ \hline cRT (Kang et al., 2020b) & 11.55 & 42.0 & 37.6 & 24.9 & 36.7 \\ LWS (Kang et al., 2020b) & 11.55 & 40.6 & **39.1** & **28.6** & 37.6 \\ \hline OLTR (Liu et al., 2019) & 11.56 & 44.7 & 37.0 & 25.3 & 35.9 \\ BBN (Zhou et al., 2020) & - & **49.1** & 37.1 & 20.4 & 37.7 \\ \hline DB-LTR (ours) & 11.55 & 43.85\({}_{\pm 0.04}\) & 38.65\({}_{\pm 0.06}\) & 26.93\({}_{\pm 0.04}\) & **38.11\({}_{\pm 0.02}\)** \\ \hline \hline \end{tabular} \end{table} Table 3: Top-1 accuracy (%) on Places-LT with ResNet-152. \(\ddagger\) and \(\dagger\) denote the results are borrowed from (Liu et al., 2019) and reproduced by us with released code, respectively. We report the mean and standard deviation of our method over five different runs.

\begin{table} \begin{tabular}{c|c|c} \hline \hline Method & CoLB & Top-1 accuracy (\%) \\ \hline CE & \(\times\) & 38.32 \\ & \(\surd\) & 43.65 (+5.33) \\ \hline Focal Loss Lin et al. (2017) & \(\times\) & 38.41 \\ & \(\surd\) & 43.14 (+4.73) \\ \hline CFL Smith (2022) & \(\times\) & 42.71\(\dagger\) \\ & \(\surd\) & 45.16 (+2.45) \\ \hline Remix-DRW Chou et al. (2020) & \(\times\) & 46.52\(\dagger\) \\ & \(\surd\) & 48.46 (+1.94) \\ \hline CDA-LS Islam et al. (2021) & \(\times\) & 41.42\(\dagger\) \\ & \(\surd\) & 46.98 (+5.56) \\ \hline MiSLAS Zhong et al. (2021) & \(\times\) & 47.0 \\ & \(\surd\) & 48.06 (+1.06) \\ \hline \hline \end{tabular} \end{table} Table 4: Evaluation of CoLB plugging in previous methods. \(\dagger\) denotes results are reproduced with released code. The green numbers in round brackets stand for the performance gains of introducing CoLB into other methods. \(\surd\) and \(\times\) indicate CoLB has / has not been incorporated into the corresponding methods.
**Visualization of feature distribution.** In order to verify that DB-LTR can learn a better feature space with intra-class compactness and inter-class separability, we visualize the learned feature distributions of our method and other approaches. The compared methods include CE, LDAM (Cao et al., 2019) and DB-LTR. We use t-SNE (Van der Maaten and Hinton, 2008) visualization, a common method to inspect feature quality in visual images, and display the comparison results in Figure 2.

Figure 2: The t-SNE visualization of embedding space obtained using CE, LDAM and DB-LTR. The first and the second rows visualize results on the CIFAR100-LT test set and the ImageNet-LT test set, respectively. The dots of No. 0\(\sim\)2 denote the features of head classes and the stars of No. 3\(\sim\)5 stand for tail features. Zoom in for details.

From the figure we can see that the quality of the features learned by CE is the worst: most of the features of different categories mix together and the decision boundaries are ambiguous. This implies that the cross-entropy loss is not suitable for feature learning in long-tailed scenarios because of the extreme class imbalance and the lack of tail samples. LDAM encourages the model to learn different margins for different classes, which improves the decision boundaries to some extent. However, the intra-class features trained by LDAM are scattered in the embedding space, and the issue of inadequate feature learning for tail classes has not been solved satisfactorily. In contrast with CE and LDAM, our DB-LTR learns high-quality features and decision boundaries. As Figure 2 (c) and (f) illustrate, the features within a class gather closely together, revealing that the intra-class variances are quite small. Meanwhile, there are more obvious decision boundaries. The results prove the effectiveness and practicality of CoLB in assisting the imbalanced learning branch to learn a compact and separable feature space and a discriminative decision boundary.

**Sampling strategy of tail sampler.** In Figure 3, we discuss the impact of the sampling strategy for the tail sampler on the model performance. We conduct experiments on CIFAR100-LT with an imbalance factor of 100. As shown in Figure 3, when the training data of CoLB is sampled from the Medium- and Few-shot subsets, the model achieves the best performance due to the more balanced learning between head and tail classes. When the tail sampler samples data from the whole dataset, the model shows inferior overall accuracy. The main reason is that the training data of CoLB is then dominated by head classes, which brings more performance improvement to the majority but less to the minority.
When the tail sampler samples data only from the Few-shot subset, the overall accuracy is the worst, because Medium-shot classes receive less focus and pose the lowest accuracy. In a nutshell, sampling on the Medium- and Few-shot classes best balances the learning of the model, thus delivering the best performance.

Figure 3: The detailed results of the tail sampler sampling data on the whole dataset, on the Medium- and Few-shot subsets, and on the Few-shot subset, respectively.

**Effect of \(N\) and \(N_{sup}\).** The proposed CoLB supervises the model with \(N\) categories and \(N_{sup}\) support samples for each category. To investigate the sensitivity of our model to \(N\) and \(N_{sup}\), we conduct experiments with different values of \(N\) and \(N_{sup}\). From Table 6 we can see that, as the value of \(N_{sup}\) increases, the performance of our model initially improves, but after a certain point it starts to drop. This is because the larger \(N_{sup}\) is, the higher the quality of the learned prototype and the better the results; however, when \(N_{sup}\) reaches 10, the computed prototype already represents the true prototype of the corresponding class almost perfectly. We therefore conclude that our method is sensitive to the number of support samples. Despite that, since there are extremely few samples for some tail categories when the imbalance factor becomes large, e.g., an imbalance factor of 256 for ImageNet-LT, we set \(N_{sup}\) to 4 by default.

| \(N_{sup}\) | Top-1 accuracy (%) |
| --- | --- |
| 1 | 62.93 |
| 4 | 63.89 |
| 10 | 64.14 |
| 20 | 62.73 |

Table 6: The ablation study of the hyper-parameter \(N_{sup}\). The results are reported on CIFAR100-LT with an imbalance factor of 10 and \(N=5\) (5-way \(N_{sup}\)-shot).

Table 7 demonstrates that a large \(N\) deteriorates the model performance because of the overfitting problem, which is similar to oversampling tail classes (Chawla et al., 2002; More, 2016). In a nutshell, we carry out experiments under the 5-way 4-shot setting.

**The hyper-parameters \(\tau\) and \(\lambda\).** In Figure 4, we explore the influence of \(\tau\) and \(\lambda\) on the performance of DB-LTR. As illustrated in the figure, \(\tau=0.6\) makes the model perform best, and setting \(\lambda\) to a small or large value brings inferior performance gains due to under- or over-regularization of the inter-CL. Therefore, we set the values of \(\tau\) and \(\lambda\) to 0.6 and 0.3 in experiments.

Figure 4: The influence of hyper-parameters \(\tau\) and \(\lambda\) on model performance. The results are produced on (a) CIFAR100-LT with an imbalance factor of 100 and (b) ImageNet-LT, respectively.
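As a companion to the sampling-strategy and \(N\)-way \(N_{sup}\)-shot discussion above, here is one plausible implementation of a tail sampler restricted to the Medium- and Few-shot subsets; the function and variable names are hypothetical, not taken from the paper.

```python
import random
from collections import defaultdict

def sample_episode(labels, tail_classes, n_way=5, n_sup=4, n_query=4):
    """Draw one N-way, N_sup-shot episode from Medium-/Few-shot classes only."""
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        if y in tail_classes:                  # skip head-class samples entirely
            by_class[y].append(idx)
    eligible = [c for c in by_class if len(by_class[c]) >= n_sup + n_query]
    support, query = [], []
    for c in random.sample(eligible, n_way):
        picks = random.sample(by_class[c], n_sup + n_query)
        support += picks[:n_sup]               # used to build the class prototype
        query += picks[n_sup:]                 # scored against the prototypes
    return support, query

# Toy usage: 100 classes, with classes 30..99 forming the Medium-/Few-shot subset.
labels = [random.randrange(100) for _ in range(5000)]
sup_idx, qry_idx = sample_episode(labels, tail_classes=set(range(30, 100)))
```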
## 5 Conclusion

In this paper, we propose a Dual-Branch Long-Tailed Recognition (DB-LTR) model to deal with long-tailed problems. Our model consists of an imbalanced learning branch and a Contrastive Learning Branch (CoLB). The imbalanced learning branch can integrate common imbalanced learning methods to deal with the data imbalance issue. With the introduction of the prototypical network and contrastive learning, CoLB can learn well-trained feature representations, thus improving the adaptability of the model to tail classes and the discriminability of the decision boundaries. Specifically, the prototype-based metric loss improves the ability of the model to recognize tail classes, while the inter-branch contrastive loss and the intra-branch contrastive loss make the learned feature space more compact and separable. CoLB is a plug-and-play module: when other imbalanced learning methods are combined with CoLB, their recognition performance improves. The proposed method outperforms the comparative methods on three common long-tailed datasets, proving its effectiveness and competitiveness.

## Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

## Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 62176095) and the Guangdong Provincial Key Laboratory of Artificial Intelligence in Medical Image Analysis and Application (No. 2022B121010011).
2309.07986
Viewpoint Textual Inversion: Discovering Scene Representations and 3D View Control in 2D Diffusion Models
Text-to-image diffusion models generate impressive and realistic images, but do they learn to represent the 3D world from only 2D supervision? We demonstrate that yes, certain 3D scene representations are encoded in the text embedding space of models like Stable Diffusion. Our approach, Viewpoint Neural Textual Inversion (ViewNeTI), is to discover 3D view tokens; these tokens control the 3D viewpoint - the rendering pose in a scene - of generated images. Specifically, we train a small neural mapper to take continuous camera viewpoint parameters and predict a view token (a word embedding). This token conditions diffusion generation via cross-attention to produce images with the desired camera viewpoint. Using ViewNeTI as an evaluation tool, we report two findings: first, the text latent space has a continuous view-control manifold for particular 3D scenes; second, we find evidence for a generalized view-control manifold for all scenes. We conclude that since the view token controls the 3D `rendering' viewpoint, there is likely a scene representation embedded in frozen 2D diffusion models. Finally, we exploit the 3D scene representations for 3D vision tasks, namely, view-controlled text-to-image generation, and novel view synthesis from a single image, where our approach sets state-of-the-art for LPIPS. Code available at https://github.com/jmhb0/view_neti
James Burgess, Kuan-Chieh Wang, Serena Yeung-Levy
2023-09-14T18:52:16Z
http://arxiv.org/abs/2309.07986v2
# Viewpoint Textual Inversion: Unleashing Novel View Synthesis with Pretrained 2D Diffusion Models ###### Abstract Text-to-image diffusion models understand spatial relationship between objects, but do they represent the true 3D structure of the world from only 2D supervision? We demonstrate that yes, 3D knowledge is encoded in 2D image diffusion models like Stable Diffusion, and we show that this structure can be exploited for 3D vision tasks. Our method, Viewpoint Neural Textual Inversion (ViewNeTI), controls the 3D viewpoint of objects in generated images from frozen diffusion models. We train a small neural mapper to take camera viewpoint parameters and predict text encoder latents; the latents then condition the diffusion generation process to produce images with the desired camera viewpoint. ViewNeTI naturally addresses Novel View Synthesis (NVS). By leveraging the frozen diffusion model as a prior, we can solve NVS with very few input views; we can even do single-view novel view synthesis. Our single-view NVS predictions have good semantic details and photorealism compared to prior methods. Our approach is well suited for modeling the uncertainty inherent in sparse 3D vision problems because it can efficiently generate diverse samples. Our view-control mechanism is general, and can even change the camera view in images generated by user-defined prompts. Code is available at our project website. ## 1 Introduction Text-to-image diffusion models trained on web-scale datasets have shown impressive capabilities in reasoning about objects, the composition of multiple objects, and 2D spatial layout [18, 41, 44, 50, 52]. Despite being trained on 2D image data, these models seem able to do 3D reasoning: in a simple experiment, we ask a Stable Diffusion model [41] to infill the background around a car, and find that it generates 3D-consistent shadows and reflections (Fig. 2). In this work, we investigate how to extract 3D knowledge from a pretrained diffusion model's latent space, and how to leverage that knowledge for 3D vision tasks. Utilizing pretrained diffusion models is appealing for two reasons. First, because large and unposed 2D datasets [36, 46] are cheaper to procure than 3D datasets [10, 38] or multi-view datasets [9, 21, 33], diffusion models cover a larger distribution of concepts; this has motivated recent work on lifting 2D knowledge from diffusion models to 3D [22, 35, 49, 54, 64]. Second, since diffusion models are generative, they are well-suited for modeling the ambiguity that naturally arises in 3D vision due to incomplete input information, for example in sparse-view novel view synthesis [11, 27, 32]. Our key insight is that the diffusion model text encoder space embeds learnable concepts for 3D reasoning. Our method, Viewpoint Neural Textual Inversion (ViewNeTI), controls the 3D viewpoint of objects via the text encoder latent space, while keeping the object details consistent. Specifically, we train a small neural mapper that takes camera parameters and predicts a text encoding; the text encoding then conditions the diffusion process to produce images with the desired camera view ( Fig. 1). The ViewNeTI mapper weights are optimized using Textual Inversion (TI), an approach common in personalization and content creation [1, 13, 24, 43]. TI adds novel concepts to the diffusion model vocabulary, such as content (e.g. a user's particular dog) or styles (e.g. 
a particular artist's visual aesthetic) by optimizing new word embeddings to condition the diffusion model image generation. Here, we apply textual inversion to add _viewpoint control_ words to the diffusion model vocabulary. To our knowledge, this is the first application of TI for doing semantically-consistent image transformations. We leverage ViewNeTI in a new formulation of novel view synthesis (NVS) by conditioning diffusion model generation on camera parameters. We first optimize ViewNeTI on a single scene using a small set of multi-view images. Without access to any prior 3D data, the mapper _generalizes to novel views_, as long as those views are 'interpolations' of the training views. Seeking a more generic control mechanism that does viewpoint extrapolation, we pre-train ViewNeTI on multiple scenes with a shared coordinate system. The pretrained ViewNeTI mapper _generalizes to novel scenes_, enabling synthesis of novel views far from the input views with little data; it can even do NVS from a single image. Compared to existing single-image NVS methods, ViewNeTI has several advantages, especially in single-view NVS. It produces views with photorealistic details for real-world objects that are in the massive 2D training distribution of 2D diffusion models like Stable Diffusion. Once trained, it can generate diverse predictions under uncertainty in close to real time. The pre-training multi-view dataset is small (less than 50 scenes) and cheap to train on (less than one day with one GPU). Moreover, the view-control generalizes to new object scenes under distribution shift of object semantics. Finally, we show that ViewNeTI has the potential for more diverse applications, such as content creation: by adding ViewNeTI conditioning to new text prompts, we can control the 3D viewpoint around objects in generated images.

Figure 2: Diffusion infilling models do 3D reasoning. Left: input masked object. Middle: the generated shadows are consistent with the shadows on the car and the shadows in the sand. Right: the ground has object reflections.

## 2 Related work

**Textual inversion of diffusion models** Personalization aims to inject novel concepts - concepts like objects and styles - into the diffusion model vocabulary using a few image examples of that concept. Textual inversion (TI) is a popular approach that optimizes a word embedding for each new concept [13]. Compared to alternatives that fine-tune the model weights [25, 43], learned concepts are editable, have low storage cost, and are more portable to other copies of the same model [1, 8, 13]. Recent extensions train an encoder to map images to concept vectors [14, 60, 67, 80]. Another direction improves the quality and editability of the learned concepts by training different text embeddings depending on the noising timestep and UNet layer [1, 62, 79]. These are combined in the recent NeTI model [15], which is the current state of the art in Textual Inversion. Our work utilizes many of the architectural ideas used in NeTI. Our work is, to our knowledge, the first to use textual inversion for controlling 3D viewpoint in images and the first to use it for any 3D control of generated objects. We propose an architecture for predicting text embeddings that control camera viewpoint, and also contribute learning strategies for adapting TI to do novel view synthesis and view-controlled image generation.

**Sparse-view novel view synthesis** One set of techniques for sparse-view novel view synthesis (NVS) trains a NeRF
[65] as an explicit 3D representation. To address the challenge of sparse input views, these approaches add regularization on novel views [12, 20, 34, 40, 47, 63, 69, 71, 77], modify the training process [48, 59, 73], or condition the neural field on image features that are derived from pretraining on multi-view datasets [6, 7, 26, 75]. Most of these works that do not have pretraining do not attempt single-view NVS. On the other hand, a recent series of models does tackle single-view NVS using NeRFs with novel view regularization [11, 27, 32, 72] by leveraging diffusion models as a data prior over 2D renders (via the Score Distillation Loss [35, 64]). Unlike our approach, the NeRF-based methods are not generative: once trained, they cannot be easily sampled from, and thus cannot easily model ambiguity. The sparse NeRF methods also tend to produce blurriness and image artifacts compared to our method. The other set of techniques for sparse-view NVS is 'geometry free': these methods do not train any explicit 3D representation of the scene. Rather than doing per-scene optimization, they train an inductive model. One approach for single-view NVS is to formulate an image-to-image translation problem [53, 57], and recent works have leveraged diffusion models [27, 66]. Many works design the architecture and losses to encode 3D priors [19, 45, 68], and recent approaches also use diffusion models [2, 4, 22, 28, 54, 58, 81]. These geometry-free methods require pretraining on multi-view datasets [5, 10, 38], and they most commonly test on images that are in the same class and covariate distribution as the training set. Our approach differs in that we do not directly train or fine-tune an image generation model. We instead learn to predict text embeddings to condition image generation in large pretrained models like StableDiffusion. Since we control the 3D view of objects in the very large StableDiffusion vocabulary, we can do novel view synthesis on scenes with semantics outside the pretraining dataset. A concurrent work on ArXiv, DreamSparse [74], does sparse-view NVS by learning to map multi-view inputs to 3D features, which then condition a perturbation on the UNet predictions (as in ControlNet [78]). This has a similar motivation to our work: by using pretrained frozen diffusion models, they can do NVS for classes outside the distribution of the multi-view pretraining set: they are 'open-set'. However, their guidance approach is different, and it does not do NVS from one input view, which is our primary application.

## 3 Background

### Text-to-image latent diffusion models

We apply viewpoint textual inversion to text-to-image Stable Diffusion (SD). SD models are Latent Diffusion Models (LDMs) for image generation [41] and are typically trained on web-scale datasets of text-image pairs \((\mathbf{x},y)\sim\mathcal{D}\)[46]. There are two components. First, a variational autoencoder (VAE) [23, 39] with encoder \(\mathcal{E}\) and decoder \(\mathcal{D}\) compresses RGB images, \(\mathbf{x}\in\mathbb{R}^{H\times W\times 3}\), to a lower-dimensional latent \(\mathbf{z}_{0}=\mathcal{E}(\mathbf{x})\in\mathbb{R}^{h\times w\times c}\). Second, a conditional diffusion model [17, 18, 50, 52] is trained to generate this distribution of latents conditioned on the text prompt, \(y\), as \(p(\mathbf{z}|y)\). LDMs model a diffusion process: noise is incrementally added to the latent over \(T\) steps; the intermediate latents are \(\mathbf{z}_{t}\) with \(t\in[0,T]\).
We learn a neural network \(\epsilon_{\theta}\) that reverses each step by predicting the applied noise. To train this network, we simulate \(\mathbf{z}_{t}\) by sampling isotropic Gaussian noise, \(\epsilon\), scaling it according to a parameter \(t\sim\mathcal{U}[0,T]\), and adding it to \(\mathbf{z}\). The training objective is for \(\epsilon_{\theta}\) to predict \(\epsilon\) conditioned on the noising step, \(t\), and the text, \(y\): \[L_{LDM}:=\mathbb{E}_{(\mathbf{x},y)\sim\mathcal{D},\epsilon\sim\mathcal{N}(0,\mathbf{I}),t}\Big[\|\epsilon-\epsilon_{\theta}(\mathbf{z}_{t},y,t)\|\Big] \tag{1}\] One can sample from \(p(\mathbf{z}|y)\) by drawing \(\mathbf{z}_{T}\sim\mathcal{N}(0,\mathbf{I})\) and using \(\epsilon_{\theta}\) to run the reverse process over \(T\) steps [50]. This gives a latent sample \(\tilde{\mathbf{z}}\) which is decoded to an image, \(\tilde{\mathbf{x}}=\mathcal{D}(\tilde{\mathbf{z}})\). The \(\epsilon_{\theta}\) architecture is a conditional UNet [42]. \(\mathbf{z}_{t}\) is passed through the main UNet stream. The text is passed through a pretrained CLIP [36] text encoder, giving a \(d\)-dim conditioning vector for each token, \(\mathbf{c}(y)\in\mathbb{R}^{d\times 77}\), which is mixed with each UNet layer via cross-attention [41].
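For illustration, Eq. (1) can be implemented as a single Monte-Carlo training step as sketched below; the cosine noise schedule and the placeholder `eps_model` are assumptions made for the sake of a runnable example, since the text specifies the loss but not the schedule.

```python
import torch

def ldm_training_loss(eps_model, z0, cond, T=1000):
    """One Monte-Carlo estimate of Eq. (1) for a batch of clean latents z0."""
    t = torch.randint(0, T, (z0.shape[0],))               # t ~ U[0, T)
    eps = torch.randn_like(z0)                            # eps ~ N(0, I)
    a_bar = torch.cos(0.5 * torch.pi * t / T) ** 2        # assumed noise schedule
    a_bar = a_bar.view(-1, 1, 1, 1)
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * eps    # noised latent
    pred = eps_model(z_t, cond, t)                        # eps_theta(z_t, y, t)
    return ((eps - pred) ** 2).mean()                     # squared L2, as in practice

# Toy usage with a stand-in eps_model on 4-channel 8x8 latents.
model = lambda z, c, t: torch.zeros_like(z)
loss = ldm_training_loss(model, torch.randn(2, 4, 8, 8), cond=None)
```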
### Textual inversion

In textual inversion (TI) [13], we learn a new text embedding, \(\mathbf{v}_{S_{o}}\), for pseudo-word \(S_{o}\), which represents a new concept from a small dataset. The dataset, \((\mathbf{x},y)\sim\mathcal{D}_{TI}\), contains images, \(\mathbf{x}\), of that concept, and paired captions, \(y\), of the form "A photo of \(S_{o}\)". To learn the word embedding \(\mathbf{v}_{S_{o}}\), we use the LDM loss as in Eq. (1), but replacing \(\mathcal{D}\) with \(\mathcal{D}_{TI}\) and optimizing _only_ with respect to \(\mathbf{v}_{S_{o}}\). Importantly, the diffusion model weights are frozen. A recent work proposes the NeTI model [1], which includes many recent advances in textual inversion [14, 60, 67, 80]. Instead of learning a single embedding for \(S_{o}\), it learns an embedding for each UNet cross-attention layer, \(\ell\)[62], and for each noise step, \(t\)[15]; this representation space is denoted \(\mathcal{P}^{*}\)[1]. NeTI is implemented as a small neural network mapper, \(\mathcal{M}\), conditioned on \(t\) and \(\ell\). The optimization of Eq. (1) is with respect to the weights of \(\mathcal{M}\)[1]. Our work, ViewNeTI, uses the NeTI architecture.

## 4 Method

Viewpoint Neural Textual Inversion (ViewNeTI) controls the viewpoint of objects in images generated by diffusion models. In Sec. 4.1 we describe the core architectural component, which is a small neural network - the _view-mapper_ or \(\mathcal{M}_{v}\) - that takes camera parameters and predicts a text encoding. This text encoding then conditions the diffusion generation process to control viewpoint in images. The ViewNeTI view mapper is trained to do novel view synthesis from few inputs. Illustrated in Fig. 3, we use textual inversion (TI) to optimize the \(\mathcal{M}_{v}\) parameters jointly with a mapper for the object semantics, \(\mathcal{M}_{o}\). In Sec. 4.2 we describe how to train \(\mathcal{M}_{v}\) on a single scene with no prior multi-view supervision, which enables _interpolation_ of novel views. In Sec. 4.3, we propose pretraining \(\mathcal{M}_{v}\) on a multi-view dataset, which enables _extrapolation_ of novel views far from the input views, and under semantic distribution shift. Here, we highlight the especially challenging application of NVS from a single input view. Finally, in Sec. 4.4, we show that the pretrained \(\mathcal{M}_{v}\) can be used in text-to-image generation to control the viewpoint around objects.

Figure 3: Our method for solving sparse-view novel view synthesis using ViewNeTI. Left: we have a small multi-view dataset, \(\mathcal{D}_{MV}\), containing images, \(\mathbf{x}_{i}\), and known camera poses, \(\mathbf{R}_{i}\). We create a caption for each image. The caption contains a camera-specific token \(S_{\mathbf{R}_{i}}\) that is different for each view, and an object token, \(S_{o}\), which is common across views. Right: textual inversion training of our neural mappers, \(\mathcal{M}_{v}\) and \(\mathcal{M}_{o}\). The mappers predict the word embeddings for \(S_{\mathbf{R}_{i}}\) and \(S_{o}\) respectively. These are conditioned on diffusion timestep \(t\) and UNet layer \(\ell\); the view mapper, \(\mathcal{M}_{v}\), is additionally conditioned on camera parameters, \(\mathbf{R}_{i}\). All parameters are encoded by a Fourier feature mapper, \(\gamma\)[55]. The remaining tokens take their regular word embeddings. The prompt is passed through the CLIP text encoder [36]. The post-CLIP encodings corresponding to \(S_{\mathbf{R}_{i}}\) and \(S_{o}\) are perturbed by vectors that are also predicted by \(\mathcal{M}_{v}\) and \(\mathcal{M}_{o}\). This encoding conditions the diffusion model, \(\epsilon_{\theta}\). We do diffusion model training on this dataset while optimizing only \(\mathcal{M}_{v}\) and \(\mathcal{M}_{o}\) (this is textual inversion training [1, 13]). More details in Sec. 4.2.

### ViewNeTI mapper architecture and inference

The ViewNeTI mappers control the text encoding for the diffusion model. As shown in Fig. 3, we have a view-mapper, \(\mathcal{M}_{v}\), and an object mapper, \(\mathcal{M}_{o}\), to predict the text encodings of the view and object tokens respectively. They are both conditioned on the diffusion timestep, \(t\), and UNet conditioning layer, \(\ell\), while the view-mapper is also conditioned on camera parameters, \(\mathbf{R}\). They predict a word embedding input to the text encoder. They also predict a 'textual bypass', which is a perturbation to the vectors that are output from the text encoder. We now explain Fig. 3 in detail.

**3D-view/pose representation** One input parameter to the view-mapper, \(\mathcal{M}_{v}\), is the camera parameters, \(\mathbf{R}\), which can be any vector representation of the camera extrinsics and intrinsics. In our experiments, we use the camera-to-world projection matrix, and we normalize each matrix entry independently to \([-1,1]\). The normalization range is fixed according to the dataset parameters for the experiments in Sec. 5. Our method is agnostic to the camera parameterization: we verify that it also works with spherical coordinates in Appendix G. We apply Fourier-feature encoding [37, 55] with bandwidth \(\sigma=2\). This enables the neural mappers to learn a predictor over word-embedding space that is sufficiently sensitive to small changes in camera parameters; that is, it can represent high-frequency changes in word embedding space. The \(\sigma\) parameter is fixed across our experiments: it is big enough to model a diverse viewpoint range, but small enough to interpolate viewpoints (see Sec. 5.4 ablations).
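Before turning to the full architecture, a minimal sketch of what the Fourier-feature encoder \(\gamma(\cdot)\) could look like is given below. The fixed Gaussian projection follows the random-Fourier-features recipe of [55]; the exact input packing (timestep, layer index, flattened camera matrix) and the 64-dimensional output are our assumptions.

```python
import torch

class FourierEncoder(torch.nn.Module):
    """Random Fourier features gamma(.) with bandwidth sigma (assumed form)."""
    def __init__(self, in_dim, out_dim=64, sigma=2.0):
        super().__init__()
        # Fixed Gaussian projection; sigma controls how quickly the encoding
        # (and hence the predicted word embedding) varies with small changes in R.
        self.register_buffer("B", sigma * torch.randn(in_dim, out_dim // 2))

    def forward(self, x):                      # x = concat([t, l, R_flattened])
        proj = 2 * torch.pi * x @ self.B
        return torch.cat([proj.sin(), proj.cos()], dim=-1)

# Toy usage: timestep + layer index + flattened 3x4 camera matrix, all in [-1, 1].
gamma = FourierEncoder(in_dim=1 + 1 + 12)
c_gamma = gamma(torch.rand(1, 14) * 2 - 1)     # shape [1, 64]
```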
**Architecture** Both neural mappers, \((\mathcal{M}_{v},\mathcal{M}_{o})\), are conditioned on the denoising timestep and the diffusion model UNet layer, \((t,\ell)\). This improves textual inversion reconstruction quality and optimization convergence because different timesteps and UNet layers control different image features [1, 62]; for example, small-\(t\) denoising steps control finer texture details rather than layout and shape. We concatenate these parameters with the camera parameters into a vector, \([t,\ell,\mathbf{R}]\). Let the Fourier feature encoding function [37, 55] be \(\gamma(\cdot)\). Then the encoding is \(\mathbf{c}_{\gamma}=\gamma([t,\ell,\mathbf{R}])\), and we choose its dimension as 64. We define the function parameterized by our view-mapper as: \[(\mathbf{v}_{\mathbf{R}},\mathbf{v}_{\mathbf{R},p})=\mathcal{M}_{v}(\mathbf{c}_{\gamma}) \tag{2}\] The vectors \((\mathbf{v}_{\mathbf{R}},\mathbf{v}_{\mathbf{R},p})\) have the same dimension as the text encoder, which is 768 in Stable Diffusion 2 (SD2) [41]. We parameterize \(\mathcal{M}_{v}\) as a 2-layer MLP with 64 dimensions, LayerNorm [3], and LeakyReLU [70]. The view-mapper has 140,000 parameters. The object-mapper, \(\mathcal{M}_{o}\), is defined in the same way, except without conditioning on camera parameters. So, \(\mathbf{c}_{\gamma}=\gamma([t,\ell])\) and: \[(\mathbf{v}_{o},\mathbf{v}_{o,p})=\mathcal{M}_{o}(\mathbf{c}_{\gamma})\] In the next subsection we describe how the two output vectors are used for controlling the 3D viewpoint of Stable Diffusion generations.

**Inference for view-controlled generation** We pass the text encoder a prompt like '\(S_{\mathbf{R}}\). A photo of a \(S_{o}\)', where \(S_{\mathbf{R}}\) has word embedding \(\mathbf{v}_{\mathbf{R}}\) and \(S_{o}\) has word embedding \(\mathbf{v}_{o}\). The embedding for \(S_{\mathbf{R}}\) controls the viewpoint for the image being generated. The embeddings for \(S_{o}\) are the same for all novel views generated in one scene, so they capture properties that are invariant across all images, such as object semantics and image style. We scale the embeddings \((\mathbf{v}_{\mathbf{R}},\mathbf{v}_{o})\) to have the same \(L_{2}\) norm as the word embedding for 'object' (the choice of this reference word is not important in practice). We then pass this prompt through the CLIP text encoder with padding, which gives 77 encoding vectors. If \(k\) is the index for token \(S_{\mathbf{R}}\), then call its embedding vector \(\mathbf{e}_{k}\). We perturb \(\mathbf{e}_{k}\) in the direction of the \(\mathbf{v}_{\mathbf{R},p}\) vector (which was predicted by \(\mathcal{M}_{v}\) as in Eq. (2)): \[\mathbf{e}_{k}^{\prime}:=\mathbf{e}_{k}+\alpha\|\mathbf{e}_{k}\|\cdot\frac{\mathbf{v}_{\mathbf{R},p}}{\|\mathbf{v}_{\mathbf{R},p}\|} \tag{3}\] This is called _textual bypass_ [1], and it enables more flexibility for the view-mapper control, as well as faster convergence. \(\alpha\) controls the perturbation magnitude, and prior textual inversion methods set it to \(0.2\). This constrains the bypass mechanism to only refine object details, and thus preserve the compositionality of the word embedding with other concepts in word space. We set \(\alpha=0.2\) only for experiments in text-to-image generation. For NVS, we do not care about compositionality, so we are free to increase \(\alpha\) to 5, which we find has similar view-control performance and better reconstructs object details (see Sec. 5.4 ablations). The final view-mapper output, \(\mathbf{e}_{k}^{\prime}\), is the \(k\)th conditioning vector that is passed to the UNet generator. We apply this exact same textual bypass approach to the object token, where we perturb using the predicted \(\mathbf{v}_{o,p}\). To generate the final image in viewpoint \(\mathbf{R}\), we run diffusion model generation with DPMSolver [30, 31, 51] for 50 steps.
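Putting the pieces of this subsection together, the sketch below shows a view-mapper of the stated shape (2 layers, width 64, LayerNorm, LeakyReLU) producing the pair in Eq. (2), followed by the textual-bypass perturbation of Eq. (3). The exact head layout and the unbatched usage are our simplifying assumptions.

```python
import torch
import torch.nn as nn

class ViewMapper(nn.Module):
    """Sketch of M_v: maps the 64-dim encoding c_gamma to (v_R, v_R_p), Eq. (2)."""
    def __init__(self, enc_dim=64, hidden=64, d_text=768):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.LayerNorm(hidden), nn.LeakyReLU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.LeakyReLU(),
        )
        self.head = nn.Linear(hidden, 2 * d_text)   # word embedding + bypass vector

    def forward(self, c_gamma):
        v_R, v_Rp = self.head(self.net(c_gamma)).chunk(2, dim=-1)
        return v_R, v_Rp

def textual_bypass(e_k, v_Rp, alpha=5.0):
    """Eq. (3): nudge the post-encoder vector e_k along the bypass direction."""
    return e_k + alpha * e_k.norm() * v_Rp / v_Rp.norm()

# Toy usage: one (t, l, R) encoding -> embeddings, then perturb a CLIP output.
mapper = ViewMapper()
v_R, v_Rp = mapper(torch.randn(64))
e_k = torch.randn(768)                    # post-CLIP encoding of the S_R token
e_k_prime = textual_bypass(e_k, v_Rp)     # conditioning vector passed to the UNet
```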
**Data augmentations** We apply simple augmentations to images, similar to [32]. This helps avoid overfitting when the training dataset is small, as we show in the Sec. 5.4 ablations. We also do text prompt augmentations that are standard in textual inversion [13]. See Appendix C for details.

### Single-Scene Novel View Synthesis

In single-scene NVS, we have a small (\(<10\)) dataset called \(\mathcal{D}_{MV}\) with multi-view images, \(\mathbf{x}_{i}\), with known camera pose parameters, \(\mathbf{R}_{i}\). We do not have prior 3D knowledge from other multi-view datasets. As in Fig. 3, we generate captions of the form \(y(S_{\mathbf{R}_{i}},S_{o})=\)"\(S_{\mathbf{R}_{i}}\). a photo of a \(S_{o}\)". As in Sec. 4.1, \(S_{\mathbf{R}_{i}}\) corresponds to viewpoint and its embeddings are predicted by \(\mathcal{M}_{v}\). Likewise, \(S_{o}\) corresponds to object semantics and its embeddings are predicted by \(\mathcal{M}_{o}\). The \(S_{o}\) embeddings are the same for all images. Now, let the modified multi-view dataset be the set of image-caption pairs, \((\mathbf{x}_{i},y(S_{\mathbf{R}_{i}},S_{o}))\sim\mathcal{D}_{MV^{\prime}}\). We optimize the weights of \(\mathcal{M}_{v}\) and \(\mathcal{M}_{o}\) using the loss in Eq. (1), except we replace \(\mathcal{D}\) with \(\mathcal{D}_{MV^{\prime}}\). Intuitively, we are learning text space latents that, when conditioning diffusion model generation, reproduce the training set images with the correct camera views.

### Novel view synthesis with pretraining on multi-view data

The single-scene optimization case cannot extrapolate to views far from the input views (see Sec. 5.1 and Fig. 4). It also cannot do one-shot novel view synthesis. To tackle these challenging settings, we propose pretraining on a multi-view dataset with multiple scenes.

**Pre-training** We now have access to images, \(\mathbf{x}_{ij}\), with known camera poses, \(\mathbf{R}_{ij}\), for views \(i\) and scenes \(j\). The camera poses are in the same reference frame: their notions of 'up' and 'down' are the same; however, there is no requirement for the scene's objects to have any particular pose within that space. The multi-view datasets should have dense coverage of the view space, which we visualize in Appendix D. The goal is to train a view-mapper, \(\mathcal{M}_{v}\), that is shared across all the scenes, and is generalizable to new scenes. The training is the same as the single-scene case in Sec. 4.2, with one change: in the captions, "\(S_{\mathbf{R}_{i}}\). a photo of a \(S_{o,j}\)", each scene has its own object token \(S_{o,j}\) and object mapper, \(\mathcal{M}_{o,j}\) (but again, they share the view tokens, \(S_{\mathbf{R}_{i}}\), and the view-mapper, \(\mathcal{M}_{v}\)).

**Novel view synthesis from sparse input views** Given a sparse dataset of multi-view images and camera parameters, \((\mathbf{x}_{i},\mathbf{R}_{i})\sim\mathcal{D}_{MV}\), we follow the exact training procedure for single-scene NVS described in Sec. 4.2, except we use the pretrained view mapper, \(\mathcal{M}_{v}\), and freeze its weights. The inference procedure for generating novel views is also the same.

**Novel-view synthesis from a single input view** Under this framework, single-view NVS is solved with the same approach as sparse-view NVS.
### Generalizing Viewpoint Control to New Prompts

Content creation via text-to-image diffusion is a popular application, and one research direction seeks to control certain properties in generated images [16, 78]. ViewNeTI can be easily extended to control the camera viewpoint around objects that are generated from user-defined captions, \(y\). We take the pretrained view mapper, \(\mathcal{M}_{v}\), described in Sec. 4.3, and we prepend its corresponding token to \(y\). An example text prompt is "\(\mathbf{R}_{0}\). a brown teddy bear". Then, the inference procedure is the same as Sec. 4.1.

## 5 Results

In Sec. 5.1, we use ViewNeTI for sparse-view novel view synthesis (NVS) on a single scene, without access to any multi-view or 3D datasets. We show the limitations of the single-scene approach, which motivate our pretraining and fine-tuning method. This leads to our primary result in Sec. 5.2.2: novel view synthesis from a single view. In Sec. 5.3, we show an extra application of ViewNeTI to view-controlled text-to-image generation. All results use the same hyperparameters, and we ablate these design choices in Sec. 5.4.

**NVS data setting** We evaluate on DTU-MVS [21], a multi-view dataset of real-world objects with challenging details. We use the train-test splits for camera views used in sparse-view synthesis in prior works [11, 75] (training set sizes 1, 3, 6, and 9). As visualized in Appendix E, DTU images can be roughly grouped by their camera pitch angle into 'top-view', 'side-view', and 'neutral-view'. The 9-view training set has images from all groups, so novel test views are 'interpolations'. The 6-view dataset has images from the side and neutral views, so test views from the top view are 'extrapolation' (this is also visualized in Fig. 4). The 3-view dataset uses neutral-view images, and so top and side views are extrapolation. For the 1-view set, all test views are extrapolation.

### Single-scene novel view synthesis from sparse views

We start by establishing that the text latent space can be manipulated for viewpoint control. Using the method described in Sec. 4.2, we optimize a view-mapper and object-mapper from scratch on a scene with 6 views. The training time is 2.5 hours with one Titan RTX. In Fig. 4 we visualize the training and inference camera positions, and the NVS predictions. Despite being supervised on few images, we can interpolate to novel views. We retain good object details, and avoid imaging artifacts like floaters that appear in NeRF [34]. However, in this setting we fail to extrapolate. The failure mode is interesting and different from other NVS methods: the object semantics approximately match the training views, but the poses are incorrect. This motivated us to pretrain the view-mapper, as we describe in the next section.

Figure 4: Novel view synthesis trained on a single scene (without pretraining), as described in Sec. 5.1. Top: the camera positions for the DTU buddha scene [21] visualized with Nerfstudio [56] and SDFStudio [76]. We show input views from the 6-view split (blue), inference views that are 'interpolated' from the input (green), and inference views that are 'extrapolated' (red). Bottom left: inference on interpolated views has good semantics and pose. Bottom right: inference on extrapolated views does not work. The semantics are good but the poses are wrong. This limitation motivates our pretraining strategy (see Sec. 4.3).

### Novel view synthesis from one input view

#### 5.2.1 View-mapper pretraining

We pretrain a view-mapper, \(\mathcal{M}_{v}\), as described in Sec. 4.3. We take 88 training scenes from DTU chosen by [75] to ensure that object classes are different between the train and test scenes. The pretraining takes 48 hours on one Titan RTX. We validate the approach with generations in Appendix F: the view-mapper and object-mapper correctly reconstruct the training set of views, where the view-mapper is shared across all scenes.
#### 5.2.2 Single-view NVS with a pretrained view-mapper

We use the frozen view-mapper, \(\mathcal{M}_{v}\), and fine-tune a new object-mapper, \(\mathcal{M}_{o}\), on a new scene from a different semantic distribution, as described in Sec. 4.3; this takes one hour on one Titan RTX. We show single-view NVS predictions for selected scenes in Fig. 1, and all DTU test scenes in Fig. 8. In Fig. 5, we compare NVS predictions against baselines using the same challenging evaluation scenes and views chosen by [11]. We compare against zero-1-to-3 in Appendix H; this is separate because that model has a different coordinate system that does not allow translation. The first 3 methods learn an explicit 3D scene representation - a NeRF - which has the benefit of consistent scene geometry. But they have blurriness and artifacts that commonly appear under sparse supervision [11, 65, 75], and they have errors in the semantic details. They also have the drawback that one cannot easily sample novel views from a high-variance distribution, which is desirable when the prediction problem is ambiguous. The zero-1-to-3 model has a fast runtime, but it is not suitable for NVS on images with photorealistic object details (as shown by the toy pig scene), or for images where the borders crop the object (as shown by the brick scene), because they are outside the training distribution of 3D assets [10]. Our ViewNeTI predictions, while sometimes hallucinating inaccurate object details, do generate photorealistic images with plausible semantics and little blurriness. Once trained, ViewNeTI can generate diverse samples in close to real time, and thus can be useful for modeling uncertainty. For completeness, we also compare the LPIPS, SSIM, and PSNR against baselines in Tab. 1, which is standard in NVS. We agree with the remarks in NeRDi [11] that these reconstruction-based metrics are not suitable for the single-view setting because there is high uncertainty in the 3D prediction. These metrics reward methods that perform some 'averaging' over diverse views, rather than methods that predict diverse but plausible views. Having said that, the 'perceptual' metrics LPIPS and SSIM may not be as problematic as PSNR. Our approach is state-of-the-art for LPIPS, and near state-of-the-art for SSIM. Our PSNR is poor, which, as we argue in Sec. 6, is caused by issues with localizing the object in the image. That is, while the object details and pose are good, the image is mis-translated with respect to the ground truth. This is an area in which NeRFs are naturally at an advantage over diffusion models.
| Method | LPIPS ↓ | SSIM ↑ | PSNR ↑ |
| --- | --- | --- | --- |
| NeRF [65] | 0.703 | 0.286 | 8.000 |
| pixelNeRF [75] | 0.515 | **0.564** | 16.048 |
| SinNeRF [71] | 0.525 | 0.560 | **16.520** |
| NerDi [11] | 0.421 | 0.465 | 14.472 |
| ViewNeTI (ours) | **0.378** | 0.516 | 10.947 |

Table 1: Single-image novel view synthesis metrics on DTU. The best score is in **bold**. We argue in Sec. 5.2.2 that these metrics are imperfect for settings with ambiguity like this one.

Figure 5: Qualitative comparison of single-view novel view synthesis on DTU [21]. Our method has good photorealism and semantics compared to baselines (see Sec. 5.2.2).

### View-controlled text-to-image generation

In Fig. 6, we show examples of using ViewNeTI to control the viewpoint of images in text-to-image generation. The object semantics are consistent across views, and using our conditioning adds only negligible time to the generation procedure. Moreover, the example images are outside the semantic distribution of the DTU training dataset.

Figure 6: ViewNeTI controls viewpoint in text-to-image generation by composing the view-mapper text encodings, represented by \(R_{i}\), with novel text prompts. Top: reference poses from a scene in the DTU [21] dataset. Middle & bottom: text generations conditioned on ViewNeTI view control tokens.

### Ablations

We show the effectiveness of our design choices with qualitative results on ablated versions of ViewNeTI for single-view NVS in Appendix B. The key architectural choices are the frequency of the positional encoding, the text embedding norm scaling, the flexibility permitted in the textual bypass, and the image augmentations. Other important choices include the freezing of the pretrained view-mapper, and the choice of image dimension in object sampling. Note that the results in Fig. 4 already established that omitting pretraining leads to a failure to extrapolate away from the training viewpoints. Note also that our application differs from standard textual inversion works [1, 13] in that we are not interested in the compositionality of our word embeddings with existing concepts. As such, we do not ablate against this.

## 6 Limitations

The major limitation for ViewNeTI in novel view synthesis (NVS) is localization: it is prone to generating objects that are mis-translated slightly from the ground truth, and this drives the degradation in PSNR scores seen in Tab. 1. This is a product of the training procedure: unlike NeRFs, we do not train with a pixel-wise MSE loss against ground-truth images. In return for this drawback, however, we gain the ability to generate realistic samples (thanks to pretraining on web-scale datasets) and diverse samples (thanks to the generative nature of the model). ViewNeTI also struggles to generate precise details in objects, such as small textures in the building scene (Fig. 8, row 2). Reconstruction quality is an active area of research in textual inversion [1], and advances there should be transferable to ViewNeTI. Although ViewNeTI scene optimization has a similar training time to NeRF-based methods for sparse NVS (about 1 hour, similar to [63, 73]), it is far from the real-time speeds of recent methods that pose NVS as image-to-image translation [27]. The key bottleneck in our approach is optimizing new object tokens. But here, too, there is much work among textual inversion researchers on accelerating these predictions [14, 60, 67, 80], and we expect these advances will be applicable to ViewNeTI.

## 7 Conclusions

In this study, we investigated the capabilities of diffusion models to represent and reason about the 3D world.
Remarkably, despite having only unposed 2D image supervision during training, we found strong evidence that the diffusion model latent space does encode 3D knowledge. We proposed Viewpoint Neural Textual Inversion (ViewNeTI), a framework for taking advantage of this prior knowledge for 3D vision tasks. Our method controls the viewpoint of images generated by a diffusion model by predicting appropriate conditioning vectors in the text latent space using a neural view-mapper. Our method naturally addresses novel view synthesis by fitting the neural view-mapper on multi-view image datasets. We fit the view mapper on a single scene with very sparse viewpoints (fewer than 10). Despite this small dataset being the only 3D supervision ever seen by the image diffusion model, the learned view-mapper was able to generalize to novel camera viewpoints, as long as those views were near the training views. Next, we showed that ViewNeTI could learn a more generic mechanism for viewpoint control. We pretrained the view mapper on a multi-scene dataset that shared a common reference frame. The view mapper was able to learn this reference frame, while the per-scene object mappers learned the pose of objects within that scene. We highlighted impressive results for single-view novel view synthesis, an extremely challenging task. Here, our results showed excellent photorealism and good object semantics. This is most likely the result of the diffusion model leveraging prior knowledge from its massive training distribution, since the NeRF methods - without the benefit of large-scale pretraining datasets - rendered views with more blur and artifacts. Notably, our multi-view pre-training dataset was small in scale compared to datasets underpinning other advances in vision, and the training time was modest at only 2 days on one GPU. This shows that extracting 3D knowledge from 2D models need not require unreasonably large 3D datasets. We have demonstrated strong results for viewpoint manipulation, which is one important form of 3D control. Looking forward, we hope that our framework can inspire work on leveraging diffusion model latent spaces for other challenging 3D tasks, such as scene relighting and 2D-to-3D lifting for 3D human pose estimation.
2309.14044
The Accuracy of Job Seekers' Wage Expectations
We study the accuracy of job seekers' wage expectations by comparing subjective beliefs to objective benchmarks using linked administrative and survey data. Our findings show that especially job seekers with low objective earnings potential and those predicted to face a penalty compared to their pre-unemployment wage display overly optimistic wage expectations. Moreover, wage optimism is amplified by increased job search incentives and job seekers with overoptimistic wage expectations tend to overestimate their reemployment chances. We discuss the labor market implications of wage optimism, as well as the role of information frictions and motivated beliefs as sources of overoptimism.
Marco Caliendo, Robert Mahlstedt, Aiko Schmeißer, Sophie Wagner
2023-09-25T11:21:14Z
http://arxiv.org/abs/2309.14044v2
# The Accuracy of Job Seekers' Wage Expectations

###### Abstract

Job seekers' misperceptions about the labor market can distort their decision-making and increase the risk of long-term unemployment. Our study establishes objective benchmarks for the subjective wage expectations of unemployed workers. This enables us to provide novel insights into the accuracy of job seekers' wage expectations. First, especially workers with low objective earnings potential tend to display excessively optimistic beliefs about their future wages and anchor their wage expectations too strongly to their pre-unemployment wages. Second, among long-term unemployed workers, overoptimism remains persistent throughout the unemployment spell. Third, higher extrinsic incentives to search more intensively lead job seekers to hold more optimistic wage expectations, yet this does not translate into higher realized wages for them. Lastly, we document a connection between overoptimistic wage expectations and job seekers' tendency to overestimate their reemployment chances. We discuss the role of information frictions and motivated beliefs as potential sources of job seekers' optimism and the heterogeneity in their beliefs.

**Keywords:** Subjective expectations, objective benchmarks, job search, unemployment, reemployment wages

**JEL codes:** D83, D84, J64

## 1 Introduction

Job search is a challenging process, in which unemployed workers encounter significant uncertainty about their future outcomes. It is well-recognized that workers often have an incomplete understanding of their labor market prospects (see, e.g., Adams-Prassl _et al._, 2023; Balleer _et al._, 2021; Mueller _et al._, 2021; Spinnewijn, 2015) and potential job matches (see, e.g., Belot _et al._, 2019; Jäger _et al._, 2023; Krueger and Mueller, 2016). This, in turn, can distort their decision-making during job search and may increase the risk of long-term unemployment. However, despite the increasing evidence indicating the presence of systematic biases in job seekers' beliefs (see Mueller and Spinnewijn, 2023, for an overview), there is still a limited understanding of the underlying causes and, in particular, which groups of workers are affected most by these misperceptions. In our study, we examine the accuracy of job seekers' expectations about their wages upon reemployment and analyze heterogeneity in the extent to which they exhibit overly optimistic or pessimistic beliefs about their potential earnings. To that end, we explore a unique combination of survey and administrative data on job seekers in Germany. The large-scale survey provides insights into the perceptions of more than 5,000 newly unemployed workers about their future wages. Simultaneously, we leverage administrative records to establish objective benchmarks for their actual earnings potential based on the realized wages of comparable workers in a similar situation. To approximate job seekers' objective wage potential, we account for a rich set of socio-demographic characteristics, regional information, and detailed employment biographies and employ flexible LASSO regressions. While job seekers, on average, overestimate their future wages by about 17%, the comparison of subjective beliefs and objective benchmarks allows us to show that there is significant heterogeneity in the levels of overoptimism among different groups of workers. We find that especially job seekers with lower objective earnings potential tend to hold disproportionately optimistic views regarding their future income.
Those positioned in the lowest decile of the objective benchmark distribution overestimate their potential wages by approximately 36%, whereas the level of overoptimism is comparatively modest at around 6% among individuals in the top decile of the distribution. Moreover, we observe considerable heterogeneity in the accuracy of job seekers' wage expectations concerning their personal characteristics. For example, we find that men and high-skilled workers overestimate their wages relative to their objective earnings potential more strongly than women and low-skilled workers. In examining the influence of job seekers' pre-unemployment wages, we demonstrate that expected wage changes are more tightly compressed around zero compared to the distribution of objectively predicted wage changes. This observation suggests that job seekers tend to anchor their wage expectations more strongly to their pre-unemployment wages than what is objectively justified. At the same time, we find that this anchoring effect is asymmetric. While the beliefs of job seekers who can reasonably expect a wage increase compared to their previous salary are relatively accurate, those who are predicted to face a wage penalty anchor their expectations too strongly to their pre-unemployment wage. This finding indicates that job seekers do not sufficiently account for the potential scarring effects of unemployment when forming their wage expectations.1 Footnote 1: Existing evidence suggests that the experience of unemployment is associated with wage penalties upon reemployment (see, e.g, Arulampalam, 2001; Gregory and Jukes, 2001). Consistent with this notion, our objective benchmarks indicate that job seekers’ average wage potential decreases by about 12% compared to their pre-unemployment wage. In addition, we explore repeatedly elicited wage expectations for job seekers who are still searching for a job about one year after becoming unemployed. We find that the overoptimism among this group of long-term unemployed remains persistent throughout the unemployment spell. This suggests that those facing challenges in securing a job are hesitant to adjust their wage expectations despite the feedback they receive during the job search process. This reluctance to adapt might be one factor hindering their successful reintegration into the labor market. It is essential to recognize that job seekers' beliefs can also be influenced by their own actions. To shed light on the causal impact of individuals' search behavior on the accuracy of their wage expectations, we exploit exogenous variation in the incentives of unemployed workers to search for jobs. Specifically, we leverage regional differences along the administrative borders of local employment agency (LEA) districts, where job seekers face varying risks of being subject to punitive benefit sanctions. This variation arises because LEAs have the discretion to decide how strictly they punish job seekers for non-compliance with their search requirements. Consequently, caseworkers in LEA districts with more stringent sanction regimes may exert greater pressure on job seekers, leading them to perceive stronger incentives to apply for and accept jobs. Supporting this notion, we find that a 10 percentage point higher sanction intensity - this is equivalent to an increase of approximately one standard deviation - raises the number of weekly job applications by about 9.8%. 
Simultaneously, a stricter sanction regime fosters greater optimism among job seekers regarding their earnings potential. Raising the sanction intensity by 10 percentage points increases job seekers' wage expectations relative to the objective benchmark by about 1.8%. In contrast, we find no evidence that the sanction intensity has a positive effect on job seekers' realized wages upon reemployment. The rise in overoptimistic beliefs may appear somewhat surprising, given that an enhanced sanction risk is presumed to directly influence job seekers to become less selective, leading them to lower their wage expectations. However, the increased search incentives may induce indirect effects that foster heightened levels of optimism. For instance, job seekers may adopt more optimistic wage expectations as a way to enhance their motivation to search for jobs and to cope with the increased threat of benefit sanctions. In the final part of our analysis, we provide descriptive evidence on the labor market implications of wage optimism. The matched survey-administrative data enable us to examine the extent to which job seekers' belief accuracy predicts their search behavior, realized wages, and perceived and actual job finding rates. It turns out that being overly optimistic about the potential wage one could earn is positively related to the number of job applications and realized wages upon finding employment. At the same time, we observe a wedge between the perceived and actual job finding rates for increasing levels of wage optimism. On the one hand, job seekers who are most optimistic about their potential wages also report the highest perceived chances of finding a job. On the other hand, job seekers' actual prospects of finding a job decline as their wage optimism rises. This suggests that the more optimistic workers are about the wages they can earn upon reemployment, the more likely they are to overestimate their job finding prospects. This pattern is in line with the idea that optimistic job seekers who receive wage offers lower than their expectations tend to be excessively selective and reject offers more frequently than warranted, thereby prolonging unemployment (see also Conlon _et al._, 2018; Dubra, 2004; Mueller _et al._, 2021). What causes the overall optimism of unemployed workers and the heterogeneity in their beliefs? It is often argued that job seekers are not fully informed about the job finding process and learn about their labor market prospects during job search (Burdett and Vishwanath, 1988; Gonzalez and Shi, 2010). Conlon _et al._ (2018) show that job seekers' wage expectations increase when they receive an offer exceeding their initial belief. Aligning with this notion, our findings suggest that job seekers with greater unemployment experience and those who receive more advice from their caseworker tend to hold more accurate earnings expectations. Against this backdrop, one may expect that providing job seekers with information about their objective earnings potential may reduce their tendency to hold excessively optimistic wage expectations.2 Footnote 2: Related to this idea, various studies have investigated the causal effects of informing unemployed workers about potentially promising job matches (Altmann _et al._, 2022; Behaghel _et al._, 2022; Belot _et al._, 2019, 2022; Ben Dhia _et al._, 2022) or the search process in general (Altmann _et al._, 2018), which can have positive effects on job seekers’ labor market integration. 
Moreover, Jäger _et al._ (2023) provide employed workers with information about their outside options, that is, the average wage of workers with similar characteristics in the same labor market, leading treated individuals to revise their wage expectations, as well as their job search and wage negotiation intentions. However, an alternative view is that misperceptions can arise from the way individuals process the information available to them. The literature on motivated beliefs suggests that the desire to maintain a positive self-image can significantly impact how individuals form their beliefs (Benabou and Tirole, 2002, 2004, 2016). For this reason, they may selectively retrieve certain information and deliberately suppress negative feedback (Bordalo _et al._, 2020, 2021; Gennaioli and Shleifer, 2010). Several of our findings speak to the empirical relevance of these ideas for the accuracy of job seekers' wage expectations. In particular, we find higher levels of overoptimism among individuals with the lowest objective earnings potential, especially among workers who are predicted to experience a wage decline compared to their past salary. This group of workers may have a heightened desire for motivated beliefs. Furthermore, we find that job seekers who initially overestimate their wage potential by up to 17% continue to increase their wage expectations throughout the unemployment spell, despite receiving (most likely) negative feedback. Additionally, job seekers who are encouraged to search more intensively due to extrinsic incentives hold more optimistic wage expectations, yet this does not translate into higher realized wages for them. This aligns with the notion that these individuals adopt a more optimistic outlook to motivate themselves to comply with their search requirements.

Our study is the first to establish objective benchmarks for the subjective wage expectations of unemployed workers. By doing so, we contribute to a growing body of literature demonstrating job seekers' average tendency to be overly optimistic about their job finding prospects (Balleer _et al._, 2021; Mueller _et al._, 2021; Spinnewijn, 2015; Van den Berg _et al._, 2023) and their reluctance to update their wage expectations over time (Conlon _et al._, 2018; Drahs _et al._, 2018; Krueger and Mueller, 2016). In this context, our approach enables us to document significant heterogeneity concerning the accuracy of job seekers' wage expectations. In addition, we empirically establish a connection between optimistic wage expectations and job seekers' tendency to overestimate their reemployment prospects. These findings support the theoretical notion that overly optimistic beliefs are associated with being excessively selective and rejecting offers more frequently than justified (see, e.g., Mueller _et al._, 2021; Mueller and Spinnewijn, 2023). Moreover, when comparing our findings to those of Jäger _et al._ (2023), who employ a similar approach to study the beliefs of employed workers about their outside options, we note that the extent of anchoring seems to be less pronounced for the wage expectations of unemployed workers in contrast to employed workers. This difference could potentially be attributed to unemployed job seekers having already acquired information about their earnings potential during their job search. Additionally, our study sheds light on how job seekers' incentives to search for employment influence individuals' behavior and beliefs.
In doing so, we contribute to a limited body of research that examines the response of job search behavior to changes in the benefit environment. Aligning with existing studies focusing on the generosity of UI benefit payments (see, e.g., Lichter and Schiprowski, 2021; Marinescu, 2017), we find that more restrictive policy regimes encourage unemployed workers to apply for jobs more intensively. Simultaneously, our finding that an enhanced sanction risk fosters greater wage optimism among unemployed workers challenges the notion that a stricter regime makes job seekers less selective in their job choices. Rather, the heightened incentives to search for jobs appear to induce indirect effects, leading to a more optimistic outlook regarding the wages individuals can earn upon reemployment. This phenomenon may contribute to the observation that, in various settings, reservation wages do not respond to variation in benefit payments or changes in the benefit rules as predicted by standard job search theory (Le Barbanchon _et al._, 2019; Lichter and Schiprowski, 2021; Krueger and Mueller, 2016; Schneider, 2008).3 It is worth noting that there is no indication that job seekers' increased wage optimism, which comes with the higher search intensity, is warranted. On the contrary, realized wages upon reemployment tend to be (insignificantly) lower when job seekers are subject to a more restrictive sanction regime. This aligns with previous evidence suggesting that benefit sanctions often lead job seekers to eventually accept lower-quality jobs (Arni _et al._, 2013; Van den Berg and Vikstrom, 2014; Van den Berg _et al._, 2019). Against this backdrop, it is conceivable that job seekers facing an elevated risk of sanctions only revise their wage expectations as time progresses. Footnote 3: Using French administrative data on reservation wages and changes in UI rules, Le Barbanchon _et al._ (2019) estimate a null effect of the potential benefit duration on reservation wages, which is consistent with the estimates by Lichter and Schiprowski (2021) based on German survey data. Similarly, Krueger and Mueller (2016) cannot reject that the elasticity of reservation wages to benefit levels in the U.S. is equal to zero, while Schneider (2008) finds no significant effect of imposed benefit sanctions on the reservation wages of unemployed workers in Germany. The remainder of this paper proceeds as follows. In the next section, we discuss our empirical setting, while Section 3 illustrates some of the theoretical issues related to the wage expectations of unemployed workers. Section 4 presents empirical evidence on the accuracy of job seekers' wage expectations and Section 5 concludes. ## 2 Empirical Setting To examine the accuracy of job seekers' wage expectations, our analysis builds on different complementary data sources providing information on unemployed workers in Germany. To begin with, we rely on a large-scale survey involving 17,400 workers who became unemployed between June 2007 and May 2008 and were eligible for unemployment insurance (UI) benefits (see Arni _et al._, 2014). The first interview was conducted within 7 to 14 weeks after entering unemployment, followed by a second interview wave 12 months later. The survey encompasses detailed data on socio-demographic characteristics, personality traits, job search behavior, and, notably for our study, subjective beliefs about labor market prospects, especially wages upon reemployment. 
In addition to the survey, we leverage administrative records to access highly reliable data regarding job seekers' actual labor market outcomes and their employment histories prior to unemployment. We utilize the administrative data for two purposes. Firstly, we can directly link the survey information with the administrative records at the individual level for about 87% of the survey respondents (Eberle _et al._, 2017). Secondly, we incorporate administrative information from a larger sample of unemployed workers to establish objective benchmarks for job seekers' earnings potential. Importantly, both datasets consist of individuals randomly sampled from the same population of unemployed workers. This ensures that job seekers within the different datasets can be directly compared and that we have access to similar information regarding their labor market biographies.4 Footnote 4: The _IZA/IAB Administrative Evaluation Dataset_ offers administrative data for a 4.7% random sample of individuals who entered unemployment between 2001 and 2008 (Caliendo _et al._, 2011; Eberle and Schmucker, 2015), while the _IZA Evaluation Dataset Survey_ provides survey data for a representative subset of individuals who entered unemployment between June 2007 and May 2008. Furthermore, the _IZA/IAB Linked Evaluation Dataset_ combines survey and administrative data, linking them for 87% of the survey respondents.

### Subjective wage expectations and objective benchmarks

The survey elicits job seekers' beliefs about the monthly net salary (in €) they expect to receive upon starting a new job, using the following question: "Now, I am interested in the salary you anticipate receiving in your next job. What is your expected monthly net income in €?" The question is asked during the initial survey interview, which takes place 7 to 14 weeks after entry into unemployment, and is directed at all individuals who are still unemployed at this stage and are actively searching for a job. Moreover, our analysis focuses exclusively on individuals who previously held full-time positions to minimize the influence of variation in working hours on monthly wage expectations. This results in an estimation sample of 5,376 survey respondents who can be linked to the administrative records. The objective of our empirical analysis is to examine the accuracy of individuals' subjective wage expectations, which is notoriously challenging for two reasons. First, we need to compare subjective beliefs to realizations of the same outcome. However, realized wages of the job seekers observed in our survey data represent only one instance of the objective ex-ante distribution of wages, and they might be affected by unforeseeable labor demand shocks that individuals cannot be aware of when reporting their subjective beliefs. Second, job seekers' expectations may impact their job search behavior, which in turn can influence actual labor market outcomes.
In order to address these concerns, we adopt an approach where we estimate objective benchmarks for job seekers' earnings potential based on the realized wages of comparable individuals in similar situations. To that end, we utilize administrative data from a larger sample of 84,617 workers who became unemployed between January 2005 and May 2007 (see Appendix A for additional information regarding the prediction of objective benchmarks). This time period was chosen to avoid any overlap with the survey sample, ensuring that the objective predictions are not influenced by the beliefs and behaviors of survey respondents.5 In order to ensure comparability with the survey sample, we apply similar restrictions to the administrative data. Specifically, we focus on newly unemployed individuals who are eligible for unemployment insurance (UI) benefits and were previously employed in non-subsidized full-time positions for at least three months. Moreover, we restrict the sample to job seekers who have not found regular employment within three months after entry, which is the average time until the first interview of the survey. Footnote 5: As a robustness check, we also employ a random sample comprising 80% of all entries into unemployment between June 2007 and May 2008, which aligns with the survey period. Detailed summary statistics for both samples can be found in Appendix A.

We employ flexible LASSO regressions to predict reemployment wages, accounting for a comprehensive set of pre-determined covariates available in the matched survey-administrative data. This includes socio-demographic characteristics, information on the last job before unemployment, labor market history over the past ten years, and local labor market characteristics. The dependent variable is the first monthly salary received in a regular job within 24 months after entry into unemployment, and we test the robustness of our findings using different time horizons.6 To compare the objective benchmarks with subjective wage expectations, we convert the realized wages recorded in administrative records from gross to net terms by deducting social security contributions and income taxes. The exact procedure is described in Appendix A. Footnote 6: Specifically, we utilize wages of individuals reemployed within nine months of unemployment, which is, on average, six months after the initial interview. This is motivated by the fact that job seekers answered the question about their wage expectations shortly after discussing their anticipated likelihood of finding a job during the next six months, suggesting a consistent time frame for wage expectations.

To evaluate the quality of the benchmarks, we estimate the out-of-sample \(R^{2}\) by regressing realized wages on predicted wages using distinct test datasets, i.e., samples not utilized during the prediction generation process. As shown in Table A.3, we find values of \(R^{2}\) within the range of 0.48 to 0.53, suggesting that we are equipped with meaningful objective benchmarks for individuals' wages. Further supporting this notion, Table A.4 reveals that the objective benchmarks derived from wages of comparable workers exhibit greater predictive power for survey respondents' realized wages compared to their own subjective wage expectations.
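To make the prediction step concrete, the following sketch mirrors its logic on synthetic data: a cross-validated LASSO of log reemployment wages on pre-determined covariates, evaluated on a held-out test sample. This is a minimal illustration with invented data and variable names, not the paper's actual specification or estimates.

```python
# Minimal sketch of the benchmark-prediction step on synthetic data:
# fit a cross-validated LASSO of log reemployment wages on pre-determined
# covariates, then check out-of-sample fit on a held-out test sample.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, k = 10_000, 40                          # workers, covariates (both invented)
X = rng.normal(size=(n, k))                # socio-demographics, job history, region, ...
beta = np.zeros(k)
beta[:8] = rng.normal(scale=0.15, size=8)  # sparse true signal
log_wage = 7.0 + X @ beta + rng.normal(scale=0.35, size=n)  # ~ log net monthly wage

X_tr, X_te, y_tr, y_te = train_test_split(X, log_wage, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(X_tr)
lasso = LassoCV(cv=5, random_state=0).fit(scaler.transform(X_tr), y_tr)

# Out-of-sample fit, analogous in spirit to the R^2 of 0.48-0.53 reported in Table A.3
pred_te = lasso.predict(scaler.transform(X_te))
print(f"out-of-sample R^2: {r2_score(y_te, pred_te):.2f}")
```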
### Summary statistics

Figure 1 shows the distributions of subjective wage expectations and objective benchmarks. The average expected net income is €1,407 per month, which is substantially greater than the average income that workers could reasonably expect. For the average job seeker in our sample, the objective benchmarks suggest a monthly net wage of only €1,173. This closely aligns with the average realized wage of €1,190 per month that we observe among the survey respondents (see Panel A of Appendix Table B.1). In addition to job seekers' wage expectations, we also explore information on individuals' perceived and actual job finding prospects. The perceived chances of reemployment are elicited over a six-month horizon and responses are provided using four options: "very likely", "likely", "unlikely", or "very unlikely".7 Overall, unemployed workers in our survey sample tend to be remarkably optimistic about their reemployment prospects. In particular, 89% of the survey population consider themselves "likely" or "very likely" to find a job within six months, while only 56% actually do so (see Panel B of Appendix Table B.1). This observation is in line with existing evidence from, e.g., Balleer _et al._ (2021), Mueller _et al._ (2021), and Spinnewijn (2015), suggesting that a majority of job seekers hold overly optimistic beliefs about their job finding probabilities.

Figure 1: Distribution of subjective wage beliefs and objective benchmarks

## 3 Theoretical Considerations

Before presenting the results of our empirical analysis, we illustrate some of the theoretical issues related to the accuracy of job seekers' wage expectations. To do so, we sketch a random job search framework in which job seekers face uncertainty about their labor market prospects. While individuals are unemployed, they receive a flow of benefits, \(b\). In each period \(t\), they make decisions about the number of job applications they send out, \(s_{t}\), and their reservation wage, \(\phi_{t}\). The probability of a successful job application, resulting in a job offer, is denoted by \(\lambda_{t}\), while the effort costs incurred during the job search are captured by the increasing and convex function \(\gamma(s_{t})\). Each job offer is associated with a wage, denoted by \(w\), which is a random draw from the wage offer distribution \(F(w)\). Upon receiving multiple job offers in a given period, individuals accept the highest wage offer \(y=\max\{w_{1},w_{2},...,w_{n}\}\) if it exceeds their reservation wage. The distribution of this maximum offer can be described as \(F_{yt}(y)=F(y)^{n_{t}}\), where \(n_{t}=\lambda_{t}s_{t}\) represents the total number of job offers received in period \(t\). Inspired by Mueller and Spinnewijn (2023), we assume that job seekers hold subjective beliefs about their success probability, \(\widehat{\lambda}_{t}\), and the wage distribution, \(\widehat{F}(w)\), which may differ from the true functions. When choosing their search strategy, individuals maximize their perceived present value of income in period \(t\): \[U_{t}=\max_{s_{t},\phi_{t}}b-\gamma(s_{t})+\rho\left\{EU_{t+1}+\left[1-(1-\widehat{\lambda}_{t})^{s_{t}}\right]\int_{\phi_{t}}^{\infty}\left(EV_{t+1}(y)-EU_{t+1}\right)d\widehat{F}_{yt}(y)\right\} \tag{1}\] where future income is discounted at rate \(\rho\) and \(V_{t+1}\) denotes the value of being employed at wage \(y\) when a job is found in the future. The corresponding uncertainty is captured by the expectation operator \(E\).
The reservation wage and the optimal search effort can be expressed as functions of the (perceived) model primitives: \[\phi_{t}=\phi_{t}[b,\gamma(\cdot),\widehat{\lambda}_{t},\widehat{F}(\cdot),V_{t+1}(\cdot)]\] \[s_{t}^{*}=s_{t}^{*}[b,\gamma(\cdot),\widehat{\lambda}_{t},\widehat{F}(\cdot),V_{t+1}(\cdot)]\] At their reservation wage, \(\phi_{t}\), job seekers are indifferent between accepting a job and remaining unemployed, \(U_{t+1}=V_{t+1}(\phi_{t})\), while they choose the optimal effort level, \(s_{t}^{*}\), trading off the cost of search and the perceived returns to search.

Accuracy of wage expectations: Within this framework, the object of our empirical analysis, the accuracy of job seekers' wage expectations, is characterized by the difference between the perceived and actual expected maximum of all wage offers that the worker receives in a given period: \[E\widehat{F}_{yt}(y)-EF_{yt}(y). \tag{2}\] This illustrates that disparities between job seekers' subjective beliefs and objective benchmarks can have different origins. It is straightforward that workers may hold overly optimistic wage expectations due to misperceptions about the distribution of wage offers, \(F(w)\). Specifically, overly optimistic workers may perceive the wage offer distribution as shifted toward higher values compared to the actual distribution, or they may perceive the distribution to be more dispersed than it truly is. The latter can induce optimistic beliefs because job seekers are inclined to accept the offer with the highest pay. On the other hand, overestimating the success probability of an application can also induce wage optimism because it may lead job seekers to overestimate their prospects of attracting offers that come with particularly high wages (i.e. overestimating \(n_{t}\) causes individuals to perceive a higher expected maximum offer).
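To see the mechanics behind Equation (2) numerically, the sketch below compares the expected best offer under a true offer distribution with three misperceived alternatives: an upward-shifted \(\widehat{F}\), a mean-preserving spread of \(\widehat{F}\), and an overestimated \(n_{t}\). The lognormal family and all parameter values are illustrative assumptions, not estimates from the paper.

```python
# Numerical illustration of Equation (2): the perceived expected best of
# n_t = lambda_t * s_t offers exceeds the true one under three distinct
# misperceptions discussed in the text. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
R = 200_000  # Monte Carlo replications

def expected_max(mu, sigma, n):
    """Simulate E[max of n lognormal(mu, sigma) wage offers]."""
    draws = rng.lognormal(mu, sigma, size=(R, n))
    return draws.max(axis=1).mean()

mu0, sig0, n0 = 7.0, 0.30, 2           # "true" offer distribution, offers per period
sig1 = 0.45
mu1 = mu0 + (sig0**2 - sig1**2) / 2    # keeps E[w] fixed: a mean-preserving spread

results = {
    "truth":                  expected_max(mu0, sig0, n0),
    "F shifted upward":       expected_max(mu0 + 0.1, sig0, n0),
    "F more dispersed (MPS)": expected_max(mu1, sig1, n0),
    "n_t overestimated (2x)": expected_max(mu0, sig0, 2 * n0),
}
for label, value in results.items():
    print(f"{label:24s} E[max offer] = {value:8.1f}")
```

Each of the three alternatives raises the expected maximum offer relative to the truth, even though the mean-preserving spread leaves the average single offer unchanged, because the best of several draws benefits from dispersion.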
Anchoring and belief updating: Workers who have incomplete information, for instance, about the statistical properties of the wage offer distribution have to form their beliefs based on the signals they have received. A signal that typically comes at no cost and is at their disposal is the wage job seekers earned in their prior job. This can lead individuals to anchor their wage expectations to their previous salary (see, e.g., Jäger _et al._, 2023, for a formal illustration). As a consequence, workers who should reasonably anticipate a wage decrease (increase) relative to their pre-unemployment wage would overestimate (underestimate) their reemployment wages relative to the objective benchmarks. At the same time, individuals may acquire additional signals about their objective earnings potential over the course of their job search from the job offers they receive (see, e.g., Burdett and Vishwanath, 1988; Conlon _et al._, 2018; Gonzalez and Shi, 2010). This should enable them to adjust their wage expectations accordingly as time progresses. In situations involving Bayesian learning, the beliefs of job seekers who receive repeated feedback are anticipated to converge toward the actual mean of the wage distribution. Conversely, when individuals hold motivated beliefs, they may want to believe that they can earn a particular wage, possibly driven by motivational reasons (Benabou and Tirole, 2002) or ego-related satisfaction (Koszegi, 2006). In such instances, individuals may selectively process available signals, such as their prior salary and incoming wage offers, and thus deceive themselves (Gennaioli and Shleifer, 2010; Tversky and Kahneman, 1973). Therefore, it is conceivable that the anchoring and updating of beliefs is skewed toward excessively optimistic expectations (see, e.g., Heidhues _et al._, 2018; Huffman _et al._, 2022; Zimmermann, 2020).

The role of extrinsic incentives: Clearly, job seekers' beliefs can also be influenced by their own actions. For example, unemployed individuals who engage in a more intensive job search may gather a greater amount of information, effectively increasing the number of signals they receive about their potential earnings. As a result, a higher search intensity could lead to more accurate wage expectations. Concurrently, by submitting a larger number of job applications, job seekers expand the pool of available offers, denoted as \(n_{t}\), within a given period. This wider array of opportunities increases their chances of receiving a particularly well-paying offer, which makes it reasonable to anticipate higher wages. Against this backdrop, we expect that variations in job search incentives - such as differing benefit levels \(b\) or benefit payments conditioned on a minimum effort level - will also impact the accuracy of job seekers' wage expectations. Furthermore, individuals who adopt beliefs that enhance their effort motivation may become more optimistic when confronted with extrinsic incentives to intensify their job search.

Labor market implications: Lastly, it is evident that the beliefs held by job seekers play a significant role in shaping their decisions, ultimately influencing their integration into the labor market. For instance, if individuals are optimistic regarding their chances of attracting a well-paying job offer, they perceive particularly high returns to search. As a result, optimistic workers might be willing to exert more effort during job search, potentially enhancing their actual job finding prospects. On the other hand, optimistic workers may set higher reservation wages, leading them to be more inclined to reject job opportunities offering comparatively lower salaries. This in turn could lead to extended periods of unemployment compared to utility-maximizing job seekers with accurate wage expectations. In addition, this mechanism can result in job seekers who have overly optimistic wage expectations also overestimating their actual job finding rates (see, e.g., Conlon _et al._, 2018; Dubra, 2004; Mueller _et al._, 2021).

## 4 Empirical Evidence on the Accuracy of Wage Expectations

In this section, we compare job seekers' subjective wage expectations to the objective benchmarks. This enables us to uncover heterogeneity in the accuracy of job seekers' wage expectations, taking into account their objective earnings potential (see Section 4.1) and individual background characteristics (see Section 4.2). In this context, we place particular emphasis on investigating the influence of job seekers' pre-unemployment wages to understand the extent to which they anchor their wage expectations to their previous salary and whether such anchoring is justified (see Section 4.3). Moreover, we leverage the longitudinal nature of our data by examining repeatedly elicited wage expectations from job seekers who are still in search of employment approximately one year after becoming unemployed. This allows us to investigate how job seekers revise their subjective expectations over the course of their unemployment spell (see Section 4.4). We also take into consideration the interdependence between job seekers' beliefs and their actions.
To that end, we exploit exogenous variation in individuals' job search incentives and study the consequences for the accuracy of their wage expectations (see Section 4.5). Finally, we present descriptive evidence on the labor market implications of inaccurate beliefs (Section 4.6) and discuss how our results speak to different theoretical mechanisms related to job seekers' belief formation (see Section 4.7).

### Heterogeneity by objective earnings potential

To begin with, we consider the relationship between subjective expectations and objective benchmarks based on the following regression model (see Jäger _et al._, 2023): \[S_{i}=\beta_{0}+\beta_{1}\widehat{O}_{i}+\epsilon_{i} \tag{3}\] where \(S_{i}\) denotes the subjective belief of job seeker \(i\) about their reemployment wage and \(\widehat{O}_{i}\) refers to the corresponding objective prediction. The intercept \(\beta_{0}\) captures biases in beliefs that are common to all job seekers, while the slope parameter \(\beta_{1}\) describes how strongly beliefs respond to variation in objective benchmarks. In this context, we can think about different scenarios depending on the values of \(\beta_{0}\) and \(\beta_{1}\). First, when \(\beta_{1}=1\) and \(\beta_{0}=0\), individuals' expectations perfectly correspond to the objective benchmark, which indicates _unbiased beliefs_ throughout the distribution. Second, when \(\beta_{1}=1\) and \(\beta_{0}\neq 0\), job seekers' beliefs are subject to _homogeneous biases_, that is, individuals are overly optimistic (\(\beta_{0}>0\)) or pessimistic (\(\beta_{0}<0\)), but job seekers with different objective predictions share the same degree of bias. Lastly, when \(\beta_{1}\neq 1\), beliefs do not exhibit a one-to-one response to objective variation, resulting in _heterogeneous biases_. In particular, \(\beta_{1}<1\) means that beliefs do not adjust sufficiently strongly to changes in objective predictions, which implies that overoptimism is more pronounced among job seekers with a relatively low objective wage potential.

Figure 2: Comparison of subjective beliefs and objective benchmarks

Panel A of Figure 2 displays a binned scatter plot based on Equation (3). It illustrates the prevalence of overoptimism regarding reemployment wages across all levels of the corresponding objective distribution. In each of the 20 bins, the average expected wage exceeds the corresponding objective benchmark. In other words, both job seekers with low and high objective earnings potential tend to overestimate their reemployment wage compared to our objective benchmarks. At the same time, the magnitude of job seekers' optimism varies across the objective distribution. Our analysis reveals a slope coefficient of \(\widehat{\beta}_{1}=0.74\) (SE: 0.01) compared to an objective benchmark slope of one. This suggests that beliefs do not adequately respond to variations in the objective wage potential. As a result, job seekers with lower predicted wages exhibit a greater tendency to overestimate their earnings potential compared to those with higher objective predictions. To be precise, individuals within the bottom decile of the objective benchmark distribution overestimate their earnings potential by approximately 36%, whereas the overoptimism is only about 6% among individuals in the top decile of the distribution.
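For concreteness, the following sketch estimates Equation (3) on simulated beliefs generated with a common bias and a slope below one, loosely calibrated to the estimates above, and previews the split-sample IV correction used in the robustness checks that follow. Every number in the data-generating process is an assumption for illustration, not a result.

```python
# Minimal sketch of Equation (3): regress subjective beliefs S_i on the
# objective benchmark O_i and read off (beta_0, beta_1). A second, independently
# generated prediction Z_i then instruments the noisy benchmark O_i, in the
# spirit of a split-sample IV measurement-error correction.
import numpy as np

rng = np.random.default_rng(2)
n = 5_376
O_star = rng.normal(7.0, 0.3, n)         # latent true log wage potential
O = O_star + rng.normal(0, 0.10, n)      # baseline prediction, measured with noise
Z = O_star + rng.normal(0, 0.10, n)      # prediction from alternative training data
# Beliefs: slope 0.74 on the true potential plus a common optimism bias of 0.17
S = 0.17 + 7.0 * (1 - 0.74) + 0.74 * O_star + rng.normal(0, 0.15, n)

# OLS slope: cov(S, O) / var(O); attenuated because O is measured with error
b1_ols = np.cov(S, O)[0, 1] / np.var(O, ddof=1)
b0_ols = S.mean() - b1_ols * O.mean()

# IV slope: instrumenting O with Z removes the attenuation (independent noise)
b1_iv = np.cov(S, Z)[0, 1] / np.cov(O, Z)[0, 1]
print(f"OLS slope {b1_ols:.2f} | IV slope {b1_iv:.2f} | OLS intercept {b0_ols:.2f}")
```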
Robustness: A series of robustness checks confirms our result that job seekers with lower objective earnings potential exhibit the highest levels of optimism. First, we vary the sample used to estimate the objective benchmark based on the administrative records. In particular, we rely on alternative training data including individuals who became unemployed in 2007 and 2008 (i.e. the same time period during which the survey was conducted), and we only include job seekers who find a job within nine rather than 24 months. Second, we address concerns that the estimated slope coefficients may suffer from attenuation bias due to measurement error in the objective predictions. Therefore, we conduct an instrumental variable (IV) regression, where we use objective predictions from the alternative training data as an instrument for objective predictions from our baseline training data. This approach is similar to a split-sample IV measurement error correction (see, e.g., Drenik _et al._, 2020; Jäger _et al._, 2023). Third, we impose two additional restrictions on the survey sample, considering only (i) individuals who search for and expect to find a full-time job and (ii) unmarried individuals. This enables us to examine whether differences in expected working hours (full-time) and measurement error arising from the conversion of gross to net wages (unmarried individuals) affect our estimates. As summarized in Appendix Table B.2, we obtain similar slope coefficients between 0.64 and 0.75 across the various specifications.

### Determinants of optimism and pessimism

Another way to illustrate heterogeneity in the accuracy of beliefs is to consider deviations between subjective beliefs and objective benchmarks: \(S_{i}-O_{i}\). Panel B of Figure 2 shows the resulting distribution. We see that job seekers overestimate their wage potential, on average, by about 17%. While a majority of about 57% of the survey respondents expect their wage to be at least 10% higher than the objective prediction, only 13% underestimate their wage potential by more than 10%.8 Footnote 8: In Appendix Table B.2, we report the mean bias and the share of job seekers who over- and underestimate their wage potential for the alternative specifications explained above. Across all specifications, we find that overly optimistic wage expectations are widespread among unemployed workers.

Building upon this individual-level measure, we can explore correlations between the accuracy of beliefs and job seekers' characteristics. To that end, we first regress subjective wage expectations on a set of covariates without accounting for the objective predictions (column 1 of Table 1). In a second step, we analyze to what extent these differences in expectations are justified by heterogeneity in job seekers' actual wage potential. Therefore, we regress the deviation between subjective beliefs and objective predictions, \(S_{i}-O_{i}\), on the objective benchmarks and the same set of individual-level characteristics (column 2).9 Lastly, in columns (3) and (4), we distinguish between individuals who overestimate and underestimate their wage potential.10 Footnote 9: One should note that the covariates included in the regression model represent a subset of all individual-level characteristics that are explored to generate the objective benchmarks. This means the estimated correlations cannot be explained by heterogeneity in rational expectations. Footnote 10: In particular, we consider the deviation between \(S_{i}\) and \(O_{i}\) and set negative (positive) values to zero and thus only exploit variation in positive (negative) deviations.
\begin{table}
\begin{tabular}{l c c c c c c}
\hline \hline
 & \(S_{i}\) & \(S_{i}-O_{i}\) & \(S_{i}-O_{i}\) & \(S_{i}-O_{i}\) & \(S_{i}-O_{i}\) & \(S_{i}-O_{i}\) \\
 & & & Pos. values & Neg. values & Pos. values & Neg. values \\
 & (1) & (2) & (3) & (4) & (5) & (6) \\
\hline
**Socio-demographic characteristics** & & & & & & \\
Female & -0.172*** & -0.086*** & -0.067*** & -0.019*** & -0.064*** & -0.018*** \\
 & (0.008) & (0.007) & (0.006) & (0.003) & (0.006) & (0.003) \\
Age (ref. 16-24 years) & & & & & & \\
25 - 34 years & 0.039*** & 0.022** & 0.012 & 0.010** & 0.014* & 0.011** \\
 & (0.011) & (0.010) & (0.008) & (0.004) & (0.008) & (0.004) \\
35 - 44 years & 0.057*** & 0.014 & 0.005 & 0.009** & 0.011 & 0.010** \\
 & (0.011) & (0.010) & (0.008) & (0.004) & (0.009) & (0.005) \\
45 - 55 years & 0.072*** & 0.027*** & 0.016* & 0.012*** & 0.026** & 0.015*** \\
 & (0.012) & (0.011) & (0.009) & (0.004) & (0.009) & (0.005) \\
German citizen & -0.026* & -0.046*** & -0.032*** & -0.014*** & -0.037*** & -0.014*** \\
 & (0.016) & (0.014) & (0.012) & (0.005) & (0.012) & (0.005) \\
Educational level (ref. no higher education) & & & & & & \\
Vocational certificate & 0.055*** & 0.018 & 0.003 & 0.015*** & -0.003 & 0.014*** \\
 & (0.013) & (0.012) & (0.010) & (0.006) & (0.010) & (0.006) \\
University degree & 0.266*** & 0.135*** & 0.118*** & 0.017*** & 0.106*** & 0.017*** \\
 & (0.016) & (0.016) & (0.013) & (0.007) & (0.013) & (0.007) \\
Married & -0.022*** & 0.002 & 0.015* & -0.013** & 0.017* & -0.012*** \\
 & (0.008) & (0.008) & (0.007) & (0.003) & (0.007) & (0.003) \\
Any children & -0.026*** & -0.012 & -0.004 & -0.007** & -0.003 & -0.007** \\
 & (0.009) & (0.008) & (0.007) & (0.003) & (0.007) & (0.003) \\
East Germany & -0.081*** & -0.015* & -0.012* & -0.003 & -0.010 & -0.002 \\
 & (0.008) & (0.008) & (0.007) & (0.003) & (0.007) & (0.003) \\
**Labor market history** & & & & & & \\
Last wage (ln) & 0.325*** & 0.212*** & 0.176*** & 0.036*** & 0.169*** & 0.035*** \\
 & (0.013) & (0.013) & (0.011) & (0.004) & (0.011) & (0.004) \\
Last job was quit & -0.025 & -0.003 & 0.004 & -0.007 & 0.001 & -0.007 \\
 & (0.018) & (0.016) & (0.013) & (0.006) & (0.013) & (0.006) \\
Number of unemployment spells in last 2 years (ref. 0 spells) & & & & & & \\
1 spell & 0.032*** & -0.015 & -0.017*** & 0.002 & -0.019*** & 0.002 \\
 & (0.010) & (0.009) & (0.008) & (0.003) & (0.008) & (0.003) \\
2 spells & 0.011 & -0.025*** & -0.031*** & 0.005 & -0.030*** & 0.006* \\
 & (0.010) & (0.009) & (0.008) & (0.003) & (0.008) & (0.003) \\
\(\geq\) 3 spells & 0.011 & -0.024** & -0.027*** & 0.003 & -0.028*** & 0.003 \\
 & (0.011) & (0.010) & (0.008) & (0.004) & (0.008) & (0.004) \\
Last unemployment duration & -0.001* & 0.001** & 0.001 & 0.001*** & 0.001* & 0.001*** \\
 & (0.001) & (0.001) & (0.000) & (0.000) & (0.000) & (0.000) \\
**Personality traits** & & & & & & \\
Internal locus of control & & & & & & \\
Conscientiousness & & & & & & \\
Openness & & & & & & \\
Extraversion & & & & & & \\
Neuroticism & & & & & & \\
**Caseworker counseling** & & & & & & \\
Number of caseworker meetings (ref. 0 - 2 meetings) & & & & & & \\
3 - 5 meetings & & & & & & \\
\(\geq\) 6 meetings & & & & & & \\
Number of vacancy referrals & & & & & & \\
Objective benchmark \(O_{i}\) & & -0.544*** & -0.456*** & -0.088*** & -0.456*** & -0.088*** \\
 & & (0.019) & (0.016) & (0.007) & (0.016) & (0.007) \\
\hline
No. of observations & 5,376 & 5,376 & 5,376 & 5,376 & 5,200 & 5,200 \\
\(R^{2}\) & 0.458 & 0.239 & 0.248 & 0.067 & 0.258 & 0.070 \\
Mean dep. variable & 7.183 & 0.170 & 0.205 & -0.035 & 0.205 & -0.036 \\
\hline \hline
\end{tabular}
_Note:_ The table reports the results of OLS regressions. In column (1), the dependent variable is individuals' subjective belief \(S_{i}\) about their net monthly reemployment wage (in logs). In column (2), we consider the log difference between the subjective belief \(S_{i}\) and the objective benchmark \(O_{i}\). In columns (3) and (5), we set negative values to zero and thus only exploit variation in positive deviations ("optimism"), while in columns (4) and (6), we set positive values to zero and thus only exploit variation in negative deviations ("pessimism"). Robust standard errors are shown in parentheses. ***/**/* indicate statistical significance at the 1%/5%/10%-level.
\end{table}
Table 1: Determinants of optimism and pessimism

Overall, we observe intuitive correlations between the accuracy of job seekers' beliefs and their background characteristics. For instance, it can be seen in column (1) that women expect to earn about 17% less than men upon reemployment. This is in line with existing evidence that men generally have higher levels of self-confidence (Barber and Odean, 2001; Cortes _et al._, 2022) and set higher reservation wages (Caliendo _et al._, 2017) than women. When we take into account the heterogeneity in job seekers' objective wage potential in column (2), the gender gap remains negative. In particular, the difference between the subjective wage expectation and the objective prediction is about 9 percentage points lower for women than for men. Comparing these estimates suggests that about half of the gender gap in wage expectations is due to the fact that men actually earn higher wages than women, and it is therefore rational that men also have higher wage expectations. At the same time, the other half of the gender gap in wage expectations cannot be explained by the objective predictions, suggesting that men tend to be more confident even in comparison to the rational benchmark. Considering the decomposition of results for optimistic and pessimistic beliefs in columns (3) and (4), we find that the gender differences are mainly driven by men exhibiting more overly optimistic beliefs rather than women being too pessimistic. Moreover, we observe several other noteworthy correlations. First, it may not be surprising that German citizens have more accurate beliefs compared to foreigners, as they may possess a better understanding of German labor market dynamics. Second, high-skilled workers, specifically those with a university degree, exhibit a greater tendency toward overoptimism regarding their wage potential compared to low-skilled workers. Quantitatively, job seekers with a university degree tend to overestimate their earnings potential by approximately 12 percentage points more than those without any higher education.
This result is in line with earlier findings that higher levels of education are associated with individuals being more overconfident about their investment decisions (Bhandari and Deaves, 2006; Trejos _et al._, 2019). This pattern could reflect that individuals' beliefs about their abilities (Stinebrickner and Stinebrickner, 2012; Wiswall and Zafar, 2015) or about the returns to schooling (Attanasio and Kaufmann, 2014; Jensen, 2010) affect educational or occupational choices early on in their careers.11 Lastly, we find that wage optimism tends to be less pronounced among job seekers who have experienced more frequent periods of unemployment in the past. This observation aligns with the notion that individuals with greater job search experience may have accumulated more accurate information about their potential earnings. Footnote 11: Note that, at first glance, this finding contradicts recent evidence from Balleer _et al._ (2021) who find that overconfidence among job seekers in the U.S. decreases with their skill level. However, it should be noted that we condition on the level of objective predictions and various other covariates. This allows us to disentangle the effect of education from differential misperceptions along other characteristics that are correlated with job seekers' education.

In addition to the covariates that we use to estimate the objective benchmarks based on the administrative records, the survey provides information on a variety of other worker characteristics that are typically related to their labor market integration. In specifications (5) and (6), we additionally account for personality traits and caseworkers' counseling activities. Notably, we find that job seekers who hold an internal locus of control, believing that their life outcomes are primarily determined by their own actions rather than external factors, tend to hold more optimistic beliefs about their future wages.12 At the same time, higher levels of openness and extraversion, as well as lower levels of neuroticism, are associated with greater optimism among job seekers. Furthermore, we find that job seekers who have had several meetings with their caseworker since becoming unemployed tend to exhibit less optimistic beliefs. On the other hand, a higher number of vacancy referrals, where caseworkers direct unemployed workers toward specific job postings, comes with less pessimistic wage expectations. While the observed patterns appear intuitive, it is essential to acknowledge that the disparity between subjective wage expectations and objective benchmarks might be partly attributed to job seekers having private information about their personal situations. Since our prediction model does not account for individuals' personality traits and caseworkers' counseling activities, it remains uncertain whether the observed patterns regarding these factors truly indicate heterogeneity in misperceptions about workers' earnings potential. In particular, if job seekers understand that they exhibit certain traits that come with higher earnings, it would be reasonable for them to adjust their wage expectations accordingly.13 Footnote 13: For example, several studies document statistically significant correlations between workers' personality traits and their earnings (e.g., Andrisani, 1977; Heckman _et al._, 2006; Heineck and Anger, 2010; Mueller and Plug, 2006; Semykina and Linz, 2007).
Similarly, previous research has demonstrated that caseworker counseling plays a substantial role in the labor integration of unemployed workers (see, e.g., Behncke _et al._, 2010; Schiprowski, 2020). ### Anchoring of beliefs to pre-unemployment wages Various studies indicate that individuals often rely on anchoring heuristics (Kahneman _et al._, 1982) when forming their expectations. Closely related to our setting, it is commonly observed that unemployed job seekers anchor their reservation wages to their previous salary before becoming unemployed (see, e.g., Feldstein and Poterba, 1984; Krueger and Mueller, 2016; Le Barbanchon _et al._, 2019; Koenig _et al._, 2021). However, due to the absence of objective benchmarks in the previous literature, it is challenging to determine the extent to which this type of anchoring is justified. In light of this, we now compare both job seekers' subjective wage expectations and the objective predictions to their pre-unemployment wages. To begin with, Panel A of Figure 3 shows the distributions of the subjective beliefs and objective predictions relative to pre-unemployment wages. It turns out that job seekers overestimate their wage potential not only relative to the objective benchmarks generated from wages of similar workers, but also in comparison to their own previous salary. On average, job seekers anticipate a 5.1% increase in salary compared to their last job, whereas objective predictions indicate an average decrease of approximately 12.1% in actual wages compared to the pre-unemployment wage. While periods of unemployment often come with wage penalties upon reemployment (Arulampalam, 2001; Gregory and Jukes, 2001), it appears that job seekers, on average, do not account for these adverse effects of unemployment when forming their wage expectations. Moreover, the distribution of expected wage changes is much more compressed around zero compared to the distribution of changes in objective predictions. This suggests that job seekers anchor their wage expectations more strongly to their past wage than is objectively justified. Panel B of Figure 3 further illustrates this trend by showcasing the relationship between job seekers' subjectively expected wage growth and the objectively predicted wage growth. The estimated slope coefficient of 0.71 (SE: 0.02), which is significantly smaller than the objective benchmark slope of one, indicates that job seekers perceive their reemployment wage to be closer to their pre-unemployment wage than it actually is. In particular, when job seekers should reasonably anticipate a 10 percentage point larger wage decline, they expect the wage decrease to be, on average, only 7.1 percentage points larger. That is, they tend to anchor their beliefs too strongly to their pre-unemployment salary. Lastly, we find that the anchoring to the pre-unemployment salary is asymmetric, as further illustrated in Appendix Figure B.1. Here, we depict differential slope coefficients for positive and negative variations in objective wage changes. The beliefs of job seekers who can reasonably expect a wage increase compared to their previous salary are relatively accurate, as indicated by the slope coefficient of 1.07 (SE: 0.03). Conversely, among individuals who are predicted to face a wage penalty compared to their previous wage, we estimate a slope coefficient of 0.66 (SE: 0.02). 
This finding suggests that this particular group of workers tends to hold overly optimistic beliefs and anchors their wage expectations too strongly to their pre-unemployment salaries.

Figure 3: Subjective beliefs and objective benchmarks relative to pre-unemployment wages

We can directly compare our results on the wage expectations of unemployed job seekers with the estimates of Jäger _et al._ (2023), who analyze beliefs of employed workers regarding their outside options, that is, the wages they could earn with other employers. Jäger _et al._ (2023) estimate a slope coefficient of 0.089 for the relationship between workers' subjectively expected wage changes and an objective benchmark for their actual wage changes (generated from observed wage changes of their coworkers). This coefficient is almost an order of magnitude lower than our corresponding estimate for the reemployment wages of unemployed workers. This indicates that the distorting effects of anchoring might be less severe for wage beliefs of unemployed job seekers compared to employed workers. A potential explanation might be that unemployed job seekers possess more knowledge about their earnings potential than employed workers, perhaps because they have already gathered information during their current or previous spells of unemployment.

### Belief updating over the unemployment spell

To further explore the idea that job seekers acquire relevant information during the job search process and update their beliefs accordingly (see, e.g., Burdett and Vishwanath, 1988; Conlon _et al._, 2018; Gonzalez and Shi, 2010, for learning models of the labor market), we now investigate the evolution of job seekers' beliefs throughout their unemployment spell. Therefore, we utilize wage expectations that were repeatedly elicited during the first and second survey waves. We observe this information for 459 individuals who were still unemployed and actively searching for a job at the time of the follow-up interview, conducted one year after entry into unemployment. Appendix Figure B.2 compares the distributions of wage expectations over time. Consistent with the patterns observed by Mueller _et al._ (2021) and Krueger and Mueller (2016), job seekers' wage expectations remain remarkably stable throughout their unemployment spell. More specifically, we find no statistically significant changes in the distribution of wage expectations between the two waves (\(p=0.872\) based on a Kolmogorov-Smirnov test for equal distributions in both waves).14 Footnote 14: Additionally, Appendix Figure B.3 depicts variation in the accuracy of wage expectations with respect to the timing of the first survey interview. In line with our results based on the second survey wave, we find no evidence for systematic differences across job seekers interviewed at different points of their unemployment spell (between seven and 14 weeks after becoming unemployed).

Having established objective benchmarks, we can also examine whether there is a systematic relationship between job seekers' initial level of wage optimism or pessimism and the way they update their wage expectations over time. To that end, Panel A of Figure 4 illustrates the change in respondents' expectations from the first to the second interview against the deviation between their subjective expectations and the objective benchmark as measured during the first interview. The estimated slope coefficient of -0.26 indicates that beliefs are not updated perfectly.
Specifically, job seekers who initially overestimate their reemployment wage by an additional 10% only decrease their wage expectations by 2.6% more during the course of their unemployment spell.15 Additionally, we observe that only job seekers who exhibit significant initial wage optimism in the first interview revise their wage expectations downwards. Conversely, job seekers who overestimate their wage potential by up to 17% actually increase their wage expectations over time. These individuals most likely have received negative feedback from the jobs they encountered during their job search, yet they appear to resist updating their wage expectations accordingly. Footnote 15: It is worth noting that the estimated negative slope may be influenced to some extent by statistical mean reversion resulting from measurement error in belief elicitation, which suggests that the estimated slope coefficient might represent a lower bound for the true relationship.

Finally, we account for changes in objective benchmarks over time by predicting the reemployment wages of job seekers who remain unemployed for at least 12 months following their entry into unemployment, as observed in the administrative data. Previous research shows that job seekers' reemployment prospects (Eriksson and Rooth, 2014; Kroft _et al._, 2013) and the quality of wage offers (Schmieder _et al._, 2016) decrease with prolonged unemployment spells. Consistent with this notion of negative duration dependence, we find that the objectively predicted wage for respondents who are still unemployed at the second interview decreases, on average, by about 6% over the course of one year. Combining the observation that many job seekers are reluctant to revise their wage expectations downward with the decline in the objective earnings potential during the unemployment spell, it becomes apparent that a majority of job seekers who remain unemployed display an even greater degree of overoptimism as time progresses.

Figure 4: Belief updating over the unemployment spell

This trend is illustrated in Panel B of Figure 4, which showcases the relationship between the deviations of subjective beliefs and objective benchmarks during the first and second interview waves. Specifically, the estimates suggest that the degree of wage optimism increases for job seekers who initially overestimate their wage potential by up to 34% (i.e. the intersection of the red and blue lines in Panel B of Figure 4) and only decreases for those who exhibit an even higher level of optimism during the first interview. Taken together, our results suggest that job seekers' overoptimism remains persistent throughout the unemployment spell and they update their wage expectations only to a very limited extent in response to the feedback received during the search process. In this context, it is important to note that the group of individuals who are still unemployed during the second interview after one year consists of job seekers with poor overall labor market prospects (reflecting dynamic selection over the unemployment spell). It appears plausible that their reluctance to revise their wage expectations is one factor that hinders the labor market integration of these job seekers, ultimately contributing to their extended unemployment spell.
On the flip side, it is possible that job seekers who revised their wage expectations as time progressed may have successfully found employment before the second interview, making it difficult to draw conclusions as to how their beliefs were updated over time. ### How do search incentives shape job seekers' beliefs? Next, we study how the accuracy of job seekers' wage expectations depends on their search behavior. This analysis is subject to a non-trivial identification problem, because individuals' choices might be influenced by their subjective beliefs. To address this concern, we explore exogenous variation in the incentives of unemployed workers to search for jobs and analyze their impact on job seekers' wage optimism or pessimism. We exploit regional variation in the risk that job seekers will be subject to punitive sanctions along the administrative borders of local employment agency (LEA) districts. These sanctions involve temporary reductions in unemployment benefit payments and are imposed by caseworkers when they detect that job seekers are not complying with job search requirements.16 The regional variation arises because LEAs have autonomy in deciding about the local policy style (see, e.g., Boockmann _et al._, 2014; Doerr and Kruppe, 2015; Fertig _et al._, 2006), including how strictly they punish job seekers for inadequate search behavior. Footnote 16: Instances of non-compliance include insufficient job applications, rejecting job offers from the employment agency, and voluntary termination of employment. As a result, caseworkers in LEA districts with higher sanction intensities may exert greater pressure on job seekers, leading them to perceive stronger incentives to apply for and accept jobs. This can yield different implications for job seekers' wage expectations. On the one hand, an enhanced sanction risk may prompt job seekers to reduce their selectivity, leading them to lower their wage expectations. On the other hand, job seekers intensifying their search activities can induce indirect effects fostering heightened levels of optimism. For example, a larger number of applications may increase the probability of encountering job offers with particularly high wages. Alternatively, job seekers might adopt more optimistic beliefs in order to motivate themselves. #### 4.5.1 Econometric strategy To capture variations in the local sanction regime, we utilize regional data on the annual number of benefit sanctions imposed in each of the 178 LEA districts (indexed by \(j\)) and normalize this information by the average annual stock of unemployed workers in each LEA district. The resulting sanction intensity, \(SI_{j}\), can be linked to the administrative and survey data, as explained in Section 2, both of which include identifiers for job seekers' place of residence.17 To ensure that our estimation sample does not contribute to the sanction intensity measure, we rely on the corresponding numbers as observed in the year before a job seeker entered unemployment. In the Appendix, we illustrate the distribution of the sanction intensity across survey respondents (see Figure B.4), as well as LEA districts in Germany (see Figure B.5). Footnote 17: Due to data security restrictions, we are unable to utilize regional identifiers for the linked survey-administrative data in our analysis. Consequently, in this section, we rely on the survey and administrative data without linking them at the individual level. 
This requires us to re-estimate the objective benchmarks using a reduced set of covariates available in both the survey and administrative records. This includes socio-demographic characteristics, previous wage, regional information, and month of entry into unemployment. Despite this adjustment, our model demonstrates strong out-of-sample predictive power (with an \(R^{2}\) of 0.39), and the re-estimated objective predictions closely align with the measure employed in the previous sections (\(\rho=0.75\)).

While the local sanction intensity serves as a proxy for the personal risk of being exposed to a benefit sanction, LEA districts imposing more sanctions might face a different composition of the unemployed workforce. This makes it unlikely that a simple regression of job seekers' outcomes on the local sanction intensity will identify the causal effect of job seekers' personal sanction risk. Therefore, we exploit discontinuities with respect to the sanction intensity along the administrative borders of the LEA districts (similar to Caliendo _et al._, 2022; Dube _et al._, 2010). Specifically, we estimate border-pair fixed-effects models of the following form: \[Y_{ijb}=\alpha+\delta SI_{j}+\beta X_{i}+\phi R_{j}+\kappa_{b}+\varepsilon_{ijb}, \tag{4}\] where \(i\) denotes the individual job seeker, \(j\) the LEA district in which the individual is located at the beginning of the unemployment spell, and \(b\) a pair of bordering LEA districts such that \(\kappa_{b}\) denotes the border-pair fixed effects for any combination of two neighboring LEA districts. Since one LEA district usually has several neighboring districts, an individual living in region \(j\) can belong to different sets of border pairs \(b\) and therefore enters the estimation multiple times (depending on the number of neighboring regions). Therefore, we use sampling weights referring to the inverse of the number of neighboring LEA districts. The parameter of interest \(\delta\) identifies the effect of sanction intensity on the outcome variables \(Y\) by comparing individuals living in similar, neighboring LEA districts but facing varying risks of being sanctioned. Moreover, \(R_{j}\) captures regional characteristics including the local unemployment rate, vacancy rate, gross domestic product, industry structure, and federal state fixed effects, and \(X_{i}\) accounts for individual-level characteristics. Standard errors are clustered at the LEA district level.
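As a rough sketch of how Equation (4) can be taken to data, the snippet below stacks one observation per adjacent border pair, weights by the inverse number of neighboring districts, and fits weighted least squares with border-pair fixed effects, clustering at the district level. The toy geography, variable names, and effect size (calibrated so that a 10 percentage point higher \(SI_{j}\) raises the outcome by about 10 log points) are assumptions for illustration; the controls \(X_{i}\) and \(R_{j}\) are omitted for brevity.

```python
# Minimal sketch of the border-pair fixed-effects design in Equation (4),
# on a toy geography of six LEA districts arranged on a line.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

pairs = [(j, j + 1) for j in range(5)]          # neighboring district pairs
SI = rng.uniform(0.05, 0.25, size=6)            # sanction intensity per district

rows = []
for _ in range(600):                            # individual job seekers
    j = int(rng.integers(6))                    # home LEA district
    y = 1.5 + 1.0 * SI[j] + rng.normal(0, 0.4)  # outcome, e.g. log no. of applications
    n_nb = sum(j in p for p in pairs)           # number of neighboring districts
    for b, p in enumerate(pairs):               # one observation per adjacent pair
        if j in p:
            rows.append(dict(y=y, SI=SI[j], district=j, pair=b, w=1.0 / n_nb))

df = pd.DataFrame(rows)
# WLS with border-pair fixed effects; cluster standard errors at district level
fit = smf.wls("y ~ SI + C(pair)", data=df, weights=df["w"]).fit(
    cov_type="cluster", cov_kwds={"groups": df["district"]})
print(f"delta_hat = {fit.params['SI']:.2f} (SE {fit.bse['SI']:.2f})")
```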
- within 487 pairs of neighboring LEAs with differences in randomly selected LEA district pairs (see also Caliendo _et al._, 2022). For instance, the average disparity in unemployment rates between two randomly chosen LEA districts is approximately 4.0 percentage points. In contrast, when examining pairs of LEAs that share a common border, this disparity is markedly reduced by about 70%, resulting in a mere 1.2 percentage point difference.

Moreover, we conduct balancing tests regressing the local sanction intensity on a rich set of individual-level characteristics to further examine the validity of our approach. As in our main analysis, we condition on border-pair fixed effects, as well as the set of regional characteristics, and we explore the predictive power of socio-demographic characteristics, labor market histories and personality traits, all variables that have been proven to be important for individuals' labor market success. As shown in Appendix Table B.4, we find very little evidence that individual characteristics as observed in our data are correlated with the conditional sanction intensity (see the \(p\)-values at the bottom of Table B.4).

Another concern relates to the possibility that LEAs with more restrictive sanction regimes also adjust other dimensions of their policy style. In that case, any effect of the sanction intensity could possibly reflect changes in the usage of other policy instruments rather than sanctions. To test this, we exploit survey data on various dimensions of caseworkers' counseling activities including notifications about labor market programs (i.e. training, workfare programs, job creation schemes, and start-up subsidies), the number of caseworker meetings, and the provision of vacancy referrals. These variables are the most direct measures of the LEA's policy style, since they reflect the caseworkers' information strategy. The findings presented in Appendix Table B.5 provide no evidence that the sanction intensity is related to caseworkers' counseling activities.

#### 4.5.3 Effect of sanction risk on behavior and beliefs

Table 2 shows the effect of the sanction risk on job seekers' search effort, wage expectations, and realized wages. In line with standard search-theoretical arguments, a stricter sanction regime seems to motivate unemployed workers to exert more search effort. Specifically, as shown in column (1) of Table 2, a 10 percentage point higher sanction intensity - equivalent to an increase of approximately one standard deviation - raises the number of weekly job applications by about 9.8% (\(p=0.016\)). At the same time, the estimates in column (2) reveal that a stricter sanction regime fosters greater optimism among job seekers regarding their earnings potential. Raising the sanction intensity by 10 percentage points increases job seekers' wage expectations relative to the objective benchmark by about 1.8% (\(p=0.006\)). When we differentiate between individuals who overestimate and underestimate the potential wages they could earn, we observe that the sanction intensity impacts both dimensions. Specifically, it significantly enhances optimism (as indicated in column (3)) while concurrently reducing pessimism (as shown in column (4)).

Table 2: Effect of sanction risk on search behavior and accuracy of wage expectations

| Dependent variable | (1) Log no. of job applications | (2) Accuracy of wage expectations\({}^{(a)}\) | (3) Optimism\({}^{(a)}\) | (4) Pessimism\({}^{(a)}\) | (5) Log realized net monthly wage\({}^{(b)}\) |
| --- | --- | --- | --- | --- | --- |
| Effect of sanction intensity | 0.098\({}^{**}\) | 0.018\({}^{***}\) | 0.011\({}^{**}\) | 0.007\({}^{***}\) | -0.009 |
| | (0.040) | (0.007) | (0.005) | (0.003) | (0.008) |
| No. of observations | 5,669 | 5,669 | 5,669 | 5,669 | 17,973 |
| Mean dep. variable | 1.716 | 0.125 | 0.167 | -0.042 | 7.001 |

Notes: \({}^{(a)}\) In column (2), the dependent variable is the log difference between subjective belief \(S_{i}\) and objective benchmark \(O_{i}\). In column (3), we set negative values to zero and thus only exploit variation in positive deviations ("optimism"), while in column (4), we set positive values to zero and thus only exploit variation in negative deviations ("pessimism"). \({}^{(b)}\) In column (5), the dependent variable is the log realized wage of individuals observed in the administrative sample who start regular employment within 24 months after entry into unemployment.

At first glance, these findings may appear somewhat surprising, as an enhanced sanction risk would be presumed to directly influence job seekers to become less selective, leading them to reduce their wage expectations. However, the increased incentives to search can induce indirect effects that foster heightened levels of wage optimism. For example, by submitting a greater number of job applications, job seekers enhance their prospects of attracting job offers that come with particularly high wages. Moreover, this effect might be reinforced in the presence of negative duration dependence, e.g., due to skill depreciation during the unemployment spell. Job seekers who anticipate finding a job after a shorter period of unemployment due to their intensified search efforts may also anticipate receiving more favorable job offers, since their skills are perceived to have depreciated at a lesser rate (see, e.g., Nekoei and Weber, 2017). Lastly, it is often emphasized that confidence can be valuable, as it enhances an individual's motivation to exert effort (see, e.g., Benabou and Tirole, 2002). In our context, it is conceivable that job seekers who perceive increased pressure from their caseworkers to submit a greater number of job applications may adjust their perception of the returns from this strategy, thereby holding more optimistic expectations about their potential earnings.

This mechanism also aligns with the observation that a greater sanction risk increases job seekers' wage optimism, yet does not translate into higher realized wages for them.18 On the contrary, as shown in column (5) of Table 2, realized wages upon reemployment tend to be (insignificantly) lower when job seekers are subject to a more restrictive sanction regime. Therefore, it is possible that job seekers exposed to an increased risk of sanctions only revise their wage expectations as time progresses.

Footnote 18: Note that the effects of the sanction intensity on realized wages are estimated based on a larger sample of job seekers as observed in the administrative records. This is because we cannot utilize regional identifiers for job seekers' LEA district and therefore do not observe the local sanction intensity when analyzing the linked survey-administrative data (see also footnote 17).
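To make the estimation strategy in equation (4) concrete, the following is a minimal sketch of how such a border-pair fixed-effects model could be run. It is an illustration only: the input data frame, all column names (e.g. `log_applications`, `sanction_intensity`, `lea`, `border_pair`, `person_id`), and the use of pandas/statsmodels are our assumptions, not the authors' actual code; the sampling weights and the clustering level follow the description in the text.

```python
import pandas as pd
import statsmodels.formula.api as smf

def estimate_border_pair_model(df: pd.DataFrame):
    """Border-pair FE model: Y_ijb = a + d*SI_j + b*X_i + f*R_j + k_b + e_ijb.

    `df` is assumed to hold one row per job seeker and border pair, with
    hypothetical columns for the outcome, the treatment, individual and
    regional controls, and identifiers `lea` (district) and `border_pair`.
    """
    df = df.copy()
    # Weight each observation by the inverse of the number of border pairs
    # the person enters, so multiply-counted individuals sum to weight one.
    df["w"] = 1.0 / df.groupby("person_id")["border_pair"].transform("nunique")

    model = smf.wls(
        "log_applications ~ sanction_intensity + age + female + prev_log_wage"
        " + unemp_rate + vacancy_rate + log_gdp + C(state) + C(border_pair)",
        data=df,
        weights=df["w"],
    )
    # Standard errors clustered at the LEA-district level, as in the text.
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["lea"]})
```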
### Labor market implications: descriptive evidence

In the final part of our analysis, we take a closer look at the potential labor market implications of wage optimism. As elaborated upon in Section 3, inaccurate beliefs can yield different consequences for the labor market integration of unemployed workers. On the one hand, optimistic wage expectations can motivate job seekers to exert more effort, thereby facilitating job finding. On the other hand, unemployed individuals who possess unrealistically optimistic wage expectations might exhibit an excessive degree of selectivity and reject job offers more frequently than justified. This in turn may lead to higher realized wages, but may prolong unemployment and cause job seekers to overestimate their reemployment prospects. In what follows, we present descriptive evidence illustrating the empirical significance of these mechanisms. As previously discussed, individuals' search behavior and their beliefs are closely intertwined, making it notoriously challenging to identify the causal effects of optimistic wage expectations on job seekers' search and labor market outcomes. Nonetheless, as we shall see, the data patterns reveal insightful perspectives regarding the importance of the distinct mechanisms at play.

Figure 5 illustrates correlations of job seekers' wage optimism or pessimism with their search effort, realized wages, and perceived and realized job finding rates. Specifically, we depict binned scatter plots conditioning on socio-demographic characteristics and objective benchmarks (Appendix Table B.6 reports the corresponding regression results). We find that the level of wage optimism is positively related to the number of job applications (see Panel A of Figure 5). In particular, job seekers who overestimate their wage by an additional 10% sent out, on average, 1.6% more job applications (\(p=0.001\)). Moreover, it turns out that workers who exhibit a greater level of optimism also earn higher wages upon finding a job within two years after the start of the unemployment spell (see Panel B of Figure 5). A wage expectation that is inflated by 10% comes along with approximately 3.1% higher monthly wages (\(p<0.001\)) in comparison to individuals who provide an accurate assessment of their earnings potential.

Finally, we examine the relationship between the accuracy of job seekers' wage expectations and their perceived and actual job finding rates, both measured over a period of six months following the interview. This comparison reveals a remarkable pattern. On the one hand, as illustrated in Panel C of Figure 5, there is a positive and almost linear relationship between wage optimism and the perceived job finding probability. This suggests that job seekers who are most optimistic about their reemployment wages also report the highest perceived chances of finding a job. On the other hand, Panel D of Figure 5 shows a non-linear connection between the accuracy of individuals' wage expectations and their actual job finding rates. Specifically, job seekers who hold relatively accurate beliefs about their reemployment wage have the highest likelihood of finding a job within six months, whereas those who over- or underestimate their earnings potential face reduced reemployment rates. Consequently, our results suggest that the more optimistic workers are about the wages they can earn upon reemployment, the higher the likelihood that they will overestimate their prospects of finding a job.
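The binned scatter plots in Figure 5 condition on covariates before binning. Below is a minimal sketch of that standard residualize-then-bin (Frisch-Waugh) construction; the arrays `y`, `x`, and `Z` are placeholders for an outcome, the optimism measure \(\log S_{i}-\log O_{i}\), and the control matrix, and the code is our illustration rather than the paper's.

```python
import numpy as np

def binned_scatter(y, x, Z, n_bins=20):
    """Residualize y and x on controls Z, then average the y residuals
    within quantile bins of the x residuals.

    Returns the (bin center, bin mean) points of a binned scatter plot.
    Assumes x is continuous, so the quantile bins are non-empty.
    """
    Z1 = np.column_stack([np.ones(len(y)), Z])  # controls plus intercept

    def resid(v):
        beta = np.linalg.lstsq(Z1, v, rcond=None)[0]
        return v - Z1 @ beta

    yr, xr = resid(y), resid(x)
    edges = np.quantile(xr, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, xr, side="right") - 1, 0, n_bins - 1)
    centers = np.array([xr[idx == b].mean() for b in range(n_bins)])
    means = np.array([yr[idx == b].mean() for b in range(n_bins)])
    return centers, means
```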
Taken together, the observed pattern aligns with the notion that optimistic job seekers, upon receiving wage offers they deem to be "too low", exhibit an excessive degree of selectivity. Therefore, they may earn higher wages, but prolong the duration of unemployment beyond their initial expectations, resulting in a wedge between the true and perceived job finding rates. Concurrently, wage optimism can serve as a motivation to search more intensively, potentially explaining the positive correlation between optimistic beliefs and search effort. However, it appears that the impact of heightened effort on job finding is outweighed by the increased selectivity linked to job seekers' excessive optimism.

Figure 5: Descriptive evidence on labor market implications

### What causes overly optimistic wage expectations?

Gaining insight into the reasons behind the observed average optimism and the heterogeneity in belief inaccuracies is essential to draw meaningful conclusions about how to assist the unemployed in making better decisions during their job search. With this in mind, we will now discuss our results against existing theoretical and empirical evidence related to job seekers' belief formation.

Limited information: While psychologists and economists offer numerous explanations for why individuals make systematic judgment errors (see, e.g., Benjamin, 2019, for an overview), it is often emphasized in the job search literature that unemployed workers may have incomplete information about the labor market (Burdett and Vishwanath, 1988; Gonzalez and Shi, 2010). After all, job search is a complex endeavor and there is relatively little information and feedback that may help job seekers to navigate this process. Various pieces of existing evidence support the notion that information frictions about workers' earnings potential play a significant role and may contribute to job seekers' overly optimistic wage expectations. For example, Conlon _et al._ (2018) demonstrate that job seekers' expectations of wage offers increase when they receive an offer exceeding their initial expectations. Additionally, information regarding the salaries of comparable individuals has been shown to impact workers' beliefs about their outside options (Jager _et al._, 2023) and women's willingness to ask for higher wages (Roussille, 2022). Related to this notion, a growing body of literature suggests that providing workers with information about the salaries of their peers narrows co-worker wage gaps (Baker _et al._, 2023; Cullen and Pakzad-Hurson, 2023) and affects workers' motivation (Breza _et al._, 2018; Card _et al._, 2012; Cullen and Perez-Truglia, 2022).

We also find evidence supporting the idea that information frictions impact the formation of beliefs among unemployed workers in our context. As discussed in Section 4.2, we observe that job seekers with greater unemployment experience and those who receive more advice from their caseworkers tend to hold more accurate earnings expectations. Moreover, the anchoring of beliefs to pre-unemployment wages, as documented in Section 4.3, may stem from workers' having incomplete information, leading them to utilize their previous wage as a signal for their wage upon reemployment. These observations align with the notion that acquiring additional information can mitigate job seekers' tendency to be overly optimistic.
Against this backdrop, one may expect that a policy offering job seekers precise information about their potential earnings may improve the accuracy of job seekers' beliefs.

Motivated reasoning and selective recall: An alternative explanation for the divergence between perceived and actual outcomes relates to the way individuals process the available information. It is commonly argued that subjective beliefs serve essential psychological needs (see, e.g., Benabou and Tirole, 2016, for an overview). For instance, individuals may hold unrealistically positive beliefs because they derive direct utility from maintaining a positive self-image (Brunnermeier and Parker, 2005; Koszegi, 2006), they aim to enhance their motivation and overcome self-control problems (Benabou and Tirole, 2002), or they use optimism as a signal to convince others about one's abilities (Burks _et al._, 2013). Consistent with these ideas, our findings indicate higher levels of overoptimism among job seekers with a lower objective earnings potential. It appears plausible that this specific group of unemployed workers may have a heightened desire for motivated beliefs compared to individuals who can reasonably anticipate higher wages.

Motivated beliefs may impact not only job seekers' overall tendency to be overly optimistic, but also how they process feedback. In particular, Benabou and Tirole (2002, 2004) suggest selective recall - that is, individuals deliberately managing to forget or suppress negative feedback - as a way to sustain overly optimistic beliefs even when faced with repeated negative feedback.19 In this context, two of our results are particularly noteworthy. First, especially job seekers who are predicted to experience a wage decline compared to their previous job tend to exhibit overly optimistic beliefs and anchor their wage expectations too strongly to their past salary, whereas job seekers who have reasons to anticipate a wage increase have relatively accurate expectations (see Section 4.3). Second, we find that job seekers who remain unemployed for an extended period are reluctant to revise their wage expectations downwards. Those who initially overestimate their earnings potential by up to 17% even increase their wage expectations over time (see Section 4.4). These findings are consistent with the notion that job seekers do not fully internalize the negative feedback they receive during the search process.

Footnote 19: Aligning with these theoretical ideas, Zimmermann (2020) demonstrates in a lab experiment that positive feedback regarding individuals' relative performance has a lasting impact on their beliefs, while negative feedback affects subjective beliefs only in the short term. Moreover, Huffman _et al._ (2022) provide empirical evidence, both through reduced-form analysis and structural modeling, indicating that managers who participate repeatedly in high-powered tournament incentive systems tend to make overly optimistic predictions about their future performance. These predictions are associated with an inclination toward overly positive recollections of their past performance.

Lastly, the result that job seekers who perceive greater extrinsic incentives to apply for jobs hold more optimistic wage expectations (see Section 4.5), while not experiencing an increase in actual wages, can be explained by models of motivated beliefs as well.
Specifically, when job seekers face an elevated risk of sanctions for non-compliance with search requirements, they may adjust their expectations about the potential returns of submitting job applications as a way to motivate themselves. In other words, they may adopt a more optimistic outlook to compensate for the expected loss in utility resulting from the threat of financial penalties.

The existence of motivated beliefs carries implications for policymakers seeking to assist job seekers in their search process. First, it is not immediately evident whether implementing policies to reduce overoptimism is advisable if optimistic beliefs help sustain high levels of motivation among job seekers. Second, interventions aiming to correct belief inaccuracies may not be effective if job seekers forget or suppress negative feedback in order to sustain optimistic beliefs. To reduce selective processing of information, it may help to provide feedback in a way that is less likely to be perceived by job seekers as "ego threatening".

## Conclusion

Job seekers' misperceptions about the labor market can distort their decision-making and prolong unemployment. In our study, we have established objective benchmarks for the subjective wage expectations of unemployed workers, which provide intriguing insights along four important dimensions. First, we have discovered significant variation in the levels of optimism and pessimism among different groups of unemployed individuals. Notably, those with lower objective earnings potential, particularly individuals expected to experience a wage decline compared to their previous job, tend to display remarkably overoptimistic beliefs. Second, when exploring repeatedly elicited wage expectations, we find that the overoptimism among the group of long-term unemployed workers remains persistent throughout the unemployment spell. This reluctance to adapt expectations might be one factor hindering their labor market integration. Third, our results show that unemployed workers who face greater incentives to search for and accept jobs - and who consequently increase the number of job applications - become even more optimistic about the potential wages they can earn upon reemployment. This heightened optimism potentially serves as an additional factor motivating them to intensify their job search efforts. Finally, we have established a clear connection between overly optimistic wage expectations and job seekers' tendency to overestimate their prospects of reemployment.

This finding holds significant implications for policymakers committed to preventing long-term unemployment. To be more precise, our results suggest a wedge between the perceived and actual job finding rates for increasing levels of wage optimism. This aligns with the idea that inaccurate beliefs may incur decision-making costs, because overly optimistic job seekers may prolong unemployment by being excessively selective with respect to the jobs they accept. Against this backdrop, encouraging job seekers to revise their overly ambitious aspirations could present an attractive path for labor market policy focused on enhancing the reemployment prospects of unemployed workers. While the combination of survey data and administrative records offers valuable insights into the interrelation of individuals' beliefs, their job search behavior, and their actual labor market outcomes, it is clear that our setting does not come without limitations.
First of all, it is important to note that individuals may possess private information about their earnings potential that is not accounted for in our benchmarks. This poses a significant challenge in identifying misperceptions at the individual level, and has the potential to influence the relationship between subjective beliefs and job search or labor market outcomes. The ideal survey should elicit individuals' beliefs not only about their own earnings potential, but also about primitives, such as the perceived returns to search and the distribution of wage offers, to reduce issues of reverse causality. Moreover, one could hope to improve the accuracy of objective benchmarks by conditioning on a richer set of commonly unobserved individual characteristics. These could, for instance, include workers' personality traits, their non-cognitive skills, or their preferences over non-wage job characteristics. Acknowledging these limitations, we consider our analysis as an important step toward opening the black box of how job seekers form their beliefs and how beliefs can affect their decisions while searching for jobs. Notably, various pieces of evidence substantiate the idea that motivated beliefs have a significant influence on job seekers' tendency to be overly optimistic about their labor market prospects. Simultaneously, we recognize that other factors, such as information frictions, may be at work as well, and untangling these distinct explanations presents an intriguing avenue for future research. For instance, it would be particularly interesting to provide a randomly selected group of job seekers with information about their objective earnings potential and analyze the consequences for their behavior and reemployment prospects. Such an approach would help to assess the significance of information frictions in influencing job seekers' decision-making and outcomes during the job search process. At the same time, analyzing how job seekers recall the provided information and update their beliefs depending on their priors could offer further insights into the role of motivated beliefs.
2309.05149
From Erdos-Renyi graphs to Linial-Meshulam complexes via the multineighbor construction
The $m$-neighbor complex of a graph is the simplicial complex in which faces are sets of vertices with at least $m$ common neighbors. We consider these complexes for Erdos-Renyi random graphs and find that for certain explicit families of parameters the resulting complexes are with high probability $(t-1)$-dimensional with all $(t-2)$-faces and each $(t-1)$-face present with a fixed probability. Unlike the Linial-Meshulam measure on the same complexes there can be correlations between pairs of $(t-1)$-faces but we conjecture that the two measures converge in total variation for certain parameter sequences.
Eric Babson, Jan Spaliński
2023-09-10T21:53:11Z
http://arxiv.org/abs/2309.05149v1
# From Erdos-Renyi graphs to Linial-Meshulam complexes via the multineighbor construction

###### Abstract.

The \(m\)_-neighbor complex_ of a graph is the simplicial complex in which faces are sets of vertices with at least \(m\) common neighbors. We consider these complexes for Erdos-Renyi random graphs and find that for certain explicit families of parameters the resulting complexes are with high probability \((t-1)\)-dimensional with all \((t-2)\)-faces and each \((t-1)\)-face present with a fixed probability. Unlike the Linial-Meshulam measure on the same complexes there can be correlations between pairs of \((t-1)\)-faces but we conjecture that the two measures converge in total variation for certain parameter sequences.

Key words and phrases: Random graph, random complex, neighborhood complex of a graph, m-neighbor complex

2020 Mathematics Subject Classification: Primary: 05C80; secondary: 62R99

## 1. Introduction

The \(m\)_-neighbor complex_ \(N_{m}(G)\) of a graph \(G\) is the simplicial complex in which faces are sets of vertices with at least \(m\) common neighbors. This construction is studied in detail in [10]. We consider these complexes for Erdos-Renyi random graphs [4] and find that for certain explicit families of parameters the resulting complexes are with high probability \((t-1)\)-dimensional with all \((t-2)\)-faces and each \((t-1)\)-face present with a fixed probability. Unlike the Linial-Meshulam ([9], [2]) measure on the same complexes there can be correlations between pairs of \((t-1)\)-faces but we conjecture that the two measures converge in total variation for certain parameter sequences.

## 2. Preliminaries

We recall some well known facts and fix notation. Write \([n]=\{1,2,\ldots,n\}\). If \(A\) is a finite set write \(|A|\) for its cardinality, \(\mathcal{P}A\) for the set of its subsets and \(\binom{A}{c}\subseteq\mathcal{P}A\) for those with cardinality \(c\). If \(X\) and \(W\) are both graphs or both simplicial complexes write \(Z_{X}W\) for the set of injective maps from \(W\) to \(X\), \(z_{x}w=z_{X}W=|Z_{X}W|\) for their number, which depends only on the shapes \(x\) and \(w\) of \(X\) and \(W\) as defined in section 4 below, and if \(Z_{X}W\neq\emptyset\) say that \(X\) contains a copy of \(W\). If \(G\) is a graph write \(VG\) and \(EG\) for its vertices and edges. If \(X\) is a simplicial complex write \(X_{\mathcal{F}}\) for the set of facets and \(X_{0}\) for the set of vertices.

A random variable \(B\) is said to have a binomial distribution \(B\sim\textbf{Bin}_{n,q}\) if

\[\mathbb{P}(B=k)=\binom{n}{k}q^{k}(1-q)^{(n-k)},\qquad k=0,\ldots,n.\]

The mean and variance are given by \(\mu=nq\) and \(\sigma^{2}=nq(1-q)\). We will use the following bounds, which are proven in the final section.

**Hoeffding's Inequalities:** _If \(B\sim\textbf{Bin}_{n,q}\) then:_

* \(\mathbb{P}(B\leq m)\leq\exp[-\frac{2}{n}(nq-m)^{2}]\) _if_ \(m<nq\) _and_
* \(\mathbb{P}(B\geq m)\leq\exp[-\frac{2}{n}(nq-m)^{2}]\) _if_ \(m>nq\)_._

As an illustration, we include images of two small graphs and the \(1\)-skeletons of the associated \(1\)- and \(2\)-neighbor complexes.

Figure 1. A graph and the \(1\)-skeleta of the \(m\)-neighbor complexes for \(m=1\) and \(m=2\).

## 3. Support of \(N_{m}(G(n,p))\)

Take \(n\) to be a positive integer and \(p\in(0,1)\) a probability and consider the Erdos-Renyi probability measure \(G(n,p)\) on graphs with vertex set \([n]\) and each edge introduced independently with probability \(p\).
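For readers who want to experiment with the construction, the sketch below samples \(G(n,p)\) as an adjacency matrix and enumerates the small faces of \(N_{m}(G)\) by brute force. It is a minimal illustration written for clarity rather than efficiency; the use of numpy and all function names are our own choices, not part of the paper, and the parameters match the example discussed next (\(n=100\), \(p=0.31\), \(m=14\)).

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def erdos_renyi(n, p):
    """Symmetric 0/1 adjacency matrix of a sample from G(n, p)."""
    upper = np.triu(rng.random((n, n)) < p, 1)
    return upper | upper.T

def m_neighbor_faces(adj, m, max_card):
    """Faces of N_m(G) with at most max_card vertices.

    A vertex set S is a face when at least m vertices are adjacent to
    every element of S; the zero diagonal means members of S are never
    counted as their own common neighbors.
    """
    n = len(adj)
    faces = []
    for c in range(1, max_card + 1):
        for S in itertools.combinations(range(n), c):
            if np.all(adj[list(S), :], axis=0).sum() >= m:
                faces.append(S)
    return faces

adj = erdos_renyi(100, 0.31)
edges = [f for f in m_neighbor_faces(adj, 14, 2) if len(f) == 2]
print(len(edges))  # edges in the 1-skeleton of the 14-neighbor complex
```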
Consider \(\Gamma_{n,m,p}=N_{m}G(n,p)\), the probability measure on simplicial complexes which is the \(m\)-neighbor complex of a random graph from \(G(n,p)\). Below is a picture of an Erdos-Renyi graph with parameters \(n=100\) and \(p=0.31\) on the left and the \(1\)-skeleton of its \(14\)-neighbor complex on the right.

Figure 3. An Erdős–Rényi graph from G(100,0.31) and the 1-skeleton of the 14-neighbor complex.

Given \(n\), \(m\) and \(p\) let \(t\) be the number defined as follows:

\[t=\left[\left[\frac{\ln(n)-\ln(m)}{-\ln(p)}\right]\right]=\left[\left[\log_{p}\left(\frac{m}{n}\right)\right]\right]\]

with \([[\cdot]]\) meaning a closest integer, taking the smaller choice if there are two possibilities. Next, take \(\tau\) with \(|\tau|\leq\frac{1}{2}\) to be the difference between \(t\) and the expression being rounded:

\[\tau=t+\frac{\ln(n)-\ln(m)}{\ln(p)}=t+\log_{p}\left(\frac{n}{m}\right)\]

so that

\[p^{t}=p^{\log_{p}\left(\frac{m}{n}\right)+\tau}=\left(\frac{m}{n}\right)p^{\tau}.\]

Figure 2. A graph and the \(1\)-skeleta of the \(m\)-neighbor complexes for \(m=1\) and \(m=2\).

Let \(Y_{n,k-1}\) be the set of simplicial complexes with:

* vertex set \([n]\),
* all faces with \(k-1\) vertices and
* no faces with \(k+1\) vertices.

Hence all but one complex in \(Y_{n,k-1}\) is \((k-1)\)-dimensional. Let \(Y_{n,k-1,q}\) be the Linial-Meshulam probability distribution on \(Y_{n,k-1}\) so that:

* faces with \(k\) vertices occur independently with probability \(q\).

We show that for a large range of choices of \(m\) and \(n\), with high probability the complexes drawn from \(\Gamma_{n,m,p}\) belong to \(Y_{n,t-1}\) with \(t=\left[\left[\log_{p}\left(\frac{m}{n}\right)\right]\right]\) as above and further the probability that such a complex contains any particular \((t-1)\)-face is the probability that \(B\sim\mathbf{Bin}_{n-t,p^{t}}\) takes a value of at least \(m\). Call this face probability \(q\). The distributions \(\Gamma_{n,m,p}\) and \(Y_{n,t-1,q}\) differ in that in \(\Gamma\) face occurrences may be correlated while in \(Y\) they are not.

**Lemma 1**.: _If \(p\in(0,1)\) there is \(c>0\) (with \(c=\frac{1}{2}\) if \(p\leq\frac{1}{4}\)) so that for any \(n\), \(m\) and \(f\in\binom{[n]}{t+1}\) with \(t=\left[\left[\log_{p}\left(\frac{m}{n}\right)\right]\right]\) as above \(\mathbb{P}_{K\in\Gamma_{n,m,p}}(f\in K)\leq\exp\left[-c\frac{m^{2}}{n}\right]\)._

Proof.: If \(\Gamma\) is a graph with \(V\Gamma=[n]\) and \(f\in\binom{[n]}{t+1}\) write \(\beta_{f}\Gamma=|\{v\in V\Gamma-f|\{v\}\times f\subseteq E\Gamma\}|\) for the number of common neighbors so \(\beta_{f}G(n,p)=B\sim\mathbf{Bin}_{n-t-1,p^{t+1}}\) and \(\mathbb{P}_{K\in\Gamma_{n,m,p}}(f\in K)=\mathbb{P}(B\geq m)\). First consider the case \(p\leq\frac{1}{4}\) and take \(c=\frac{1}{2}\). In order to apply Hoeffding's inequality, we take \(|\tau|\leq\frac{1}{2}\) as above so \(p^{t}=\frac{m}{n}p^{\tau}\) and verify
that if \(|f|=t+1\) then \(\mu=\mathbb{E}\,B<m\):

\[\mu=(n-(t+1))p^{t+1}=(n-(t+1))\cdot\frac{m}{n}\cdot p^{\tau+1}=m\cdot\frac{n-(t+1)}{n}\cdot p^{\tau+1}<m.\]

Moreover, since \(p\in(0,\frac{1}{4}]\), we have

\[0\leq\frac{m}{n}p^{\tau+1}\leq\frac{1}{2}\left(\frac{m}{n-(t+1)}\right).\]

Hence we have:

\[\mathbb{P}_{K\in\Gamma_{n,m,p}}(f\in K) =\mathbb{P}(B\geq m)\]
\[\leq\exp\left[-2(n-t-1)\left(p^{t+1}-\frac{m}{n-(t+1)}\right)^{2}\right]\]
\[\leq\exp\left[-2(n-t-1)\frac{1}{4}\left(\frac{m}{n-(t+1)}\right)^{2}\right]\]
\[\leq\exp\left[-\frac{1}{2}\left(\frac{m^{2}}{n-(t+1)}\right)\right]\]
\[\leq\exp\left[-\frac{1}{2}\left(\frac{m^{2}}{n}\right)\right].\]

Finally for the case \(p>\frac{1}{4}\) take \(c=2(1-p^{\frac{1}{2}})^{2}\). The argument is analogous to the \(p\leq\frac{1}{4}\) case upon noting that \(0<p^{\tau+1}<p^{\frac{1}{2}}\) and \(\frac{m}{n-(t+1)}\geq\frac{m}{n}\) and hence

\[\frac{m}{n-(t+1)}-\frac{m}{n}p^{\tau+1}\geq\frac{m}{n-(t+1)}-\frac{m}{n-(t+1)}p^{\tau+1}\geq\frac{m}{n-(t+1)}\left(1-p^{\frac{1}{2}}\right).\]

**Lemma 2**.: _If \(p\in(0,1)\) there is \(c>0\) (with \(c=\frac{1}{3}\) if \(p\leq\frac{1}{4}\)) so that for any \(n\geq 9\), \(m\) and \(f\in\binom{[n]}{t-1}\) with \(t=\left[\left[\log_{p}\left(\frac{m}{n}\right)\right]\right]\) as above \(\mathbb{P}_{K\in\Gamma_{n,m,p}}(f\not\in K)\leq\exp\left[-c\frac{m^{2}}{n}\right]\)._

Proof.: Similarly to the previous proof, if \(f\in\binom{[n]}{t-1}\) then \(\mathbb{P}_{K\in\Gamma_{n,m,p}}(f\not\in K)=\mathbb{P}(B<m)\) with \(B\sim\mathbf{Bin}_{n-(t-1),p^{t-1}}\). Once again consider first the case \(p\leq\frac{1}{4}\) and take \(c=\frac{1}{3}\). Note that

\[t-1=\left[\left[\frac{\ln(n)-\ln(m)}{-\ln(p)}\right]\right]-1\leq\frac{\ln(n)}{\ln(p^{-1})}\leq\frac{\ln(n)}{\ln 4}\leq\ln(n)\leq\sqrt{n}.\]

Hence

\[\frac{n}{n-(t-1)}\leq\frac{n}{n-\sqrt{n}}\leq\frac{n(n+\sqrt{n})}{n^{2}-n}\leq\frac{n+\sqrt{n}}{n-1}. \tag{1}\]

The function on the right hand side above is clearly decreasing (for \(n>1\)), and has value \(\frac{3}{2}\) for \(n=9\), hence the left hand side above is bounded by \(\frac{3}{2}\) for all \(n\geq 9\). Moreover, with \(|\tau|\leq\frac{1}{2}\) as above, \(p^{\tau-1}\geq 2\). Hence we have

\[p^{\tau-1}-\frac{n}{n-(t-1)}\geq\frac{1}{2},\]
\[\frac{p^{\tau-1}}{n}-\frac{1}{n-(t-1)}\geq\frac{1}{2n}.\]

In order to apply Hoeffding's inequality, we verify that \(\mu=\mathbb{E}\,B>m\):

\[\mu=(n-(t-1))p^{t-1}=(n-(t-1))\cdot\frac{m}{n}\cdot p^{\tau-1}=m\cdot\frac{n-(t-1)}{n}\cdot p^{\tau-1}>m,\]

where the last inequality follows from the estimates of the factors in the previous paragraph. Hence we have:

\[\mathbb{P}_{K\in\Gamma_{n,m,p}}(f\not\in K) =\mathbb{P}(B\leq m-1)\]
\[\leq\mathbb{P}(B\leq m)\]
\[\leq\exp\left[-2(n-(t-1))\left(p^{t-1}-\frac{m}{n-(t-1)}\right)^{2}\right]\]
\[=\exp\left[-2(n-(t-1))\left(\frac{m}{n}p^{\tau-1}-\frac{m}{n-(t-1)}\right)^{2}\right]\]
\[=\exp\left[-2m^{2}(n-(t-1))\left(\frac{1}{n}p^{\tau-1}-\frac{1}{n-(t-1)}\right)^{2}\right]\]
\[\leq\exp\left[-2m^{2}(n-(t-1))\frac{1}{4n^{2}}\right]\]
\[\leq\exp\left[-\frac{1}{3}\frac{m^{2}}{n}\right].\]

The last inequality follows from the fact that \(\frac{n-(t-1)}{n}\geq\frac{2}{3}\). Finally for the case \(p>\frac{1}{4}\) take \(c=\frac{1}{3}(p^{-\frac{1}{2}}-1)^{2}\).
Choose \(\varepsilon=p^{-\frac{1}{2}}-1\) so that

\[p^{\tau-1}\geq\frac{1}{\sqrt{p}}=1+\varepsilon.\]

From (1) there exists an \(N\) such that for \(n>N\) we have

\[\frac{n}{n-(t-1)}\leq 1+\frac{\varepsilon}{2}.\]

Hence for \(n>N\) we have

\[p^{t-1}-\frac{m}{n-(t-1)} =\frac{m}{n}p^{\tau-1}-\frac{m}{n-(t-1)}\]
\[=\frac{m}{n}\left(p^{\tau-1}-\frac{n}{n-(t-1)}\right)\]
\[\geq\frac{m}{n}\cdot\frac{\varepsilon}{2}.\]

Using this and \(n-(t-1)\geq\frac{2}{3}n\), an argument analogous to the \(p\leq\frac{1}{4}\) case yields (for \(n>N\)):

\[\mathbb{P}(B<m)\leq\exp\left[-\frac{\varepsilon^{2}}{3}\left(\frac{m^{2}}{n}\right)\right].\]

For the example in Figure 3 with \(n=100\) and \(p=.31\) there is \(t=2\), \(\tau=.32\), \(c=.39\) for Lemma 1 and \(c=.21\) for Lemma 2. Thus Lemma 1 implies that the chance that each triple of vertices of the graph fails to span a triangle in the complex is at least \(.53\); since these events are not independent, this gives only the trivial lower bound of \(0\) on the chance that there are no triangles. Lemma 2 implies that the chance that each vertex of the graph is a vertex of the complex is at least \(.66\), so the chance that they all are is at least \(10^{-18}\), which is nonzero only because these events are independent. Thus these parameters are not in the regime addressed in the following theorem with \(t=3\), in which the complexes are guaranteed to have high probability of having every vertex of the graph as a vertex and no triangles.

The first two parts of the following theorem use these two lemmas while the third part follows from Lemmas 4 and 5 in the next section.

**Theorem 1**.: _If \(p_{n}\in(0,1)\) and \(m_{n}\in\mathbb{N}\) are sequences for which any of the following three pairs of conditions holds:_

* \(p_{n}\) _is constant and_ \(\lim_{n\to\infty}\frac{m_{n}^{2}}{n(\ln n)^{2}}=\infty\)_,_
* \(\lim_{n\to\infty}p_{n}=0\) _and_ \(\lim_{n\to\infty}\frac{-(\ln p_{n})m_{n}^{2}}{n(\ln n)^{2}}>4\) _or_
* \(m_{n}=m\) _is constant and there are constants: an integer_ \(t<\sqrt{2m+1}\) _and_ \(b\in(t-1,\frac{m(t+1)}{m+t+1})\) _with_ \(p_{n}=n^{\frac{-1}{b}}\)_,_

_then_

\[\lim_{n\to\infty}\mathbb{P}_{K\in\Gamma}(K\in Y)=1\]

_where \(\Gamma=\Gamma_{n,m_{n},p_{n}}\) and \(Y=Y_{n,t_{n}-1}\) with \(t_{n}=\left[\!\left[\log_{p_{n}}\left(\frac{m_{n}}{n}\right)\right]\!\right]\)._

**Definition 1**.: Write \(\Gamma_{m}\) _has property \(P\) asymptotically almost surely (aas)_ if \(\lim_{n\to\infty}\mathbb{P}_{K\in\Gamma}(K\) has property \(P)=1\). Thus the conclusion of the theorem is that \(\Gamma\in Y\) aas.

Figures 4 and 5 illustrate the first two sets of conditions, with the value of \(t=2\), the size \(n\) of the Erdos-Renyi graph given on the horizontal axis and the proportion of simplices in the \(m\)-neighbor complex to the maximal possible displayed on the vertical axis.

Proof.: This is a proof of the first two parts. The third follows immediately from Lemmas 4 and 5 of the next section. First use the second lemma above and the first moment method to see that \(\binom{[n]}{t_{n}-1}\subseteq\Gamma\) aas. Let \(K\) be a complex drawn from \(\Gamma\) as described in the statement of the theorem, and let \(N\) be the random variable counting the number of \((t_{n}-1)\)-element subsets of \(K_{0}=[n]\) which are not faces of \(K\). By the First Moment Method (see [5], Lemma 22.2), we have

\[\mathbb{P}(N>0)\leq\mathbb{E}\,N.\]

Figure 4. First set of conditions. Here \(p=0.5\), \(m=\text{Ceiling}\,(n/4)\) and \(t=2\). Here \(q\approx 0.49\).

Let \(\kappa_{n}=\frac{m_{n}^{2}(-\ln p_{n})}{n(\ln n)^{2}}\).
By the second lemma above there is a constant \(c>0\) depending on \(p\) with

\[\mathbb{E}N =\binom{n}{t_{n}-1}\mathbb{P}_{K\in\Gamma(n,m,p)}\,(\text{a fixed }(t_{n}-1)\text{-tuple is not a face of }K)\]
\[\leq n^{t_{n}-1}\exp\left[-c\frac{m_{n}^{2}}{n}\right]\]
\[\leq\exp\left[\left(t_{n}-\frac{1}{2}\right)\ln(n)\right]\exp\left[-c\frac{m_{n}^{2}}{n}\right]\]
\[\leq\exp\left[\log_{p_{n}}\left(\frac{m_{n}}{n}\right)\ln(n)-c\frac{m_{n}^{2}}{n}\right]\]
\[\leq\exp\left[\frac{(\ln n)^{2}}{-\ln p_{n}}\left(1-c\kappa_{n}\right)\right].\]

In the first case of the theorem \(\lim\kappa_{n}=\infty\) and \(c\) depends only on \(p\), so the limit of the last expression is equal to zero. In the second case of the theorem \(\lim\kappa_{n}\geq 2\) and \(c=\frac{1}{2}\), so again the limit of the last expression is equal to zero.

Next use the first lemma above and the first moment method again to show that \(\Gamma\) aas has dimension at most \(t\). The argument is very similar to that above but also uses the bound

\[t_{n}+1\leq 2\frac{\ln(n)-\ln(m_{n})}{-\ln(p_{n})}.\]

Figure 5. Second set of conditions. Here \(p=1/\ln(\ln(n))\), \(m=\text{round}(np^{2})\) and \(t=2\). Here \(q\approx 0.48\).

## 4. Asymptotics of \(N_{m}(G(n,p))\)

In this section we consider the hypotheses from the third part of Theorem 1. That is, \(\Gamma=\Gamma_{n,m,p}\) with \(p=n^{\frac{-1}{\beta}}\) for a fixed density parameter \(\beta>0\), a fixed number \(m\) and a growing number \(n\) of vertices. This makes the parameter \(t-\tau\) from the previous section converge to \(\beta\). We then fix a finite simplicial complex \(X\) and study the limiting probability that \(X\) is isomorphic to a subcomplex of a complex \(K\) chosen from \(\Gamma\). This analysis includes a proof of the last part of Theorem 1 but does not give total variation convergence, which we conjecture below.

**Definition 2**.: Call \(\beta\) a _threshold_ for a property of a complex in \(\Gamma_{m}\) if a complex drawn from \(\Gamma_{m,n,p}\) with \(p=n^{\frac{-1}{b}}\) has the property aas as \(n\) grows if \(b>\beta\) and does not have it aas as \(n\) grows if \(b<\beta\). This ignores the behavior at \(b=\beta\). Similarly, call \(\beta\) a _threshold_ for a property of a graph \(H\) drawn from \(G(n,p)\), where \(p=n^{\frac{-1}{b}}\), if \(H\) aas has the property as \(n\) grows if \(b>\beta\) and does not have it aas as \(n\) grows if \(b<\beta\).

The second part of the above definition is consistent with Definition 1.6 in [5], for the choice of the threshold function \(p^{*}(n)=n^{\frac{-1}{\beta}}\). The first part of this section sets up notation to define the \(m\)-density of a complex \(X\) and shows that it is a threshold for \(\Gamma_{m}\) to contain \(X\) as a subcomplex. The result is based on one for subgraphs of random graphs which Erdos and Renyi proved for balanced graphs (see below) and Bollobas stated in the form we use. The account given in Frieze-Karonski [5] is particularly useful for our purposes.

Figure 6. Copies of pure simplicial complexes in \(m\)-neighbor constructions on Erdős–Rényi graphs

Define the density of a nonempty graph \(H\) as the ratio of the number of edges to the number of vertices:

\[d_{H}=\frac{e_{H}}{v_{H}}\]

and the related maximum subgraph density:

\[\bar{d}_{H}=\max\{d_{K}:\emptyset\neq K\subseteq H\}.\]

A graph is balanced if \(\bar{d}_{G}=d_{G}\) and strictly balanced if \(d_{G}>d_{H}\) for all proper subgraphs \(H\) in \(G\).
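Both densities are easy to compute by brute force for the small witness graphs that appear below. The following short sketch is our own illustration (with exact rational arithmetic); note that a densest subgraph can always be taken to be induced, since restoring edges on a fixed vertex set only increases the density.

```python
import itertools
from fractions import Fraction

def density(vertices, edges):
    """d_H = e_H / v_H for a nonempty graph H."""
    return Fraction(len(edges), len(vertices))

def max_subgraph_density(vertices, edges):
    """Maximum subgraph density: max of e_K / v_K over nonempty induced
    subgraphs K (maximizing over induced subgraphs suffices)."""
    best = Fraction(0)
    for c in range(1, len(vertices) + 1):
        for W in itertools.combinations(vertices, c):
            ew = sum(1 for e in edges if set(e) <= set(W))
            best = max(best, Fraction(ew, c))
    return best

# Example: the complete graph K4 is balanced, with d = max d = 6/4.
V = [0, 1, 2, 3]
E = list(itertools.combinations(V, 2))
print(density(V, E), max_subgraph_density(V, E))  # 3/2 3/2
```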
**Theorem 5.3 in [5]**: If \(H\) is a graph with \(d_{H}>0\), then \(\bar{d}_{H}\) is a threshold for the appearance of \(H\) in \(G(n,p)\) with \(p=n^{\frac{-1}{b}}\).

To study the probability of finding a copy of a finite complex \(X\) with facets \(F=X_{\mathcal{F}}\) in a complex drawn from \(\Gamma_{n,m,p}\), we will consider functions

\[W:F\to\mathcal{P}X_{0}\]

along with the functions they induce on the power set

\[W^{\cap},W^{!},W^{\cup}:\mathcal{P}F\to\mathcal{P}X_{0}\]

which take a set of facets respectively to the intersection, exclusive intersection or union of the images of \(W\). Write \(W^{*}_{A}=W^{*}(A)\), \(w^{*}_{A}=|W^{*}_{A}|\) and by convention \(W^{\cap}_{\emptyset}=X_{0}\). Call each \(W^{*}\) a version of the \(F\)-set \(W\) and each \(w^{*}\) a version of the \(F\)-shape \(w\).

**Example 1.** Let \(X\) be the pure \((3-1)\)-dimensional simplicial complex with facets \(F=\{f,g,h\}\) where

\[f=\{\alpha,\gamma,\delta\},\quad g=\{\gamma,\delta,\theta\},\quad\text{and}\quad h=\{\delta,\kappa,\lambda\}.\]

The complex and the geometric realization are displayed in Figure 7.

Figure 7. The pure simplicial complex \(X\) and its geometric realization

Hence

\[\mathcal{P}F=\{\emptyset,\ \{f\},\ \{g\},\ \{h\},\ \{f,g\},\ \{f,h\},\ \{g,h\},\ \{f,g,h\}\}\]

and \(X\) is itself an \(F\)-set with shape \(x\) with the values of \(x^{\cap}\), \(x^{!}\) and \(x^{\cup}\) on the subsets \(A\) of \(F\) given in the following table.

| \(A\) | \(\emptyset\) | \(\{f\}\) | \(\{g\}\) | \(\{h\}\) | \(\{f,g\}\) | \(\{f,h\}\) | \(\{g,h\}\) | \(\{f,g,h\}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \(x_{A}^{\cap}\) | 6 | 3 | 3 | 3 | 2 | 1 | 1 | 1 |
| \(x_{A}^{!}\) | 0 | 1 | 1 | 2 | 1 | 0 | 0 | 1 |
| \(x_{A}^{\cup}\) | 0 | 3 | 3 | 3 | 4 | 5 | 5 | 6 |

Note that any one of the three versions \(x^{\cap}\), \(x^{!}\) and \(x^{\cup}\) can be explicitly expressed in terms of any other, as described in the following proposition.

**Proposition 1**.: _If \(x\) is an \(F\)-shape and \(A\subseteq F\) then:_

\[(a)\quad x_{A}^{\cup}=\sum_{\emptyset\neq B\subseteq A}(-1)^{|B|-1}x_{B}^{\cap}\]
\[(b)\quad x_{A}^{!}=\sum_{B\supseteq A}(-1)^{|B|-|A|}x_{B}^{\cap}\]
\[(c)\quad x_{A}^{\cup}=\sum_{B\cap A\neq\emptyset}x_{B}^{!}\]
\[(d)\quad x_{A}^{\cap}=\sum_{B\supseteq A}x_{B}^{!}\]
\[(e)\quad x_{A}^{\cap}=\begin{cases}\sum_{B\subseteq A}(-1)^{|B|-1}x_{B}^{\cup},&A\neq\emptyset,\\ x_{F}^{\cup}&A=\emptyset\end{cases}\]
\[(f)\quad x_{A}^{!}=\sum_{B\subseteq A}(-1)^{|A|-|B|+1}x_{F\setminus B}^{\cup}\quad\text{for }A\neq\emptyset.\]

Proof.: Formulas \((a)\) and \((b)\) follow directly from the inclusion and exclusion formula (see M. Aigner [1], Chapter 5, Sieve Methods, Section 1: Inclusion-Exclusion). Formula \((c)\) follows from the fact that each element of \(X_{0}\) increases \(x_{A}^{!}\) by 1 for a unique \(A\subseteq F\).

More generally a triple \(x=(x^{\cap},x^{!},x^{\cup})\) of integer valued functions on \(\mathcal{P}F\) related via the above summations is called an _\(F\)-shape_ while a set valued one is called an _\(F\)-set_. The three entries \(x^{*}\) are called versions of \(x\). Write \(x_{0}=x_{\emptyset}^{\cap}=x_{F}^{\cup}\). If \(x\) is the shape of a simplicial complex then \(x_{0}\) is the number of vertices. A final useful construction is the pointwise \(\cap\)-product of \(F\)-shapes defined by \((wx)_{A}^{\cap}=w_{A}^{\cap}x_{A}^{\cap}\). Note that it is the \(\cap\)-versions which multiply pointwise while the effects on the \(!\)- and \(\cup\)-versions are more complicated.
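The three versions of a shape are mechanical to compute from the facets, and the following sketch reproduces the table of Example 1 (with ASCII vertex names standing in for the Greek letters; the function and variable names are our own, not the paper's).

```python
from itertools import combinations

def shape_versions(facets):
    """Compute the cap, !, and cup versions of the shape of an F-set,
    indexed by frozensets A of facet names."""
    names = sorted(facets)
    V = set().union(*facets.values())
    subsets = [frozenset(c) for r in range(len(names) + 1)
               for c in combinations(names, r)]
    # x^cap_A: common vertices of the facets in A (all of X_0 when A is empty)
    cap = {A: len(set.intersection(*(facets[f] for f in A))) if A else len(V)
           for A in subsets}
    # x^cup_A: vertices in the union of the facets in A (0 when A is empty)
    cup = {A: len(set.union(*(facets[f] for f in A))) if A else 0
           for A in subsets}
    # x^!_A: vertices lying in exactly the facets of A and no others
    excl = {A: sum(1 for v in V
                   if frozenset(f for f in names if v in facets[f]) == A)
            for A in subsets}
    return cap, excl, cup

# Example 1 with f = {a, c, d}, g = {c, d, t}, h = {d, k, l}
facets = {"f": {"a", "c", "d"}, "g": {"c", "d", "t"}, "h": {"d", "k", "l"}}
cap, excl, cup = shape_versions(facets)
print(cap[frozenset("fg")], excl[frozenset("h")], cup[frozenset("fgh")])  # 2 2 6
```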
Call an \(F\)-shape \(x\) _nonnegative_ if \(x_{A}^{!}\geq 0\) for every \(A\subseteq F\) and \(k\)_-pure_ (_pure_) if \(x_{\{f\}}^{\cap}=k\) for every \(f\in F\). In the latter case write \(\bar{x}=k\). Note that the shape of any simplicial complex is nonnegative and the shape is \(k\)-pure exactly if the complex is \((k-1)\)-pure. Given \(F\)-shapes \(z\) and \(x\) we say that \(z\leq x\) if \(x-z\) is nonnegative. A key quantity for our considerations is a measure of density required for the appearance of \(X\) in \(\Gamma_{n,m,p}\).

**Definition 3** (\(m\)-density of \(X\)).: For a pair of pure \(F\)-shapes \(x\) and \(w\), let \(b(x,w)=\frac{(xw)_{0}}{x_{0}+w_{0}}\). We define the \(m\)_-density of \(X\)_ as

\[b_{m}(x)=\min_{\bar{w}=m}\left\{\max_{z,v>0}\left\{b(z,v)\mid z\leq x,\,v\leq w\right\}\right\}\]

Write also \(b_{m}(X)=b_{m}(x)\) if \(X\) has shape \(x\).

These arise in the next theorem when studying whether a complex \(K\) drawn from \(\Gamma_{n,m,p}\) contains a copy of a given pure finite simplicial complex \(X\) with shape \(x\) by considering an \(m\)-pure \(F\)-set \(W\) with shape \(w\). Write \(H=H(G,X,W)\) for the set of all injective maps \(\rho:X_{0}\cup W_{0}\to VG\) for which \(\cup_{f\in F}\left[\rho(X_{\{f\}}^{\cap})\times\rho(W_{\{f\}}^{\cap})\right]\subseteq EG\). Thus if \(\rho\in H\) then \(\rho|_{X_{0}}:X\to N_{m}G\) induces an injective map of simplicial complexes. If \(G\) is drawn from \(G(n,p)\) then the log base \(n\) of the expected value of \(|H|\) is positive if \(\beta>b(x,w)\) for \(n\) sufficiently large, as the following calculation shows.

\[\lim_{n\to\infty}\log_{n}\mathbb{E}|H| =\lim_{n\to\infty}\log_{n}\binom{n}{x_{0}+w_{0}}p^{(xw)_{0}}\]
\[=x_{0}+w_{0}-\frac{b}{\beta}(x+w)_{0}=(x_{0}+w_{0})\left(1-\frac{b}{\beta}\right).\]

**Example 2.** Consider the pure \((3-1)\)-dimensional simplicial complex \(X\) of shape \(x\) with facets \(F=\{f,g,h\}\) where

\[f=\{\alpha,\gamma,\delta\},\quad g=\{\gamma,\delta,\theta\},\quad h=\{\theta,\kappa,\lambda\}\]

so \(x_{0}=6\), \(\bar{x}=3\), \(x_{F}=3\) and \(\phi_{x}=\frac{3}{2}\). The complex and the geometric realization are displayed in Figure 8.

Figure 8. The pure simplicial complex \(X\) and its geometric realization

If a copy of the complex \(X\) appears in a complex \(K=N_{2}G\) drawn from \(\Gamma_{n,2,p}\) via \(\rho:X_{0}\to VG=K_{0}\) then

* \(X\) and \(\rho X\) are \(F\)-sets with the same shape and
* for each facet \(f\in F\) the associated vertices \(\rho X^{\cup}_{\{f\}}\subseteq VG\) have at least two common neighbors in \(G\).

Choose any such pair to be \(W^{\cup}_{\{f\}}\) and call the resulting \(F\)-set \(W\) (which by construction is \(2\)-pure) a \(2\)_-witness_ to the copy \(\rho X\) of \(X\). In the example the \(\cap\) version of the shape of \(X\) is the vector:

\[x^{\cap}=(x^{\cap}_{\{f\}},x^{\cap}_{\{g\}},x^{\cap}_{\{h\}},x^{\cap}_{\{f,g\}},x^{\cap}_{\{f,h\}},x^{\cap}_{\{g,h\}},x^{\cap}_{\{f,g,h\}})=(3,3,3,2,0,1,0).\]

Here are some possibilities for the shape \(w\) of a \(2\)-witness \(W\) to a copy of \(X\):

A) If \(w_{0}\) takes its largest possible value of \(2|F|=6\) then \(w^{\cap}_{A}=w^{!}_{A}=0\) for every \(A\) with \(|A|\geq 2\) and

\[w^{\cap}=(2,2,2,0,0,0,0).\]

This extreme case appears later as the shape \(r\). Each element of \(W_{0}\) is connected to the \(3\) vertices of the face of \(X\) over which it lies in Figure 9.
The density of the associated union of three complete bipartite graphs is

\[b(x,w)=\frac{(xw)_{0}}{x_{0}+w_{0}}=\frac{18}{6+6}=\frac{3}{2}.\]

Figure 9. First witness to \(X\) with \((wx)_{0}=18\) edges

B) One possibility with \(w_{0}=5\) has each element of \(W_{0}\) connected to the vertices of the face of \(X\) over which it lies in Figure 10 and

\[w^{\cap}=(2,2,2,1,0,0,0).\]

The density of this associated union of three complete bipartite graphs is

\[b(x,w)=\frac{(xw)_{0}}{x_{0}+w_{0}}=\frac{16}{6+5}=\frac{16}{11}.\]

Figure 10. Second witness to \(X\) with \((wx)_{0}=16\) edges

C) If \(w_{0}=2\), which is the smallest possible value, then each element of \(W_{0}\) is connected to every vertex of \(X\) as in Figure 11 and

\[w^{\cap}=(2,2,2,2,2,2,2).\]

The density of this associated complete bipartite graph is

\[b(x,w)=\frac{(xw)_{0}}{x_{0}+w_{0}}=\frac{12}{6+2}=\frac{3}{2}.\]

Figure 11. Third witness to \(X\) with \((wx)_{0}=12\) edges

**Theorem 2**.: _If \(X\) is a finite simplicial complex and \(m\geq 1\) then the \(m\)-density of \(X\) is a threshold for the appearance of \(X\) in \(\Gamma_{m}\)._

Proof.: Write \(F=X_{\mathcal{F}}\). For an \(m\)-pure \(F\)-set \(W\) with shape \(w\), by Theorem 5.3 of [5] above, the formula

\[\max_{Z\subseteq X,V\subseteq W}\,\frac{(zv)_{0}}{z_{0}+v_{0}}\]

gives a threshold for \(H(G,X,W)\) to be nonempty in \(\Gamma_{m}\). The threshold for the appearance of \(X\) then only requires selecting the \(W\) for which this is minimal.

Figures 12-14 display the average number of copies of \(X\) from Examples 1 and 2 observed in 10 draws from \(\Gamma_{n,m,p}\) using various values of the relevant parameters.

Figure 14. Average number of copies of \(X\) from Example 1 in \(\Gamma_{n,m,p}\), where \(m=4\), \(b\approx 2\), \(\beta=2.2\), and \(n\) in the range 50–130

Write \(r\) for the \(m\)-pure \(F\)-shape with \(r_{A}^{\cap}=0\) for every \(A\subseteq F\) with \(|A|\geq 2\), so \(\bar{r}=r_{\{f\}}^{\cap}=m\) and \(r_{0}=m|F|\), and note that if \(x\) is also a pure \(F\)-shape then

\[b(x,r)=\frac{(xr)_{0}}{x_{0}+r_{0}}=\frac{\bar{x}}{\frac{x_{0}}{m|F|}+1}\]

**Lemma 3**.: _If \(X\) is a finite \((k-1)\)-pure simplicial complex with \(\phi\) facets and \(m>kx_{0}\phi\) then any \(m\)-pure \(X_{\mathcal{F}}\)-shape \(w\neq r\) has \(b(x,w)>b(x,r)\)._

Proof.: Write \(F=X_{\mathcal{F}}\) and without loss of generality, assume that \(\phi\geq 2\). Since \(w\neq r\) and \(r_{A}^{!}=0\) for every \(|A|\geq 2\) there is some \(A\subseteq F\) with \(|A|\geq 2\) and \(w_{A}^{!}\geq 1\). Fix such a set \(A\) and write \(v\) for the \(F\)-shape with \(v_{A}^{!}=1\), every \(a\in A\) has \(v_{\{a\}}^{!}=-1\) and otherwise \(v_{B}^{!}=0\). Hence if \(X\) is the complex from Example 1, \(v\) is the \(F\)-shape described by Figure 15.

Figure 15. The \(F\)-shape \(v\)

Hence if \(u=w-v\) is another \(m\)-pure \(F\)-shape it suffices to check that \(b(x,w)>b(x,u)\). Note that \(v_{B}^{\cap}=1\) if \(B\subseteq A\) and \(|B|\geq 2\) and \(v_{B}^{\cap}=0\) otherwise and that \(v_{0}=1-|A|\) while \((xv)_{0}=x_{A}^{\cup}-|A|k\). The last equality follows from the fact that \(v^{\cap}=(0,\ldots,0,1,1,\ldots 1)\) with zeros in the first \(|A|\) slots and ones elsewhere. Compute:
\[[b(x,w)-b(x,u)]\,[(x_{0}+w_{0})(x_{0}+u_{0})]\]
\[=(xw)_{0}(x_{0}+u_{0})-(xu)_{0}(x_{0}+w_{0})\]
\[=(xv)_{0}(x_{0}+w_{0})-(xw)_{0}v_{0}\]
\[=(x_{A}^{\cup}-|A|k)(x_{0}+w_{0})+(xw)_{0}(|A|-1)\]
\[=(x_{A}^{\cup}-k)w_{0}+(|A|-1)[(xw)_{0}-kw_{0}]-(|A|k-x_{A}^{\cup})x_{0}\]
\[>m+0-\phi kx_{0}\]
\[\geq 0.\]

For the strict inequality, \(x_{A}^{\cup}\) is the number of vertices in a union of at least two \((k-1)\)-faces and hence at least \(k+1\). Since \(w\) is \(m\)-pure, \(w_{0}\) is at least \(m\), so the first term is also. The second term is positive since \((xw)_{0}\) is the number of edges connecting a witness of shape \(w\) to a copy of \(X\) and \(w_{0}\) is the number of vertices in the witness, each of which is contained in at least \(k\) edges, and since \(w\neq r\) some vertex is contained in more than \(k\) edges. For the third term, \(x_{A}^{\cup}\) is nonnegative so discard it, and \(|A|\) is at most \(\phi\).

**Lemma 4**.: _The property of having every size \(k\) subset of the vertices as a face has \(k\) as a threshold in \(\Gamma_{m}\)._

Proof.: Write \(\Delta\) for the \((k-1)\)-pure simplicial complex which is just a single simplex and \(\Delta_{0}=[k]\). Consider a complex \(K\) drawn from \(\Gamma_{n,m,p}\) with \(p=n^{\frac{-1}{b}}\). Consider the number of simplex witnesses \(N=|\{\rho\in Z_{K}\Delta|(\forall i\leq k)\rho i=i\}|\) and compute

\[\log_{n}\mathbb{E}(N) \leq\log_{n}\left[\binom{n}{m}p^{mk}\right]\leq\log_{n}\left[n^{m}n^{\frac{-1}{b}mk}\right]\]
\[\leq\log_{n}\left[n^{m-\frac{1}{b}mk}\right]\leq m\left(1-\frac{k}{b}\right).\]

The last expression is less than zero for \(b<k\), and hence the expected number itself has limit zero as \(n\) goes to infinity. We then apply the First Moment Method (see [5], Lemma 22.2): If \(X\) is a nonnegative integer valued random variable, then

\[\mathbb{P}(X>0)\leq\mathbb{E}X.\]

We conclude that the probability that a given set of \(k\) vertices is a face is aas equal to zero, giving one of the threshold directions.

For the other direction take \(b>k\). For each \(f\in\binom{[n]}{k}\) the distribution of the number of common neighbors is a binomial variable \(X_{f}\sim B_{n-k}\) with probability \(p^{k}=n^{\frac{-k}{b}}\) and \(n-k\) samples. If \(f\cap g=\emptyset\) then \(X_{f}\) and \(X_{g}\) are independent and they are positively correlated otherwise. Hence the probability that all \(\binom{n}{k}\) such subsets have at least \(m\) common neighbors is at least the \(\binom{n}{k}\) power of \(\mathbb{P}(B_{n-k}\geq m)\). By the following argument this is aas equal to one.

We apply the following version of the Bernstein-Chernoff bound (see A. Klenke [8], Exercise 5.2.1, pg. 110): If \(X_{1},\ldots,X_{n}\) are i.i.d. Bernoulli variables, and \(S_{n}=X_{1}+\cdots+X_{n}\) with \(\mu=\mathbb{E}S_{n}\), then for any \(\delta\)

\[\mathbb{P}[S_{n}\leq(1-\delta)\mu]\leq\exp\left(-\frac{\delta^{2}\mu}{2}\right)\]

Note that \(\mu_{n}=\mathbb{E}(B_{n-k})=(n-k)n^{-\frac{k}{b}}=n^{1-\frac{k}{b}}-kn^{-\frac{k}{b}}\). Choose \(\delta_{n}\) so that \((1-\delta_{n})\mu_{n}=m-1\).
The probability \(P_{n}\) that all \(k\)-element subsets have at least \(m\) common neighbors is bounded from below as follows:

\[P_{n} \geq\left(1-\exp\left(-\frac{\delta_{n}^{2}\;\mu_{n}}{2}\right)\right)^{\binom{n}{k}}\]
\[\geq 1-\binom{n}{k}\exp\left(-\frac{\delta_{n}^{2}\;\mu_{n}}{2}\right)\]
\[\geq 1-n^{k}\exp\left(-\frac{\delta_{n}^{2}\;\mu_{n}}{2}\right)\]
\[\geq 1-\exp\left(k\ln n-\left[1-\frac{m}{\mu_{n}}+\frac{1}{\mu_{n}}\right]^{2}\frac{\mu_{n}}{2}\right)\]

The last bound has limit \(1\) since

\[\lim_{n\to\infty}\frac{\mu_{n}}{\ln n}=\infty\]

**Lemma 5**.: _The property of having some size \(k\) subset of the vertices as a face has \(\frac{mk}{m+k}\) as a threshold in \(\Gamma_{m}\)._

Proof.: First consider the case \(b<\frac{mk}{m+k}\). If \(K=N_{m}G\) is drawn from \(\Gamma_{n,m,p}\) with \(p\) as above write \(M=z_{G}K_{m,k}\) for the number of \(m\)-witnesses to \((k-1)\)-simplices and compute

\[\log_{n}\mathbb{E}(M) =\log_{n}\left[\binom{n}{k}\binom{n-k}{m}p^{mk}\right]\]
\[=\log_{n}\left[\frac{n!}{k!(n-k)!}\frac{(n-k)!}{(n-k-m)!m!}p^{mk}\right]\]
\[\leq\log_{n}\left[n^{k+m-\frac{1}{b}mk}\right]-\log_{n}(k!m!)\]
\[=-\varepsilon-\log_{n}(k!m!)\]

where \(\varepsilon=\frac{mk}{b}-(k+m)>0\), so \(M\) is aas zero and using the first moment method as above there are aas no faces with \(k\) vertices in \(K\).

Next consider the case \(b>\frac{mk}{m+k}\). Here we apply the Second Moment Method (see e.g. Lemma 22.5 in [5]): If \(X\) is a nonnegative integer valued random variable, then

\[\mathbb{P}(X=0)\leq\frac{\operatorname{Var}X}{\mathbb{E}(X^{2})}=1-\frac{(\mathbb{E}X)^{2}}{\mathbb{E}(X^{2})}.\]

We apply this to \(M\) to obtain a lower bound on \(\mathbb{P}(M>0)=\mathbb{P}(M\geq 1)\):

\[\frac{(\mathbb{E}M)^{2}}{\mathbb{E}(M^{2})}\leq\mathbb{P}(M>0)\]

and then show that

\[\lim_{n}\frac{(\mathbb{E}M)^{2}}{\mathbb{E}(M^{2})}=1.\]

This is achieved by writing \(M^{2}=\sum_{a}M_{a}^{2}\), a finite sum with the number of terms independent of \(n\), and then computing that the limit as \(n\) grows without bound of

\[\log_{n}\frac{(\mathbb{E}M)^{2}}{\mathbb{E}(M_{a}^{2})}=2\log_{n}\mathbb{E}M-\log_{n}\mathbb{E}(M_{a}^{2}) \tag{2}\]

is zero for one choice of \(a\) and positive for each of the others. Minor modifications of the argument in the first part of the proof show that

\[\lim_{n}\log_{n}(\mathbb{E}M)^{2}=2\big{(}k+m-\frac{km}{b}\big{)}.\]

Bounds on the second term of (2) are obtained by considering the possible intersection patterns for pairs of \(k\)-sets \(K_{1}\) and \(K_{2}\) in \(\binom{[n]}{k}\) with \(m\)-witnesses \(M_{1}\) and \(M_{2}\) in \(\binom{[n]}{m}\), indexed by four nonnegative parameters

\[a_{kk}=|K_{1}\cap K_{2}|,\qquad a_{kk}+a_{km}\leq k,\]
\[a_{km}=|K_{1}\cap M_{2}|,\qquad a_{kk}+a_{mk}\leq k,\]
\[a_{mk}=|K_{2}\cap M_{1}|,\qquad a_{mm}+a_{km}\leq m,\]
\[a_{mm}=|M_{1}\cap M_{2}|,\qquad a_{mm}+a_{mk}\leq m.\]

It is then possible to compute \(\mathbb{E}(M^{2})\) as a sum over the possible \(a_{..}\) values and check that Expression (2) is positive. Specifically, \(\mathbb{E}M^{2}\) is the sum over finitely many quadruples \(a=\{a_{..}\}\) of \(\mathbb{E}M^{2}_{a}\) and \(\lim_{n}\log_{n}\mathbb{E}M^{2}_{a}=2(k+m-\frac{km}{b})-(a_{kk}+a_{mm}+a_{mk}+a_{km}-\frac{a_{kk}a_{mm}+a_{mk}a_{km}}{b})\). Since the number of choices for \(a\) is a function of \(k\) and \(m\) independent of \(n\), it suffices to check that \((a_{kk}+a_{mm}+a_{mk}+a_{km}-\frac{a_{kk}a_{mm}+a_{mk}a_{km}}{b})>0\) if \(a\neq 0\), which is an easy check if \(b>\frac{mk}{m+k}\).
Specifically, we can apply 1-variable calculus to the function \(f(x)=x+y-\frac{xy}{b}\), where \(y\) is held fixed in \([0,m]\) and \(x\) ranges over \([0,k]\), to see that the function takes positive values on \((0,k)\). It follows that the order of magnitude (as a power of \(n\)) of the expectation of \(M^{2}\) does not exceed that of the square of the expectation of \(M\). Hence the limit of Expression (2) is equal to zero and aas \(M>0\). **Corollary 1**.: _If \(k<\sqrt{2m+1}\) there is an interval of \(\beta\) values for which \(\Gamma_{n,m,p}\) with \(p=n^{\frac{-1}{\beta}}\) is aas in \(Y_{n,k-1}\) but they are not all the same, while if \(k^{2}+k<m\) and \(\max\{k,\frac{mk}{m+k}\}<\beta<\min\{k+1,\frac{mk+m}{m+k+1}\}\) then all such complexes are aas always the complex with every set of vertices of size \(k\) a face and none of size \(k+1\)._ **Theorem 3**.: _If \(X\) is a finite \((k-1)\)-pure simplicial complex with \(\phi\) facets, \(m>kx_{0}\phi\) and \(k>\beta>\frac{km\phi}{x_{0}+m\phi}\) then with \(p=n^{\frac{-1}{\beta}}\) and \(q=n^{m(1-\frac{k}{\beta})}\) we have \(\lim_{n\to\infty}\frac{\mathbb{E}_{K\in\Gamma_{n,m,p}}(z_{K}X)}{\mathbb{E}_{K\in Y_{n,k,q}}(z_{K}X)}=1\)._ Write \(R\) for the extreme \(m\)-pure \(F\)-set of shape \(r\) with every \(r_{A}^{\cap}=0\) if \(A\in\binom{F}{2}\). Proof.: Write \(F\) for the facets of \(X\) and let \(a=1-\frac{k}{\beta}\). Note that \[\lim_{n\to\infty}\log_{n}\mathbb{E}_{K\in Y_{n,k,q}}(z_{K}X) =\lim_{n\to\infty}\log_{n}\binom{n}{x_{0}}q^{\phi}\] \[=\lim_{n\to\infty}\log_{n}\binom{n}{x_{0}}n^{m\phi a}=x_{0}+am\phi\] For the lower bound there are enough copies of \(X\) with witnesses of shape \(r\). If \(G\) is a graph, write \[M(G,X)= \{\rho\in H(G,R,X)|\] \[\forall(f\in F,g\in VG-\rho R_{\{f\}}^{\cup})\exists(h\in\rho X_{\{f\}}^{\cup},(g,h)\not\in EG)\}\] for the minimal \(R\)-witnesses to \(X\) in \(N_{m}G\) and \(m(G,X)=|M(G,X)|\). Note that if \(\rho,\rho^{\prime}\in M(G,X)\) with \(\rho X_{0}=\rho^{\prime}X_{0}\) are witnesses to the same copy of \(X\) up to symmetry and \(f\in F\), then the definition of \(M\) implies that \(\rho R_{\{f\}}^{\cup}=\rho^{\prime}R_{\{f\}}^{\cup}\). Thus \(\rho\) and \(\rho^{\prime}\) differ by one of at most \(c=(m!k!)^{\phi}\phi!\) automorphisms and \(m(G,X)\leq cz_{N_{m}G}X\) with \(c\) independent of \(|VG|\). In the Erdos-Renyi setting \(|VG|=n\), so for any of the \(\frac{n!}{(n-x_{0}-m\phi)!}\sim n^{x_{0}+m\phi}\) injective maps \(\rho:R_{0}\cup X_{0}\to VG\) we have \[\mathbb{P}_{G\in G(n,p)}(\rho\in M(G,X))=p^{(rx)_{0}}(1-p^{k})^{(n-x_{0}-m\phi)\phi}\] Since \(a=1-\frac{k}{\beta}<0\) by the choice of \(\beta\), after some simplification this gives \[\lim_{n\to\infty}\log_{n}\mathbb{E}_{K\in\Gamma_{n,m,p}}(z_{K}X) \geq\lim_{n\to\infty}x_{0}+m\phi-\frac{mk\phi}{\beta}-2\phi\ln^{-1}(n)n^{a}\] \[=x_{0}+am\phi.\] The middle inequality uses the approximation \(1-c\geq e^{-2c}\) if \(c<\frac{3}{4}\), with \(c=p^{k}=n^{\frac{-k}{\beta}}\). For the upper bound note that the number of \(m\)-pure \(F\)-shapes \(w\) is bounded independent of \(n\) (by \((m+1)^{2^{\phi}}\) for instance); let \(\Omega\) be the finite set of all such shapes.
First, we will establish that \[w_{0}-\frac{(xw)_{0}}{\beta}<am\phi\] Note that \[w_{0}-\frac{(xw)_{0}}{\beta}<\left(1-\frac{k}{\beta}\right)m\phi\iff\frac{km\phi}{\beta}-\frac{(xw)_{0}}{\beta}<m\phi-w_{0}\] By our assumptions on \(\beta\), we have \[\frac{km\phi-(xw)_{0}}{\beta}<\frac{km\phi-(xw)_{0}}{\frac{km\phi}{x_{0}+m\phi}}\] so it is enough to establish that \[\frac{km\phi-(xw)_{0}}{\frac{km\phi}{x_{0}+m\phi}}<m\phi-w_{0}\] The following inequalities are equivalent: \[[km\phi-(xw)_{0}][x_{0}+m\phi] <km\phi(m\phi-w_{0})\] \[x_{0}km\phi-x_{0}(xw)_{0}-m\phi(xw)_{0} <-w_{0}km\phi\] \[(x_{0}+w_{0})km\phi <(x_{0}+m\phi)(xw)_{0}\] \[b(x,r)=\frac{km\phi}{(x_{0}+m\phi)} <\frac{(xw)_{0}}{(x_{0}+w_{0})}=b(x,w)\] By Lemma 3, if \(r\neq w\) then \(b(x,r)<b(x,w)\), hence the final inequality holds. Since the expected number \(\mathbb{E}_{G\in G(n,p)}(h(G,w,X))\) of copies of \(X\) in \(N_{m}G\) together with a witness of shape \(w\) is approximately \(n^{x_{0}+w_{0}-\frac{(xw)_{0}}{\beta}}\), we have \[\lim_{n\to\infty}\log_{n}\mathbb{E}_{K\in\Gamma_{n,m,p}}(z_{K}X) =\lim_{n\to\infty}\log_{n}\sum_{w\in\Omega}\mathbb{E}_{G\in G(n,p)}(h(G,w,X))\] \[=\lim_{n\to\infty}\log_{n}\sum_{w\in\Omega}n^{x_{0}+w_{0}-\frac{(xw)_{0}}{\beta}}\] \[\leq\lim_{n\to\infty}\log_{n}\sum_{w\in\Omega}n^{x_{0}+am\phi}\] \[=\lim_{n\to\infty}\log_{n}\left(|\Omega|\cdot n^{x_{0}+am\phi}\right)\] \[=x_{0}+am\phi\] **Conjecture 1**.: _For every \(k\) there is a sequence \(n_{m}\) for which the total variation distance between the \(\Gamma_{n,m,p}\) and \(Y_{n,k,q}\) distributions tends to zero as \(m\) tends to infinity if \(n=n_{m}\), \(\beta=k-\frac{1}{n_{m}}\), \(p=n^{\frac{-1}{\beta}}\) and \(q=n^{\frac{-km}{\beta}}\)._ **Conjecture 2**.: _There is a choice of \(m\), \(\beta\) and a positive \(k\)-pure shape \(x\) for which if \(p=n^{\frac{-1}{\beta}}\) then the support of \(\Gamma_{n,m,p}\) is aas in \(Y_{n,k}\) but if also \(q=n^{\frac{-km}{\beta}}\) then \(\lim_{n\to\infty}\frac{\mathbb{E}_{K\in\Gamma_{n,m,p}}\left(z_{K}x\right)}{\mathbb{E}_{K\in Y_{n,k,q}}\left(z_{K}x\right)}\neq 1\)._ In an effort to find an example for the second conjecture, or to disprove it, consider the following reduction. If \(W\) is an \(m\)-witness for a \((k-1)\)-pure complex \(X\) write \[\bar{w} =m,\qquad\phi =x_{F},\] \[\bar{x} =k,\qquad\phi_{x} =\frac{\phi\bar{x}}{x_{0}}\geq 1,\] \[x_{w} =\frac{x_{0}\bar{w}}{w_{0}\bar{x}},\qquad\phi_{w} =\frac{\phi\bar{w}}{w_{0}}\geq 1,\] \[\pi_{x}^{w} =\frac{(xw)_{0}}{x_{0}\bar{w}}\geq 1,\qquad\pi_{w}^{x} =\frac{(xw)_{0}}{w_{0}\bar{x}}\geq 1,\] \[b =\frac{(xw)_{0}}{x_{0}+w_{0}}=\frac{\bar{x}\bar{w}}{\bar{x}(\pi_{x}^{w})^{-1}+\bar{w}(\pi_{w}^{x})^{-1}}.\] The above lemmas imply that there is a choice of \(\beta\) in the conjecture if \[\bar{x}-1<b \qquad\text{(so all $k-2$ faces occur aas)},\] \[b<\frac{\bar{w}(\bar{x}+1)}{\bar{w}+\bar{x}+1} \qquad\text{(so no $k$ faces occur aas) and}\] \[b<\frac{\phi\bar{x}\bar{w}}{x_{0}+\phi\bar{w}} \qquad\text{(so $X$ does not occur in $Y_{n,k,q}$)}.\] Substituting the definition of \(b\), cross-multiplying and collecting terms involving \(\bar{w}\) in these three inequalities allows them to be rewritten as \[\frac{x_{w}(\bar{x}-1)}{1+\bar{x}(\pi_{w}^{x}-1)}<\frac{\bar{w}}{\bar{x}},\] \[\frac{(\pi_{w}^{x}-x_{w})(\bar{x}+1)}{1-\bar{x}(\pi_{w}^{x}-1)}<\frac{\bar{w}}{\bar{x}},\] \[\frac{\bar{w}}{\bar{x}}<\frac{\phi_{w}-\pi_{w}^{x}}{\phi_{x}(\pi_{w}^{x}-1)}\] respectively.
Finally, pairing the first with the third inequality and the second with the third, ignoring the intervening \(\frac{\bar{w}}{\bar{x}}\), cross-multiplying again, and collecting \(\bar{x}\) terms yields the equivalent inequalities \[\bar{x} <\frac{\phi_{w}-1}{\pi_{w}^{x}-1}, \tag{3}\] \[\bar{x} <\frac{1}{\pi_{w}^{x}-1}\left(1-\phi_{w}\frac{\pi_{x}^{w}-1}{\phi_{x}-1}\right) \tag{4}\] respectively. Perhaps this formulation is easier to work with. **Example 2, continued.** Returning to the complex \(X\) of shape \(x\) with facets \(F=\{f,g,h\}\): \[f=\{\alpha,\gamma,\delta\},g=\{\gamma,\delta,\theta\},h=\{\theta,\kappa,\lambda\}\] we have \(x_{0}=6\), \(\bar{x}=3\), \(x_{F}=3\) and \(\phi_{x}=\frac{3}{2}\). For the 2-witnesses \(W\) mentioned earlier, the above parameters take the following form. A) We have \((wx)_{0}=18\), \(\phi_{w}=1\), \(\pi_{w}^{x}=1\), \(\pi_{x}^{w}=\frac{3}{2}\), and the inequalities (3) and (4) that would be needed for a counterexample do not apply since \(W=R\). In both cases the left hand side equals 3, and the right hand sides are fractions with zero in both the numerator and denominator. B) We have \((wx)_{0}=16\), \(\phi_{w}=\frac{6}{5}\), \(\pi_{w}^{x}=\frac{16}{15}\), \(\pi_{x}^{w}=\frac{4}{3}\), and the inequalities (3) and (4) that would be needed for a counterexample are almost satisfied (both turn out to be \(3<3\)). C) We have \((wx)_{0}=12\), \(\phi_{w}=3\), \(\pi_{w}^{x}=2\), \(\pi_{x}^{w}=1\), and the inequalities (3) and (4) that would be needed for a counterexample are \(3<2\) and \(3<1\) respectively. A simple case to study is that in which \(X\) has only two facets which each have \(k\) vertices and share \(\ell\) of these. In this case if \(m<\frac{\ell(2k-\ell)}{2(k-\ell)}\) then \(\Gamma_{n,m,n^{\frac{-1}{\beta}}}\) aas contains all \((k-1)\)-faces if \(\beta>\frac{(k+1)m}{k+1+m}\) and aas contains no pair of \((k-1)\)-faces which intersect in \(\ell\) vertices if \(\beta<\frac{(k+1)m}{k+1+m}\). If on the other hand \(m>\frac{\ell(2k-\ell)}{2(k-\ell)}\) there is a third phenomenon. If \(\frac{2km}{2k+2m-\ell}<\beta<\frac{(k+1)m}{k+1+m}\) then \(\Gamma\) aas contains pairs of \((k-1)\)-faces which share \(\ell\) vertices but no \(k\)-faces. In this case the expected number of such pairs is approximately \(n^{2k+2m-\ell-\frac{1}{\beta}2km}\). ## 5. Computer Simulations In this section we briefly summarize the results of computer simulations, which are consistent with the results presented in the earlier sections. Using the NetworkX and Gudhi Python libraries, we have implemented the Erdos-Renyi graphs and the \(m\)-neighbor construction. For \(n=150\) vertices, the probability \(p=0.2\), and the values of \(m\) in the set \(\{1,2,4,8,12\}\) we have obtained random complexes whose numbers of simplices are presented in the following table.
\begin{tabular}{|c|c|c|c|c|c|} \hline Number of neighbors & m=1 & m=2 & m=4 & m=8 & m=12 \\ \hline Value of \(t\) & 3 & 3 & 2 & 2 & 2 \\ \hline Number of Simplices & 11160 & 11097 & 150 & 150 & 150 \\ in dimension \(t-2\) & & & & & \\ \hline Number of Simplices & 389580 & 219430 & 9515 & 2882 & 207 \\ in dimension \(t-1\) & & & & & \\ \hline Number of Simplices & 0 & 0 & 0 & 0 & 0 \\ in dimension \(t\) & & & & & \\ \hline Ratio of simplices in & & & & & \\ dimension \(t-2\) to & 0.999 & 0.993 & 1 & 1 & 1 \\ the maximal possible & & & & & \\ \hline Ratio of simplices in & & & & & \\ dimension \(t-1\) to & 0.707 & 0.398 & 0.851 & 0.257 & 0.0185 \\ the maximal possible & & & & & \\ \hline Estimate of \(q\) & & & & & \\ via the binomial & 0.693 & 0.329 & 0.847 & 0.242 & 0.0162 \\ distribution & & & & & \\ \hline \end{tabular} ## 6. Proofs **Hoeffding's Inequalities.**_If \(B\sim\textbf{Bin}_{n,q}\) then:_ * \(\mathbb{P}(B\leq m)\leq\exp[-\frac{2}{n}(nq-m)^{2}]\) _if_ \(m<nq\) _and_ * \(\mathbb{P}(B\geq m)\leq\exp[-\frac{2}{n}(nq-m)^{2}]\) _if_ \(m>nq\)_._ Proof.: We start with the original theorem of Hoeffding: **Theorem 2** [6]: _If \(X_{1},\ldots,X_{n}\) are independent, \(a_{i}\leq X_{i}\leq b_{i}\) for \(i=1,\ldots,n\), and \(\mu=\mathbb{E}\bar{X}\) with \(\bar{X}=\frac{1}{n}\sum_{i=1}^{n}X_{i}\) then for \(t>0\)_ \[\mathbb{P}\{\bar{X}-\mu\geq t\}\leq\mathrm{e}^{-2n^{2}t^{2}/\sum_{i=1}^{n}(b_{ i}-a_{i})^{2}}.\] If we assume that the \(X_{i}\sim\mathbf{Bin}_{1,q}\) are copies of the Bernoulli random variable with success probability \(q\), then \(B=n\bar{X}\) has the binomial distribution \(B\sim\mathbf{Bin}_{n,q}\). The above inequality can be written in the form: \[\mathbb{P}\{B-nq\geq nt\}\leq\exp\left[-2nt^{2}\right].\] If \(m>n\mu=nq\) and we set \(c=\frac{m-n\mu}{n}\) then: \[\mathbb{P}(B\geq m) =\mathbb{P}(B\geq n\mu+nc)\] \[=\mathbb{P}(B-n\mu\geq nc)\] \[\leq\exp\left[-2nc^{2}\right]\] \[\leq\exp\left[-2n\left(\frac{m-n\mu}{n}\right)^{2}\right]\] \[\leq\exp\left[-\frac{2}{n}(m-nq)^{2}\right].\] This finishes the proof of the second inequality above. Next, let \(m<n\mu\). We have \[\mathbb{P}(B\leq m) =\sum_{i=0}^{m}\binom{n}{i}q^{i}(1-q)^{n-i}\qquad(\text{Set}\quad j =n-i\ )\] \[=\sum_{j=n-m}^{n}\binom{n}{n-j}q^{n-j}(1-q)^{j}\] \[=\sum_{j=n-m}^{n}\binom{n}{j}(1-q)^{j}q^{n-j}\] \[=\mathbb{P}(\hat{B}\geq n-m),\qquad\text{where}\quad\hat{B}\sim \mathbf{Bin}_{n,1-q}.\] Note that since \(m<n\mu=nq\), we have \(n-m>n-nq=n(1-q)\) which is the expected value of \(\hat{B}\). Applying the second inequality proved above, we have \[\mathbb{P}(\hat{B}\geq n-m) \leq\exp\left[-2n\left(\frac{n-m}{n}-(1-q)\right)^{2}\right]\] \[=\exp\left[-\frac{2}{n}(nq-m)^{2}\right].\] This finishes the proof of the first inequality.
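To make the simulation pipeline of Section 5 concrete, here is a minimal sketch of the Erdos-Renyi / \(m\)-neighbor construction, assuming only NetworkX. The Gudhi bookkeeping used for the actual counts is elided, and the helper name `m_neighbor_faces`, the brute-force enumeration, and the convention that a singleton is a face when it has at least \(m\) neighbors are our own choices, not part of the libraries:

```python
import networkx as nx

def m_neighbor_faces(G, m, max_size):
    """Faces of N_m(G): a vertex set S is a face when its vertices have at
    least m common neighbors in G.  This property is downward closed, so
    every face of size s extends the face obtained by deleting its largest
    vertex, which makes an incremental enumeration complete."""
    nbrs = {v: set(G[v]) for v in G}
    # size-1 faces, with their common-neighbor sets cached for reuse
    faces = {1: {frozenset([v]): nbrs[v] for v in G if len(nbrs[v]) >= m}}
    for size in range(2, max_size + 1):
        faces[size] = {}
        for f, common in faces[size - 1].items():
            for v in G:
                if v > max(f):                 # extend upward: each face found once
                    shared = common & nbrs[v]
                    if len(shared) >= m:
                        faces[size][f | {v}] = shared
    return faces

G = nx.erdos_renyi_graph(150, 0.2, seed=1)     # n and p as in the table above
for size, fs in m_neighbor_faces(G, m=4, max_size=3).items():
    print(f"simplices on {size} vertices: {len(fs)}")
```

Caching the common-neighbor set of each face makes the extension test a single set intersection, which is what keeps this brute-force version usable at the table's scale.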
2309.11114
Reconstructing lattice QCD spectral functions with stochastic pole expansion and Nevanlinna analytic continuation
The reconstruction of spectral functions from Euclidean correlation functions is a well-known, yet ill-posed inverse problem in the fields of many-body and high-energy physics. In this paper, we present a comprehensive investigation of two recently developed analytic continuation methods, namely stochastic pole expansion and Nevanlinna analytic continuation, for extracting spectral functions from mock lattice QCD data. We examine a range of Euclidean correlation functions generated by representative models, including the Breit-Wigner model, the Gaussian mixture model, the resonance-continuum model, and the bottomonium model. Our findings demonstrate that the stochastic pole expansion method, when combined with the constrained sampling algorithm and the self-adaptive sampling algorithm, successfully recovers the essential features of the spectral functions and exhibits excellent resilience to noise of input data. In contrast, the Nevanlinna analytic continuation method suffers from numerical instability, often resulting in the emergence of spurious peaks and significant oscillations in the high-energy regions of the spectral functions, even with the application of the Hardy basis function optimization algorithm.
Li Huang, Shuang Liang
2023-09-20T07:44:37Z
http://arxiv.org/abs/2309.11114v1
Reconstructing lattice QCD spectral functions with stochastic pole expansion and Nevanlinna analytic continuation ###### Abstract The reconstruction of spectral functions from Euclidean correlation functions is a well-known, yet ill-posed inverse problem in the fields of many-body and high-energy physics. In this paper, we present a comprehensive investigation of two recently developed analytic continuation methods, namely stochastic pole expansion and Nevanlinna analytic continuation, for extracting spectral functions from mock lattice QCD data. We examine a range of Euclidean correlation functions generated by representative models, including the Breit-Wigner model, the Gaussian mixture model, the resonance-continuum model, and the bottomonium model. Our findings demonstrate that the stochastic pole expansion method, when combined with the constrained sampling algorithm and the self-adaptive sampling algorithm, successfully recovers the essential features of the spectral functions and exhibits excellent resilience to noise of input data. In contrast, the Nevanlinna analytic continuation method suffers from numerical instability, often resulting in the emergence of spurious peaks and significant oscillations in the high-energy regions of the spectral functions, even with the application of the Hardy basis function optimization algorithm. ## I Introduction Lattice QCD (LQCD) is a well-established first-principles and non-perturbative approach for studying strong interactions [1; 2; 3]. It serves as a valuable tool in understanding the genesis and evolution of the quark-gluon plasma (QGP) [4] and mapping out the phase diagram of strong-interaction matter [5; 6; 7]. In LQCD, spectral functions play a vital role in scrutinizing and elucidating high-energy physical phenomena that involve quarks and gluons, such as the melting of heavy quarkonium [8; 9; 10; 11; 12; 13; 14; 15] and the transport properties [16; 17; 18] of the QGP formed through relativistic heavy-ion collisions. However, accessing the spectral functions and other dynamical properties of the QCD medium from lattice simulations remains challenging due to LQCD's typical formulation on a discrete Euclidean space-time grid [1; 2; 3]. Therefore, researchers must reconstruct the spectral functions from numerically computed Euclidean correlation functions on the lattice to understand the relevant physics and compare the theoretical results with corresponding experimental data obtained from the Relativistic Heavy Ion Collider (RHIC) and the Large Hadron Collider (LHC) [19]. Mathematically, the Euclidean correlators \(G(t)\) and the spectral functions \(\rho(s)\) are connected through a Fredholm equation of the first kind: \(G(t)=K(t,s)\otimes\rho(s)\), where \(K(t,s)\) represents a continuous kernel function and \(\otimes\) signifies convolution. Mapping the spectral functions to the Euclidean correlators is a straightforward process that can be easily accomplished using numerical integration. However, extracting spectral functions from Euclidean correlators through analytic continuation poses a formidable challenge [19]. We observe that similar inverse problems are quite common in many-body and high-energy physics [1; 20]. They are considered ill-posed. There are two main reasons for this statement. Firstly, the Euclidean correlators are evaluated at a finite number of points due to the space-time discretization in LQCD [1; 2; 3]. 
Secondly, as LQCD simulations rely on stochastic Monte Carlo sampling, the resulting Euclidean correlators are inherently noisy [21]. Even small deviations or fluctuations in the Euclidean correlators result in significant uncertainties in the spectral functions. As a result, the majority of resulting spectral functions exhibit high oscillations and lack physical significance, thereby inhibiting a reliable comparison between the theoretical spectra and experimental data. To address these issues, researchers have developed a plethora of analytic continuation approaches over the past decades. Next, we will briefly introduce some of the most commonly employed methods in LQCD simulations. _Maximum entropy method_. The maximum entropy method (MaxEnt) is perhaps the most popular analytic continuation tool, and it has dominated this field for a long time. In this method, the spectral density is interpreted as a probability distribution [22; 23; 24; 25]. The primary objective is to extract the most probable spectral density \(\rho\) from the correlation function \(G\) by maximizing the posterior probability \(\Pr[\rho|G]\). According to Bayes's theorem, \(\Pr[\rho|G]\propto\Pr[G|\rho]\Pr[\rho]\), where \(\Pr[G|\rho]\) is the likelihood function and \(\Pr[\rho]\) is the prior probability. It is important to incorporate analytical knowledge related to spectral properties in LQCD, such as positive definiteness or even the presence of pole structures within the spectra, into the probability distribution. A significant portion of the prior information could be encoded within the prior probability \(\Pr[\rho]\), which is proportional to \(\exp{(\alpha S)}\), where \(\alpha\) is a regularization parameter and \(S\) denotes entropy. It is worth emphasizing that the entropic term \(\alpha S\) is not unique and typically takes the form of the generalized Shannon-Jaynes entropy [22; 23]. Another popular alternative is the Bayesian reconstruction entropy [25; 26]. While the MaxEnt method is widely recognized for its efficiency and noise tolerance, it sometimes struggles to faithfully recover sharp, subtle, and high-frequency features within the spectral functions. _Stochastic analytic continuation_. In the past decade, the stochastic analytic continuation method (SAC) and its variants have emerged as formidable contenders to surpass the MaxEnt method [27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37]. Unlike the MaxEnt method, the SAC method treats all spectral functions equally instead of selecting the most probable one. Initially, the spectral functions are parameterized with hundreds or thousands of \(\delta\)-like functions. These parameters, such as the amplitudes and locations of the \(\delta\) functions, are then stochastically sampled at a fictitious temperature \(\Theta\) using a Boltzmann-like weight function, which essentially serves as a likelihood function [27; 28]. Finally, the gathered spectral functions are filtered and averaged. We note that Shao and Sandvik _et al._ have proven that, in a generalized thermodynamic limit (large number of degrees of freedom), the average spectrum is equivalent to the maximum entropy solution [38]. Although the SAC method supplements the MaxEnt method by enabling the resolution of subtle structures in spectra, it requires significant computational resources [32]. _Machine learning approaches._ In recent years, several machine learning aided methods have been developed to address the analytic continuation challenges in LQCD simulations.
These methods include the deep neural networks (DNN) [39; 40], radial basis functions network (RBFN) [41], entropy variational autoencoder (SVAE) [42], kernel ridge regression (KRR) [43], automatic differentiation (AD) [44], Gaussian processes regression (GPR) [45], and many others. Although these methods may offer improved performance in certain situations, their universality is not guaranteed. Furthermore, some studies have adopted supervised approaches to train machine learning network models, which incorporate prior knowledge from specific physics insights into the training sets. However, caution must be exercised as there is a risk of introducing biases in the training data. Recently, two new analytic continuation methods, namely the stochastic pole expansion (SPX) [46] and the Nevanlinna analytic continuation (NAC) [47; 48], have been proposed. The SPX method inherits the spirit of the SAC method, where the Matsubara Green's function is initially parameterized with hundreds or thousands of poles. Subsequently, the amplitudes and positions of these poles are optimized using a stochastic algorithm based on simulated annealing [49]. The SPX method is applicable to fermionic and bosonic systems. It has been extended to support analytic continuation of matrix-valued Green's functions [46]. On the other hand, similar to the Pade approximation (PA) [50; 51; 52; 53], the NAC method aims to interpolate the Matsubara data in the complex plane using some form of continued fraction expansion [47; 48]. It takes the "Nevanlinna" analytic structure of the Matsubara Green's function into consideration, ensuring that the calculated spectral functions are inherently positive and normalized. However, this method is highly sensitive to the noise level in the raw Matsubara data. With noiseless data as input, it can successfully resolve complex spectral functions across a wide energy range with unprecedented accuracy. Unfortunately, when the Matsubara data contains noise, the Nevanlinna interpolants may not exist, and the resulting spectral functions are not guaranteed to be causal. Neither the SPX method nor the NAC method has yet been employed to address the problem of analytic continuation in LQCD simulations, and it remains uncertain whether they are applicable to the analytic continuation of Euclidean correlation functions [19]. Therefore, the purpose of this study is to fill this knowledge gap and expand the potential applications of these methods. We first generate noisy Euclidean data using four representative models: the Breit-Wigner model, the Gaussian mixture model, the resonance-continuum model, and the bottomonium model. These synthetic data sets are then processed using the SPX, NAC, and MaxEnt methods. Finally, we conduct a comprehensive comparison between the calculated spectra and the exact solutions if available. The results suggest that the SPX method exhibits performance comparable or even superior to that of the commonly used MaxEnt method. On the other hand, the NAC method tends to suffer from numerical instability, even in the absence of noise in the input data. The structure of the remaining sections of this paper is as follows: Section II provides an introduction to the basic formulations of Euclidean correlation functions and offers a brief overview of the SPX and NAC methods. In Section III, we present the computational setups and then demonstrate and discuss four representative examples.
Section IV explores the robustness of the SPX method and the numerical instability of the NAC method in the presence of noisy Euclidean data. Additionally, we analyze the effects of the constrained sampling algorithm and the self-adaptive sampling algorithm on the SPX method, along with the impact of the Hardy basis function optimization algorithm on the NAC method. Finally, our findings are summarized in Section V. ## II Method ### Euclidean correlation function At finite temperature, the Euclidean correlation function \(G(\tau)\) is related to the spectral function \(\rho(\omega)\) through [19] \[G(\tau)=\int_{0}^{\infty}\mathrm{d}\omega\,\rho(\omega)\frac{\cosh\left[\omega(\tau-\beta/2)\right]}{\sinh\left(\beta\omega/2\right)}. \tag{1}\] Here \(\beta=1/T\) represents the inverse system temperature, and \(\tau\) represents the Euclidean (imaginary) time interval (\(\tau\in[0,\beta]\)). In momentum space, the expression for the Euclidean correlation function is [54] \[G(p)=\int_{0}^{\infty}\mathrm{d}\omega\,\rho(\omega)\frac{\omega}{\omega^{2}+p^{2}}, \tag{2}\] where \(p\) represents the Euclidean (Matsubara) frequency; the value of \(G(p)\) can be derived by performing a discrete Fourier transform on \(G(\tau)\). In the literature, Eq. (2) is also known as the Kallen-Lehmann (KL) spectral representation [55]. By analytic continuation, the retarded propagator \(G^{R}(\omega)\) can be obtained, enabling the extraction of the spectral function through the following expression: \[\rho(\omega)=-\frac{1}{\pi}\text{Im}G^{R}(\omega), \tag{3}\] where \(\omega=-ip\). It is important to note that \(\rho(\omega)\) must be an odd function for bosonic systems, i.e., \(\rho(-\omega)=-\rho(\omega)\). ### Stochastic pole expansion According to the textbooks of many-body physics, the Lehmann representation of the finite temperature many-body Green's functions is given by the following formula [56; 20]: \[G(z)=\frac{1}{Z}\sum_{m,n}\frac{\langle n|d|m\rangle\langle m|d^{\dagger}|n\rangle}{z+E_{n}-E_{m}}\left(e^{-\beta E_{n}}\pm e^{-\beta E_{m}}\right). \tag{4}\] In this expression, \(d\) and \(d^{\dagger}\) represent the annihilation and creation operators, respectively. \(|n\rangle\) and \(|m\rangle\) are the eigenstates of the Hamiltonian \(\hat{H}\), and \(E_{n}\) and \(E_{m}\) are the corresponding eigenvalues. \(Z=\sum_{n}\exp(-\beta E_{n})\) is the partition function. \(z\in\mathbb{C}\backslash\mathbb{R}\). The positive sign corresponds to fermions, while the negative sign corresponds to bosons. By introducing \(A_{mn}=\langle n|d|m\rangle\langle m|d^{\dagger}|n\rangle\left(e^{-\beta E_{n}}\pm e^{-\beta E_{m}}\right)/Z\) and \(P_{mn}=E_{m}-E_{n}\), Eq. (4) can be simplified as: \[G(z)=\sum_{m,n}\frac{A_{mn}}{z-P_{mn}}. \tag{5}\] It is evident that only terms with \(A_{mn}\neq 0\) contribute. The indices \(m\) and \(n\) can be combined into a single index \(\gamma\), resulting in the following expression: \[G(z)=\sum_{\gamma=1}^{N_{p}}\frac{A_{\gamma}}{z-P_{\gamma}}. \tag{6}\] Eq. (6) is referred to as the _pole representation_ of the many-body Green's functions [20]. In this representation, \(N_{p}\) denotes the number of poles, and \(A_{\gamma}\) and \(P_{\gamma}\) denote the amplitude and position of the \(\gamma\)-th pole. For the Euclidean correlation function in momentum space, the pole representation can be reformulated as: \[G(p)=\sum_{\gamma=1}^{N_{p}}\Xi(p,P_{\gamma})\tilde{A}_{\gamma}.
\tag{7}\] Here, \(\Xi\) represents the kernel matrix, which is calculated using the following equation: \[\Xi(p,\omega)=-\frac{G(0)\omega}{p-\omega}. \tag{8}\] \(\tilde{A}_{\gamma}\) is the renormalized amplitude of the \(\gamma\)-th pole, given by: \[\tilde{A}_{\gamma}=-\frac{A_{\gamma}}{G(0)P_{\gamma}}. \tag{9}\] It can be easily proven that \(\tilde{A}_{\gamma}\) and \(P_{\gamma}\) must satisfy the following constraints: \[\forall\gamma,\ 0\leq\tilde{A}_{\gamma}\leq 1,\ \sum_{\gamma}\tilde{A}_{\gamma}=1,\ \text{and}\ P_{\gamma}\in\mathbb{R}. \tag{10}\] We assume that the input Euclidean correlation function is denoted as \(\mathcal{G}(p_{n})\), and the input data consists of \(N\) frequency points. We then utilize Eq. (7) to approximate the Euclidean data. To assess the discrepancy between \(\mathcal{G}(p_{n})\) and \(G(p_{n})\), we introduce the so-called goodness-of-fit function \(\chi^{2}\). Its definition is as follows: \[\chi^{2}\left[\left\{\tilde{A}_{\gamma},P_{\gamma}\right\}_{\gamma=1}^{N_{p}}\right]=\frac{1}{N}\sum_{n=1}^{N}\left\|\mathcal{G}(p_{n})-\sum_{\gamma=1}^{N_{p}}\Xi(p_{n},P_{\gamma})\tilde{A}_{\gamma}\right\|_{F}^{2}, \tag{11}\] where \(\|\cdot\|_{F}\) represents the Frobenius norm. Hence, the objective of the analytic continuation is to solve the subsequent multivariate optimization problem: \[\underset{\{\tilde{A}_{\gamma},P_{\gamma}\}_{\gamma=1}^{N_{p}}}{\text{arg min}}\chi^{2}\left[\left\{\tilde{A}_{\gamma},P_{\gamma}\right\}_{\gamma=1}^{N_{p}}\right]. \tag{12}\] Once the optimized parameters \(N_{p}\), \(\tilde{A}_{\gamma}\), and \(P_{\gamma}\) are determined, evaluating the retarded Green's function is straightforward by substituting \(p\) with \(\omega+i0^{+}\) in Eq. (7). Additionally, the spectral function \(\rho(\omega)\) is computed using Eq. (3). It is important to note that this optimization problem [i.e., Eq. (12)] is highly non-convex. Traditional gradient-based optimization methods typically fail to identify the global minimum unless the initial solution is of high quality [57]. Therefore, in the SPX method, we employ the simulated annealing algorithm [49] to optimize the \(\tilde{A}_{\gamma}\) and \(P_{\gamma}\) parameters subject to the constraints defined by Eq. (10). For technical details regarding the possible Monte Carlo random walk rules in the configuration space \(\mathcal{C}=\{\tilde{A}_{\gamma},P_{\gamma}\}\), please refer to Ref. [46]. The advantages of the SPX method include its ability to derive approximate expressions for correlation functions and its ease of extension to support the analytic continuation of bosonic systems, two-particle Green's functions, matrix-valued Green's functions, and so on. Application to noisy Matsubara data suggests that the SPX method can accurately resolve both continuum spectra for condensed matter cases and multiple \(\delta\)-like peaks for molecule cases. Notably, it performs well in reproducing sharp high-frequency features. ### Nevanlinna analytic continuation It is well known that the retarded Green's function, denoted as \(G^{R}(\omega+i0^{+})\), and the Matsubara Green's function, denoted as \(G(i\omega_{n})\), can both be consistently represented as \(G(z)\), where \(z\in\mathbb{C}\backslash\mathbb{R}\). The NAC method utilizes the fact that the negative fermionic Green's function, denoted as \(f(z)=-G(z)\), belongs to the class of Nevanlinna functions.
By applying the invertible Mobius transform \(h(z)=(z-i)/(z+i)\) to the function value of \(f(z)\), the Nevanlinna function is mapped in a one-to-one fashion to a contractive function \(\theta(z)=h[f(z)]\). This contractive function \(\theta(z)\) can be expressed in the form of a continued fraction expansion, and an iterative algorithm can be constructed accordingly [47]. The recursion relation between two steps \(\theta_{j}(z)\) and \(\theta_{j+1}(z)\) is given by: \[\theta_{j}(z)=\frac{\theta_{j+1}(z)+\gamma_{j}}{\gamma_{j}^{*}h_{j}(z)\theta_{j+1}(z)+1}. \tag{13}\] In this equation, \(h_{j}(z)=(z-Y_{j})/(z+Y_{j})\), \(Y_{j}=i\omega_{j}\) represents the \(j\)-th Matsubara frequency used, and \(\gamma_{j}=\theta_{j}(Y_{j})\) represents the function value of the \(j\)-th contractive function at the point \(Y_{j}\). The final expression of the recursive function \(\theta(z)\) can be written as [58]: \[\theta(z)\equiv\theta[z;\theta_{N_{s}+1}(z)]=\frac{a(z)\theta_{N_{s}+1}(z)+b(z)}{c(z)\theta_{N_{s}+1}(z)+d(z)}, \tag{14}\] where \[\begin{pmatrix}a(z)&b(z)\\ c(z)&d(z)\end{pmatrix}=\prod_{j=1}^{N_{s}}\begin{pmatrix}h_{j}(z)&\gamma_{j}\\ \gamma_{j}^{*}h_{j}(z)&1\end{pmatrix}, \tag{15}\] with \(j\) increasing from left to right. Here \(N_{s}\) is the total number of iteration steps, which equals the number of data points. After obtaining \(\theta(z)\), one can immediately get the Green's function by an inverse Mobius transform as \(G(z)=-h^{-1}[\theta(z)]\). Note that the Pick criterion [59] should be fulfilled for the existence of the Nevanlinna interpolation. Additionally, it is worth noting that there is flexibility in choosing \(\theta_{N_{s}+1}(z)\), which can be used to select the most desirable spectral function. In Reference [47], \(\theta_{N_{s}+1}(z)\) is expanded in the Hardy basis and chosen in such a way that it achieves the smoothest possible spectral function [60]. The loss function employed in this selection process is given by: \[\mathcal{L}=\left|1-\int\frac{\mathrm{d}\omega}{2\pi}\rho_{\theta_{N_{s}+1}}(\omega)\right|^{2}+\lambda\left\|\frac{\mathrm{d}^{2}\rho_{\theta_{N_{s}+1}}(\omega)}{\mathrm{d}\omega^{2}}\right\|_{F}^{2}. \tag{16}\] This loss function consists of two terms. The first term enforces the proper sum rule, while the second term incorporates the smoothness condition. \(\lambda\) is an adjustable parameter. By preserving the "Nevanlinna" analytic structure of Green's functions, the NAC method automatically generates positive and normalized spectral functions [47]. However, it is important to emphasize that the method is sensitive to noise, and either a large number of data points \(N\) or a high Hardy order \(H\) can potentially lead to numerical instabilities. Although the NAC method has been extended to support the analytic continuation of matrix-valued Green's functions [48], it cannot be directly applied to bosonic systems in its original formalism. Quite recently, Nogaki _et al._ suggested an ingenious trick to work around this limitation. Their basic idea is to introduce an auxiliary fermionic function [61]. Let us start with a bosonic Green's function \(G(\tau)\) that satisfies the periodic condition \(G(\tau+\beta)=G(\tau)\).
One can construct an artificial anti-periodic fermionic Green's function \(\tilde{G}(\tau)\) as follows: \[\tilde{G}(\tau)=\begin{cases}G(\tau)&(0<\tau<\beta)\\ -G(\tau+\beta)&(-\beta<\tau<0)\end{cases} \tag{17}\] Clearly, this auxiliary fermionic Green's function exhibits the same value as the bosonic Green's function in the range \(0<\tau<\beta\). It is easy to prove that the relation between the bosonic spectral function \(\rho(\omega)\) and the auxiliary fermionic spectral function \(\tilde{\rho}(\omega)\) is as follows: \[\rho(\omega)=\tilde{\rho}(\omega)\tanh(\beta\omega/2). \tag{18}\] Furthermore, the sum rule for \(\tilde{\rho}(\omega)\) is given by: \[\int_{-\infty}^{\infty}\mathrm{d}\omega\ \tilde{\rho}(\omega)=G(\tau=0^{+})+G(\tau=\beta-0^{-}). \tag{19}\] Given \(\tilde{G}(\tau)\), it is easy to construct \(\tilde{G}(iv_{n})\) via direct Fourier transformation, where \(v_{n}=(2n+1)\pi/\beta\) are the fermionic Matsubara frequencies. Since \[\tilde{G}(iv_{n})=\int_{-\infty}^{\infty}\mathrm{d}\omega\ \frac{\tilde{\rho}(\omega)}{iv_{n}-\omega}, \tag{20}\] one can perform analytic continuation for \(\tilde{G}(iv_{n})\) via the standard NAC method to get \(\tilde{\rho}(\omega)\). The bosonic spectral function \(\rho(\omega)\) can then be derived according to Eq. (18). This procedure has been outlined in Figure 1 in Ref. [61]. ## III Benchmarks ### Computational setups To benchmark the SPX and NAC methods, we consider four typical models in the present investigation, namely the Breit-Wigner model, the Gaussian mixture model, the resonance-continuum model, and the bottomonium model. The first three models provide analytic formulas for generating the exact spectral functions, denoted as \(\rho(\omega)\). For the bottomonium model, we take the "exact" spectral function from Ref. [62] as input. Using these spectral functions, one can create clean Euclidean data, denoted as \(\mathcal{G}_{\mathrm{clean}}\), using Eq. (2) [63]. To mimic the noise present in LQCD simulations [21], we manually add multiplicative Gaussian noise to the clean Euclidean data. The formula we use is as follows: \[\mathcal{G}_{\mathrm{noisy}}=\mathcal{G}_{\mathrm{clean}}[1+\delta N_{\mathrm{C}}(0,1)], \tag{21}\] where \(\delta\) measures the noise level of the input data and \(N_{\mathrm{C}}(0,1)\) represents complex-valued normal Gaussian noise [64]. In our subsequent analysis, unless explicitly stated otherwise, we set \(\delta=10^{-4}\) for the SPX method and \(\delta=0.0\) for the NAC method. We then feed the noisy Euclidean data, denoted as \(\mathcal{G}_{\mathrm{noisy}}\), into the SPX and NAC codes to extract the spectral functions. Finally, we compare the calculated spectral functions with the corresponding exact solutions. The SPX method has been implemented within the ACFlow package [65]. In this study, the number of poles (\(N_{p}\)) is fixed to 2000. For each test, we perform a total of \(2\times 10^{3}\) individual SPX runs. Each SPX run consists of \(2\times 10^{5}\) Monte Carlo sampling steps. The spectral functions generated in all SPX runs are gathered and the corresponding \(\chi^{2}\) values are recorded. Denoting the mean value of the collected \(\chi^{2}\) by \(\langle\chi^{2}\rangle\), we retain only the solutions whose \(\chi^{2}\) values are smaller than \(\langle\chi^{2}\rangle/\alpha_{\mathrm{good}}\) and use them to calculate the averaged spectrum. Here, \(\alpha_{\mathrm{good}}\) is an adjustable parameter (\(\alpha_{\mathrm{good}}\geq 1.0\)). Its optimal value is about 1.2.
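As an illustration of the mock-data pipeline described above, the following minimal sketch builds \(\mathcal{G}_{\mathrm{clean}}\) from a single Breit-Wigner spectrum [cf. Eq. (22) below] and then applies the multiplicative noise of Eq. (21). The bosonic Matsubara grid \(p_{n}=2\pi nT\), the 20 GeV frequency cutoff, and the normalization of the complex Gaussian noise are our assumptions for this sketch, not internals of the ACFlow or Nevanlinna.jl codes:

```python
import numpy as np

def trapezoid(y, x):
    # plain trapezoid rule, written out to stay NumPy-version agnostic
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Breit-Wigner spectral function with the 1BW parameters (GeV)
A, M, Gamma = 1.0, 2.0, 0.5
rho = lambda w: 4*A*Gamma*w / ((M**2 + Gamma**2 - w**2)**2 + 4*Gamma**2*w**2)

T = 0.02                               # temperature (GeV)
p = 2.0 * np.pi * T * np.arange(50)    # bosonic Matsubara frequencies (assumed convention)
w = np.linspace(1e-6, 20.0, 200001)    # omega grid; the cutoff is an assumption
G_clean = np.array([trapezoid(rho(w) * w / (w**2 + pn**2), w) for pn in p])  # Eq. (2)

rng = np.random.default_rng(42)
delta = 1e-4                           # noise level used for the SPX tests
noise = (rng.standard_normal(p.size) + 1j * rng.standard_normal(p.size)) / np.sqrt(2.0)
G_noisy = G_clean * (1.0 + delta * noise)  # Eq. (21)
```

Note that the integrand is regular at \(\omega\to 0\) because the Breit-Wigner density vanishes linearly there, so a simple quadrature on a dense grid suffices.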
Regarding the NAC method, we utilize another open-source toolkit, namely the Nevanlinna.jl package [66]. In the present calculations, the lowest 100 Matsubara frequencies are kept as input. In order to avoid breaking the Pick criterion [59], the optimal number of data points, denoted as \(N_{\text{opt}}\), is determined automatically through a "Pick Selection" procedure in the algorithm. A typical value for \(N_{\text{opt}}\) is 10. During the simulations, the Hardy basis function optimization algorithm is always enabled. The highest Hardy order, denoted as \(H_{\text{max}}\), is set to be 50. To ensure numerical stability, the cutoff value of the Hardy order, denoted as \(H_{\text{cut}}\), should be determined automatically on a case-by-case basis. The \(\lambda\) parameter, as seen in Eq. (16), is set to be \(10^{-6}\). In addition to the SPX and NAC methods, the classic MaxEnt method [24] is also employed to yield analytic continuation results for comparison. We again utilize the ACFlow package, which provides a state-of-the-art implementation of the MaxEnt method [65]. For the MaxEnt simulations, we usually choose a flat default model and calibrate the regularization parameter \(\alpha\) by using the \(\chi^{2}\)-kink algorithm [67]. The starting and ending values of \(\alpha\) are \(10^{16}\) and \(10^{1}\), respectively. The ratio between two successive \(\alpha\) parameters, i.e. \(\alpha_{i}/\alpha_{i+1}\), is 10. For the prior probability (the entropic term), we adopt the generalized Shannon-Jaynes entropy [22; 23], but the Bayesian reconstruction entropy [25; 26] is also examined. The resulting spectra are in good agreement with each other. Thus, we only present results obtained with the generalized Shannon-Jaynes entropy in the following discussions. ### Breit-Wigner model The Breit-Wigner spectral function, obtained from a parameterization derived directly from one-loop perturbative quantum field theory [19; 39], is expressed as follows: \[\rho(\omega)=\frac{4A\Gamma\omega}{(M^{2}+\Gamma^{2}-\omega^{2})^{2}+4\Gamma^{2}\omega^{2}}, \tag{22}\] where \(M\) represents the mass of the corresponding state, \(\Gamma\) is the width, and \(A\) is a positive constant. We start with a superposed collection of Breit-Wigner peaks. Specifically, two typical scenarios are investigated in the present work: (1) Single Breit-Wigner peak (dubbed 1BW model) with \(M=2.0\) GeV, \(\Gamma=0.5\) GeV, and \(A=1.0\) GeV. (2) Two Breit-Wigner peaks (dubbed 2BW model) with \(M_{1}=1.0\) GeV, \(M_{2}=3.0\) GeV, \(\Gamma_{1}=\Gamma_{2}=0.5\) GeV, \(A_{1}=0.8\) GeV, and \(A_{2}=1.0\) GeV. The system temperature \(T\) is fixed to be 0.02 GeV. The synthetic Euclidean data (\(\mathcal{G}_{\text{noisy}}\)) consist of 50 frequency points for both the SPX method and the MaxEnt method, and 100 frequency points for the NAC method. The analytic continuation results obtained by the SPX, NAC, and MaxEnt methods are illustrated in Figure 1. Overall, the SPX method demonstrates better performance for the 1BW model. It precisely captures not only the height but also the position of the Breit-Wigner peak. In comparison, the NAC and MaxEnt methods tend to overestimate the width and underestimate the height of the Breit-Wigner peak. Furthermore, the NAC method leads to an obvious oscillation phenomenon around 0.5 GeV. For the 2BW model, all three methods are able to reproduce the low-energy peak around 1.0 GeV, but encounter difficulty in resolving the high-energy peak near 3.0 GeV.
Although the SPX method successfully identifies the position of the high-energy peak, its weight is not accurately resolved. The spectrum obtained by the NAC method is quite sensitive to the \(\lambda\) parameter in this case. If \(\lambda=10^{-4}\) (the default choice of the code), the high-energy peak is shifted towards a higher energy (\(\sim 4.5\) GeV) and broadened significantly. Only when \(\lambda=10^{-6}\) is the high-energy peak well reproduced. The spectrum obtained by the MaxEnt method is also not ideal. The high-energy peak is smeared out and replaced with a shoulder-like feature. ### Gaussian mixture model Just as its name implies, the spectral function of the Gaussian mixture model [24] is a superposition of several Gaussian peaks. It can be expressed by the following equation: \[\rho(\omega)=\sum_{i}A_{i}\exp\left[-\frac{(\omega-M_{i})^{2}}{\Gamma_{i}}\right], \tag{23}\] where \(A_{i}\), \(M_{i}\), and \(\Gamma_{i}\) represent the amplitude, position, and broadening of the \(i\)-th Gaussian peak. In this example, we consider a three Gaussian peaks model. The specific values for the model parameters are as follows: \(A_{1}=1.0\) GeV, \(A_{2}=0.4\) GeV, \(A_{3}=0.2\) GeV, \(M_{1}=0.5\) GeV, \(M_{2}=2.5\) GeV, \(M_{3}=6.5\) GeV, \(\Gamma_{1}=0.01\) GeV, \(\Gamma_{2}=0.2\) GeV, and \(\Gamma_{3}=1.5\) GeV. The mock Euclidean data consist of 100 points, and \(T=0.02\) GeV. In the simulations, we have enhanced the SPX method by utilizing the self-adaptive sampling algorithm, which we refer to as SA-SPX [46]. The results of the analytical continuation are presented in Figure 2. It is expected that the true spectrum will exhibit three well-defined peaks. We find that all three methods are successful in recovering the low-energy sharp peak at \(M_{1}\). However, resolving the two high-energy peaks presents some challenges. Specifically, the SA-SPX method is able to roughly resolve the peak at \(M_{2}\), but it tends to overestimate the width of the peak at \(M_{3}\). To save computational resources, the iteration number of the self-adaptive sampling algorithm is fixed to 10. Nevertheless, it should be emphasized that increasing the iteration number could further reduce the peak's width at \(M_{3}\). In Section IV.2, we will delve into the combination of the self-adaptive sampling algorithm with the SPX method and discuss the usefulness of the SA-SPX method in resolving complex LQCD spectral functions. As for the NAC method, it produces a sharp peak around 2.1 GeV and a broad peak around 4.5 GeV, which are both in incorrect positions. More specifically, this method underestimates the energies of the two high-energy peaks. We also adjusted the \(\lambda\) parameter, but this does not help. Regarding the MaxEnt method, it can recover the peak at \(M_{2}\), although with a larger width. However, it fails to resolve the peak at \(M_{3}\), instead exhibiting a broad hump around \(7.0\pm 3.0\) GeV. ### Resonance-continuum model The resonance-continuum model is a physics-motivated model borrowed from References [37; 44]. The spectral function of the resonance-continuum model can be viewed as a nonlinear combination of the resonance part (\(\rho_{r}\)) and the continuum part (\(\rho_{c}\)): \[\rho(\omega)=\xi_{1}(\omega)\rho_{r}(\omega)+\xi_{2}(\omega)\rho_{c}(\omega). \tag{24}\] Here, \(\xi_{1}\) and \(\xi_{2}\) are the mixing coefficients.
Their definitions are as follows: \[\xi_{1}(\omega)=\xi(\omega,M_{r},\Gamma)\big{[}1-\xi(\omega,M_{r}+\Gamma,\Gamma)\big{]}, \tag{25}\] and \[\xi_{2}(\omega)=\xi(\omega,M_{c}+\Gamma,\Gamma), \tag{26}\] where \(\xi\) is a cutoff function: \[\xi(\omega,M,\Delta)=\left[1+\exp\left(\frac{M^{2}-\omega^{2}}{\omega\Delta}\right)\right]^{-1}. \tag{27}\] It is used to smooth out the constructed spectral function. The resonance part of the spectral function is given by: \[\rho_{r}(\omega)=C_{r}\omega^{2}\left[\frac{\left(M_{r}^{2}-\omega^{2}\right)^{2}}{M_{r}^{2}\Gamma^{2}}+1\right]^{-1}, \tag{28}\] which follows a relativistic Breit-Wigner form. The continuum part of the spectral function is expressed as: \[\rho_{c}(\omega)=\frac{3C_{c}}{8\pi\omega}\tanh\left(\frac{\omega}{4T}\right)\sqrt{\omega^{2}-4M_{c}^{2}}\left(2\omega^{2}+4M_{c}^{2}\right). \tag{29}\] In this example, the model parameters are \(M_{r}=0.10\) GeV, \(M_{c}=0.05\) GeV, \(C_{r}=2.0\) GeV, \(C_{c}=2.10\) GeV, and \(\Gamma=0.06\) GeV. The synthetic Euclidean data consist of 100 frequency points, and \(T=0.02\) GeV. In this study, we consider three different cases: (i) the resonance-continuum model, (ii) the resonance model, and (iii) the continuum model. The analytic continuation results are shown in Figure 3.

Figure 1: Analytic continuations of the Breit-Wigner models. For the NAC method, the Hardy basis function optimization algorithm is adopted. (a) Single Breit-Wigner peak. \(N_{\rm opt}=13\). (b) Two Breit-Wigner peaks. \(N_{\rm opt}=14\). In panel (b) the spectra are scaled by a factor of 0.5 for a better view.

Figure 2: Analytic continuations of the Gaussian mixture model. The spectra are scaled by a factor of 0.8 for a better view. As for the SPX method, the self-adaptive sampling algorithm is enabled to obtain a more reasonable spectrum. Once the self-adaptive sampling algorithm is turned off, the spectrum obtained by the SPX method resembles the one from the MaxEnt method. The spectrum becomes smoother and almost featureless in the high energy region (\(\omega>4.0\) GeV). The NAC method is enhanced by the Hardy basis function optimization algorithm and \(N_{\rm opt}=11\). Please see the main text for more details.

For the resonance-continuum model [see Fig. 3(a)], it is evident that the resonance peak at \(M_{r}\) is approximately reproduced. However, in the continuum part (\(\omega>0.4\) GeV), both the SPX and MaxEnt methods exhibit moderate oscillations. These oscillations decay as \(\omega\) increases. The NAC method results in huge oscillations, especially when \(\omega\) is large. This unphysical feature cannot be eliminated or suppressed by the Hardy basis function optimization algorithm. This fact suggests that none of the three analytic continuation methods accurately describes the continuum part. In the case of the resonance model [see Fig. 3(b)], all three methods successfully recover the location, width, and height of the resonance peak. As for the continuum model [see Fig. 3(c)], all three methods produce oscillating spectra. In particular, the NAC method leads to more pronounced oscillations. Even worse, these oscillations grow as \(\omega\) increases. The only useful information that can be extracted from the calculated spectra is the location of the band edge. ### Bottomonium spectrum The "exact" bottomonium spectrum utilized in this study is taken directly from References [62; 70]. It is generated by a \(N_{f}=2+1\) LQCD calculation.
The temperature employed in the LQCD simulation is 201 MeV, which exceeds the deconfinement crossover temperature (\(T_{c}\)). The spectrum is specifically for the \(\Upsilon\) channel. Initially, we synthesize the Euclidean data by Eq. (2) for the first 100 Matsubara frequencies. Then, random Gaussian noise is added via Eq. (21). In this example, we adopt the combination of the SPX method with the constrained sampling algorithm (dubbed C-SPX) [46] to improve the performance. Specifically, locations for the randomly generated poles (\(P_{\gamma}\)) are restricted to the energy range: \(\omega\in[9.5\) GeV,\(16.0\) GeV] [68]. For the NAC method, the Hardy optimization trick is applied [47]. Regarding the MaxEnt method, its default model is a shifted Gaussian function [69]. The analytic continuation solutions, together with the "exact" bottomonium spectrum, are illustrated in Fig. 4. As is seen in Fig. 4, the ideal bottomonium spectrum consists of a single resonance peak at approximately 9.6 GeV and a "rise-and-decay" feature with two sizable bumps around 10.8 GeV and 12.0 GeV. By employing the constrained algorithm, the SPX method successfully resolves the left boundary of the resonance peak and captures the long tail of the "rise-and-decay" feature. However, it falls short in resolving the resonance peak and the two bumps. In Section IV.2, we will demonstrate the application of the self-adaptive sampling algorithm to partly cure this issue [46].

Figure 3: Analytic continuations of the resonance-continuum models. The NAC method is enhanced with the Hardy basis function optimization algorithm. (a) Spectra of the resonance-continuum model. \(N_{\rm opt}=12\). (b) Spectra of the resonance model. \(N_{\rm opt}=10\). (c) Spectra of the continuum model. \(N_{\rm opt}=13\).

Figure 4: Analytic continuations of the bottomonium correlation function. Here, the terminology "C-SPX" implies that the positions of the poles are restricted in the SPX simulations [46; 68]. The Hardy basis function optimization algorithm is enabled for the NAC method and \(N_{\rm opt}=8\). A shifted Gaussian function, instead of a constant, is used as the default model for the MaxEnt method [69]. See the main text for more details.

For the NAC method, it accurately reproduces the resonance peak. But it fails to recover the desired "rise-and-decay" feature, which is instead replaced by two distinct peaks located at approximately 11.3 GeV and 14.0 GeV. In our experience, though the spectrum obtained by the NAC method is not accurate enough, it can provide some hints about the energy range of the true spectrum (such as the position of the resonance peak in this case). We can use this information to refine subsequent simulations by imposing more reasonable constraints for the C-SPX method and more appropriate default models for the MaxEnt method. By employing the MaxEnt method, the resonance peak and the "rise-and-decay" pattern are smoothed out, such that the calculated spectrum features only a Gaussian-like peak centered around 11.5 GeV. In addition to the shifted Gaussian model, we also benchmark alternative default models, such as the flat model, the shifted Lorentzian model, and the two Lorentzians model. However, they do not contribute to improving the results. ## IV Discussions ### Robustness with respect to noisy data In the previous work, the robustness of the SPX method in the presence of noisy data from quantum Monte Carlo simulations has been demonstrated [46].
This study aims to reexamine the noise resilience of the SPX method when applied to noisy LQCD data. Let us take the 1BW model as an example (please refer to Section III.2 for the model parameters) to address this issue. The noise level is varied from \(\delta=10^{-8}\) to \(\delta=10^{-2}\). The analytic continuation results are displayed in Fig. 5. Just as expected, the SPX method is highly robust to variations in noise levels of the input Euclidean data. For low noise levels (\(10^{-8}\leq\delta\leq 10^{-4}\)), the calculated spectra closely approximate the exact solution and manifest minimal deviation. At a moderate noise level (\(\delta=10^{-3}\)), the calculated spectrum shows a slight fluctuation around \(\omega=0.5\) GeV, and the main peak is shifted slightly (approximately 0.1 GeV) towards the lower energy region. For a high noise level (\(\delta=10^{-2}\)), three sharp peaks emerge in the calculated spectrum. Apart from the peak at 1.8 GeV, the other peaks at 0.5 GeV and 3.0 GeV are unphysical. In Fig. 5(h), a plot of \(\log(\langle\chi^{2}\rangle)\) against \(\log(\delta^{-1})\) is shown. Initially, \(\log(\langle\chi^{2}\rangle)\) decreases linearly as \(\log(\delta^{-1})\) increases from 2.0 to 5.0. Subsequently, it approaches a constant value (approximately \(-8.1\)) when \(\log(\delta^{-1})\geq 6.0\). This benchmark suggests that the SPX method remains robust when applied to noisy LQCD data, even with a moderately elevated noise level. Nonetheless, minimizing the noise level can enhance the performance of the SPX method. Fei and Gull _et al._ have pointed out that the NAC method requires high-precision input data to ensure that the Pick criterion is not violated and that the Nevanlinna interpolants exist [47; 64]. Thus, in the previous calculations, we assumed that the input Euclidean data are noiseless for the NAC method (\(\delta=0.0\)). Now let us examine the noise resilience of the NAC method for synthetic LQCD data. For the sake of simplicity, the 2BW model is taken as a test-bed. The model parameters are presented in Section III.2. The noise level \(\delta\) is fixed to be \(10^{-8}\). For each NAC run, the noisy part of the mock Euclidean data is always refreshed. We repeat the NAC simulations 100 times with and without Hardy basis function optimization [60]. Then we collect the calculated spectra and evaluate their arithmetic average. The analytic continuation results are shown in Fig. 6. We confirm again that the NAC method is extremely sensitive to noise, irrespective of the Hardy basis function optimization algorithm. Small fluctuations in the input Euclidean data can lead to huge variations in the resulting spectra. Even though the noise level is quite small, the performance of the NAC method is not good. It fails to capture the major characteristics of the 2BW model, and produces some spurious peaks. If the noise level is further increased, its performance should deteriorate further (not shown in Figure 6). ### Self-adaptive sampling algorithm for the SPX method In the SPX method, the poles should be placed on a very dense frequency grid. In general, such a frequency grid can be either uniform or non-uniform. So, some _prior knowledge_ about the spectrum and the physical system could be encoded in the form of the grid to improve the performance and usefulness of the SPX method. This has led to the development of the C-SPX method [46].
Previous works have suggested that by modifying the boundaries and the interval distribution of the grid, the SPX method is capable of capturing complicated features in the spectra. However, it can be observed in Fig. 4 that the C-SPX method, as well as the NAC and MaxEnt methods, fails to resolve the major characteristics of the bottomonium spectrum [62; 70]. This implies that simple constraints on the spectral boundaries (or limitations on the scope of the poles) are not enough. We have to figure out a systematic way to refine the probability distribution of the poles to approximate the true spectrum. Next, we will demonstrate how to achieve this goal by a combination of the self-adaptive sampling algorithm and the SPX method (dubbed SA-SPX) [46]. The main principle behind the SA-SPX method is to iteratively adjust the grid interval distribution. As a consequence, the probability distribution of the poles is adjusted to approximate the true spectral density and the corresponding goodness-of-fit functional [see Eq. (11)] is automatically minimized. This can be achieved by using the spectral density obtained from a previous SPX run or from other analytic continuation methods to update the grid. Now let us concentrate on the bottomonium model again [62; 70]. Its parameters can be found in Section III.5. Initially, we generate the first frequency grid for the poles using the spectral functions obtained by the NAC and MaxEnt methods. From the calculated spectra, one can conclude that there is likely a sharp resonance peak around \(\omega=9.6\) GeV (with a band edge at approximately 9.5 GeV) and a broad feature ranging from 10.0 GeV to 16.0 GeV (see Fig. 4). Keeping these hints in mind, we try to design a pseudo-spectrum consisting of two Gaussian peaks: \[\rho_{\rm pseudo}(\omega)=\sum_{i=1}^{2}A_{i}\exp\left[-\frac{(\omega-M_{i})^{2}}{\Gamma_{i}}\right], \tag{30}\] where \(A_{1}=5.0\), \(A_{2}=1.80\), \(M_{1}=9.60\), \(M_{2}=11.5\), \(\Gamma_{1}=0.01\), and \(\Gamma_{2}=5.0\). This pseudo-spectrum is illustrated in Fig. 7(a). The first peak at \(M_{1}\) originates from the resonance peak identified by the NAC method, while the second peak at \(M_{2}\) is inspired by the spectrum obtained by the MaxEnt method. It is worth noting that these spectral parameters can be further adjusted to mimic more accurately the results obtained by the NAC and MaxEnt methods. This pseudo-spectrum serves as a reference model. To generate the frequency grid for the poles, we execute the following steps: (1) Calculate the integrated spectral function \(\phi(\epsilon)\) via the equation: \[\phi(\epsilon)=\int_{\omega_{\rm min}}^{\epsilon}\rho_{\rm pseudo}(\omega)d\omega,\ \epsilon\in[\omega_{\rm min},\omega_{\rm max}]. \tag{31}\] Here, \(\omega_{\rm min}\) and \(\omega_{\rm max}\) are the left and right boundaries of the spectrum, respectively, and \(\epsilon\) represents a point within the interval \([\omega_{\rm min},\omega_{\rm max}]\). (2) Evaluate the new frequency grid \(f_{i}\) by using the equation: \[f_{i}=\phi^{-1}(\lambda_{i}),\ i=1,\cdots,N_{f}, \tag{32}\] where \(\lambda_{i}\) is a linear mesh in the interval \([\phi(\omega_{\rm min}),\phi(\omega_{\rm max})]\), and \(N_{f}\) is the number of grid points. Now the boundaries for the grid are set as \(\omega_{\rm min}=9.5\) GeV and \(\omega_{\rm max}=16.0\) GeV. The resulting new grid, as displayed in Fig. 7(b), is compared with the standard linear grid. Next, the newly generated grid is utilized to perform an SPX simulation from scratch.
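A minimal sketch of steps (1) and (2), taking the pseudo-spectrum of Eq. (30) as the reference model, is given below. The grid size, the trapezoid accumulation of \(\phi\), and the use of linear interpolation for \(\phi^{-1}\) are our own choices for illustration, not necessarily those of the ACFlow implementation:

```python
import numpy as np

def self_adaptive_grid(rho, w_min, w_max, n_f=10000):
    # Step (1): integrated spectral function phi(eps), Eq. (31),
    # accumulated with the trapezoid rule on a uniform eps mesh
    eps = np.linspace(w_min, w_max, n_f)
    dens = rho(eps)
    phi = np.concatenate(([0.0], np.cumsum(0.5 * (dens[1:] + dens[:-1]) * np.diff(eps))))
    # Step (2): push a linear mesh lambda_i through the numerical inverse of phi,
    # Eq. (32); phi is nondecreasing, so np.interp(lam, phi, eps) gives f_i = phi^{-1}(lambda_i)
    lam = np.linspace(phi[0], phi[-1], n_f)
    return np.interp(lam, phi, eps)

# pseudo-spectrum of Eq. (30)
A, M, W = (5.0, 1.80), (9.60, 11.5), (0.01, 5.0)
rho_pseudo = lambda w: sum(a * np.exp(-(w - m)**2 / g) for a, m, g in zip(A, M, W))
grid = self_adaptive_grid(rho_pseudo, 9.5, 16.0)  # grid points cluster near the peaks
```

Because \(\phi\) grows fastest where the reference density is largest, the inverse map concentrates grid points, and hence candidate pole positions, around the assumed spectral features.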
The calculated spectrum is then used to generate a newer frequency grid [at this time, the \(\rho_{\rm pseudo}(\omega)\) in Eq. (31) should be replaced with the calculated spectrum \(\rho(\omega)\)], and the SPX simulation is repeated. This iterative procedure is carried out until the obtained spectrum and frequency grid are converged. In our experience, 5 \(\sim\) 10 iterations are typically sufficient for achieving convergence.

Figure 5: Robustness of the SPX method with respect to the noisy LQCD data. Here we just consider the Breit-Wigner model (1BW model). The noise level \(\delta\) is varied from \(10^{-8}\) to \(10^{-2}\). The other model parameters can be found in Sec. III.2. (a)-(g) Dependence of the calculated spectral functions on the noise level \(\delta\). (h) The goodness-of-fit function \(\chi^{2}\) as a function of the noise level \(\delta\). The horizontal bar indicates the asymptotic value of \(\log(\langle\chi^{2}\rangle)\).

Figure 7(c) shows the results obtained by using the C-SPX and SA-SPX methods, as well as the exact spectrum. It is evident that the spectrum obtained with the SA-SPX method comes closer to the exact spectrum than that obtained with the C-SPX method. The sharp resonance peak, the small bump near 12.0 GeV, and the long tail of the rise-and-decay feature are well reproduced by the SA-SPX method. The only missing characteristic is the bump near 10.8 GeV. Additionally, an error analysis of the reconstructed Euclidean data is presented in Fig. 7(d). The goodness-of-fit function of the NAC method is the largest (\(\chi^{2}\approx 0.01\) to 0.1), while those of the MaxEnt, C-SPX, and SA-SPX methods are comparable (\(\chi^{2}\approx 10^{-6}\) to \(10^{-4}\)). This indicates that the spectrum obtained by the NAC method for this particular case is not reliable. ### Hardy basis function optimization for the NAC method As mentioned before, the iterative interpolation algorithm of the NAC method allows for the selection of an arbitrary contractive function, denoted as \(\theta_{N_{s}+1}\), at the final step. In the literature, Fei and Gull _et al._ proposed an optimized approach [47], in which \(\theta_{N_{s}+1}\) is expanded in Hardy basis functions and their conjugates [60], and the expansion coefficients are determined by minimizing a smoothness norm [see Eq. (16)]. They demonstrated that using a constant value for \(\theta_{N_{s}+1}\) is apt to yield spectral functions with oscillations, whereas the optimized algorithm is useful for eliminating these oscillations and generating smoother spectral functions. Until now we have employed only the optimized NAC approach for analytic continuations. However, we wonder whether the Hardy basis optimization is always better than the standard option. In order to answer this question, we perform additional tests for the Breit-Wigner model by using the NAC method. We compare the analytic continuation results obtained with a constant \(\theta_{N_{s}+1}\) and the optimized \(\theta_{N_{s}+1}\) (see Fig. 8). As anticipated, the optimized \(\theta_{N_{s}+1}\) largely suppresses the oscillations and yields smoother spectra, especially for the 2BW model. However, we also notice that the performance of the Hardy basis function optimization algorithm strongly depends on the value of the \(\lambda\) parameter. The optimized NAC method tends to make a wrong estimate of the location of the high-energy peak if the \(\lambda\) parameter is not chosen reasonably.
Therefore, if we know nothing about the basic features of the spectra, perhaps a constant \(\theta_{N_{s}+1}\) is a much safer choice. ## V Conclusion In the present work, we conduct a systematic investigation of two newly developed methods, namely the SPX method and the NAC method, for analytic continuation of mock LQCD data. We treat four exact spectral functions, which are derived from physically motivated models or realistic LQCD simulations, including the Breit-Wigner model, the Gaussian mixture model, the resonance-continuum model, and the bottomonium model. We use the exact spectral functions to build clean Euclidean data by numerical integration, and statistical noise is then added. The synthetic Euclidean data are used as input and then transformed back to the real axis using different analytic continuation methods. By comparing the results with the exact spectra, we are able to assess the accuracy of these methods. The SPX method is generally capable of resolving the major features of the spectral functions involved in this study. However, it encounters difficulties when dealing with spectra that exhibit a wide platform, such as the continuum model (see Sec. III.4), or when two features are too close together, such as the bottomonium spectrum (see Sec. IV.2).

Figure 6: Robustness of the NAC method with respect to the noisy LQCD data. Here we just consider the Breit-Wigner model (2BW model). The noise level \(\delta=10^{-8}\). The other model parameters can be found in Sec. III.2. The darker shaded region denotes the window for the reconstructed spectral functions. The blue and yellow solid lines denote the exact spectrum and the averaged spectrum, respectively. (a) Without Hardy basis function optimization. (b) With Hardy basis function optimization.

We believe that these difficulties can be partially overcome with the help of the constrained sampling algorithm and the self-adaptive sampling algorithm. The SPX method demonstrates good noise tolerance and exhibits robustness with respect to moderate noise levels. Overall, the performance of the SPX method is comparable to that of the commonly used MaxEnt method. In cases where the spectral function is complicated, the SPX method can outperform the MaxEnt method due to its ability to incorporate prior information about the spectrum into the frequency grid for the poles. This grid can be iteratively refined to obtain a better spectrum. As for the NAC method, it is found to be numerically unstable even for input Euclidean data with an extremely low noise level (\(\delta=10^{-8}\)). This drawback greatly limits the application of the NAC method in LQCD simulations. Additionally, we observe that the Hardy basis optimization for \(\theta_{N_{s}+1}\) sometimes produces worse results than those obtained with a constant \(\theta_{N_{s}+1}\). Although the Hardy basis optimization can suppress possible oscillations in the spectrum, it tends to yield incorrect estimates for the positions of the high-energy peaks if the \(\lambda\) parameter is not optimal. Therefore, better basis functions for expanding \(\theta_{N_{s}+1}\) are highly desirable; otherwise, a smart algorithm to determine the optimal \(\lambda\) is needed. Nonetheless, the NAC method still proves its usefulness in analytic continuations of LQCD simulation data, as it allows for quick yet accurate estimations of the positions of the low-energy band edges and the resonance peaks.
These important clues can then be used to construct a reference model for the probability distribution of the poles, which is subsequently utilized by the constrained SPX method.

Figure 7: Analytic continuations of the bottomonium correlation function. (a) Spectra obtained by the MaxEnt and optimized NAC methods. The information extracted from the two spectra is used to construct a reference model [the pseudo-spectrum \(\rho_{\text{pseudo}}(\omega)\), see the solid blue line]. (b) Standard linear grid and non-uniform grid used in the C-SPX and SA-SPX simulations, respectively. Note that the non-uniform grid is constructed from the reference model via Eqs. (31) and (32). (c) Spectra obtained by the C-SPX and SA-SPX methods. (d) Error analysis for the reconstructed Euclidean data from the MaxEnt, optimized NAC, C-SPX, and SA-SPX methods.

## Acknowledgments

The authors thank Prof. Lei Wang for fruitful discussions. This work is supported by the Innovation Foundation of China Academy of Engineering Physics (No. CX20200033), the National Natural Science Foundation of China (No. 12274380 and No. 11934020), the National Key Projects for Research and Development of China (No. 2021YFA1400400), the China Postdoctoral Science Foundation (No. 2021TQ0355), and the Special Research Assistant Program of Chinese Academy of Sciences.
2309.06468
Anatomy of the eigenstates distribution: a quest for a genuine multifractality
Motivated by a series of recent works, an interest in multifractal phases has risen as they are believed to be present in the Many-Body Localized (MBL) phase and are in high demand in quantum annealing and machine learning. Inspired by the success of the Rosenzweig-Porter (RP) model with Gaussian-distributed hopping elements, several RP-like ensembles with fat-tailed distributed hopping terms have been proposed, with claims that they host the desired multifractal phase. In the present work, we develop a general (graphical) approach allowing a self-consistent analytical calculation of fractal dimensions for a generic RP model and investigate what features of the RP Hamiltonians can be responsible for the multifractal phase emergence. We conclude that the only feature contributing to a genuine multifractality is the on-site energies' distribution, meaning that no random matrix model with a statistically homogeneous distribution of diagonal disorder and uncorrelated off-diagonal terms can host a multifractal phase.
Anton Kutlin, Ivan M. Khaymovich
2023-09-12T18:00:01Z
http://arxiv.org/abs/2309.06468v2
**Anatomy of the eigenstates distribution: a quest for a genuine multifractality** ## Abstract **Motivated by a series of recent works, an interest in multifractal phases has risen as they are believed to be present in the Many-Body Localized (MBL) phase and are in high demand in quantum annealing and machine learning. Inspired by the success of the Rosenzweig-Porter (RP) model with Gaussian-distributed hopping elements, several RP-like ensembles with fat-tailed distributed hopping terms have been proposed, with claims that they host the desired multifractal phase. In the present work, we develop a general (graphical) approach allowing a self-consistent analytical calculation of fractal dimensions for a generic RP model and investigate what features of the RP Hamiltonians can be responsible for the multifractal phase emergence. We conclude that the only feature contributing to a genuine multifractality is the on-site energies' distribution, meaning that no random matrix model with a statistically homogeneous distribution of diagonal disorder and uncorrelated off-diagonal terms can host a multifractal phase.** ###### Contents * 1 Introduction * 2 Multifractality and the spectrum of fractal dimensions * 3 Graphical algebra * 3.1 Raising a random variable to a power * 3.2 Sum of two independent random variables * 3.3 Weighted ensemble mixture of several independent random variables * 3.4 Product of two independent random variables * 3.5 Extensive sums of i.i.d random variables * 4 Gaussian Rosenzweig-Porter model * 5 Levy Rosenzweig-Porter model * 6 Log-normal Rosenzweig-Porter model * 7 Relation between LDOS and eigenstate distributions * 8 Absence of multifractality in Rosenzweig-Porter models * 9 Conclusions and discussion ## 1 Introduction A study of eigenstates of random Hamiltonians is crucial for understanding the spectral and transport properties of various systems. For example, basis-invariant ensembles like the Gaussian Orthogonal Ensemble (GOE) or Gaussian Unitary Ensemble (GUE) [1] have basis-invariant distributions of eigenstates, meaning that, on average, all their components are of the same order. This leads to the ergodicity of eigenstates and, thus, makes the system conducting. On the other hand, the 1d Anderson model's eigenstates are exponentially localized around their maxima, meaning that any local perturbation to the system only affects a finite number of such eigenstates, and the system is an insulator. The examples of ergodic and localized states are just the opposite ends of the broad spectrum of what is available. For example, systems at the critical points of the Anderson transition between the ergodic and localized phases are known to host the so-called fractal, or even multifractal, eigenstates [2]. In a thermodynamic limit of the system size going to infinity, such critical eigenstates occupy an infinite number of sites, being still a measure zero of the total system size. The difference between fractal and multifractal eigenstates is even more subtle than that between fractal and localized ones. This is the difference in the probability density functions (PDF) of the eigenstate coefficients in these two cases. Fractal states are characterized by a single power-law tail in the PDF, which usually provides the only energy/time scale in the system in addition to the standard ones: the bandwidth and the mean level spacing. Multifractal states, in turn, have multiple (running) power-law exponents at different scales.
In this respect, the multifractal distribution, being the most general type of smooth distribution, provides a set of energy scales; thus, such systems usually have richer time evolution and a hierarchical local spectral structure, needed for quantum-algorithm speed-up applications [3] and faster learning of artificial neural networks [4]. For these applications, robust multifractal phases of matter are of crucial importance. As another motivation, we consider a series of recent works [5, 6, 7, 8], providing numerical evidence of Hilbert-space multifractality in the entire phase of many-body localization (MBL) in disordered interacting quantum systems. Since the direct study of MBL is challenging numerically and analytically, it is of particular interest and high demand to develop proxy models that mimic the necessary properties of the MBL systems but are easier to tackle. One such proxy is the Anderson model on random regular graphs (RRG) [9, 10], possessing a hierarchical structure similar to the many-body Hilbert space. It was believed to host multifractal states [11, 12, 13, 14, 15, 16, 17, 18, 19], even before similar claims about the actual many-body systems were made [10]. To simplify the problem even more, a random-matrix proxy to the RRG was proposed, namely the log-normal Rosenzweig-Porter model (LN-RP) [16, 20]. Due to the multifractal and heavy-tailed nature of the log-normal distribution, it was believed to host a genuine multifractal phase. However, the suggested mapping implied that the RRG corresponds to such parameters of the LN-RP model where the multifractal phase disappears at the tricritical Anderson transition point, meaning that the observed RRG multifractality can be just a finite-size effect, see also [21, 22, 23, 24, 25, 26]. Nevertheless, the very existence of the multifractal phase in the log-normal Rosenzweig-Porter model (unlike the fractal one in the Gaussian RP [27]) has never been proven mathematically and was based on the numerical evidence and the heuristic argument predicting multifractality for any RP model with a heavy-tailed distribution, e.g., the Levy-RP model [28, 29]. Given that the Gaussian Rosenzweig-Porter model [30, 31, 32, 33, 34, 35], eight years after the discovery of the fractal phase there [36], is still the only analytically tractable model [27, 28, 37, 38] hosting an entire phase of fractal states, it makes sense to look for an analytical approach to the other RP models, like the Levy- [28, 29] or LN-RP [16, 20, 39], as such an attempt doesn't look hopeless. In this paper, we show that it is indeed possible. The paper is organized as follows. In Sec. 2, we define our main object of study, the spectrum of fractal dimensions, and discuss how it allows us to distinguish multifractality from fractality. Section 3 introduces a graphical approach to deal with the spectra of fractal dimensions (SFDs) of different independent random variables. Methodologically, this section contains the paper's main results. In Sec. 4, we demonstrate the application of the developed machinery to the Gaussian Rosenzweig-Porter model and compare our result to the previously known ones. Section 5 shows the predictive power of the above-developed method by applying it to the Levy-RP model and demonstrates that its local density of states (LDOS) has a fractal, but not multifractal, distribution. In Sec. 6, we do the same for the log-normal RP model and, again, claim the absence of LDOS multifractality. In Sec.
7, we analyze if the lack of LDOS multifractality implies the absence of eigenstates multifractality and conclude that, for models with RP-like LDOS SFDs, it does. Finally, in Sec. 8, we prove that no RP-like model, i.e., a model with a regular on-site disorder, no correlations between hopping elements, and no spatial structure, can host a multifractal phase. ## 2 Multifractality and the spectrum of fractal dimensions For a non-negative random variable \(X\) (e.g., the eigenstate amplitude \(|\psi(i)|^{2}\) at site \(i\)), the spectrum of fractal dimensions (SFD) \(f_{X}(\alpha;N)\) parameterizes the probability density function \(p_{X}(x)\) as \[p_{X}(x)\mathrm{d}x=p_{X}(N^{-\alpha})\ln(N)N^{-\alpha}\mathrm{d}\alpha=\ln(N)N^{f_{X}(\alpha;N)-1}\mathrm{d}\alpha. \tag{1}\] For the wave-function amplitudes, \(N\) is the system size, which is considered to be large. In other words, we focus mainly on the large-\(N\) limit \(f_{X}(\alpha)=\lim_{N\to\infty}f_{X}(\alpha;N)\) if it is not stated explicitly otherwise. The intuition behind the spectrum of fractal dimensions is as follows: in a set of \(N\) independent samples of \(X\), we will find around \(N^{f_{X}(\alpha;N)}\) samples of the order of \(X\sim N^{-\alpha}\). Note that the parameter \(N\) can, in principle, be any number, but, in our physical applications, we will associate it with the system size. The SFD contains all the necessary information to extract the eigenstate fractal dimensions \(D_{q}\) and the critical exponents \(\tau_{q}=(q-1)D_{q}\)[2]. In the large-\(N\) limit, \(\tau_{q}\) is given by the Legendre transform of \(f(\alpha)\) in the saddle-point approximation: \[\tau_{q}\stackrel{\mathrm{def}}{=}-\log_{N}\mathrm{IPR}_{q}\simeq-\log_{N}\left[N\left<|\psi(i)|^{2q}\right>\right]\sim-\log_{N}\left(\int\mathrm{d}\alpha N^{-\alpha q}N^{f(\alpha)}\right)\xrightarrow[N\to\infty]{}q\alpha_{q}-f(\alpha_{q}). \tag{2}\] Here the first equality is the definition of \(\tau_{q}\) via the base-\(N\) logarithm \(\log_{N}\) of the inverse participation ratio \(\mathrm{IPR}_{q}=\sum_{i}|\psi(i)|^{2q}\), with the sum approximated by the averaging over the probability distribution (1), denoted by \(\langle\ldots\rangle\). In the r.h.s. of (2), the Legendre transform is given by the parameter \(\alpha_{q}\), minimizing \(q\alpha-f(\alpha)\). For a smooth \(f(\alpha)\), \(\alpha_{q}\) is defined as the solution of the equation \(f^{\prime}(\alpha_{q})=q\)1. A pictorial representation of this minimization is shown in Fig. 1. Footnote 1: If the relation provides several solutions for \(\alpha_{q}\), one should pick the one maximizing \(f(\alpha_{q})-\alpha_{q}\). From this result, one can readily identify at least two important values of \(\alpha\): \(\alpha_{0}\) and \(\alpha_{1}\). The value \(\alpha_{0}\) corresponds to the maximum of the SFD, \(f(\alpha_{0})=\max_{\alpha}\{f(\alpha)\}=1\)2. Using the intuitive meaning of SFD mentioned above, one can deduce that, in a sufficiently large sample, the realizations with \(x\sim N^{-\alpha_{0}}\) will prevail over any other values of \(x\), making this value _typical_ for the ensemble. On the other hand, the _mean_ value of \(x\) corresponds to \(\alpha_{1}\). For example, the SFDs of normalized wave functions with \(1=\sum_{i=1}^{N}|\psi(i)|^{2}=N\left<|\psi(i)|^{2}\right>\) always lie below the line \(f(\alpha)=\alpha\) and have at least one common point with this line in the thermodynamic limit.
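To make the Legendre transform of Eq. (2) concrete, here is a minimal numerical sketch that extracts \(\tau_q\) (and hence \(D_q\)) from a tabulated SFD; the parabolic test SFD is a standard toy example and an illustrative choice of ours, not a model from this paper.

```python
import numpy as np

def tau_q(alpha, f, qs):
    # Legendre transform of Eq. (2): tau_q = min_alpha {q * alpha - f(alpha)};
    # entries with f = -inf (forbidden alphas) are excluded from the minimum.
    ok = np.isfinite(f)
    return np.array([np.min(q * alpha[ok] - f[ok]) for q in qs])

# Illustrative parabolic (multifractal) SFD: f(alpha) = 1 - (alpha - a0)^2 / (4*(a0 - 1)).
a0 = 1.25
alpha = np.linspace(0.0, 2.5, 2501)
f = 1.0 - (alpha - a0) ** 2 / (4.0 * (a0 - 1.0))
qs = np.array([0.5, 1.5, 2.0])
D_q = tau_q(alpha, f, qs) / (qs - 1.0)   # q-dependent D_q signals multifractality
```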
Finally, this interpretation allows us to give a mathematically strict definition of the support set dimension \(D\)[40]. To see this, let's calculate \(D_{1}\) as a limit of \(D_{q}\) for \(q\to 1\) via L'Hôpital's rule: \[D_{1}=\lim_{q\to 1}D_{q}=-\sum_{i}|\psi(i)|^{2}\log_{N}|\psi(i)|^{2}\sim\int\alpha N^{f(\alpha;N)-\alpha}\ln N\mathrm{d}\alpha\xrightarrow[N\to\infty]{}\alpha_{1}. \tag{3}\] Footnote 2: The maximum value of \(f(\alpha)\) is always \(1\) due to the normalization condition of the probability density function (1). Thus, since \(\alpha_{1}\) is responsible for the normalization, \(D_{1}=\alpha_{1}\) represents the actual support set dimension, consistent with its physical meaning. As has just been shown, in a generic case, the different fractal dimensions are given by the different points of the SFD. However, the SFD can also have discontinuities in its first derivative. In this case, each \(\alpha\) corresponding to one of such discontinuities contributes to \(D_{q}\) in an entire range of different \(q\)'s, meaning that the same fraction of the eigenstate components contributes to different fractal dimensions and \(D_{q}\) stays constant in the corresponding range of \(q\)'s. The eigenstates are called multifractal [2] if \(D_{q}\) is _not_ constant for integer positive \(q\) or, in other words, if \(f(\alpha)\) _does_ have finite values at \(\alpha<\alpha_{1}\)3. Otherwise, we will talk about fractality. Footnote 3: Here and further we define \(\alpha_{1}\) as the leftmost of all equivalent ones if \(f(\alpha)\) has a straight segment of a finite length with the slope \(1\). ## 3 Graphical algebra Working with probabilities, we often need to calculate probability distributions of composite random variables, i.e., a PDF of a sum or a product. Such quantities can be obtained via proper integral convolutions or using characteristic functions. However, the calculation difficulty grows rapidly with the complexity of the composite random variable's expression. Working with fractal spectra brings a whole new life to such operations due to the possibility of approximate integration using the Laplace method. In this section, we derive simple pictorial rules allowing the calculation of composite random variables' SFDs on the fly. Below, we will consider the following operations: an exponentiation of a random variable (Sec. 3.1), a sum of two independent random variables (i.r.v.s) (Sec. 3.2), an "ensemble mixture" of several independent random variables with different distributions (Sec. 3.3), and a product of two i.r.v.s (Sec. 3.4). For the exponentiation and the ensemble mixture, the derivations of the graphical rules are simple regardless of the random-variable (r.v.) distributions.

Figure 1: Graphical representation of the relation between the spectrum of fractal dimensions \(f(\alpha)\) and the critical exponents \(\tau_{q}\), Eq. (2). Here, the solid red line shows the SFD of a certain distribution, while the tangent dashed lines of different colors with the slopes \(1\) (green), \(2/3\) (orange), and \(1/3\) (blue) correspond to the Legendre transform from the r.h.s of (2). The label “\(D_{1}\)” corresponds to the value of \(\alpha=\alpha_{q\to 1}\), responsible for the normalization \(N\left\langle N^{-\alpha}\right\rangle\propto N^{0}\) and, hence, represents the actual, correctly defined, support set dimension [40], see the main text for more details.
The derivations for the remaining two operations become more straightforward under the assumption that their SFDs are convex functions. However, this doesn't limit the applicability of the results, since sums and products of any two i.r.v.s can always be decomposed into a superposition of the operations involving i.r.v.s with convex SFDs only. ### Raising a random variable to a power We start by considering the simplest case, namely, by expressing a spectrum of fractal dimensions \(f_{X^{n}}(\alpha)\) of \(X^{n}\) in terms of the spectrum of fractal dimensions \(f_{X}(\alpha)\) of \(X\). Since, by definition, \(p_{X}(N^{-\alpha})\propto N^{f_{X}(\alpha)+\alpha-1}\) and, by a change of variables, \(p_{X^{n}}(x)=n^{-1}x^{1/n-1}p_{X}(x^{1/n})\), we get that \[p_{X^{n}}(N^{-\alpha})=N^{f_{X^{n}}(\alpha)+\alpha-1}\propto N^{(1-1/n)\alpha}N^{f_{X}(\alpha/n)+\alpha/n-1}, \tag{4}\] and, hence, \[f_{X^{n}}(\alpha)=f_{X}(\alpha/n). \tag{5}\] Pictorially, this is just an \(n\)-times stretching of the spectrum of fractal dimensions in the horizontal direction with the fixed point \(\alpha=0\), see Fig. 2. For negative \(n\), this is also a reflection with respect to the vertical axis.

Figure 2: Pictorial representation of random variable exponentiation. In this case, \(n\) is assumed to be greater than one. Here and below, we omit labeling of the axes, always assuming the horizontal axis to represent \(\alpha\) and the vertical axis to represent \(f(\alpha)\). The dashed horizontal line marks the level \(f(\alpha)=1\), the solid horizontal line corresponds to \(f(\alpha)=0\), and the labels “\(\alpha_{0}\)” and “\(n\alpha_{0}\)” correspond to the typical values of \(\alpha\) before and after the exponentiation.

### Sum of two independent random variables The main point of the sum rule, written in terms of SFDs, is that the expression \(x+y=N^{-\alpha_{X}}+N^{-\alpha_{Y}}\) is binary: it equals \(N^{-\alpha_{X}}\) if \(\alpha_{Y}>\alpha_{X}\) and \(N^{-\alpha_{Y}}\) otherwise. In other words, as soon as \(x<y\), we can always neglect \(x\) completely, and vice versa. If \(x+y=N^{-\alpha}\), then \(\alpha=\min\{\alpha_{X},\alpha_{Y}\}\). It allows us to write \[N^{f_{X+Y}(\alpha)}\propto N^{f_{X}(\alpha)}P(\alpha_{X}<\alpha_{Y}|\alpha_{X}=\alpha)+N^{f_{Y}(\alpha)}P(\alpha_{Y}<\alpha_{X}|\alpha_{Y}=\alpha). \tag{6}\] Without loss of generality, let's assume that \(\alpha_{0}(X)<\alpha_{0}(Y)\), i.e., that typically \(Y\) is negligible. Then there are three cases: \(\alpha<\alpha_{0}(X)\), \(\alpha_{0}(X)<\alpha<\alpha_{0}(Y)\), and \(\alpha>\alpha_{0}(Y)\). Let's consider them one by one. The case \(\alpha<\alpha_{0}(X)\) is simple: since \(N^{-\alpha}\) is bigger than the typical values of both \(X\) and \(Y\), \[N^{f_{X+Y}(\alpha)}\propto N^{f_{X}(\alpha)}\cdot 1+N^{f_{Y}(\alpha)}\cdot 1\implies f_{X+Y}(\alpha)=\max\{f_{X}(\alpha),f_{Y}(\alpha)\}; \tag{7}\] here, both \(P(\alpha_{X}<\alpha_{Y}|\alpha_{X}=\alpha)\) and \(P(\alpha_{Y}<\alpha_{X}|\alpha_{Y}=\alpha)\) are of the order of unity, as the typical values \(\alpha_{0}(X)\), \(\alpha_{0}(Y)\) lie in the interval \(\alpha_{X,Y}>\alpha\). The case \(\alpha>\alpha_{0}(Y)>\alpha_{0}(X)\) is not much different: since now the typical values of \(X\) and \(Y\) are both bigger than \(N^{-\alpha}\), \[N^{f_{X+Y}(\alpha)}\propto N^{f_{X}(\alpha)}N^{f_{Y}(\alpha)-1}+N^{f_{Y}(\alpha)}N^{f_{X}(\alpha)-1}\implies f_{X+Y}(\alpha)=f_{X}(\alpha)+f_{Y}(\alpha)-1; \tag{8}\] here, the corresponding conditional probabilities are denoted4 by \(N^{f_{X,Y}(\alpha)-1}\).
And, as both \(f_{X}(\alpha)\) and \(f_{Y}(\alpha)\) in this region are smaller than one, this result shows a suppression of zeros, which is quite logical: the more positive terms we add, the less probable it is to get something small. Footnote 4: It is a consequence of the assumed convexity of the functions \(f_{X,Y}(\alpha)\). Otherwise, we would have to write the probabilities as \(\max_{\alpha^{\prime}<\alpha}N^{f_{X,Y}(\alpha^{\prime})-1}\). Finally, in the intermediate case, \(\alpha_{0}(X)<\alpha<\alpha_{0}(Y)\), the probability \(P(\alpha_{X}<\alpha_{Y}|\alpha_{X}=\alpha)\) is of the order of one, while the probability \(P(\alpha_{Y}<\alpha_{X}|\alpha_{Y}=\alpha)\) is of the order of \(N^{f_{X}(\alpha)-1}\); hence, \[N^{f_{X+Y}(\alpha)}\propto N^{f_{X}(\alpha)}+N^{f_{Y}(\alpha)}N^{f_{X}(\alpha)-1}\implies f_{X+Y}(\alpha)=f_{X}(\alpha). \tag{9}\] In the last equality we used the fact that \(f_{Y}(\alpha)-1<0\) at \(\alpha<\alpha_{0}(Y)\). A pictorial representation of this result is shown in Fig. 3.

Figure 3: Pictorial representation of a sum of two i.r.v.s with convex SFDs.

### Weighted ensemble mixture of several independent random variables Suppose that we are interested in the SFD of a random variable \(X\) with a probability density function proportional to a weighted sum, \(\sum_{j}w_{j}=1\), of other probability density functions: \[p_{X}(x)=\sum_{i}w_{i}p_{X_{i}}(x). \tag{10}\] Such a definition corresponds to the notion of conditional probability and the chain rule. Indeed, going from the sum to an integral and making the substitutions \(w_{i}\to p(i)di\) and \(p_{X_{i}}(x)\to p(x|i)\), we arrive at the probably more familiar expression \[p_{X}(x)=\int p(x|i)p(i)\mathrm{d}i. \tag{11}\] Now, the transformation of the PDF to the SFD in the saddle-point approximation gives the relation \[f_{X}(\alpha)=\max_{i}\{f_{i}(\alpha)+f(i)-1\}, \tag{12}\] where \(f(i)\) is the SFD corresponding to the probability density function \(p(i)\). In the previous section, we already saw a similarly looking relation (7) for two discrete \(i=X,Y\) values with the same weights. It was a particular case of this more general mix rule 5. This is the mix rule that allows us to consider explicitly only r.v.s with convex SFDs without losing generality, as mentioned earlier in the introduction to this chapter. Indeed, it is easy to notice that any SFD can be defined as a mix of convex SFDs, while both the sum and the product of two i.r.v.s, being bi-linear operations, can be represented as mixes of sums and products of convex SFDs.

Figure 4: Graphical representation of the weighted mix rule. In this particular case, the resulting distribution consists of the ’blue’ distribution with the weight \(N^{1}\) and the ’orange’ distribution with the weight \(N^{\omega}\). Notice how the orange distribution in the r.h.s. was shifted vertically according to its weight. After making such shifts for all involved SFDs, the resulting SFD is obtained by a simple envelope.

### Product of two independent random variables To calculate the SFD of the product of two i.r.v.s \(X\thicksim N^{-\xi}\) and \(Y\thicksim N^{-\eta}\), let's consider the corresponding convolution in terms of \(\xi\) and \(\eta\): \[N^{f_{XY}(\alpha)-1}\propto\int_{-\infty}^{\infty}\mathrm{d}\xi\mathrm{d}\eta\delta(\alpha-\xi-\eta)N^{f_{X}(\xi)-1}N^{f_{Y}(\eta)-1}=\int_{-\infty}^{\infty}\mathrm{d}\xi N^{f_{X}(\xi)+f_{Y}(\alpha-\xi)-2}. \tag{13}\] Calculating this integral in the saddle-point approximation, we arrive at an expression suspiciously similar to the one from the mix rule: \[f_{XY}(\alpha)=\max_{\xi}\{f_{X}(\xi)+f_{Y}(\alpha-\xi)-1\}. \tag{14}\] And this is not a surprise: the product can be understood purely in terms of the previously introduced mix rule (Sec. 3.3). To see this, notice that multiplication by a _constant_ \(N^{-s}\) results in a horizontal shift of the multiplied SFD by the distance \(s\).
Figure 5: Graphical representation of a product of two i.r.v.s in terms of the weighted mix. In this figure, the ’blue’ SFD plays the role of weights, while the ‘orange’ SFD plays the role of the shifted distribution. Interchanging the roles of the two will not alter the final result. In the rightmost panel, there are many replications of the orange curve, shifted such that their maxima (marked by the black points) always lie on the blue curve. The envelope of this construction corresponds to the desired product.

Thus, the multiplication by an arbitrarily distributed random variable \(X=N^{-\alpha}\) can now be considered as a superposition of different horizontal shifts \(\alpha\) (multiplication by a constant) with different vertical shifts \(f(\alpha)-1\) (the constants' weights), restoring the same mix rule, see Fig. 5. ### Extensive sums of i.i.d random variables In Sec. 3.2, we have already seen how to calculate a spectrum of fractal dimensions of a sum of two independent random variables. It can be easily generalized to a sum of any finite number of r.v.s. At the same time, we can imagine that an _extensive_ sum of r.v.s has to have something to do with its typical value, which doesn't follow from the finite sum rule; e.g., the central limit theorem predicts a square-root dependence of the typical value on the number of terms in the sum. To make the required generalizations, let's consider a non-negative random variable \(X\) defined by its spectrum of fractal dimensions \(f_{X}(\alpha)\) and a random variable \(S\) defined as a sum of \(N^{\beta}\) independent copies of the random variable \(X\): \[s=\sum_{i=1}^{N^{\beta}}x_{i}. \tag{15}\] Our task now is to calculate \(f_{S}(\alpha)\). We start by focusing on such values of \(x=N^{-\alpha}\) that appear in any typical sample of size \(N^{\beta}\). Specifically, let's consider \(\alpha\) such that its typical number of realizations \(N^{\beta}p_{\alpha(X)}(\alpha)\propto N^{f_{X}(\alpha)-1+\beta}\) is large enough, i.e., \(\alpha\in\Omega=\{\alpha|f_{X}(\alpha)-1+\beta\geq 0\}\). Choosing \(\alpha_{i}\in\Omega\), one can estimate the probability to find exactly \(N^{g_{i}-1+\beta}\delta\alpha\) terms of the order of \(N^{-\alpha_{i}}\) in a particular realization of the sum as \(\exp\{-\big(N^{g_{i}}-N^{f_{X}(\alpha_{i})}\big)^{2}/2N^{f_{X}(\alpha_{i})+1-\beta}\delta\alpha\}\), which is double-exponentially small in \(g_{i}\), provided \(g_{i}\neq f_{X}(\alpha_{i})\). Thus, neglecting the double-exponentially small contributions, one can write a typical value of the sum as [41, 20] \[s_{typ}\propto\int_{\Omega}\mathrm{d}\alpha N^{-\alpha}N^{f_{X}(\alpha)-1+\beta}\propto N^{-\min_{\Omega}\{\alpha-f_{X}(\alpha)+1-\beta\}}, \tag{16}\] meaning that \(\alpha_{0}(S)=\min_{\Omega}\{\alpha-f_{X}(\alpha)+1-\beta\}\). In addition, one can deduce that \(f_{S}(\alpha>\alpha_{0}(S))=-\infty\), which is just a consequence of the 'zeros-suppression' effect mentioned in Sec. 3.2 in its extreme case.
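Before turning these observations into step-by-step rules, the typical-value formula of Eq. (16) can be checked directly by sampling. A minimal sketch, assuming the simplest possible input \(X\) uniform on \((0,1)\), for which \(f_X(\alpha)=1-\alpha\) for \(\alpha\geq 0\) and hence \(\alpha_0(S)=-\beta\):

```python
import numpy as np

# Monte Carlo check of Eq. (16). For X ~ Uniform(0, 1): f_X(alpha) = 1 - alpha,
# so Omega = [0, beta] and alpha_0(S) = min_Omega {alpha - f_X(alpha) + 1 - beta} = -beta.
rng = np.random.default_rng(0)
N, beta = 10**6, 0.8
n_terms = int(N**beta)
s = rng.random((100, n_terms)).sum(axis=1)   # 100 realizations of the sum, Eq. (15)

alpha_typ = -np.log(np.median(s)) / np.log(N)
print(f"predicted alpha_0(S) = {-beta:.3f}, measured = {alpha_typ:.3f}")
# The two agree up to O(1/ln N) finite-size corrections. Zeros suppression,
# f_S(alpha > alpha_0) = -inf, shows up as a tiny relative spread of s:
print(f"relative spread of s: {(s.max() - s.min()) / np.median(s):.1e}")
```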
At the same time, the values of \(\alpha\) corresponding to \(f_{X}(\alpha)+\beta<1\) with \(\alpha<\alpha_{0}(S)\) are unlikely to be represented in a typical realization of the sum even by a single term. However, in case of such an unlikely event, the whole sum will be determined by this very contribution. Thus, these rare events should be handled with the mix rule (Sec. 3.3): the probability density for such an event to contribute equals \(N^{\beta}p_{\alpha(X)}(\alpha)\sim N^{f_{X}(\alpha)+\beta-1}\ll 1\), resulting in the following graphical rules for obtaining the SFD of an extensive sum: 1. Draw \(f_{X}(\alpha)+\beta\). 2. Draw a unit-slope line having exactly one common point with \(f_{X}(\alpha)+\beta\) in the area where \(f_{X}(\alpha)+\beta\geq 1\). 3. The point \(\alpha_{0}\) where this unit-slope line crosses the horizontal level \(f=1\) is our new typical value, according to (16). There is nothing to the right of this point due to the zeros-suppression effect. 4. To the left of this point, \(f_{S}(\alpha)\) just equals \(f_{X}(\alpha)+\beta\). As one can see from this construction, all tails with a slope greater than unity eventually die as \(\beta\to\infty\), in agreement with the standard central limit theorem. In addition, in agreement with the generalized central limit theorem for the one-sided stable distributions, in order for distributions to be stable, the tails of their PDFs should decrease not faster than \(\propto s^{-2}\), which is exactly what they do, see Sec. 5 for more details. For such distributions, the unit-slope line touches \(f_{X}(\alpha)+\beta\) exactly at \(f=1\) for any \(\beta>0\); thus, \(f_{S}(\alpha)\) never develops discontinuities, and the tail never dies. We have just derived the rule for extensive summation in the graphical language. For the derivation of the same result in a more traditional mathematical fashion, see Appendix A. ## 4 Gaussian Rosenzweig-Porter model The Gaussian Rosenzweig-Porter ensemble is essentially just an ensemble of \(N\times N\) Gaussian orthogonal (or unitary) random matrices with broken rotational symmetry: \[H_{RP}=H_{0}+V,\quad[H_{0}]_{ij}=\varepsilon_{i}\delta_{ij},\;\varepsilon_{i}\in\mathcal{N}(0,1),\quad V=N^{-\gamma/2}H_{GOE/GUE}, \tag{17}\] where the elements of \(H_{GOE/GUE}\) are i.i.d. Gaussian r.v.s with zero mean and unit variance. In this section, we show how the graphical rules defined above can help self-consistently calculate the spectrum of fractal dimensions of the local density of states of the model from first principles. We do it using the cavity method in its diagonal approximation [42, 37]. The idea of the cavity method is to use an exact expression relating the diagonal elements of an \(N\times N\) Green's function \(G(z)=(z-H)^{-1}\) and an \((N-1)\times(N-1)\) reduced Green's function \(G^{(i)}(z)=(z-H^{(i)})^{-1}\), where \(H\) and \(H^{(i)}\) differ by a single site \(i\) (\(H^{(i)}\) has a small "cavity"): \[G_{ii}(z)=\left(z-\varepsilon_{i}-\sum_{j,k\neq i}V_{ij}G^{(i)}_{jk}(z)V_{ki}\right)^{-1}; \tag{18}\] here, \(z=\varepsilon-\mathrm{i}\eta\) is a complex-valued parameter with a small imaginary part \(\eta\) to ensure the existence of \(G(z)\). For \(\eta>0\), one can define a local density of states \(\nu_{i}(\varepsilon)\) as \[\nu_{i}(\varepsilon)=\frac{1}{\pi}\operatorname{Im}G_{ii}(z)=\sum_{n=1}^{N}\frac{\eta/\pi}{(\varepsilon-E_{n})^{2}+\eta^{2}}|\psi_{n}(i)|^{2}. \tag{19}\]

Figure 6: Graphical representation of the generalized central limit theorem. The summation result is shown in red.
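As a concrete companion to Eqs. (17)-(19), the model and its LDOS are easy to realize by exact diagonalization. The following is a minimal sketch; the system size, \(\gamma\), and \(\beta\) are illustrative values, and the scaling choice for \(\eta\) is motivated in the next paragraph.

```python
import numpy as np

def gaussian_rp(N, gamma, rng):
    # Eq. (17): H = diag(eps) + N^{-gamma/2} * GOE, with eps_i ~ N(0, 1);
    # off-diagonal GOE elements have zero mean and unit variance.
    a = rng.standard_normal((N, N))
    goe = (a + a.T) / np.sqrt(2.0)
    return np.diag(rng.standard_normal(N)) + N ** (-gamma / 2.0) * goe

def ldos(H, energy, eta):
    # Eq. (19): nu_i(E) as a Lorentzian-broadened sum over exact eigenpairs.
    E, psi = np.linalg.eigh(H)                 # psi[:, n] is the n-th eigenvector
    lorentz = (eta / np.pi) / ((energy - E) ** 2 + eta ** 2)
    return (np.abs(psi) ** 2 * lorentz).sum(axis=1)

rng = np.random.default_rng(1)
N, gamma, beta = 2048, 1.5, 0.3                # illustrative parameters
delta_eps = 1.0 / N                            # bulk mean level spacing scaling
nu = ldos(gaussian_rp(N, gamma, rng), energy=0.0, eta=N**beta * delta_eps)
# The typical broadening is expected to scale as Gamma ~ N^{-c} with c = gamma - 1.
```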
Following [42, 43], we choose \(\eta=N^{\beta}\delta_{\varepsilon}\) with \(0<\beta<D_{1}\) and \(\delta_{\varepsilon}\) being the mean level spacing in the corresponding part of the spectrum. Such a choice allows us to get meaningful physical results for any system in a delocalized phase. Unless otherwise noted, we consider the bulk with \(\delta_{\varepsilon}\propto N^{-1}\). The idea of the diagonal approximation is that, in the thermodynamic limit, the sum in the denominator of (18) is dominated by the diagonal elements of the reduced Green's function [42, 44]: \[G_{ii}(\varepsilon)\xrightarrow[N\to\infty]{}\left(z-\varepsilon_{i}-\sum_{j\neq i}|V_{ij}|^{2}G_{jj}^{(i)}(z)\right)^{-1}. \tag{20}\] The approximation is considered valid, providing the system is in a non-ergodic phase 6. However, the SFD's graphical algebra introduced above cannot be directly applied to this expression, as it also contains complex variables. To proceed, we need to either generalize our graphical algebra to complex random variables or write the local density of states explicitly, staying in the real domain. While the former approach is also possible (see Appendix D), for simplicity, here we employ the latter one: Footnote 6: See Appendix B for more details on the diagonal cavity method applicability conditions. \[\nu_{i}(\varepsilon)\sim\frac{\Gamma_{i}(\varepsilon)}{(\varepsilon-\varepsilon_{i})^{2}+\Gamma_{i}(\varepsilon)^{2}},\quad\Gamma_{i}(\varepsilon)=\sum_{j\neq i}|V_{ij}|^{2}\nu_{j}(\varepsilon). \tag{21}\] In this expression, we neglected the real part of the diagonal self-energy \(\Sigma_{i}=\sum_{j\neq i}|V_{ij}|^{2}G_{jj}^{(i)}(\varepsilon)\) compared to the on-site disorder amplitude 7 and omitted \(\eta\) compared to the broadening \(\Gamma_{i}(\varepsilon)\). These simplifications, again, restrict the applicability of the following results to the non-ergodic delocalized part of the phase diagram. Now, the problem of calculating the spectrum of fractal dimensions of the local density of states \(\nu_{i}(\varepsilon)\) appears as a self-consistent problem [41, 42, 29, 44], and below, we show how to solve it using the SFD graphical algebra introduced earlier. Footnote 7: The real part of the self-energy appears in the expression for the LDOS as an energy shift renormalizing the on-site energies. Hence, the shift is not important until the renormalization affects the model’s bandwidth in the ergodic phase, i.e., until \(\gamma\leq 1\). This can be seen just by comparing the bandwidth \(N^{(1-\gamma)/2}\) of the hopping matrix \(V\) to the on-site disorder amplitude, which is \(1\). First, let's consider the broadening \(\Gamma_{i}(\varepsilon)\). Since the broadening can be expressed as an extensive sum, its SFD possesses all the characteristic properties of extensive sums, allowing us to guess the SFD almost completely. Indeed, since this is an extensive sum, there can be nothing to the right of its typical value, i.e., \(f_{\Gamma_{i}}(\alpha>\alpha_{0}(\Gamma_{i}))=-\infty\). On the other hand, the terms entering the sum do not correspond to any fat-tailed distribution: \(V_{ij}\) is Gaussian by definition, and the LDOS is bounded from above by the inverse amplitude of the on-site disorder, which is of the order of \(N^{0}\). This means that there can be nothing to the left of the broadening's typical value either.
As a result, its SFD should look like \[\Gamma_{i}: \tag{22}\] The only parameter left undetermined is the scaling of this typical value, \(\Gamma_{i}\sim N^{-c}\), which we parameterized by \(c\). The site index \(i\) was used in the previous paragraph in \(\Gamma_{i}\) and \(\nu_{i}\) to specify the random quantities corresponding to this site. Due to the statistical homogeneity, assumed by the self-consistent cavity formulation of the problem and respected by the model under consideration, further we omit this unnecessary (double) indexing and write just \(\Gamma\), or \(f_{\Gamma}(\alpha)\), or, similarly, \(p_{\nu}(x)\), where possible. Therefore, the index-free symbols \(\Gamma\), \(\nu\), etc., should be considered as shortcuts for "level broadening", "local density of states", etc. Now, having the shape of the broadening fixed, we can calculate the distribution of \(\nu\), substitute it into the definition of \(\Gamma\), and find \(c\) self-consistently. To perform the first step, one should ensure that all terms entering the expression for \(\nu\) are independent, which is not true, because \(\Gamma\) enters the expression twice. However, since, from the SFDs' point of view, \(\Gamma\) is just a constant, we can still perform this step without introducing any further complications. The calculation is depicted below:

[Pictorial SFD calculation, Eqs. (23)-(31), shown as a sequence of diagrams in the original.]
In the last equation, (31), we demonstrated the elevation of the highest blue point with the small dashed line; we will use the same notation below. Comparing (31) to (22), one can find \(c=\gamma-1\), in complete agreement with the previously known results. The ergodic and Anderson transitions then follow from the equations \(c=0\) (\(\Gamma\propto N^{0}\); thus, all \(\nu_{i}\) are roughly of the same amplitude) and \(c=1\) (\(\Gamma\propto\delta_{\varepsilon}\) and, thus, the normalization of \(\nu\) is contributed by a finite fraction of large components), giving, as expected, \[\gamma_{ET}=1,\quad\text{and}\quad\gamma_{AT}=2. \tag{32}\] While the result (27) is not new, its graphical calculation already provides some insights. For example, the orange linear region with the slope \(1/2\) between \(\alpha=\pm c\) comes from the fact that our diagonal matrix elements are homogeneous in the bulk of the distribution and, thus, this should be a general feature of many similar random matrix models 8. The only contribution of the off-diagonal elements to the final result (27) was in the form of fixing the value of \(c\), which is reflected by the different colors we use for different contributions. The shape itself was controlled only by the statistics of the on-site disorder and, in particular, by the fact that \(p_{\varepsilon_{i}}(x)\) is finite and non-zero as \(x\to\varepsilon\). Footnote 8: Recently, an interest in non-homogeneous (Cantor-set-like) on-site energy distributions has arisen [41, 45]. While, due to the above-introduced restriction \(\Gamma\gg\eta\gg\delta_{\varepsilon}\), we cannot apply our method in the localized phase, \(\gamma>2\), let us see how the Anderson localization _formally_ looks from our graphical self-consistency equation's point of view. When \(\gamma\) approaches \(2\) from below, the blue points at \(\alpha=2\gamma-2-c\) and \(\alpha=\gamma-c\) from (31) approach each other until they coincide for \(\gamma=\gamma_{AT}=2\). For \(\gamma>2\), the same picture looks like \[\sum_{j\neq i}\lvert V_{ij}\rvert^{2}\,\nu_{j}: \tag{33}\] which already contradicts our initial guess (22) for the level broadening. Nevertheless, one can try to use (33) as a new guess, which results in the self-consistency equation \(c=\gamma+c-2\). The unknown \(c\) drops out of this equation, leaving us with the only point this construction can hold for, \(\gamma=2\). And, while this attempt doesn't lead us to a correct solution, it hints at an important conclusion about the localized phase: the level broadening for \(\gamma>2\) no longer has the form (22), originating from the collective contribution of many of the sum's terms \(\lvert V_{ij}\rvert^{2}\nu_{j}\). Instead, as one may assume from the previously known results [36], it should also contain an individual contribution from the localized wave function's maximum to preserve the LDOS normalization condition. ## 5 Levy Rosenzweig-Porter model Inspired by the success of the Rosenzweig-Porter model with normally distributed off-diagonal amplitudes and by the distribution of the effective matrix elements in the Hilbert space, derived for many-body disordered models [46], a similar RP ensemble with fat-tailed Levy-distributed amplitudes was proposed [28, 29]. The main motivation was that this Levy Rosenzweig-Porter model should host the desired multifractal phase. This section reviews the statement and provides our own analysis of the proposed model.
The Levy Rosenzweig-Porter ensemble is the ensemble of random matrices (17), with the uncorrelated diagonal on-site energies \(\varepsilon_{i}\), distributed according to some narrow size-independent distribution like the normal or box distribution, and with the i.i.d. off-diagonal elements \(V_{ij}\) of typical amplitude \(N^{-\gamma/2}\), distributed according to the following PDF with the parameter \(\mu\): \[p(V_{ij})\sim\frac{2\mu N^{-\mu\gamma/2}}{|V_{ij}|^{1+\mu}}\theta\left(|V_{ij}|-N^{-\gamma/2}\right). \tag{34}\] Such a polynomial decay of the PDF tails makes the off-diagonal elements' distribution heavy-tailed for \(\mu<2\) (the variance is undefined) and fat-tailed for \(\mu<1\) (the mean of the absolute value is undefined). The SFDs of the distributions with such polynomial tails look like (35) These heavy- and fat-tail properties are precisely what led the authors of [29] to an elegant argument supporting the existence of the multifractal phase in the Levy-RP models. The argument is based on estimating the fractal dimensions \(D_{1}\) and \(D_{\infty}\) and concludes that they are not equal, meaning the wave functions are multifractal. Below, we calculate the fractal spectrum of the bulk eigenstates of the Levy Rosenzweig-Porter (Levy-RP) model, analogously to how we did it for the Gaussian Rosenzweig-Porter model in Sec. 4. As we are mostly interested in the (multi)fractal phase, which is expected [29] to exist for \(1<\mu<2\) and \(2/\mu<\gamma<2\), this is exactly the parameter range we consider. Thus, we start by defining the fractal spectrum of \(|V_{ij}|^{2}\) and trying to guess the SFD of the broadening \(\Gamma\). Rescaling (35) according to the exponentiation rule from Sec. 3.1 and using \(\alpha_{0}=\gamma/2\), we get (36) A product of \(|V_{ij}|^{2}\) and the LDOS \(\nu_{j}\) is at least as heavy-tailed as \(|V_{ij}|^{2}\). Taking also into account that \(\nu\) is normalizable and, hence, bounded, we conclude that the extensive sum \(\Gamma_{i}=\sum_{j\neq i}|V_{ij}|^{2}\,\nu_{j}\) must have the SFD of the form (37), where the right-wing tails are, as usual, eaten by the extensive number of the elements in the sum. The only unknown here, as in the Gaussian RP case from Sec. 4, is the scaling \(c\) of the typical value of \(\Gamma\). Next, to obtain the SFD of \(\nu\), we use the mix rule from Sec. 3.3. Indeed, taking for each fixed \(\Gamma\)-value the corresponding conditional distribution (27) and the distribution of 'weights', given by (37), we get (38), where the new 'zeros' part to the right of \(c\) originates from the rare realizations of \(\Gamma_{i}\) so large that \(\mathrm{Im}[G_{ii}]\propto\Gamma_{i}^{-1}\). The label with the left arrow demonstrates that \(f_{\nu}(c+0)=1-\mu c\). To finish the calculation, we write the self-consistency equation: \[|V_{ij}|^{2}\nu_{j}: \tag{39}\] \[\sum_{j\neq i}|V_{ij}|^{2}\nu_{j}: \tag{40}\] Here, \(c^{*}=\gamma-c-(1-c)\cdot 2/\mu=c\), meaning that \[\gamma_{eff}=c+1=\frac{(2+\gamma)\mu-4}{2(\mu-1)}. \tag{41}\] As one can see from (38), the SFD of the LDOS in the Levy-RP model, similarly to the Gaussian RP case, corresponds to such \(f(\alpha)\) that \(f(\alpha<\alpha_{1})=-\infty\) and, hence, it is just fractal, not multifractal.
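As an aside, the hopping distribution of Eq. (34) is elementary to sample by the inverse-CDF method, which makes a direct numerical realization of the Levy-RP ensemble straightforward; the following sketch is illustrative (the parameter values and the sign convention are our own choices).

```python
import numpy as np

def levy_rp(N, gamma, mu, rng):
    # On-site energies: narrow, size-independent distribution (here normal).
    eps = rng.standard_normal(N)
    # Hoppings per Eq. (34): |V| is Pareto with lower cutoff N^{-gamma/2} and
    # tail index mu, sampled by inverse CDF |V| = N^{-gamma/2} * u^{-1/mu}.
    u = rng.random((N, N))
    v = N ** (-gamma / 2.0) * u ** (-1.0 / mu) * rng.choice([-1.0, 1.0], size=(N, N))
    v = np.triu(v, 1)                  # keep one triangle, then symmetrize
    return np.diag(eps) + v + v.T

rng = np.random.default_rng(2)
H = levy_rp(N=1024, gamma=1.6, mu=1.8, rng=rng)
# In the fractal phase 2/mu < gamma < 2, Eq. (42) below predicts
# D_1 = mu * (2 - gamma) / (2 * (mu - 1)) = 0.45 here; convergence in N is slow.
```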
Moreover, calculating the fractal dimensions \(D_{1}\) and \(D_{\infty}\) using this SFD\({}^{9}\) and the self-consistent value of \(\gamma_{eff}\), we get \[D_{1,\infty}=1-c=2-\gamma_{eff}=\frac{\mu(2-\gamma)}{2(\mu-1)},\quad 2/\mu<\gamma<2,\quad 1<\mu<2, \tag{42}\] which coincides with the results for \(D_{1}\) from [29], but not with the ones for \(D_{\infty}\). The fractal phase with \(0<D_{1}<1\), as in the Gaussian RP case from Sec. 4, gives way to the ergodic one as soon as \(c=0\), providing \[\gamma_{ET}=2/\mu,\quad 1<\mu<2, \tag{43}\] while the localized phase appears when \(c=1\), giving \[\gamma_{AT}=2,\quad 1<\mu<2. \tag{44}\] As one can see from these equations, the purpose of introducing \(\gamma_{eff}\) was to preserve the analogy with the Gaussian RP model, as the fractal dimension takes the universal form \(D_{q>1/2}=2-\gamma_{eff}\), as well as the corresponding phase diagram (32). At this point, we are ready to draw a phase diagram of the Levy-RP model according to its LDOS SFD. To do that, in addition to what we already did, we need to explore the regions \(\mu>2\) and \(\mu<1\). The former case of \(\mu>2\) reproduces the results of the Gaussian RP: this follows from the fact that the hopping distribution ceases to be heavy-tailed, the slope \(\mu/2\) from (39) becomes larger than one, and, as a consequence, the extensive sum from (40) produces the same expression for \(c^{*}\) as in the Gaussian RP model, giving the ergodic and the Anderson localization transitions at \(\gamma_{ET}=1\) and \(\gamma_{AT}=2\) for any \(\mu>2\). The latter case of \(\mu<1\) is a bit more interesting: similarly to the situation with \(\gamma>2\) from the end of Sec. 4, \(c\) drops out of the corresponding self-consistency equation, giving the Anderson localization transition line as \(\gamma_{AT}=2/\mu\). However, the support set dimensions (42) on this line are now not zero but one, meaning that the line also corresponds to the ergodic transition. The resulting phase diagram is given in Fig. 7. We have already said that, due to the heavy-tailed distribution of the off-diagonal elements, the convergence of numerical simulations of the Levy-RP model to the thermodynamic limit is very slow, see also [41]. However, an attempt to verify our analytical prediction can be seen in Fig. 8: an extrapolation to infinite sizes [10, 36, 47] is shown by black dots, and our theoretical prediction by the thick red line(s). The meaning of the two red lines, dashed and solid, to the right of the typical value, \(\alpha>\alpha_{0}\), is the following: the dashed line is plotted according to (38), while the solid line shows the actual behavior of the SFD in this region, supported by the extrapolation of our numerical results. The reason why the calculations above failed to capture the behavior at \(\alpha>\alpha_{0}\) should originate from one of the approximations we made on the way. Based on the estimation from Appendix B, we can conclude that the diagonal cavity approximation holds in the parameter range \(2/\mu<\gamma<2\), which we defined at the beginning of Sec. 5, and its violation cannot be the reason for the inconsistency between (38) and the simulation in Fig. 8. The only other approximation we made following [29] was throwing away the real part of the self-energy \(\Sigma_{i}(\varepsilon-\mathrm{i}\eta)=\sum_{j\neq i}|V_{ij}|^{2}G_{jj}^{(i)}(\varepsilon-\mathrm{i}\eta)\) compared to the diagonal disorder.
We already said that, due to the heavy-tailed distribution of the off-diagonal elements, the convergence of numerical simulations to the thermodynamic limit of the Lévy-RP model is very slow, see also [41]. However, an attempt to verify our analytical prediction can be seen in Fig. 8: an extrapolation to infinite sizes [10, 36, 47] is shown by black dots, and our theoretical prediction by the thick red line(s). The meaning of the two red lines, dashed and solid, to the right of the typical value, \(\alpha>\alpha_{0}\), is the following: the dashed line is plotted according to (38), while the solid line shows the actual behavior of the SFD in this region supported by the extrapolation of our numerical results. The reason why the calculations above failed to capture the behavior at \(\alpha>\alpha_{0}\) should originate from one of the approximations we made on the way. Based on the estimation from Appendix B, we can conclude that the diagonal cavity approximation holds in the parameter range \(2/\mu<\gamma<2\), which we defined at the beginning of Sec. 5, and its violation cannot be the reason for the inconsistency between (38) and the simulation in Fig. 8. The only other approximation we made following [29] was throwing away the real part of the self-energy \(\Sigma_{i}(\varepsilon-\mathrm{i}\eta)=\sum_{j\neq i}|V_{ij}|^{2}G_{jj}^{(i)}(\varepsilon-\mathrm{i}\eta)\) compared to the diagonal disorder. This approximation seems reasonable as long as the typical value of \(\operatorname{Re}\Sigma_{i}\) is less than that of the on-site disorder and the self-energy distribution is less heavy-tailed than the on-site disorder. Indeed, provided the conditions above hold, the SFD of \(|\varepsilon_{i}+\operatorname{Re}\Sigma_{i}|\) will be identical to that of \(|\varepsilon_{i}|\). Strictly speaking, to prove the above statement, one should generalize the notion of SFD to distributions supported on the entire real axis (see Appendix C for details of this approach). However, here one can provide a simpler reasoning. Indeed, first, the shape of the SFD for \(|\varepsilon_{i}+\operatorname{Re}\Sigma_{i}|\) for the atypically large values of \(\operatorname{Re}\Sigma_{i}\thicksim N^{-\alpha}\), \(\alpha<\alpha_{0}(|\operatorname{Re}\Sigma_{i}|)\), trivially coincides with that of \(|\varepsilon_{i}|+|\operatorname{Re}\Sigma_{i}|\). And second, the linear decay of the SFD with the unit slope at \(\alpha>\alpha_{0}(|\operatorname{Re}\Sigma_{i}|)\) follows from the finite value of the PDF for \(|\varepsilon_{i}+\operatorname{Re}\Sigma_{i}|\) at zero, guaranteed, in turn, by the mutual independence of \(\varepsilon_{i}\) and \(\Sigma_{i}\). In the case of the Lévy-RP, the heavy-tailed hopping distribution violates the second condition because it _is_ more heavy-tailed than the on-site disorder distribution. This observation forces us to consider both real and imaginary parts of the self-energy or, equivalently, of Green's function \(G_{ii}\), making it necessary to generalize our SFD algebra of independent r.v.s to that of two correlated ones or, equivalently, to the complex-valued random variables. This broad and difficult topic is covered extensively in Appendix D, but the take-home message from that generalization is encouraging: the real part of the self-energy can only contribute to the atypically small values of the LDOS, \(\alpha>\alpha_{0}\). This fact can also be understood intuitively: while we increase the denominator of (21) keeping \(\Gamma_{i}\) fixed, we decrease the value of the LDOS. The smallest value of the LDOS we get in such a way is its typical value, which is realized by a typical value of \(|\varepsilon-\varepsilon_{i}|\thicksim O(1)\). Finally, as \(\operatorname{Re}\Sigma_{i}\) starts to dominate when it is larger than \(1\), it only contributes to the SFD part to the right of \(\alpha_{0}\). Hence, we can continue neglecting it, provided we are only interested in \(D_{q}\) with \(q>0\), given by \(\alpha\leq\alpha_{0}\).

Figure 7: Localization phase diagram for the Lévy Rosenzweig-Porter model, according to its LDOS distribution.

Figure 8: Spectrum of fractal dimensions for the local density of states in the Lévy-Rosenzweig-Porter model for \(\gamma=1.6\) and \(\mu=1.8\). For these parameters, \(\gamma_{eff}=1.55\). A thick dashed red line shows the wrong prediction \(f_{\nu}(c+0)=1-\mu c\) caused by us neglecting the real part of the self-energy. The correct prediction, \(f_{\nu}(c+0)=2-\mu\gamma/2\), is shown by the solid red line and derived in Appendix E.
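A finite-size check of the kind shown in Fig. 8 can be scripted directly: diagonalize sampled Hamiltonians, build a Lorentzian-broadened LDOS, and histogram the exponents \(\alpha=-\ln\nu/\ln N\). The sketch below assumes the sampler `levy_rp_hamiltonian` from above; the broadening \(\eta\) of a few mean level spacings is our own, fairly arbitrary, choice.

```python
import numpy as np

def ldos_sfd(N, gamma, mu, n_samples=20, bins=60, rng=np.random.default_rng(1)):
    """Finite-size estimate of f(alpha) for the bulk LDOS, as in Fig. 8.
    Assumes levy_rp_hamiltonian() from the sketch above."""
    alphas = []
    for _ in range(n_samples):
        H = levy_rp_hamiltonian(N, gamma, mu, rng)
        E, psi = np.linalg.eigh(H)
        eta = 5.0 * np.mean(np.diff(E))     # a few level spacings (assumption)
        e0 = np.median(E)                   # probe the middle of the spectrum
        lorentz = (eta / np.pi) / ((e0 - E) ** 2 + eta ** 2)
        nu = (np.abs(psi) ** 2) @ lorentz   # LDOS nu_i(e0) on every site i
        alphas.append(-np.log(nu) / np.log(N))
    hist, edges = np.histogram(np.concatenate(alphas), bins=bins, density=True)
    with np.errstate(divide="ignore"):
        f = 1.0 + np.log(hist) / np.log(N)  # f(alpha) = 1 + ln P(alpha) / ln N
    return 0.5 * (edges[1:] + edges[:-1]), f
```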
## 6 Log-normal Rosenzweig-Porter model

Another candidate to host a genuinely multifractal phase, circulating in the literature, is the so-called log-normal Rosenzweig-Porter model [16, 39, 20]. Its definition is analogous to other RP models, but the distribution of hopping matrix elements is defined as a real-valued log-normal distribution, with the parameters scaling with the system size \(N\): \[p_{V_{i\neq j}}(v)\propto\frac{1}{|v|}\exp\left\{-\frac{\ln^{2}(|v|/v_{typ})}{2p\ln\!\left(v_{typ}^{-1}\right)}\right\},\quad v_{typ}=N^{-\gamma/2}. \tag{45}\] Proceeding according to (1), we find that its SFD is parabolic, \(f_{|V_{i\neq j}|^{2}}(\alpha)=1-(\alpha-\gamma)^{2}/(4p\gamma)\), and hence truly multifractal, see Fig. 9. Indeed, if we want a truly multifractal wave function, why not start from a truly multifractal hopping distribution? As we usually do at the beginning of the graphical calculations, let's define the range of parameters we consider and guess the first step's SFD. Defining the hopping SFD by the relation \(f_{|V_{i\neq j}|^{2}}(\alpha)=1-(\alpha-\gamma)^{2}/(4a)\), let's start from considering \(1<\gamma<2\) and small \(a=p\gamma>0\). The 'smallness' of \(a\) will be defined later. But, considering that, for \(a\to 0\), the hopping distribution approaches a narrow distribution with all moments well-defined, it is reasonable to assume that the LN-RP LDOS in this regime will be close to the Gaussian RP LDOS. With this idea in mind, let's start from the Gaussian RP LDOS SFD (27) and multiply it by the squared hopping element, with the log-normal distribution: \[|V_{ij}|^{2}\,\nu_{j}: \tag{46}\] \[\sum_{j\neq i}|V_{ij}|^{2}\,\nu_{j}: \tag{47}\] The gray dot located at \(X=\gamma-c-2a\) marks the point on the SFD (46) where its slope is equal to one. The tangent line with a unit slope, touching this point, crosses the level \(f(\alpha)=1\) at the point \(c^{*}=X-(1-c-a)\). Substituting \(c^{*}\to c\), we get \[c=\gamma-a-1\text{ and }\gamma_{eff}=\gamma-a. \tag{48}\] From the shape of the SFD for the level broadening, we see that the resulting self-consistent LDOS SFD has the RP-like shape for \(\alpha<\gamma_{eff}\). At the same time, for \(\alpha>\gamma_{eff}\) it should show a very low probability of zeros. This is the only place where the parabolic input reveals itself. Formally speaking, this is already multifractality for \(D_{q}\) with negative \(q\), but from the physical application perspective and according to the definition we gave at the end of Sec. 2, we consider it as a trivial fractal phase. From Eq. (48), one can find the part of the phase diagram, Fig. 10. Indeed, the case of \(c=0\) corresponds to the level broadening of the order of the entire bandwidth, which should correspond to the ergodic transition. This gives \[\gamma_{ET}=a+1\quad\Leftrightarrow\quad\gamma_{ET}=1/(1-p),\quad p<1/2. \tag{49}\] In the r.h.s. we have used the standard substitution \(a=p\gamma\). Let's now consider the question of the smallness of the parameter \(a\). The geometric construction pictured in (47) holds only as long as \(X>c^{*}\). This implies the restriction \(\gamma<2\).
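Before moving on, note that the parabolic input (45) itself can be verified by direct sampling: with \(\ln|v|\) Gaussian of mean \(-(\gamma/2)\ln N\) and variance \(p(\gamma/2)\ln N\), the histogram of the exponents \(\alpha=-\ln|v|^{2}/\ln N\) must collapse onto \(1-(\alpha-\gamma)^{2}/(4p\gamma)\). A minimal sketch (the parameter values are arbitrary):

```python
import numpy as np

N, gamma, p = 2 ** 14, 1.5, 0.4
rng = np.random.default_rng(2)
lnN = np.log(N)

# Eq. (45): ln|v| is Gaussian with mean ln(v_typ) = -(gamma/2) ln N
# and variance p * ln(1/v_typ) = p * (gamma/2) * ln N
ln_v = rng.normal(-0.5 * gamma * lnN, np.sqrt(0.5 * p * gamma * lnN), 10 ** 6)
alpha = -2.0 * ln_v / lnN                   # exponents of |v|^2 ~ N^(-alpha)

hist, edges = np.histogram(alpha, bins=80, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
with np.errstate(divide="ignore"):
    f_sampled = 1.0 + np.log(hist) / lnN    # empirical SFD
f_theory = 1.0 - (centers - gamma) ** 2 / (4.0 * p * gamma)
```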
Given that, we can now draw a part of our model's phase diagram, Fig. 10.

Figure 10: A part of the phase diagram for the LN-RP LDOS. The question marks signify the parameter range which we haven't yet described.

Next, let's try to move to higher values of \(\gamma\). To do that, let's start with some \(\gamma\) in the fractal phase, \(1+a<\gamma<2\), and gradually increase it until we reach the unexplored territory. When the gray dot from Eq. (47) crosses the horizontal dashed line \(f(\alpha)=1\), we can no longer determine \(c^{*}\) as a crossing point between the unit-slope tangent line and the level \(f(\alpha)=1\). Instead, we have to use the following construction: \[\sum_{j\neq i}|V_{ij}|^{2}\nu_{j}: \tag{50}\] From this we find that \(c^{*}=\gamma-c-2\sqrt{a(1-c)}\), and the self-consistent solution for \(c\) is now \[c=\frac{\gamma-a-\sqrt{a(4+a-2\gamma)}}{2}. \tag{51}\] Since the part of the SFD for the level broadening to the left of \(c\) doesn't have even a single point with a slope less than or equal to \(1/2\), the resulting SFD for the LDOS in the LN-RP model again has the shape of the LDOS SFD for the Gaussian RP, except for the part to the right of the typical value \(\alpha=c\). Note that after the substitution of \(a=p\gamma\), Eq. (51) agrees well with (49) in [39] and (C3) in [29] for \(\tau^{*}\equiv\alpha(\gamma,p)\equiv c\). The square root in the self-consistent expression (51) for \(c\) immediately tells us that the result is valid only for \(\gamma\leq 2+a/2\). From the geometrical point of view, the line \(\gamma=2+a/2\) corresponds to the situation when the orange segment of (46) touches the level \(f(\alpha)=0\). Notice a similarity with the case \(\gamma>2\), discussed at the end of Sec. 4 in the context of the Gaussian RP model: the change in geometry happening for \(\gamma>2+a/2\) calls to modify the self-consistency equation once more, but, if one tries to do it, one would quickly realize that it is impossible as \(c\) drops out of the equation, leaving us with just the expression for this borderline itself. Similarly to the discussion at the end of Sec. 4, this disappearance of \(c\) from the self-consistency equation hints that one of the basic conditions cannot be satisfied, namely, the wave-function normalization condition, leading to the emergence of the localization peak \(f_{\nu}(\alpha=-1)=0\) and signaling the Anderson transition. Thus, the expression for the Anderson transition from the fractal phase is given by \[\gamma_{AT}=2+\frac{a}{2}\quad\Leftrightarrow\quad\gamma_{AT}=\frac{2}{1-p/2}\, \tag{52}\] where the latter expression is straightforwardly derived from \(\gamma=2+a/2\) after the substitution \(a=p\gamma\). Especially intriguing is that, like in the Lévy-RP model at \(\gamma=2/\mu\) and \(\mu<1\), the fractal dimensions on the Anderson transition lines are finite, implying a discontinuity of these quantities. Given that, in the LN-RP case, the line is the borderline of the fractal phase and, hence, the borderline of our method's applicability region, we cannot rule out the possibility of having a fine-tuned multifractality right on this line. To finish the LN-RP diagram exploration, let's find the line corresponding to the ergodicity transition at \(\gamma>2\). To do that, we solve the equation \(c=0\) and get \[\gamma_{ET}=2\sqrt{a}\quad\Leftrightarrow\quad\gamma_{ET}=4p\ ; \tag{53}\] the latter form, again, is the known expression obtained from the former one by the substitution \(a=p\gamma\).
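Collecting the self-consistent exponents (48) and (51) together with the boundaries (49), (52), and (53), one gets a compact classifier for the LN-RP diagram. The sketch below uses the \((\gamma,p)\) parametrization with \(a=p\gamma\); the interface is ours.

```python
import numpy as np

def lnrp_phase(gamma, p):
    """LN-RP phase from Eqs. (49), (52), (53); inside the fractal phase,
    returns the typical LDOS exponent c from Eq. (48) or Eq. (51)."""
    a = p * gamma
    gamma_et = a + 1.0 if gamma < 2.0 else 2.0 * np.sqrt(a)   # Eqs. (49), (53)
    gamma_at = 2.0 + 0.5 * a                                  # Eq. (52)
    if gamma <= gamma_et:
        return "ergodic", 0.0
    if gamma >= gamma_at:
        return "localized", None
    if gamma < 2.0:
        return "fractal", gamma - a - 1.0                     # Eq. (48)
    return "fractal", 0.5 * (gamma - a - np.sqrt(a * (4.0 + a - 2.0 * gamma)))  # Eq. (51)
```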
As a result, summarizing Eqs. (49), (52), and (53), we obtain the full phase diagram for the LN-RP model, see Fig. 11, confirming the previous results from [16, 29, 39, 20]. It has a tricritical point analogous to the Lévy-RP case. According to [16], the case of RRG corresponds to the bisector \(a=\gamma\) (\(p=1\)) in this phase diagram, which crosses the above tricritical point \(a=\gamma=4\). And again, the diagram may only host multifractal states on the border of the localized phase. The rest of the diagram contains only localized, fractal, or ergodic phases.

Figure 11: A full LN-RP LDOS phase diagram. The dark blue line shows where the expression \(\gamma=2\sqrt{a}\) is applicable, and the red point marks the tricritical point.

But how is it possible that even the model with a multifractal distribution of its hopping elements failed to host a multifractal phase? One of the possible answers to this question lies in the observable we studied: recall that the local density of states, being an average over an extensive number of wave functions, doesn't necessarily reproduce all the details of the individual wave function distributions. Thus, as it was shown, e.g., for the Anderson model on the Cayley tree in [17, 18], the absence of multifractality in the LDOS SFD doesn't eliminate the possibility of having the multifractal statistics of the eigenstates. In order to examine this possibility of having the wave-function multifractality together with the fractality of the LDOS, we consider the relation between their SFDs in the next section.

## 7 Relation between LDOS and eigenstate distributions

We are unaware of any method as powerful as the self-consistent cavity method that would work for individual wave functions \(\psi_{n}(i)\) instead of the local density of states \(\nu_{i}(\varepsilon)\). And, while we cannot calculate the corresponding SFD \(f_{|\psi|^{2}}(\alpha)\) directly, we can infer restrictions for this function implied by the shape of \(f_{\nu}(\alpha)\). Surprisingly, this analysis can be performed for any random Hamiltonian, and the result of this section goes beyond the Rosenzweig-Porter family. By definition (19), the local density of states \(\nu_{i}(\varepsilon)\) is proportional to the average of the squared wave functions' amplitudes \(|\psi_{n}(i)|^{2}\) over the energy window \(\eta\) around the energy \(\varepsilon\). This energy window for the averaging is controlled by the Lorentzian function, meaning that its tails decrease as \((\varepsilon-E_{n})^{-2}\). First, for simplicity, let's consider a box kernel instead of the Lorentzian one and define the box LDOS \(\tilde{\nu}_{i}(\varepsilon)\) as \[\tilde{\nu}_{i}(\varepsilon)=\delta_{\varepsilon}^{-1}\left\langle|\psi_{n}(i)|^{2}\right\rangle_{\varepsilon\pm\eta}=\delta_{\varepsilon}^{-1}\frac{\sum_{n=1}^{N}|\psi_{n}(i)|^{2}\theta(\eta-|E_{n}-\varepsilon|)}{\sum_{n=1}^{N}\theta(\eta-|E_{n}-\varepsilon|)}. \tag{54}\] Later, we return back to the standard Lorentzian LDOS. From now on, let's fix a specific site \(i\) of our system. In the RP family's models, all sites are statistically equivalent, but, in general, they don't have to be. Each realization of a random Hamiltonian \(H\) will then produce a single realization of \(\tilde{\nu}_{i}(\varepsilon)=\tilde{\nu}_{i}(\varepsilon;H)\) and of the order of \(N^{\beta}=\eta/\delta_{\varepsilon}\gg 1\) realizations of \(|\psi_{n}(i)|^{2}=|\psi_{n}(i;H)|^{2}\) contributing to the value of \(\tilde{\nu}_{i}(\varepsilon;H)\), Eq. (54).
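In code, the box LDOS (54) is nothing but a flat average of squared amplitudes over the eigenstates falling into the window \(\varepsilon\pm\eta\); a minimal sketch, where we take \(\delta_{\varepsilon}\) to be the global mean level spacing (our reading of the normalization):

```python
import numpy as np

def box_ldos(E, psi, eps, eta):
    """Box-kernel LDOS of Eq. (54): delta_eps^(-1) times the average of
    |psi_n(i)|^2 over the eigenstates with |E_n - eps| < eta."""
    window = np.abs(E - eps) < eta
    delta_eps = (E.max() - E.min()) / len(E)   # mean level spacing (assumption)
    return np.mean(np.abs(psi[:, window]) ** 2, axis=1) / delta_eps
```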
Our first task is to relate the distribution of \(|\psi_{n}(i)|^{2}\) at \(|\varepsilon-E_{n}|<\eta\) to the distribution of \(\tilde{\nu}_{i}(\varepsilon)\). For this, we consider \(0<\beta\ll 1\) in order to assume that the wave functions in this small energy window are statistically equivalent.\({}^{10}\) Footnote 10: If they are not, the result still holds but has a different physical meaning. It then relates the distribution of LDOS to the marginal distribution of all eigenstate coefficients from the considered energy window. The values of \(|\psi_{n}(i;H)|^{2}\) from each realization of \(H\) may or may not be correlated. Hence, since our graphical language was developed only for independent random variables, we cannot directly apply the generalized central limit theorem from Sec. 3.5 to this case of Eq. (54). Having said that, let's consider the whole ensemble of \(H\) and a set \(\tilde{\Omega}_{\alpha}\) consisting of all \(|\psi_{n}(i;H)|^{2}\) contributing to \(\tilde{\nu}_{i}(\varepsilon;H)\) such that \(\tilde{\nu}_{i}(\varepsilon;H)=N^{-\alpha}\): \[\tilde{\Omega}_{\alpha}=\left\{|\psi_{n}(i;H)|^{2}\,\Big{|}\,|E_{n}-\varepsilon|<\eta\,,\,\tilde{\nu}_{i}(\varepsilon;H)=N^{-\alpha}\right\}. \tag{55}\] The unconditional distribution \(p_{|\psi|^{2}}(x)\) of \(|\psi_{n}(i)|^{2}\) from our energy window can be obtained from the conditional distribution \(p_{|\psi|^{2}}(x|x\in\tilde{\Omega}_{\alpha})\) of \(|\psi_{n}(i)|^{2}\in\tilde{\Omega}_{\alpha}\) by a probability chain rule as \[p_{|\psi|^{2}}(x)=\int p_{|\psi|^{2}}(x|x\in\tilde{\Omega}_{\alpha})p_{\tilde{\nu}}(N^{-\alpha})\mathrm{d}N^{-\alpha}. \tag{56}\] Transposing from the probability density functions to the corresponding spectra of fractal dimensions, we recover the mix rule from Sec. 3.3: \[f_{|\psi|^{2}}(\alpha)=\max_{\xi}\left\{f_{|\psi|^{2}}(\alpha|N^{-\alpha}\in\tilde{\Omega}_{\xi})+f_{\tilde{\nu}}(\xi)-1\right\}. \tag{57}\] This formula directly relates the SFD of the eigenstates \(f_{|\psi|^{2}}(\alpha)\) to the SFD of the box LDOS \(f_{\tilde{\nu}}(\xi)\). Now, let's consider \(f_{|\psi|^{2}}(\alpha|N^{-\alpha}\in\tilde{\Omega}_{\xi})\) and infer what it may look like. First, from the definition of \(\tilde{\Omega}_{\xi}\), Eq. (55), we know that \(|\psi|^{2}\) from our conditional distribution \(p_{|\psi|^{2}}(x|x\in\tilde{\Omega}_{\xi})\) cannot exceed \(N^{-\xi}\delta_{\varepsilon}=N^{-\xi-1}\), see Eq. (54). Indeed, otherwise, such \(|\psi|^{2}\) would give rise to \(\tilde{\nu}>N^{-\xi}\) at least for some realizations of \(H\) related to \(\tilde{\Omega}_{\xi}\). Second, we know that the point \(\alpha=\xi-\ln_{N}\delta_{\varepsilon}=\xi+1\), \(f_{|\psi|^{2}}(\alpha)=1\) belongs to \(f_{|\psi|^{2}}(\alpha|N^{-\alpha}\in\tilde{\Omega}_{\xi})\). Otherwise, we would get \(\tilde{\nu}<N^{-\xi}\) for some \(H\) contributing to our conditional distribution. Hence, the conditional SFD is constrained to have the following form: \[|\psi_{n}(i)|^{2}\in\tilde{\Omega}_{\xi}: \tag{58}\] Between these two constraints, the conditional SFD can take any possible shape allowed by our definition of SFD, \(f_{|\psi|^{2}}(\alpha|N^{-\alpha}\in\tilde{\Omega}_{\xi})\leq 1\); several examples are depicted in (58) by the thin lines of different colors.
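On a grid, the mix rule (57) is a direct max-plus convolution. A minimal numerical sketch, with the conditional SFD supplied as a model-dependent callable:

```python
import numpy as np

def mix_rule(alpha_grid, xi_grid, f_cond, f_nu):
    """Eq. (57) on a grid: f(alpha) = max_xi { f_cond(alpha, xi) + f_nu(xi) - 1 }."""
    A, X = np.meshgrid(alpha_grid, xi_grid, indexing="ij")
    return np.max(f_cond(A, X) + f_nu(X) - 1.0, axis=1)
```

Both callables are assumed to be vectorized and to return \(-\infty\) outside their supports, so that the maximum automatically discards the forbidden regions.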
Several examples of that are depicted in (58) by the thin lines of different colors. Hence, applying the mix rule to this manifold of conditional SFDs, we get that the unconditioned SFD \(f_{|\psi|^{2}}(\alpha)\) can differ from the corresponding SFD \(f_{\psi}(\alpha)\) of the box LDOS, Eq. (54), only in the region \(\alpha>\alpha_{0}(\tilde{\nu})\). To the left of this point, these two SFDs must coincide 11. Footnote 11: Here we have considered the convex \(f_{\gamma}(a)\). For non-convex ones, the difference may appear to the right, \(a>\alpha_{*}\), of each maximum \(f_{\gamma}(a_{*})=f_{*}\), but with the deviated values \(f_{|\psi|^{2}}(\alpha)\leq f_{*}\) Finally, let's return back to the proper LDOS with the Lorentzian kernel. It differs from the box LDOS, which we have just examined, because it can be dominated, in principle, by the wave functions outside the energy window \(\varepsilon\pm\eta\). This possibility lifts the restriction for the point \(\{1+\xi,1\}\) to belong to \(f_{|\psi|^{2}}(\alpha|N^{-\alpha}\in\tilde{\Omega}_{\xi})\) for all \(\xi\) except \(\xi=\alpha_{1}(\nu)-1\equiv D_{1}(\nu)-1\). Indeed, the latter is just a consequence of the wave-function normalization condition. Thus, we arrive at the following conclusion about the relation between the distributions of the eigenfunctions and the local density of states: \[f_{|\psi|^{2}}(D_{1})=f_{\gamma}(D_{1}-1),\quad\text{and}\quad f_{|\psi|^{2}}( \alpha)\leq f_{\gamma}(\alpha-1),\quad\alpha<\alpha_{0}(\nu). \tag{59}\] For the log-normal and Levy Rosenzweig-Porter models, it means that \(D_{q}(|\psi|^{2})=D_{q}(\nu)\) for \(q\geq 1/2\). This conclusion follows from the fact that, for both of these models, \(f_{\gamma}(\alpha<D_{1}(\nu))=-\infty\), while the derivative of \(f_{\gamma}(\alpha)\) at \(\alpha=D_{1}(\nu)+0\) is equal to \(1/2\). For most practical applications, where only \(q>1/2\) matters, these models show only fractal, but not multifractal properties. ## 8 Absence of multifractality in Rosenzweig-Porter models As one may have already guessed from the previously considered models, the finding of a multifractal phase hosted by an RP model is far from trivial. In this section, we will prove that it is, in fact, impossible. Before proceeding to the proof itself, let's focus on an important limitation of our method. Indeed, as one can guess, the above-developed method is insensitive to the changes in the PDF that do not affect the SFD. In this sense, all the sparse graph models and the standard Anderson models on the lattices cannot be described by this method, as the localization transition and the fractality are not governed by the scaling with the system size \(N\). As a remarkable example, let's consider a random Hamiltonian \(H=H_{RP}+A_{ER}\), where \(H_{RP}\) is a Gaussian RP model Hamiltonian from (17), and \(A_{ER}\) is an adjacency matrix of a random Erdos-Renyi graph, with a fluctuating finite number of non-zero hopping terms of the order of unity. The hopping of this 'Erdos-Renyi-RP model' corresponds to the SFD \[|V_{ij}|^{2}: \tag{60}\] hereafter, let's assume \(\gamma>1+c\) with a certain positive \(c>0\). Note that the additional blue point at the origin in the above plot corresponds to a finite number of "neighbors" for any site of this model, connected to it by the hopping of the order one. This is given in addition to the all-to-all RP-like hopping of the scaling with the system size amplitude \(N^{-\gamma/2}\), see, e.g., [48] for the case of correlated hopping of this kind. 
Performing the calculations of the SFD for \(\nu\), one can straightforwardly show that, for \(\gamma>1+c\), _any_ SFD curve respecting the Mirlin-Fyodorov symmetry [49], \(f_{\nu}(\alpha)=\alpha+f_{\nu}(-\alpha)\), which is finite only on the support \(|\alpha|<c\), i.e., \(f_{\nu}(|\alpha|>c)=-\infty\), satisfies the self-consistent cavity equation for such hopping. For example, one can take the parabolic one \(f_{\nu}(\alpha)=1-(\alpha-c)^{2}/4c\) for \(|\alpha|<c\): (61) What's the catch? As we have mentioned above, for the Anderson model on sparse random graphs, the hopping scaling is not the only thing that matters for the localization and ergodicity breaking. In principle, different on-site energy distributions with the same SFD will lead to different LDOS SFDs, see, e.g., [15] for the RRG. Analogously, the different lattice dimensionalities of the standard Anderson model drastically change the localization diagram [50, 51]. Thus, from the perspective of Laplace's method in its leading order, the problem is ill-defined for such models, as it is not the SFD of the off-diagonal matrix elements and the \(N\)-scaling, but the PDFs and prefactors that resolve the localization phase diagram. This leads to the solution's ambiguity within the above graphical method and its inapplicability to such problems. In the following paragraphs, we focus on the models where a multifractal segment in the LDOS SFD necessarily originates from the hopping SFD. This assumption implies that the resulting solution is solely determined by the SFDs of hopping terms and on-site energies. That being said, let's prove that conventional RP-like models, i.e., the models differing from the Gaussian RP only by the distribution of the i.i.d. uncorrelated hopping elements, without the Erdős-Rényi component, cannot host any multifractal phase. To do that, we are going to exploit an anatomical approach similar to what we used in Sec. 7: we assume the solution found, trace back its features to the input distributions, and conclude whether such a self-consistent solution can actually exist or not. So, for concreteness, let's start with the shape of the LDOS SFD given, e.g., by (61). As will be shortly seen, this particular choice doesn't affect the argument. Being a part of the iteration procedure, this shape ought to originate from mixing together different Gaussian-RP-like Lorentzian shapes of the LDOS (27) corresponding to different fixed values of \(\Gamma\): please see the orange dashed line in (62) for \(\Gamma\thicksim N^{-\xi}\) and recall, e.g., how we obtained (38). Schematically, this inheritance can be illustrated by (62); here, the blue part of the LDOS SFD corresponds to the blue part of the broadening SFD, and, since, due to the Mirlin-Fyodorov symmetry, the magenta part of \(f_{\nu}(\alpha)\) is controlled by the same blue part of \(f_{\Gamma}(\alpha)\), the thin differently-colored curves of the SFD for the broadening demonstrate the unimportance of \(f_{\Gamma}(\alpha<0)\), as it can only affect \(f_{\nu}(\alpha>c)\). The fact that the blue region of the broadening SFD produces the identical region on the LDOS SFD is due to the derivative of \(f_{\Gamma}(\alpha)\) in this region being smaller than \(1/2\). Indeed, as soon as it becomes larger than \(1/2\), the corresponding contribution becomes subdominant with respect to the contribution of independent on-site energies, leading to the orange straight lines \(\propto\alpha/2\) of the Poisson distribution we have already got used to.
In its turn, the broadening SFD is obtained from the SFD of \(|V_{ij}|^{2}\,\nu_{j}\) by the extensive summation according to Sec. 3.5. Because of the zeros suppression effect, its tail, \(f_{\Gamma}(\alpha<\alpha_{0}(\Gamma))\), may only originate from the tail of \(|V_{ij}|^{2}\,\nu_{j}\) such that, for \(\alpha<c\), \(f_{\Gamma}(\alpha)=f_{|V|^{2}\nu}(\alpha)+1\). In our case, it must look like \[|V_{ij}|^{2}\,\nu_{j}: \tag{63}\] The only multifractal segment in \(f(\alpha)\) we have found is beyond the point with the tangent slope \(1/2\), \(\alpha>\alpha_{1/2}\). This corresponds to the small or even negative moments \(q<1/2\) of the fractal dimensions \(D_{q}\). Having these calculations, we have managed to track back the origin of all parts of the spectrum of fractal dimensions and concluded that the uncorrelated models with i.i.d. hopping terms and conventional Poisson disorder can host only fractal phases. Statistically non-homogeneous distributions of the on-site disorder (like in [41]) may lead to a zero fraction of multifractal states, but the question of whether it can lead to the formation of an entire multifractal phase remains open.
Another possibility for creating genuine multifractality is adding hopping-term [52] and on-site disorder correlations [53, 54]. The latter, though, does not exclude non-trivial and anomalously slow dynamics in Rosenzweig-Porter models, see, e.g., [28, 39], which has a direct application to many-body disordered systems close to the MBL transition. The generalization of the developed graphical method to the effects of correlations of the local density of states may become a good way to map such many-body systems to their random-matrix proxies. In this direction, one of the most prominent directions is to focus on the frozen dynamical phase, suggested in [39], where the return probability of the wave-packet spreading can get stuck after some finite-time evolution. Another direction to look at with the developed method is to consider a generic multifractal Rosenzweig-Porter model, obeying the RRG symmetry (see Eq. (6) in [39]), and to focus on the origin of the tricritical point found in the phase diagrams of the Lévy and log-normal RP models. Finally, since our approach relies on Laplace's method of approximate integration, it does not only lead to a self-consistent solution in the thermodynamic limit \(N\to\infty\) (which has been done in this paper) but also may allow calculating sub-leading orders of the finite-size scaling for \(N\gg 1\). In this case, the cavity equation is not supposed to be solved self-consistently but to be viewed as a generator of the RG flow going to the known self-consistent fixed point. Among others, this point of view provides a way to analyze how to minimize the finite-size effects and speed up the convergence of numerical methods.

## Acknowledgements

We are grateful to V. E. Kravtsov for fruitful discussions and to him together with B. L. Altshuler and L. B. Ioffe for the works on the related topics.

Funding information: I. M. K. acknowledges the support by the Russian Science Foundation, Grant No. 21-12-00409.
2309.08445
Limiting absorption principles and linear inviscid damping in the Euler-Boussinesq system in the periodic channel
We consider the long-time behavior of solutions to the two dimensional non-homogeneous Euler equations under the Boussinesq approximation posed on a periodic channel. We study the linearized system near a linearly stratified Couette flow and prove inviscid damping of the perturbed density and velocity field for any positive Richardson number, with optimal rates. Our methods are based on time-decay properties of oscillatory integrals obtained using a limiting absorption principle, and require a careful understanding of the asymptotic expansion of the generalized eigenfunction near the critical layer. As a by-product of our analysis, we provide a precise description of the spectrum of the linearized operator, which, for sufficiently large Richardson number, consists of an essential spectrum (as expected according to classical hydrodynamic problems) as well as discrete neutral eigenvalues (giving rise to oscillatory modes) accumulating towards the endpoints of the essential spectrum.
Michele Coti Zelati, Marc Nualart
2023-09-15T14:47:20Z
http://arxiv.org/abs/2309.08445v2
Limiting absorption principles and linear inviscid damping in the Euler-Boussinesq system in the periodic channel

###### Abstract

We consider the long-time behavior of solutions to the two dimensional non-homogeneous Euler equations under the Boussinesq approximation posed on a periodic channel. We study the linearized system near a linearly stratified Couette flow and prove inviscid damping of the perturbed density and velocity field for any positive Richardson number, with optimal rates. Our methods are based on time-decay properties of oscillatory integrals obtained using a limiting absorption principle, and require a careful understanding of the asymptotic expansion of the generalized eigenfunction near the critical layer. As a by-product of our analysis, we provide a precise description of the spectrum of the linearized operator, which, for sufficiently large Richardson number, consists of an essential spectrum (as expected according to classical hydrodynamic problems) as well as discrete neutral eigenvalues (giving rise to oscillatory modes) accumulating towards the endpoints of the essential spectrum.

Key words and phrases: Inviscid damping, limiting absorption principle, Boussinesq approximation.

2020 Mathematics Subject Classification: 35Q31, 76B70, 35P05, 76E05

###### Contents

* 1 Introduction
* 2 Main ideas and outline of the article
  * 2.1 Fourier decomposition and spectral representation
  * 2.2 Notation and conventions
  * 2.3 Green's function for the Taylor-Goldstein equation
  * 2.4 Regularization of the generalized stream-functions
  * 2.5 Spectral picture
  * 2.6 Solutions to the inhomogeneous Taylor-Goldstein equation
  * 2.7 Inviscid damping estimates through the limiting absorption principle
  * 2.8 Limiting absorption principle for spectral boundary terms
* 3 Explicit solutions to the Taylor-Goldstein equation
  * 3.1 The case \(\beta^{2}\neq 1/4\)
  * 3.2 The case \(\beta^{2}=1/4\)
  * 3.3 Derivative formulae for solutions to the Taylor-Goldstein equation
* 4 Bounds on the Green's function for \(\beta^{2}\neq 1/4\)
  * 4.1 Pointwise bounds near the critical layer
  * 4.2 Estimates for \(\mathcal{G}_{m,\varepsilon}\) away from the critical layer
  * 4.3 Proof of Theorem 2
* 5 Bounds on the Green's function for \(\beta^{2}=1/4\)
  * 5.1 Estimates near the critical layer
  * 5.2 Estimates for \(\mathcal{G}_{m,\varepsilon}\) away from the critical layer
* 6 Contour integral reduction
  * 6.1 Integral reduction for \(\beta^{2}>1/4\): discrete and embedded eigenvalues
  * 6.2 Integral reduction for \(\beta^{2}<1/4\): no discrete eigenvalues
  * 6.3 Integral reduction for \(\beta^{2}=1/4\)
* 7 Bounds on solutions to the inhomogeneous Taylor-Goldstein equation
* 8 Boundary terms estimates
  * 8.1 Estimates for first order boundary terms
  * 8.2 Boundary pointwise estimates on Green's Function's derivatives
  * 8.3 Estimates for second order boundary terms
* 9 Estimates for the Generalized Stream-functions
* 10 Time-decay estimates
* A Properties of the Whittaker functions
  * A.1 Basic definitions and asymptotic expansions
  * A.2 Lower bounds for Whittaker functions
  * A.3 Growth bounds and comparison estimates for \(\beta^{2}>1/4\)
  * A.4 Growth bounds and comparison estimates for \(\beta^{2}=1/4\)
  * A.5 Growth bounds and comparison estimates for \(\beta^{2}<1/4\)
## 1. Introduction

Under the Boussinesq approximation, the motion of an incompressible, non-homogeneous, inviscid fluid is described by the Euler equations \[(\partial_{t}+\tilde{\mathbf{v}}\cdot\nabla)\tilde{\omega} =-\mathfrak{g}\partial_{x}\tilde{\rho}, \tag{1.1}\] \[(\partial_{t}+\tilde{\mathbf{v}}\cdot\nabla)\tilde{\rho} =0,\] where \(\tilde{\mathbf{v}}=\nabla^{\perp}\Delta^{-1}\tilde{\omega}\) denotes the velocity field of the fluid with vorticity \(\tilde{\omega}=\nabla^{\perp}\cdot\tilde{\mathbf{v}}\) and density \(\tilde{\rho}\), and \(\mathfrak{g}\) is the gravity constant. In the periodic channel \(\mathbb{T}\times[0,1]\), we are interested in the linear asymptotic stability of the special equilibrium solution \[\bar{\mathbf{v}}=(y,0),\qquad\bar{\rho}(y)=1-\vartheta y,\qquad\partial_{y}p=-\mathfrak{g}\bar{\rho}(y), \tag{1.2}\] which describes a Couette flow that is linearly stratified by a density with slope \(\vartheta>0\). We introduce the perturbed velocity \(\tilde{\mathbf{v}}=\bar{\mathbf{v}}+\mathbf{v}\) and density profile \(\tilde{\rho}=\bar{\rho}+\vartheta\rho\), and define the corresponding vorticity perturbation \(\omega=\nabla^{\perp}\cdot\mathbf{v}\). After neglecting the nonlinear terms, the linearized Euler-Boussinesq system (1.1) near (1.2) can be written as \[\begin{cases}\partial_{t}\omega+y\partial_{x}\omega=-\beta^{2}\partial_{x}\rho,\\ \partial_{t}\rho+y\partial_{x}\rho=\partial_{x}\psi,\\ \Delta\psi=\omega,\end{cases} \tag{1.3}\] with \(\psi\) being the streamfunction and \(\beta=\sqrt{\vartheta\mathfrak{g}}>0\). The understanding of the long-time dynamics of solutions to (1.3) is very much related to the spectral properties of the associated linear operator \[\mathcal{L}=\begin{pmatrix}y\partial_{x}&\beta^{2}\partial_{x}\\ -\Delta^{-1}\partial_{x}&y\partial_{x}\end{pmatrix}. \tag{1.4}\] In the setting of the periodic channel, \(\mathcal{L}\) can have quite interesting features: it has both continuous and point spectrum, with a sequence of eigenvalues accumulating to the endpoint of the spectrum. As a consequence, any asymptotic stability result requires well-prepared initial data, whose projection onto the point spectrum vanishes. We summarize the main result of this article in the following theorem. There are a few key assumptions on the initial data that we informally state in the theorem and comment on right after. **Theorem 1**.: _Let \(\beta>0\) and assume that the initial data \((\omega^{0},\rho^{0})\) vanish on the physical boundaries, are orthogonal to the subspace generated by the eigenfunctions of \(\mathcal{L}\), satisfy an orthogonality condition at the endpoints of the essential spectrum, and_ \[\int_{\mathbb{T}}\omega^{0}(x,y)\mathrm{d}x=\int_{\mathbb{T}}\rho^{0}(x,y)\mathrm{d}x=0. \tag{1.5}\] _Let \(\mathbf{v}=(v^{x},v^{y})=\nabla^{\perp}\psi=(-\partial_{y}\psi,\partial_{x}\psi)\) be the corresponding velocity field. We have the following estimates._ * _If_ \(\beta^{2}\neq 1/4\)_, let_ \(\mu=\mathrm{Re}\sqrt{1/4-\beta^{2}}\) _and_ \(\nu=\mathrm{Im}\sqrt{1/4-\beta^{2}}\)_._
Then,_ \[\|v^{x}(t)\|_{L^{2}} \lesssim\frac{1}{t^{\frac{1}{2}-\mu}}\left(\|\rho^{0}\|_{L^{2}_{x} H^{3}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{3}_{y}}\right),\] (1.6) \[\|v^{y}(t)\|_{L^{2}} \lesssim\frac{1}{t^{\frac{3}{2}-\mu}}\left(\|\rho^{0}\|_{L^{2}_{ x}H^{4}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{4}_{y}}\right),\] (1.7) \[\|\rho(t)\|_{L^{2}} \lesssim\frac{1}{t^{\frac{1}{2}-\mu}}\left(\|\rho^{0}\|_{H^{1}_{ x}H^{3}_{y}}+\|\omega^{0}\|_{H^{1}_{x}H^{3}_{y}}\right),\] (1.8) _for all_ \(t\geq 1\)_._ * _If_ \(\beta^{2}=1/4\)_, then_ \[\|v^{x}(t)\|_{L^{2}} \lesssim\frac{1+\log(t)}{t^{\frac{1}{2}}}\left(\|\rho^{0}\|_{L^{ 2}_{x}H^{3}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{3}_{y}}\right),\] (1.9) \[\|v^{y}(t)\|_{L^{2}} \lesssim\frac{1+\log(t)}{t^{\frac{3}{2}}}\left(\|\rho^{0}\|_{L^{ 2}_{x}H^{4}_{y}}+\|\omega^{0}\|_{L^{2}_{x}H^{4}_{y}}\right),\] (1.10) \[\|\rho(t)\|_{L^{2}} \lesssim\frac{1+\log(t)}{t^{\frac{1}{2}}}\left(\|\rho^{0}\|_{H^{1 }_{x}H^{3}_{y}}+\|\omega^{0}\|_{H^{1}_{x}H^{3}_{y}}\right),\] (1.11) _for all_ \(t\geq 1\)_._ **Remark 1.1** (Assumptions on data).: The assumptions on the initial data are completely natural. The vanishing at the boundary points \(y\in\{0,1\}\) is a typical requirement [20, 23], while (1.5) is inessential, as the \(x\)-average is a constant of motion for (1.3). The orthogonality to eigenfunctions of \(\mathcal{L}\) is needed to avoid oscillatory, non-decaying modes (which are present for \(\beta^{2}>1/4\), see Section 2.5). Lastly, the precise meaning of the spectral assumption at the endpoints of the essential spectrum \(\sigma_{ess}(\mathcal{L})=[0,1]\) is in condition (H) in Section 2.8 below. It requires orthogonality to certain generalized eigenfunctions that appear at \(\partial\sigma_{ess}(\mathcal{L})=\{0,1\}\). The inviscid damping estimates (1.6)-(1.11) encode the asymptotic stability of (1.3) and precisely describe the long-time dynamics. The decay is due to a combination of _mixing_ (due to the background Couette flow) and _stratification_ (due to the background density). The former has been extensively studied in the homogeneous Euler equations both at the linear level [2, 8, 13, 28, 22, 23, 35, 36, 37, 38] and at the nonlinear level [3, 19, 20, 21, 24]. In the presence of stratification, the spectral stability of the Euler-Boussinesq system has been address in the classical work of Miles [25] and Howard [17]. See [32, Section 3.2.3] for a survey on the literature regarding the spectral problem. The first work in the direction of asymptotic stability dates back to Hartman [16] in 1975, in which (1.3) on \(\mathbb{T}\times\mathbb{R}\) was solved explicitly on the Fourier side using hypergeometric functions. Moreover, it was predicted the vorticity should be unstable in \(L^{2}\), with a growth proportional to \(\sqrt{t}\). This approach was used in [33] to prove decay rates analogous to those in Theorem 1 in \(\mathbb{T}\times\mathbb{R}\). In this spatial setting, a different approach based on an energy method in Fourier space was used in [4] to prove both inviscid damping and instability in the spectrally stable regime \(\beta^{2}>1/4\), confirming the predictions of [16]. The analysis has been extended in the full nonlinear setting in [1]. A third proof of linear inviscid damping on \(\mathbb{T}\times\mathbb{R}\) can be found in our companion article [7], in which the methods developed here can be used to provide explicit solutions in physical variables to (1.3). 
Our article constitutes the first result of (linear) asymptotic stability of a stably stratified shear flow for the Euler-Boussinesq equations in the periodic channel, as well as the first rigorous characterization of the spectrum of the linearized operator (1.4), and in particular of the existence of discrete neutral eigenvalues for \(\beta^{2}>1/4\). From a technical standpoint, the main difficulty lies in the stratification of the background density \(\bar{\rho}\). This manifests itself in the equation that governs the underlying spectral problem (the Taylor-Goldstein equation, see (TG) below), which becomes more singular than the usual Rayleigh equation for inviscid homogeneous fluids. This work also connects with the global well-posedness for the Euler-Boussinesq equations and, by extension, for the axisymmetric 3d Euler equations. Certain solutions to the Euler-Boussinesq and 3d Euler equations are known to blow up in finite time, see the ground-breaking work of Elgindi [9] and related works [10, 5, 11]. On the other hand, there are examples where inviscid damping plays a key role in proving global well-posedness for the 3d Euler equations and for the inhomogeneous 2d Euler equations, see [14] and [34, 6], respectively. In the case of Euler-Boussinesq near stratified shear flows, a long-time existence result relying on inviscid damping estimates can be found in [1].

## 2. Main ideas and outline of the article

In this section, we give a brief account of the strategy of proof of Theorem 1, recording the main steps that will then be expanded in the subsequent sections, and providing a quick reasoning behind the assumptions of Theorem 1 on the initial data. We focus on the case \(\beta^{2}\neq 1/4\) for the sake of clarity. When \(\beta^{2}=1/4\), the strategy is the same, but the statements of the main results typically differ by a logarithmic correction, and we prefer to postpone them to the relevant Section 5. We also set some of the notation and assumptions that will be used throughout the manuscript.

### Fourier decomposition and spectral representation

The setting of the periodic channel \(\mathbb{T}\times[0,1]\) considered in this article poses new challenges, as it forbids the use of Fourier methods in the vertical direction \(y\). However, we can decouple (1.3) in Fourier modes in \(x\in\mathbb{T}\), writing \[\omega=\sum_{m\in\mathbb{Z}}\omega_{m}(t,y)\mathrm{e}^{imx},\qquad\rho=\sum_{m\in\mathbb{Z}}\rho_{m}(t,y)\mathrm{e}^{imx},\qquad\psi=\sum_{m\in\mathbb{Z}}\psi_{m}(t,y)\mathrm{e}^{imx},\] so that \[(\partial_{t}+imy)\omega_{m}=-im\beta^{2}\rho_{m},\qquad(\partial_{t}+imy)\rho_{m}=im\psi_{m},\] for each \(m\in\mathbb{Z}\), with \[\begin{cases}\Delta_{m}\psi_{m}=\omega_{m},\\ \psi_{m}|_{y=0,1}=0,\end{cases}\qquad\Delta_{m}:=\partial_{y}^{2}-m^{2}.\] The modes corresponding to the \(x\)-average, namely when \(m=0\), are clearly conserved and therefore we will not consider them further (cf. (1.5)). Moreover, since \(\omega\) and \(\rho\) are real-valued, we necessarily have that \(\overline{\omega_{-m}}=\omega_{m}\) and \(\overline{\rho_{-m}}=\rho_{m}\). Without loss of generality, we take \(m\geq 1\).
For our purposes, it is more convenient to write (1.3) in the compact stream-function formulation \[\partial_{t}\begin{pmatrix}\psi_{m}\\ \rho_{m}\end{pmatrix}+imL_{m}\begin{pmatrix}\psi_{m}\\ \rho_{m}\end{pmatrix}=0,\] and directly obtain its solution as \[\begin{pmatrix}\psi_{m}\\ \rho_{m}\end{pmatrix}=\mathrm{e}^{-imL_{m}t}\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix},\] where \(L_{m}\) is the linear operator defined by \[L_{m}=\begin{pmatrix}\Delta_{m}^{-1}(y\Delta_{m})&\beta^{2}\Delta_{m}^{-1}\\ -1&y\end{pmatrix}. \tag{2.1}\] Using Dunford's formula [12, 27], we have that \[\begin{pmatrix}\psi_{m}(t,y)\\ \rho_{m}(t,y)\end{pmatrix}=\frac{1}{2\pi i}\int_{\partial\Omega}\mathrm{e}^{-imct}(c-L_{m})^{-1}\begin{pmatrix}\psi_{m}^{0}(y)\\ \rho_{m}^{0}(y)\end{pmatrix}\,\mathrm{d}c, \tag{2.2}\] where \(\Omega\) is any domain containing the spectrum \(\sigma(L_{m})\). Under suitable conditions on the initial data (see Proposition 6.1 below), we can reduce the contour of integration to \[\begin{pmatrix}\psi_{m}(t,y)\\ \rho_{m}(t,y)\end{pmatrix}=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\left[(-y_{0}-i\varepsilon+L_{m})^{-1}-(-y_{0}+i\varepsilon+L_{m})^{-1}\right]\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\,\mathrm{d}y_{0}. \tag{2.3}\] In particular, the contour integral along the essential spectrum of \(L_{m}\), \(\sigma_{ess}(L_{m})=[0,1]\), is the only non-trivial contribution from \(\sigma(L_{m})\) to Dunford's formula. For \(\varepsilon>0\), we denote \[\begin{pmatrix}\psi_{m,\varepsilon}^{\pm}(y,y_{0})\\ \rho_{m,\varepsilon}^{\pm}(y,y_{0})\end{pmatrix}:=(-y_{0}\pm i\varepsilon+L_{m})^{-1}\begin{pmatrix}\psi_{m}^{0}(y)\\ \rho_{m}^{0}(y)\end{pmatrix} \tag{2.4}\] and obtain the coupled system of equations \[\omega_{m}^{0}(y) =(y-y_{0}\pm i\varepsilon)\Delta_{m}\psi_{m,\varepsilon}^{\pm}(y,y_{0})+\beta^{2}\rho_{m,\varepsilon}^{\pm}(y,y_{0}),\] \[\rho_{m}^{0}(y) =(y-y_{0}\pm i\varepsilon)\rho_{m,\varepsilon}^{\pm}(y,y_{0})-\psi_{m,\varepsilon}^{\pm}(y,y_{0}).\] We first solve \[\rho_{m,\varepsilon}^{\pm}(y,y_{0})=\frac{1}{y-y_{0}\pm i\varepsilon}\left(\rho_{m}^{0}(y)+\psi_{m,\varepsilon}^{\pm}(y,y_{0})\right) \tag{2.5}\] and from there we obtain the following inhomogeneous _Taylor-Goldstein equation_ for \(\psi_{m,\varepsilon}^{\pm}\), \[\Delta_{m}\psi_{m,\varepsilon}^{\pm}+\beta^{2}\frac{\psi_{m,\varepsilon}^{\pm}}{(y-y_{0}\pm i\varepsilon)^{2}}=\frac{\omega_{m}^{0}}{y-y_{0}\pm i\varepsilon}-\beta^{2}\frac{\rho_{m}^{0}}{(y-y_{0}\pm i\varepsilon)^{2}},\] (TG) along with homogeneous Dirichlet boundary conditions at \(y=0,1\).

### Notation and conventions

Throughout the manuscript, we assume \(\beta>0\) and \(m\geq 1\). We say that \(A\lesssim B\) when there exists \(C>0\) such that \(A\leq CB\). Also, for \(j\geq 0\) we define \[Q_{j,m}=\|\rho_{m}^{0}\|_{H_{y}^{j+2}}+\|\omega_{m}^{0}\|_{H_{y}^{j+2}},\] to quantify the regularity requirements on the initial data.

### Green's function for the Taylor-Goldstein equation

Solutions to (TG) are fundamental objects of study of this work. They can be constructed via the classical method of Green's functions, by first solving the _homogeneous_ Taylor-Goldstein equation \[\text{TG}_{m,\varepsilon}^{\pm}\phi=0,\qquad\text{TG}_{m,\varepsilon}^{\pm}:=\Delta_{m}+\frac{\beta^{2}}{(y-y_{0}\pm i\varepsilon)^{2}},\] (TGh) for \(y\in(0,1)\). We refer to \(\text{TG}_{m,\varepsilon}^{\pm}\) as the _Taylor-Goldstein operator_.
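Although the article proceeds analytically, the inhomogeneous problem (TG) is also amenable to a quick numerical check: discretize \(\Delta_{m}\) by finite differences with Dirichlet conditions, add the singular potential, and invert. A minimal sketch of such a solver (the grid size and the choice of the \(+\) branch are arbitrary):

```python
import numpy as np

def solve_tg(omega0, rho0, m, beta, y0, eps, n=2000):
    """Finite-difference solution of (TG) on (0,1) with Dirichlet conditions,
    on the '+' branch. omega0, rho0: callables for the initial data modes."""
    y = np.linspace(0.0, 1.0, n + 2)[1:-1]              # interior grid
    h = y[1] - y[0]
    lap = (np.diag(np.ones(n - 1), -1) - 2.0 * np.eye(n)
           + np.diag(np.ones(n - 1), 1)) / h ** 2
    s = y - y0 + 1j * eps
    TG = lap - m ** 2 * np.eye(n) + np.diag(beta ** 2 / s ** 2)
    rhs = omega0(y) / s - beta ** 2 * rho0(y) / s ** 2  # source of (TG)
    return y, np.linalg.solve(TG, rhs)
```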
As in the statement of Theorem 1, we define throughout the article the numbers \[\mu=\operatorname{Re}\left(\sqrt{1/4-\beta^{2}}\right),\qquad\nu=\operatorname {Im}\left(\sqrt{1/4-\beta^{2}}\right), \tag{2.6}\] and we denote by \(\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\) the Green's function of the Taylor-Goldstein equation, which satisfies \[\text{TG}_{m,\varepsilon}^{\pm}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)= \delta(y-z). \tag{2.7}\] While \(\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\) has an explicit expression, reported in Proposition 3.1, we record its key properties in the following result.

**Theorem 2**.: _Let \(\beta^{2}\neq 1/4\). There exists \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\) and for all \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\), we have_ \[|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}+\mu}\|\mathcal{G}_{m,\varepsilon}^{\pm} (y,y_{0},\cdot)\|_{L_{z}^{2}}+|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}+\mu}\|\partial _{y}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},\cdot)\|_{L_{z}^{2}}\lesssim\frac {1}{m^{1+\mu}}.\] The theorem provides sharp bounds on the Green's function near the _critical layer_ \(y=y_{0}\), where (TGh) is singular and (TG) has a regular singular point. The scale of the problem is crucially determined by \(\beta\) and \(m\). The proof of Theorem 2 is carried out in Section 4, while the analogous result for \(\beta^{2}=1/4\) is stated in Theorem 5 and proven in Section 5. Both proofs are based on the asymptotic properties of Whittaker functions [31], whose main features can be found in Appendix A.

### Regularization of the generalized stream-functions

The source term of (TG) is, a priori, too singular for \(\psi_{m,\varepsilon}^{\pm}\) to be obtained by directly applying the Green's function to (TG). However, the singularity of the source term is no worse than \(\frac{\beta^{2}}{(y-y_{0}\pm i\varepsilon)^{2}}\), which is precisely the potential of the Taylor-Goldstein operator (TGh). Then, (TG) may be written as \[\text{TG}_{m,\varepsilon}^{\pm}\psi_{m,\varepsilon}^{\pm}=\text{TG}_{m,\varepsilon}^{\pm }\left(\frac{1}{\beta^{2}}(y-y_{0}\pm i\varepsilon)\omega_{m}^{0}-\rho_{m}^{ 0}\right)+\Delta_{m}\big{(}\rho_{m}^{0}(y)-\frac{1}{\beta^{2}}(y-y_{0}\pm i \varepsilon)\omega_{m}^{0}(y)\big{)}.\] Hence, for \(z,y_{0}\in[0,1]\) and \(0\leq\varepsilon\leq 1\), define \[F_{m,\varepsilon}^{\pm}(z,y_{0}):=\Delta_{m}\rho_{m}^{0}(z)-\frac{1}{\beta^{2 }}\Delta_{m}\big{(}(z-y_{0}\pm i\varepsilon)\omega_{m}^{0}(z)\big{)} \tag{2.8}\] and note that, since the pair of initial data vanish on the physical boundaries \(y=0\) and \(y=1\), the solution \(\psi_{m,\varepsilon}^{\pm}(y,y_{0})\) to (TG) is given by \[\psi_{m,\varepsilon}^{\pm}(y,y_{0})=\frac{1}{\beta^{2}}(y-y_{0}\pm i \varepsilon)\omega_{m}^{0}(y)-\rho_{m}^{0}(y)+\varphi_{m,\varepsilon}^{\pm}( y,y_{0}), \tag{2.9}\] while \[\rho_{m,\varepsilon}^{\pm}(y,y_{0})=\frac{1}{\beta^{2}}\omega_{m}^{0}(y)+ \frac{1}{y-y_{0}\pm i\varepsilon}\varphi_{m,\varepsilon}^{\pm}(y,y_{0}). \tag{2.10}\] Here, \(\varphi_{m,\varepsilon}^{\pm}\) solves \[\text{TG}_{m,\varepsilon}^{\pm}\varphi_{m,\varepsilon}^{\pm}=F_{m,\varepsilon} ^{\pm} \tag{2.11}\] and is given by \[\varphi_{m,\varepsilon}^{\pm}(y,y_{0})=\int_{0}^{1}\mathcal{G}_{m,\varepsilon }^{\pm}(y,y_{0},z)F_{m,\varepsilon}^{\pm}(z,y_{0})\mathrm{d}z. \tag{2.12}\]
The main reason to write \(\psi_{m,\varepsilon}^{\pm}\) and \(\rho_{m,\varepsilon}^{\pm}\) using (2.9) and (2.10) is that now \(F_{m,\varepsilon}^{\pm}\in L_{z}^{2}\) and we can use the bounds on the Green's function \(\mathcal{G}_{m,\varepsilon}^{\pm}\) from Theorem 2 in (2.12) to estimate \(\varphi_{m,\varepsilon}^{\pm}\), and thus \(\psi_{m,\varepsilon}^{\pm}\) and \(\rho_{m,\varepsilon}^{\pm}\), near the critical layer. The introduction of \(F_{m,\varepsilon}^{\pm}\) constitutes a first example of the underlying _motif_ of inviscid damping, namely that _decay costs regularity_.

### Spectral picture

The main assumption of Theorem 1 consists in requiring that the initial data are orthogonal to the subspace generated by the eigenfunctions of \(L_{m}\). Generically speaking, (embedded) eigenvalues may constitute an obstruction to damping phenomena, as they can give rise to oscillatory modes or even growing (hence unstable) modes. The spectral picture here is quite intriguing and drastically different from that of the periodic strip. The main result on the spectrum of \(L_{m}\) is the following.

**Theorem 3**.: _Let \(\beta>0\). Then the essential spectrum of \(L_{m}\) is \(\sigma_{ess}(L_{m})=[0,1]\). Moreover,_

* _any eigenvalue \(c\in\mathbb{C}\) such that \(|\mathrm{Re}(c)-1/2|\geq 1/2\) must have \(\mathrm{Im}(c)=0\);_
* _for \(\beta^{2}>1/4\),_
  * _there are no eigenvalues \(c\in\mathbb{C}\) such that \(\mathrm{Im}(c)\neq 0\) and \(\mathrm{Re}(c)\in(0,1)\);_
  * _there are no real eigenvalues \(c\in\mathbb{R}\) such that \(c<-\beta/m\) or \(c>1+\beta/m\);_
  * _there is a countably infinite number of discrete eigenvalues \(c\in\mathbb{C}\), with \(\mathrm{Im}(c)=0\) and \(\mathrm{Re}(c)\in(-\beta/m,0)\cup(1,1+\beta/m)\). Moreover, they accumulate towards \(0\) and \(1\);_
* _for \(\beta^{2}\leq 1/4\),_
  * _there are no eigenvalues \(c\in\mathbb{C}\) such that \(\mathrm{Re}(c)\leq 0\) or \(\mathrm{Re}(c)\geq 1\);_
  * _there exists \(\varepsilon_{0}>0\) such that there are no eigenvalues \(c\in\mathbb{C}\) with \(|\mathrm{Im}(c)|\geq\beta/m\) or \(|\mathrm{Im}(c)|\leq\varepsilon_{0}\)._

The three cases outlined above are depicted in Figure 1. Unstable eigenmodes can be ruled out by the classical Miles-Howard stability criterion [17, 25] when \(\beta^{2}\geq 1/4\), so that any eigenvalue \(c\in\mathbb{C}\) of \(L_{m}\) must have \(\mathrm{Im}(c)=0\). However, spectral stability is typically not sufficient to deduce asymptotic stability. This is particularly clear when \(\beta^{2}>1/4\), for which infinitely many eigenvalues exist, corresponding to neutral (oscillatory) modes. This is a specific feature of the problem in the _periodic channel_. The same problem on the periodic strip does not have any of these modes, as the essential spectrum is the whole real line, and hence eigenvalues are "pushed away to infinity". In the periodic channel, each of these discrete eigenvalues is found to be a zero of the Wronskian of the Green's function, and this is precisely how we characterize them in Proposition 6.3. When \(\beta^{2}<1/4\), we are able to rule out the existence of eigenvalues in the proximity of the essential spectrum, which is a consequence of suitable lower bounds on the Wronskian. Nonetheless, isolated unstable eigenvalues in an intermediate region may exist in this case, although their presence does not affect the conclusion of Theorem 1 if the data are orthogonal to them. The proof of their existence is an interesting open question.
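A convenient way to visualize Theorem 3 is to discretize \(L_{m}\) from (2.1) and inspect its eigenvalues; the sketch below (our own, with illustrative parameters) does this with finite differences and Dirichlet conditions. The essential spectrum \([0,1]\) appears as a dense cluster of nearly real eigenvalues, and for \(\beta^{2}>1/4\) one should find isolated real eigenvalues in \((-\beta/m,0)\cup(1,1+\beta/m)\), with more of them emerging near \(0\) and \(1\) as the resolution `N` increases.

```python
import numpy as np

N, m, beta = 300, 1, 2.0                      # beta^2 = 4 > 1/4
y = np.linspace(0.0, 1.0, N + 2)[1:-1]
h = y[1] - y[0]
I = np.eye(N)

D2 = (np.diag(np.ones(N - 1), -1) - 2.0 * I + np.diag(np.ones(N - 1), 1)) / h**2
Dm = D2 - m**2 * I                            # Delta_m with Dirichlet conditions
Y = np.diag(y)

# L_m = [[Dm^{-1} (y Dm), beta^2 Dm^{-1}], [-1, y]], cf. (2.1)
Dm_inv = np.linalg.inv(Dm)
L = np.block([[Dm_inv @ (Y @ Dm), beta**2 * Dm_inv],
              [-I, Y]])

ev = np.linalg.eigvals(L)
discrete = ev[(ev.real < -1e-6) | (ev.real > 1.0 + 1e-6)]
print(np.sort(discrete.real))                 # discrete eigenvalues beyond [0, 1]
```

Of course, a fixed grid only resolves finitely many of the countably many discrete eigenvalues, and those closest to the endpoints appear gradually under refinement.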
The proof of Theorem 3 is postponed to Section 6. It requires an extensive analysis of the resolvent operator \((c-L_{m})^{-1}\) and of spectral integrals of the form (2.2), where the domain of integration containing the essential spectrum is carefully designed.

### Solutions to the inhomogeneous Taylor-Goldstein equation

Once the Green's function is established and (TG) is regularized through the introduction of \(F_{m,\varepsilon}^{\pm}\) and \(\varphi_{m,\varepsilon}^{\pm}\), most of the analysis on \(\psi_{m,\varepsilon}^{\pm}\) will follow from the properties of generic solutions \(\Phi_{m,\varepsilon}^{\pm}\) to the general inhomogeneous Taylor-Goldstein equation \[\text{TG}_{m,\varepsilon}^{\pm}\Phi_{m,\varepsilon}^{\pm}=f,\] (TGf) for some \(f\in L^{2}\) and with boundary conditions \(\Phi_{m,\varepsilon}^{\pm}(0,y_{0})=\Phi_{m,\varepsilon}^{\pm}(1,y_{0})=0\). To formally quantify the distance to the critical layer, for \(y_{0}\in[0,1]\) and \(n\geq 1\) we introduce the nested sets \[J_{n}=\{y\in[0,1]:m|y-y_{0}|\leq n\beta\}\] and \(J_{n}^{c}=[0,1]\setminus J_{n}\). A direct consequence of Theorem 2 is the following asymptotic expansion of \(\Phi_{m,\varepsilon}^{\pm}\) near the critical layer. That is, for all \(y\in J_{3}\) we have \[|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}+\mu}|\Phi_{m,\varepsilon}^{\pm}(y,y_ {0})|+|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}+\mu}|\partial_{y}\Phi_{m, \varepsilon}^{\pm}(y,y_{0})|\lesssim\frac{1}{m^{1+\mu}}\|f\|_{L_{y}^{2}}. \tag{2.13}\]

Figure 1. The essential spectrum \(\sigma_{ess}(L_{m})=[0,1]\) is in red. Eigenvalues are denoted by \(*\). Theorem 3 shows their existence for \(\beta^{2}>1/4\), while when \(\beta^{2}<1/4\) we can only discern that they do not exist close to the essential spectrum.

Using the entanglement inequality \[\|\partial_{y}\Phi^{\pm}_{m,\varepsilon}\|^{2}_{L^{2}_{y}(J^{c}_{3})}+m^{2}\|\Phi^ {\pm}_{m,\varepsilon}\|^{2}_{L^{2}_{y}(J^{c}_{3})}\lesssim m^{2}\|\Phi^{\pm}_{m,\varepsilon}\|^{2}_{L^{2}_{y}(J^{c}_{2}\cap J_{3})}+\frac{1}{m^{2}}\|f\|^{2}_{ L^{2}_{y}(J^{c}_{2})}, \tag{2.14}\] which is inspired by [18] and proved in Lemma 7.1, the localised asymptotic expansions (2.13) provide integral estimates on \(\Phi^{\pm}_{m,\varepsilon}\) away from the critical layer, \[\|\partial_{y}\Phi^{\pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}_{y}(J^{c}_{3})}+m\| \Phi^{\pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}_{y}(J^{c}_{3})}\lesssim\frac{1}{m }\|f\|_{L^{2}_{y}}. \tag{2.15}\] The precise statements and proofs of (2.13) and (2.15), as well as the corresponding versions for \(\beta^{2}=1/4\), can be found in Proposition 7.2 in Section 7.

### Inviscid damping estimates through the limiting absorption principle

The last step in the proof of Theorem 1 is a stationary phase argument to deduce decay of \(\psi_{m}\) and \(\rho_{m}\) in (2.3). As customary, it involves an integration by parts in the spectral variable \(y_{0}\) to gain time-decay from the oscillatory phase. The amount of decay that can be obtained is linked to the regularity of the generalized stream-functions \(\psi^{\pm}_{m,\varepsilon}\) in (2.4), and even more crucially to their asymptotic expansion at the critical layer (matching that of the Green's function in Theorem 2, as can be seen from (2.9) and (2.12)). Moreover, the integration leads to boundary terms at the endpoints of the spectrum that need to be treated _ad hoc_.
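As a toy model of this mechanism (our own illustration), consider the scalar oscillatory integral \(I(t)=\int_{0}^{1}\mathrm{e}^{-ity_{0}}g(y_{0})\,\mathrm{d}y_{0}\) for a smooth profile \(g\): one integration by parts in \(y_{0}\) trades one derivative of \(g\) for a factor \(t^{-1}\), up to boundary terms carried by the endpoint values \(g(0)\) and \(g(1)\). The profile and quadrature below are illustrative.

```python
import numpy as np

def oscillatory_integral(t, g, n=200_000):
    y0 = (np.arange(n) + 0.5) / n             # midpoint rule on [0, 1]
    return np.mean(np.exp(-1j * t * y0) * g(y0))

g = lambda y0: 1.0 / (1.0 + y0**2)            # smooth, non-vanishing at the endpoints
for t in [10.0, 100.0, 1000.0]:
    I = oscillatory_integral(t, g)
    print(f"t = {t:7.1f}   |I(t)| = {abs(I):.3e}   t*|I(t)| = {t * abs(I):.3f}")
# t*|I(t)| stays bounded: this is the O(1/t) decay, with the constant carried
# by the boundary values g(0), g(1) -- the analogue of the spectral boundary terms.
```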
To obtain the asymptotic expansions of \(\psi^{\pm}_{m,\varepsilon}\) near the critical layer, in Proposition 3.5 we observe that \(\partial_{y}+\partial_{y_{0}}\) commutes with the Taylor-Goldstein operator (TGh) and we deduce formulas for \(\partial_{y_{0}}\psi^{\pm}_{m,\varepsilon}\), as well as for several other derivatives with respect to both \(y\) and \(y_{0}\). These formulas involve solutions \(\Phi^{\pm}_{m,\varepsilon}\) to (TGf) for source terms \(f\) given by derivatives of \(F^{\pm}_{m,\varepsilon}\). As is clear from (2.13), the asymptotic expansions of \(\Phi^{\pm}_{m,\varepsilon}\), and in turn of \(\partial_{y_{0}}\psi^{\pm}_{m,\varepsilon}\) and related derivatives, are conditional on the \(L^{2}\) boundedness of derivatives of \(F^{\pm}_{m,\varepsilon}\), constituting a further example of the fact that decay costs regularity. Some formulas from Proposition 3.5 involve as well terms related to \(\partial_{y}\varphi^{\pm}_{m,\varepsilon}(z,y_{0})\), and higher derivatives, evaluated at the physical boundaries \(z=0\) and \(z=1\). In general, these boundary terms arise when the Taylor-Goldstein operator (TGh) acting on the \(\partial_{y}\) derivative of solutions to (TGf) is inverted, and usually they do not vanish. See Proposition 3.5 for more details. Near the critical layer, these boundary terms are studied in Section 8; some of them require the orthogonality condition (H) below as well as sufficient regularity of the initial data, see Proposition 8.3 for more details. Once the asymptotic expansions for \(\psi^{\pm}_{m,\varepsilon}\) near the critical layer are established via Proposition 3.5 and Proposition 7.2, these are used through the entanglement inequality (2.14) to derive the regularity estimates of \(\psi^{\pm}_{m,\varepsilon}\) away from the critical layer. Additionally, asymptotic expansions and regularity estimates for \(\rho^{\pm}_{m,\varepsilon}\) are deduced accordingly thanks to (2.10). The precise statements and proofs are found in Section 9. Both the asymptotic expansions and the regularity estimates are uniform in \(\varepsilon\) sufficiently small, so that the limiting functions in (2.3) retain the same properties.

### Limiting absorption principle for spectral boundary terms

The stationary phase argument employed in the proof of Theorem 1 requires an integration by parts in the spectral variable \(y_{0}\) in (2.3) for \(\psi_{m}\) that involves spectral boundary terms evaluated at \(y_{0}=0\) and \(y_{0}=1\). These boundary terms are \[-\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\left[\mathrm{e}^{- imy_{0}t}\left(\psi^{-}_{m,\varepsilon}(y,y_{0})-\psi^{+}_{m, \varepsilon}(y,y_{0})\right)\right]_{y_{0}=0}^{y_{0}=1}. \tag{2.16}\] For \(y_{0}=0\), from (2.9) and (2.12) we note that \[\begin{split}\psi^{-}_{m,\varepsilon}(y,0)-\psi^{+}_{m, \varepsilon}(y,0)=&-\frac{2i\varepsilon}{\beta^{2}}\omega^{0}_{m}+ \int_{0}^{1}\left(\mathcal{G}^{-}_{m,\varepsilon}(y,0,z)-\mathcal{G}^{+}_{m, \varepsilon}(y,0,z)\right)F_{m}(z,0)\mathrm{d}z\\ &+\frac{i\varepsilon}{\beta^{2}}\int_{0}^{1}\left(\mathcal{G}^{- }_{m,\varepsilon}(y,0,z)+\mathcal{G}^{+}_{m,\varepsilon}(y,0,z)\right)\Delta_ {m}\omega^{0}_{m}\mathrm{d}z,\end{split} \tag{2.17}\] where \(F_{m}(z,0)=F_{m,0}^{\pm}(z,0)\).
Moreover, for \(\beta^{2}>\frac{1}{4}\), from Lemma 6.11, there exist \(\varepsilon_{0}>0\) and \(C_{\varepsilon}\geq C_{0}>0\) such that \[\left|\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)-\mathcal{G}_{m,\varepsilon}^{+}( y,0,z)-C_{\varepsilon}\phi_{u,m}(y)\phi_{u,m}(z)\right|\lesssim\varepsilon^{ \frac{1}{2}}, \tag{2.18}\] for all \(\varepsilon\leq\varepsilon_{0}\) and uniformly in \(y,z\in[0,1]\). Here, \(\phi_{u,m}\), given by (3.8), denotes the generalized eigenfunction associated to the generalized eigenvalue \(y_{0}=0\). Analogous expressions to (2.17) and (2.18) can be deduced for the boundary term associated to \(y_{0}=1\), now involving \(\phi_{l,m}\), the generalized eigenfunction associated to the generalized eigenvalue \(y_{0}=1\) and given by (3.9). In view of (2.18), for (2.16) to vanish we require the initial data \((\omega_{m}^{0},\rho_{m}^{0})\) to be such that \[\int_{0}^{1}\phi_{u,m}(z)F_{m}(z,0)\mathrm{d}z=\int_{0}^{1}\phi_{l,m}(z)F_{m}( z,1)\mathrm{d}z=0.\] (H) This is the key orthogonality assumption at the endpoints of the essential spectrum, which was discussed in Remark 1.1. Then, we are able to show

**Theorem 4**.: _We have that_ \[\lim_{\varepsilon\to 0}\left\|\psi_{m,\varepsilon}^{-}(\cdot,y_{0})-\psi_{m, \varepsilon}^{+}(\cdot,y_{0})\right\|_{L_{y}^{2}}=0,\qquad y_{0}\in\{0,1\}.\] The proof of Theorem 4 is carried out in Section 6, where (2.18) is shown in Lemma 6.11 for \(\beta^{2}>1/4\). For the case \(\beta^{2}\leq 1/4\), the difference of Green's functions at \(y_{0}=0\) and \(y_{0}=1\) vanishes as \(\varepsilon\to 0\) and no orthogonality conditions are needed, see Lemma 6.17 and Lemma 6.22 for more details.

## 3. Explicit solutions to the Taylor-Goldstein equation

The first step towards the proof of Theorem 1 is to derive the expression of the Green's function associated to (TG). The building block consists of the so-called Whittaker functions [31], a modified form of hypergeometric functions that solve equations of the form \[\partial_{\zeta}^{2}M_{\kappa,\gamma}+\left(-\frac{1}{4}+\frac{\kappa}{\zeta }+\frac{1/4-\gamma^{2}}{\zeta^{2}}\right)M_{\kappa,\gamma}=0,\qquad\zeta\in \mathbb{C}, \tag{3.1}\] for parameters \(\kappa,\gamma\in\mathbb{C}\). Their properties are reported in Appendix A.

### The case \(\beta^{2}\neq 1/4\)

We use Whittaker functions with \(\gamma=\pm(\mu+i\nu)=\pm\sqrt{1/4-\beta^{2}}\) and \(\kappa=0\), see (2.6), and denote by \(M_{\pm}(\zeta):=M_{0,\pm(\mu+i\nu)}(2m\zeta)\) the solutions to the rescaled Whittaker equation \[\partial_{\zeta}^{2}M_{\pm}+\left(-\frac{1}{4}+\frac{1/4-(1/4-\beta^{2})}{4m^ {2}\zeta^{2}}\right)M_{\pm}=0,\qquad\zeta\in\mathbb{C}. \tag{3.2}\] The construction of the Green's function is contained in the following result.

**Proposition 3.1**.: _Let \(\varepsilon\in(0,1)\) and \(\beta^{2}\neq 1/4\). The Green's function \(\mathcal{G}_{m,\varepsilon}^{\pm}\) of \(\mathrm{TG}_{m,\varepsilon}^{\pm}\) is given by_ \[\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)=\frac{1}{\mathcal{W}_{m, \varepsilon}^{\pm}(y_{0})}\begin{cases}\phi_{u,m,\varepsilon}^{\pm}(y,y_{0}) \phi_{l,m,\varepsilon}^{\pm}(z,y_{0}),&0\leq z\leq y\leq 1,\\ \phi_{u,m,\varepsilon}^{\pm}(z,y_{0})\phi_{l,m,\varepsilon}^{\pm}(y,y_{0}),&0 \leq y\leq z\leq 1,\end{cases} \tag{3.3}\] _where \(\phi_{u,m,\varepsilon}^{\pm}(\cdot,y_{0})\) and \(\phi_{l,m,\varepsilon}^{\pm}(\cdot,y_{0})\) are two homogeneous solutions to (TGh) such that \(\phi_{u,m,\varepsilon}^{\pm}(1,y_{0})=0\) and \(\phi_{l,m,\varepsilon}^{\pm}(0,y_{0})=0\), respectively, for all \(y_{0}\in[0,1]\).
They are explicitly given by_ \[\phi_{u,m,\varepsilon}^{\pm}(y,y_{0}):=M_{+}(1-y_{0}\pm i\varepsilon)M_{-}(y- y_{0}\pm i\varepsilon)-M_{-}(1-y_{0}\pm i\varepsilon)M_{+}(y-y_{0}\pm i\varepsilon) \tag{3.4}\] _and_ \[\phi_{l,m,\varepsilon}^{\pm}(y,y_{0}):=M_{+}(-y_{0}\pm i\varepsilon)M_{-}(y-y_{ 0}\pm i\varepsilon)-M_{-}(-y_{0}\pm i\varepsilon)M_{+}(y-y_{0}\pm i\varepsilon), \tag{3.5}\] _with Wronskian_ \[\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0}):=4(\mu+i\nu)m\Big{(}M_{+}(-y_{0}\pm i \varepsilon)M_{-}(1-y_{0}\pm i\varepsilon)-M_{-}(-y_{0}\pm i\varepsilon)M_{+}( 1-y_{0}\pm i\varepsilon)\Big{)}. \tag{3.6}\] _Furthermore, we have the relation \(\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)=\overline{\mathcal{G}_{m,\varepsilon} ^{-}(y,y_{0},z)}\), for all \(y,y_{0},z\in[0,1]\)._

Proof.: We introduce the variables \(\tilde{y}_{\pm}=2m(y-y_{0}\pm i\varepsilon)\) and \(\tilde{z}_{\pm}=2m(z-y_{0}\pm i\varepsilon)\), write \(\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)=\mathcal{G}(\tilde{y}_{\pm},\tilde {z}_{\pm})\) and rewrite (2.7) as \[\partial_{\tilde{y}_{\pm}}^{2}\mathcal{G}+\left(-\frac{1}{4}+\frac{1/4-(1/4- \beta^{2})}{\tilde{y}_{\pm}^{2}}\right)\mathcal{G}=\frac{1}{4m^{2}}\delta \left(\frac{1}{2m}\left(\tilde{y}_{\pm}-\tilde{z}_{\pm}\right)\right). \tag{3.7}\] The left-hand side above has precisely the form of the Whittaker equation (3.1), and therefore the general solution is given in terms of the homogeneous solutions (3.4)-(3.5) by \[\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)=\begin{cases}C_{1}(\widetilde{z }_{\pm})\phi_{u,m,\varepsilon}^{\pm}(y,y_{0}),&0\leq z\leq y\leq 1,\\ C_{2}(\widetilde{z}_{\pm})\phi_{l,m,\varepsilon}^{\pm}(y,y_{0}),&0\leq y\leq z \leq 1,\end{cases}\] where the coefficients \(C_{1},C_{2}\) are to be determined. Imposing the continuity and jump conditions of the Green's function, together with basic properties of the Whittaker functions [26], we obtain the desired result. 

We also record the following proposition regarding homogeneous solutions to (TGh).

**Proposition 3.2**.: _The unique solutions to the homogeneous (TGh) for \(\varepsilon=0\) and \(y_{0}=0,1\) with homogeneous Dirichlet boundary conditions at \(y=0,1\) are given by_ \[\phi_{u,m}(y):=M_{+}(1)M_{-}(y)-M_{-}(1)M_{+}(y) \tag{3.8}\] _and_ \[\phi_{l,m}(y):=M_{+}(1)M_{-}(1-y)-M_{-}(1)M_{+}(1-y). \tag{3.9}\]

### The case \(\beta^{2}=1/4\)

We next provide the Green's function to the Taylor-Goldstein equation in the case \(\beta^{2}=1/4\). In this case, the Whittaker equation (3.1) has to be taken for \(\kappa=\gamma=0\), and \(M_{0}(\zeta):=M_{0,0}(2m\zeta)\) satisfies \[\partial_{\zeta}^{2}M_{0}+\left(-\frac{1}{4}+\frac{1/4}{4m^{2}\zeta^{2}}\right) M_{0}=0,\qquad\zeta\in\mathbb{C}. \tag{3.10}\] The second independent homogeneous solution from which we build the Green's function is given by \(W_{0}(\zeta):=W_{0,0}(2m\zeta)\), defined to be the unique solution to (3.10) such that \[W_{0,0}(\zeta)=\sqrt{\frac{\zeta}{\pi}}\left(2\log(2)+\varsigma-\log(\zeta) \right)+O\left(\zeta^{\frac{3}{2}}\log(\zeta)\right),\] as \(\zeta\to 0\), where \(\varsigma\) denotes the Euler constant. Apart from the introduction of \(W_{0}\), the result here is similar to that in Proposition 3.1.

**Proposition 3.3**.: _Let \(\varepsilon\in(0,1)\).
The Green's function \(\mathcal{G}_{m,\varepsilon}^{\pm}\) of \(\text{TG}_{m,\varepsilon}^{\pm}\) is given by_ \[\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)=\frac{1}{\mathcal{W}_{m, \varepsilon}^{\pm}(y_{0})}\begin{cases}\phi_{u,m,\varepsilon}^{\pm}(y,y_{0}) \phi_{l,m,\varepsilon}^{\pm}(z,y_{0}),&0\leq z\leq y\leq 1,\\ \phi_{u,m,\varepsilon}^{\pm}(z,y_{0})\phi_{l,m,\varepsilon}^{\pm}(y,y_{0}),&0 \leq y\leq z\leq 1,\end{cases} \tag{3.11}\] _where \(\phi_{u,m,\varepsilon}^{\pm}(\cdot,y_{0})\) and \(\phi_{l,m,\varepsilon}^{\pm}(\cdot,y_{0})\) are two homogeneous solutions to (TGh) such that \(\phi_{u,m,\varepsilon}^{\pm}(1,y_{0})=0\) and \(\phi_{l,m,\varepsilon}^{\pm}(0,y_{0})=0\), respectively, for all \(y_{0}\in[0,1]\). They are explicitly given by_ \[\phi_{u,m,\varepsilon}^{\pm}(y,y_{0}):=W_{0}(1-y_{0}\pm i\varepsilon)M_{0}(y-y _{0}\pm i\varepsilon)-M_{0}(1-y_{0}\pm i\varepsilon)W_{0}(y-y_{0}\pm i\varepsilon) \tag{3.12}\] _and_ \[\phi_{l,m,\varepsilon}^{\pm}(y,y_{0}):=W_{0}(-y_{0}\pm i\varepsilon)M_{0}(y-y _{0}\pm i\varepsilon)-M_{0}(-y_{0}\pm i\varepsilon)W_{0}(y-y_{0}\pm i\varepsilon), \tag{3.13}\] _with Wronskian_ \[\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0}):=\frac{2m}{\sqrt{\pi}}\Big{(}W_{0}(-y _{0}\pm i\varepsilon)M_{0}(1-y_{0}\pm i\varepsilon)-M_{0}(-y_{0}\pm i \varepsilon)W_{0}(1-y_{0}\pm i\varepsilon)\Big{)}. \tag{3.14}\] _Furthermore, we have the relation \(\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)=\overline{\mathcal{G}_{m, \varepsilon}^{-}(y,y_{0},z)}\), for all \(y,y_{0},z\in[0,1]\)._

Similarly, we state the following proposition regarding homogeneous solutions to (TGh) when \(\beta^{2}=1/4\).

**Proposition 3.4**.: _The unique solutions to the homogeneous (TGh) for \(\varepsilon=0\) and \(y_{0}=0,1\) with homogeneous Dirichlet boundary conditions at \(y=0,1\) are given by_ \[\phi_{u,m}(y):=W_{0}(1)M_{0}(y)-M_{0}(1)W_{0}(y) \tag{3.15}\] _and_ \[\phi_{l,m}(y):=W_{0}(1)M_{0}(1-y)-M_{0}(1)W_{0}(1-y). \tag{3.16}\]

### Derivative formulae for solutions to the Taylor-Goldstein equation

We finish this section by exhibiting the following useful expressions for various derivatives of \(\psi_{m,\varepsilon}^{\pm}\) and \(\rho_{m,\varepsilon}^{\pm}\).

**Proposition 3.5**.: _Let \(\varepsilon\in(0,1)\). Then,_ \[\begin{split}\partial_{y_{0}}\psi_{m,\varepsilon}^{\pm}(y,y_{0}) &=-\frac{1}{\beta^{2}}\omega_{m}^{0}(y)+\mathcal{B}_{m, \varepsilon}^{\pm}(y,y_{0},z)\Big{]}_{z=0}^{z=1}-\int_{0}^{1}\partial_{y} \mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)F_{m,\varepsilon}^{\pm}(z,y_{0} )\mathrm{d}z\\ &\quad+\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z) \left(\partial_{z}F_{m,\varepsilon}^{\pm}(z,y_{0})+\partial_{y_{0}}F_{m, \varepsilon}^{\pm}(z,y_{0})\right)\mathrm{d}z,\end{split} \tag{3.17}\] _where \(\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},z):=\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\,\partial_{z}\varphi_{m,\varepsilon}^{\pm}(z,y_{0})\).
Moreover,_ \[\begin{split}\partial_{y_{0}}^{2}\psi_{m,\varepsilon}^{\pm}(y,y_{0})& =F_{m,\varepsilon}^{\pm}(y,y_{0})-2\partial_{y}\mathcal{B}_{m, \varepsilon}^{\pm}(y,y_{0},z)\Big{]}_{z=0}^{z=1}+\widetilde{\mathcal{B}_{m, \varepsilon}^{\pm}}(y,y_{0},z)\Big{]}_{z=0}^{z=1}\\ &\quad+\left(m^{2}-\frac{\beta^{2}}{(y-y_{0}\pm i\varepsilon)^{ 2}}\right)\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)F_{m, \varepsilon}^{\pm}(z,y_{0})\mathrm{d}z\\ &\quad-2\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm} (y,y_{0},z)\left(\partial_{z}+\partial_{y_{0}}\right)F_{m,\varepsilon}^{\pm} (z,y_{0})\mathrm{d}z\\ &\quad+\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z) \left(\partial_{z}+\partial_{y_{0}}\right)^{2}F_{m,\varepsilon}^{\pm}(z,y_{0 })\mathrm{d}z,\end{split} \tag{3.18}\] _where \(\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(y,y_{0},z):=\partial_{z} \mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\left(\partial_{z}+\partial_{y_{ 0}}\right)^{2}\varphi_{m,\varepsilon}^{\pm}(z,y_{0})\). Additionally,_ \[\partial_{y}\psi_{m,\varepsilon}^{\pm}(y,y_{0})=\frac{1}{\beta^{2}}\left( \omega_{m}^{0}(y)+(y-y_{0}\pm i\varepsilon)\partial_{y}\omega_{m}^{0}(y) \right)-\partial_{y}\rho_{m}^{0}(y)+\partial_{y}\varphi_{m,\varepsilon}^{\pm}( y,y_{0}) \tag{3.19}\] _and_ \[\begin{split}\partial_{y_{0},y}^{2}\psi_{m,\varepsilon}^{\pm}(y,y _{0})&=-\frac{1}{\beta^{2}}\partial_{y}\omega_{m}^{0}(y)-F_{m, \varepsilon}^{\pm}(y,y_{0})+\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(y,y _{0},z)\Big{]}_{z=0}^{z=1}\\ &\quad+\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(y, y_{0},z)\left(\partial_{z}F_{m,\varepsilon}^{\pm}(z,y_{0})+\partial_{y_{0}}F_{m, \varepsilon}^{\pm}(z,y_{0})\right)\mathrm{d}z\\ &\quad-\left(m^{2}-\frac{\beta^{2}}{(y-y_{0}\pm i\varepsilon)^{ 2}}\right)\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)F_{m, \varepsilon}^{\pm}(z,y_{0})\mathrm{d}z.\end{split} \tag{3.20}\]

Proof.: The formula for \(\partial_{y}\psi_{m,\varepsilon}^{\pm}\) follows from taking a \(\partial_{y}\) derivative in (2.9). Similarly, once \(\partial_{y_{0}}\psi_{m,\varepsilon}^{\pm}\) is established, the expression for \(\partial_{y_{0},y}^{2}\psi_{m,\varepsilon}^{\pm}\) follows from taking a \(\partial_{y}\) derivative in (3.17) and noting that \(\mathcal{G}_{m,\varepsilon}^{\pm}\) is the Green's function of the Taylor-Goldstein operator. As for \(\partial_{y_{0}}\psi_{m,\varepsilon}^{\pm}\) and \(\partial_{y_{0}}^{2}\psi_{m,\varepsilon}^{\pm}\), we show these expressions using the Taylor-Goldstein equation and taking \(y_{0}\) and \(y\) derivatives there. More precisely, note that \(\partial_{y}+\partial_{y_{0}}\) commutes with the Taylor-Goldstein operator (TGh).
As such, \(\text{TG}_{m,\varepsilon}^{\pm}\left(\partial_{y}+\partial_{y_{0}} \right)\varphi_{m,\varepsilon}^{\pm}=\left(\partial_{y}+\partial_{y_{0}} \right)F_{m,\varepsilon}^{\pm}\) and the first part of the proposition follows, upon noting that \[\partial_{y_{0}}\psi_{m,\varepsilon}^{\pm}=-\frac{1}{\beta^{2}}\omega_{m}^{0}+ \partial_{y_{0}}\varphi_{m,\varepsilon}^{\pm}\] and that \[\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\text{TG}_{m, \varepsilon}^{\pm}\partial_{z}\varphi_{m,\varepsilon}^{\pm}(z,y_{0})\text{d}z= -\left.\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\partial_{z} \varphi_{m,\varepsilon}^{\pm}(z,y_{0})\right|_{z=0}^{z=1}+\partial_{y}\varphi _{m,\varepsilon}^{\pm}(y,y_{0}).\] As for the second part of the proposition, \(\text{TG}_{m,\varepsilon}^{\pm}\left(\partial_{y}+\partial_{y_{0}}\right)^{2} \varphi_{m,\varepsilon}^{\pm}=\left(\partial_{y}+\partial_{y_{0}}\right)^{2}F _{m,\varepsilon}^{\pm}\), from which we deduce that \[\partial_{y_{0}}\left(\partial_{y}+\partial_{y_{0}}\right) \varphi_{m,\varepsilon}^{\pm} =-\partial_{y}\left(\partial_{y}+\partial_{y_{0}}\right)\varphi_ {m,\varepsilon}^{\pm}+\left[\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}( y,y_{0},z)(\partial_{y_{0}}+\partial_{z})^{2}\varphi_{m,\varepsilon}^{\pm}(z,y_{0}) \right]_{z=0}^{z=1}\] \[\quad+\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z) \left(\partial_{z}+\partial_{y_{0}}\right)^{2}F_{m,\varepsilon}^{\pm}(z,y_{0} )\text{d}z.\] Now, since \((\partial_{y_{0}}+\partial_{y})^{2}\varphi_{m,\varepsilon}^{\pm}=\partial_{y_ {0}}^{2}\varphi_{m,\varepsilon}^{\pm}+2\partial_{y_{0}}\partial_{y}\varphi_{m,\varepsilon}^{\pm}+\partial_{y}^{2}\varphi_{m,\varepsilon}^{\pm}\), we observe that for \(y=0\) and \(y=1\), \[(\partial_{y_{0}}+\partial_{y})^{2}\varphi_{m,\varepsilon}^{\pm}( y,y_{0}) =-\frac{2}{\beta^{2}}\partial_{y}\omega_{m}^{0}(y)-F_{m,\varepsilon}^{\pm}(y,y_{0} )+2\partial_{y}\Big{[}\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0}, z)\partial_{z}\varphi_{m,\varepsilon}^{\pm}(z,y_{0})\Big{]}_{z=0}^{z=1}\] \[\quad+2\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm} (y,y_{0},z)\left(\partial_{z}+\partial_{y_{0}}\right)F_{m,\varepsilon}^{\pm} (z,y_{0})\text{d}z.\] Moreover, from \(\text{TG}_{m,\varepsilon}^{\pm}\left(\partial_{y}+\partial_{y_{0}}\right) \varphi_{m,\varepsilon}^{\pm}=\left(\partial_{y}+\partial_{y_{0}}\right)F_{m, \varepsilon}^{\pm}\) we can also obtain \[\left(\partial_{y}+\partial_{y_{0}}\right)\varphi_{m,\varepsilon}^{\pm}= \left[\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\partial_{z} \varphi_{m,\varepsilon}^{\pm}(z,y_{0})\right]_{z=0}^{z=1}+\int_{0}^{1} \mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)(\partial_{z}+\partial_{y_{0}})F _{m,\varepsilon}^{\pm}(z,y_{0})\text{d}z,\] so that \[\partial_{y}\left(\partial_{y}+\partial_{y_{0}}\right)\varphi_{m, \varepsilon}^{\pm} =\partial_{y}\Big{[}\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm} (y,y_{0},z)\partial_{z}\varphi_{m,\varepsilon}^{\pm}(z,y_{0})\Big{]}_{z=0}^{z=1}\] \[\quad+\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm} (y,y_{0},z)(\partial_{z}+\partial_{y_{0}})F_{m,\varepsilon}^{\pm}(z,y_{0}) \text{d}z.\] We finish with the observation that \(\left(\partial_{y_{0}}-\partial_{y}\right)\left(\partial_{y}+\partial_{y_{0}} \right)\varphi_{m,\varepsilon}^{\pm}=(\partial_{y_{0}}^{2}-\partial_{y}^{2}) \varphi_{m,\varepsilon}^{\pm}\), that is, \[\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm} =\partial_{y}^{2}\varphi_{m,\varepsilon}^{\pm}+\left(\partial_{y_{0}}-\partial_{y}\right)\left(\partial_{y}+\partial_{y_{0}}\right) \varphi_{m,\varepsilon}^{\pm}\] \[=\left(m^{2}-\beta^{2}\frac{1}{(y-y_{0}\pm i\varepsilon)^{2}} \right)\varphi_{m,\varepsilon}^{\pm}+F_{m,\varepsilon}^{\pm}+\partial_{y_{0}} \left(\partial_{y}+\partial_{y_{0}}\right)\varphi_{m,\varepsilon}^{\pm}- \partial_{y}\left(\partial_{y}+\partial_{y_{0}}\right)\varphi_{m,\varepsilon}^ {\pm}.\] Gathering the previously obtained terms, the second part of the proposition follows, since \(\partial_{y_{0}}^{2}\psi_{m,\varepsilon}^{\pm}=\partial_{y_{0}}^{2}\varphi_{m, \varepsilon}^{\pm}\). 

With the same ideas as above, we can also find useful expressions for \(\partial_{y_{0}}\rho_{m,\varepsilon}^{\pm}\) and \(\partial_{y}\rho_{m,\varepsilon}^{\pm}\), thanks again to (2.5).

**Corollary 3.6**.: _Let \(\varepsilon\in(0,1)\). Then,_ \[\partial_{y_{0}}\rho_{m,\varepsilon}^{\pm}(y,y_{0}) =\frac{1}{(y-y_{0}\pm i\varepsilon)^{2}}\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)F_{m,\varepsilon}^{\pm}(z,y_{0})\text{d}z+\frac {1}{y-y_{0}\pm i\varepsilon}\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},z) \Big{]}_{z=0}^{z=1} \tag{3.21}\] \[\quad+\frac{1}{y-y_{0}\pm i\varepsilon}\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)\left(\partial_{z}F_{m,\varepsilon}^{\pm}(z,y_{0} )+\partial_{y_{0}}F_{m,\varepsilon}^{\pm}(z,y_{0})\right)\text{d}z\] \[\quad-\frac{1}{y-y_{0}\pm i\varepsilon}\int_{0}^{1}\partial_{y} \mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)F_{m,\varepsilon}^{\pm}(z,y_{0}) \text{d}z,\] _and_ \[\begin{split}\partial_{y}\rho^{\pm}_{m,\varepsilon}(y,y_{0})& =\frac{1}{\beta^{2}}\partial_{y}\omega^{0}_{m}(y)-\frac{1}{(y-y_{0}\pm i \varepsilon)^{2}}\int_{0}^{1}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)F^{ \pm}_{m,\varepsilon}(z,y_{0})\mathrm{d}z\\ &\quad+\frac{1}{y-y_{0}\pm i\varepsilon}\int_{0}^{1}\partial_{y} \mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)F^{\pm}_{m,\varepsilon}(z,y_{0}) \mathrm{d}z.\end{split} \tag{3.22}\]

## 4. Bounds on the Green's function for \(\beta^{2}\neq 1/4\)

This section is devoted to the proof of Theorem 2, which provides \(L^{2}\) bounds on the Green's function \(\mathcal{G}^{\pm}_{m,\varepsilon}\). We separate the estimates into bounds near the critical layer (Section 4.1) and away from the critical layer (Section 4.2). We wrap up the proof in Section 4.3.

### Pointwise bounds near the critical layer

The aim is to provide pointwise bounds for the Green's function and its \(\partial_{y}\) derivative when both the \(y\) and \(z\) variables are close to the spectral variable \(y_{0}\).

**Proposition 4.1**.: _Let \(y,y_{0},z\in[0,1]\) such that \(m|y-y_{0}+i\varepsilon|\leq 10\beta\) and \(m|z-y_{0}+i\varepsilon|\leq 10\beta\). There exists \(\varepsilon_{0}>0\) such that_ \[|\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)|\lesssim m^{-2\mu}|y-y_{0}+i \varepsilon|^{\frac{1}{2}-\mu}|z-y_{0}+i\varepsilon|^{\frac{1}{2}-\mu}\] _and_ \[|\partial_{y}\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)|\lesssim m^{-2\mu}|y- y_{0}+i\varepsilon|^{-\frac{1}{2}-\mu}|z-y_{0}+i\varepsilon|^{\frac{1}{2}-\mu},\] _for all \(\varepsilon\leq\varepsilon_{0}\)._

The proofs depend heavily on the Wronskian associated to the Green's function \(\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\) and on whether \(\beta^{2}>1/4\) or not. We begin with the case in which \(\beta^{2}>1/4\), for which \(\mu=0\) and \(\nu>0\).

**Proposition 4.2**.: _Let \(\beta^{2}>1/4\).
Within the assumptions of Proposition 4.1, there exists \(\varepsilon_{0}>0\) such that_ \[|\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)|\lesssim|y-y_{0}+i\varepsilon|^{ \frac{1}{2}}|z-y_{0}+i\varepsilon|^{\frac{1}{2}}\] _and_ \[|\partial_{y}\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)|\lesssim|y-y_{0}+i \varepsilon|^{-\frac{1}{2}}|z-y_{0}+i\varepsilon|^{\frac{1}{2}},\] _for all \(\varepsilon\leq\varepsilon_{0}\)._

Proof.: Let us assume that \(y\leq z\). Then (3.3) tells us that \[\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)=\frac{1}{\mathcal{W}^{+}_{m,\varepsilon}(y_ {0})}\phi^{+}_{u,m,\varepsilon}(z,y_{0})\phi^{+}_{l,m,\varepsilon}(y,y_{0}) \tag{4.1}\] and we have from Lemma A.3 that \[|\phi^{+}_{u,m,\varepsilon}(z,y_{0})|\lesssim m^{\frac{1}{2}}|z-y_{0}+i \varepsilon|^{\frac{1}{2}}\big{(}|M_{-}(1-y_{0}+i\varepsilon)|+|M_{+}(1-y_{0} +i\varepsilon)|\big{)},\] while \[|\phi^{+}_{l,m,\varepsilon}(y,y_{0})|\lesssim m^{\frac{1}{2}}|y-y_{0}+i \varepsilon|^{\frac{1}{2}}\big{(}|M_{-}(y_{0}-i\varepsilon)|+|M_{+}(y_{0}-i \varepsilon)|\big{)}.\] The proof follows once we show that \[|\mathcal{W}^{+}_{m,\varepsilon}(y_{0})|\geq C_{\nu}m|M_{-}(y_{0}-i\varepsilon )||M_{+}(1-y_{0}+i\varepsilon)| \tag{4.2}\] and \[\frac{|M_{+}(y_{0}-i\varepsilon)|}{|M_{-}(y_{0}-i\varepsilon)|}+\frac{|M_{-}(1 -y_{0}+i\varepsilon)|}{|M_{+}(1-y_{0}+i\varepsilon)|}\lesssim 1. \tag{4.3}\] To prove the lower bound on the Wronskian, we begin by writing out a suitable expression for \(\mathcal{W}^{+}_{m,\varepsilon}(y_{0})\), using the analytic continuation properties of the Whittaker functions \(M_{\pm}\): \[\mathcal{W}^{+}_{m,\varepsilon}(y_{0}) =4i\nu m\Big{(}\mathrm{e}^{\nu\pi}M_{-}(y_{0}-i\varepsilon)M_{+}(1 -y_{0}+i\varepsilon)-\mathrm{e}^{-\nu\pi}M_{+}(y_{0}-i\varepsilon)M_{-}(1-y_{0 }+i\varepsilon)\Big{)}\] \[=4i\nu mM_{-}(y_{0}-i\varepsilon)M_{+}(1-y_{0}+i\varepsilon)\left( \mathrm{e}^{\nu\pi}-\mathrm{e}^{-\nu\pi}\frac{M_{+}(y_{0}-i\varepsilon)}{M_{- }(y_{0}-i\varepsilon)}\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0}+i \varepsilon)}\right).\] The proof depends on the location of \(y_{0}\in[0,1]\) as well as on the smallness of \(m\). In this direction, let \(N_{\nu}>0\) be given as in Lemma A.6.

\(\bullet\)**Case 1: \(m<N_{\nu}\).** Assume that \(y_{0}\leq 1/2\) (otherwise we would have \(1-y_{0}\leq 1/2\) and the proof would carry over unaltered). Therefore, it follows that \(2my_{0}<N_{\nu}\) and \(m\leq 2m(1-y_{0})<2N_{\nu}\). Hence, there exists \(\varepsilon_{0}>0\) such that, from Lemma A.7, \[\left|\frac{M_{+}(y_{0}-i\varepsilon)}{M_{+}(y_{0}+i\varepsilon)}\right|\leq \mathrm{e}^{\frac{5}{4}\nu\pi},\] and, from Lemma A.8, \[\frac{|M_{-}(1-y_{0}+i\varepsilon)|}{|M_{+}(1-y_{0}+i\varepsilon)|}\leq \mathrm{e}^{\frac{1}{4}\nu\pi},\] for all \(\varepsilon\leq\varepsilon_{0}\). Consequently, \[|\mathcal{W}^{+}_{m,\varepsilon}(y_{0})|\geq C_{\nu}m|M_{-}(y_{0}-i\varepsilon )||M_{+}(1-y_{0}+i\varepsilon)|\] with \(C_{\nu}=4\nu(\mathrm{e}^{\nu\pi}-\mathrm{e}^{\nu\pi/2})\).

\(\bullet\)**Case 2: \(m\geq N_{\nu}\).** Assume now that \(2my_{0}\leq N_{\nu}\). Then, since \(m\geq N_{\nu}\) we have that \(2m(1-y_{0})\geq N_{\nu}\). The other case is completely analogous, and \(m\geq N_{\nu}\) ensures that \(2my_{0}<N_{\nu}\) and \(2m(1-y_{0})<N_{\nu}\) cannot hold simultaneously for any \(y_{0}\in[0,1]\).
Therefore, it follows from Lemma A.7 that \[\left|\frac{M_{+}(y_{0}-i\varepsilon)}{M_{+}(y_{0}+i\varepsilon)}\right|\leq \mathrm{e}^{\frac{5}{4}\nu\pi},\] while from Lemma A.6 we obtain \[\left|\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0}+i\varepsilon)}\right| \leq\mathrm{e}^{\frac{1}{4}\nu\pi},\] for all \(\varepsilon\leq\varepsilon_{0}\), for some \(\varepsilon_{0}>0\). The lower bound on \(|\mathcal{W}^{+}_{m,\varepsilon}(y_{0})|\) holds for the same \(C_{\nu}\) as above. 

We next consider the case \(\beta^{2}<1/4\), for which \(\nu=0\) and \(0<\mu<1/2\).

**Proposition 4.3**.: _Let \(\beta^{2}<1/4\). Within the assumptions of Proposition 4.1, there exists \(\varepsilon_{0}>0\) such that_ \[|\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)|\lesssim m^{-2\mu}|y-y_{0}+i \varepsilon|^{\frac{1}{2}-\mu}|z-y_{0}+i\varepsilon|^{\frac{1}{2}-\mu}\] _and_ \[|\partial_{y}\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)|\lesssim m^{-2\mu}|y-y _{0}+i\varepsilon|^{-\frac{1}{2}-\mu}|z-y_{0}+i\varepsilon|^{\frac{1}{2}-\mu},\] _for all \(\varepsilon\leq\varepsilon_{0}\)._

Proof.: Let us assume that \(y\leq z\) and deal with the expression in (4.1). From Lemma A.3 we have \[|\phi^{+}_{u,m,\varepsilon}(z,y_{0})|\lesssim m^{\frac{1}{2}-\mu}|z-y_{0}+i \varepsilon|^{\frac{1}{2}-\mu}\big{(}|M_{-}(1-y_{0}+i\varepsilon)|+|M_{+}(1-y _{0}+i\varepsilon)|\big{)},\] while \[|\phi^{+}_{l,m,\varepsilon}(y,y_{0})|\lesssim m^{\frac{1}{2}-\mu}|y-y_{0}+i \varepsilon|^{\frac{1}{2}-\mu}\big{(}|M_{-}(y_{0}-i\varepsilon)|+|M_{+}(y_{0}- i\varepsilon)|\big{)},\] both following from the observation that \((2m|y-y_{0}+i\varepsilon|)^{2\mu}\leq 10\). Using the analytic continuation properties of the Whittaker functions \(M_{\pm}\) we obtain \[\mathcal{W}^{+}_{m,\varepsilon}(y_{0}) =4\mu m\left(\mathrm{e}^{i\mu\pi}M_{+}(y_{0}-i\varepsilon)M_{-}(1- y_{0}+i\varepsilon)-\mathrm{e}^{-i\mu\pi}M_{-}(y_{0}-i\varepsilon)M_{+}(1-y_{0}+i \varepsilon)\right)\] \[=-4\mu m\left(\mathrm{e}^{-i\mu\pi}M_{-}(y_{0}-i\varepsilon)M_{+}( 1-y_{0}+i\varepsilon)-\mathrm{e}^{i\mu\pi}M_{+}(y_{0}-i\varepsilon)M_{-}(1-y _{0}+i\varepsilon)\right).\] One needs to obtain suitable estimates on several quotients. This is again done by considering the location of \(y_{0}\in[0,1]\) and the smallness of \(m\). Thus, let \(N_{\mu,0}>0\) be given as in Lemma A.15.

\(\bullet\)**Case 1: \(m\leq N_{\mu,0}\).** Assume initially that \(y_{0}\leq\frac{1}{2}\). Then, \(2my_{0}\leq N_{\mu,0}\) and \(m\leq 2m(1-y_{0})\leq 2N_{\mu,0}\). Assume further that \(2my_{0}\leq\delta_{\mu,1}\) as given in Lemma A.16. From Lemma A.17, choosing \(N_{\mu,1}:=m\), and Lemma A.5 we have that \[\frac{3}{4}\leq\left|\frac{M_{+}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0})}\right| \leq\frac{5}{4},\] and from Lemma A.16 \[\left|\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0})}\right|=\left|\frac{M_ {-}(1-y_{0}+i\varepsilon)}{M_{-}(1-y_{0})}\right|\left|\frac{M_{-}(1-y_{0})}{ M_{+}(1-y_{0})}\right|\leq\frac{5}{4}M\left(\frac{1}{2}-\mu,1-2\mu,2N_{\mu,0} \right),\] for \(\varepsilon\leq\varepsilon_{0}\) small enough.
Additionally, since \(2my_{0}\leq\delta_{\mu,1}\), we have from Lemma A.16 that \[\left|\frac{M_{+}(y_{0}-i\varepsilon)}{M_{-}(y_{0}-i\varepsilon)}\right|\leq \frac{1}{5M\left(\frac{1}{2}-\mu,1-2\mu,2N_{\mu,0}\right)}.\] With the above comparison estimates at hand, we note that \[\mathrm{e}^{-i\mu\pi}M_{-}(y_{0}-i\varepsilon)M_{+}(1-y_{0}+i \varepsilon)-\mathrm{e}^{i\mu\pi}M_{+}(y_{0}-i\varepsilon)M_{-}(1-y_{0}+i \varepsilon)\] \[\qquad=\mathrm{e}^{-i\mu\pi}M_{-}(y_{0}-i\varepsilon)M_{+}(1-y_{ 0})\left(\frac{M_{+}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0})}-\mathrm{e}^{2i\mu \pi}\frac{M_{+}(y_{0}-i\varepsilon)}{M_{-}(y_{0}-i\varepsilon)}\frac{M_{-}(1-y _{0}+i\varepsilon)}{M_{+}(1-y_{0})}\right)\] and therefore we can lower bound \[|\mathcal{W}^{+}_{m,\varepsilon}(y_{0})|\geq 2\mu mM_{+}(1-y_{0})|M_{-}(y_{0}-i \varepsilon)|.\] The bounds on the Green's functions follow from the lower bound on the Wronskian and the comparison estimates stated above. Assume now that \(2my_{0}>\delta_{\mu,1}\). Then, due to Lemma A.17 we have both \[|M_{\pm}(y_{0}-i\varepsilon)-M_{\pm}(y_{0})|\leq\frac{\sin\mu\pi}{4}\left|M_{ \pm}(y_{0})\right|\] and \[|M_{\pm}(1-y_{0}+i\varepsilon)-M_{\pm}(1-y_{0})|\leq\frac{\sin\mu\pi}{4} \left|M_{\pm}(1-y_{0})\right|,\] for all \(\varepsilon\leq\varepsilon_{0}\). With the observation that \[\big{|}\mathrm{e}^{-i\mu\pi}M_{-}(y_{0})M_{+}(1-y_{0})-\mathrm{e}^ {i\mu\pi}M_{+}(y_{0})M_{-}(1-y_{0})\big{|}\] \[\qquad\geq\sin\mu\pi\big{(}M_{-}(y_{0})M_{+}(1-y_{0})+M_{+}(y_{0} )M_{-}(1-y_{0})\big{)},\] and the expansion \[M_{-}(y_{0}-i\varepsilon)M_{+}(1-y_{0}+i\varepsilon) =M_{-}(y_{0})M_{+}(1-y_{0})\] \[\quad+\big{(}M_{-}(y_{0}-i\varepsilon)-M_{-}(y_{0})\big{)}M_{+}(1- y_{0})\] \[\quad+M_{-}(y_{0})\big{(}M_{+}(1-y_{0}+i\varepsilon)-M_{+}(1-y_{0} )\big{)},\] one can lower bound \[|\mathcal{W}^{+}_{m,\varepsilon}(y_{0})|\geq\mu m\sin\mu\pi\big{(}M_{-}(y_{0}) M_{+}(1-y_{0})+M_{+}(y_{0})M_{-}(1-y_{0})\big{)}.\] As before, we obtain the bound on the Green's function by combining the lower bound on the Wronskian with the above comparison estimates.

\(\bullet\)**Case 2: \(m\geq N_{\mu,0}\).** Assume \(2my_{0}<N_{\mu,0}\). Since \(m\geq N_{\mu,0}\), then \(2m(1-y_{0})\geq N_{\mu,0}\). Assume further that \(2my_{0}\leq\delta_{\mu,1}\) as given in Lemma A.16 and let \(C_{\mu}:=2^{-4\mu}\frac{\Gamma(1-\mu)}{\Gamma(1+\mu)}\).
Then, from Lemma A.7 we have that \[\left|\frac{M_{+}(y_{0}-i\varepsilon)}{M_{-}(y_{0}-i\varepsilon)}\right|\leq \frac{1}{3}C_{\mu}^{-1},\] while from Lemma A.15, \[\left|\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0}+i\varepsilon)}\right| \leq\frac{3}{2}C_{\mu}.\] Since we can write \[\mathcal{W}_{m,\varepsilon}^{+}(y_{0})=-4\mu m\mathrm{e}^{-i\mu\pi}M_{-}(y_{0 }-i\varepsilon)M_{+}(1-y_{0}+i\varepsilon)\left(1-\mathrm{e}^{2i\mu\pi}\frac{ M_{+}(y_{0}-i\varepsilon)}{M_{-}(y_{0}-i\varepsilon)}\frac{M_{-}(1-y_{0}+i \varepsilon)}{M_{+}(1-y_{0}+i\varepsilon)}\right),\] we are able to lower bound \[|\mathcal{W}_{m,\varepsilon}^{+}(y_{0})|\geq 2\mu m|M_{-}(y_{0}-i\varepsilon) ||M_{+}(1-y_{0}+i\varepsilon)|,\] and the estimates on the Green's function follow directly.

On the other hand, if \(2my_{0}\geq\delta_{\mu,1}\), we shall write \[\mathrm{e}^{-i\mu\pi} M_{-}(y_{0}-i\varepsilon)M_{+}(1-y_{0}+i\varepsilon)-\mathrm{e}^{i \mu\pi}M_{+}(y_{0}-i\varepsilon)M_{-}(1-y_{0}+i\varepsilon)\] \[=\mathrm{e}^{-i\mu\pi}M_{-}(y_{0})M_{+}(1-y_{0}+i\varepsilon)- \mathrm{e}^{i\mu\pi}M_{+}(y_{0})M_{-}(1-y_{0}+i\varepsilon)\] \[\quad+\mathrm{e}^{-i\mu\pi}\big{(}M_{-}(y_{0}-i\varepsilon)-M_{-} (y_{0})\big{)}M_{+}(1-y_{0}+i\varepsilon)\] \[\quad-\mathrm{e}^{i\mu\pi}\big{(}M_{+}(y_{0}-i\varepsilon)-M_{+} (y_{0})\big{)}M_{-}(1-y_{0}+i\varepsilon)\] \[=T_{1}+T_{2}+T_{3},\] and we note that \[T_{1}=M_{+}(1-y_{0}+i\varepsilon)\left(\mathrm{e}^{-i\mu\pi}M_{-}(y_{0})- \mathrm{e}^{i\mu\pi}M_{+}(y_{0})\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{ 0}+i\varepsilon)}\right),\] with \[\mathrm{Im}\left(\mathrm{e}^{-i\mu\pi}M_{-}(y_{0})-\mathrm{e}^{i \mu\pi}M_{+}(y_{0})\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0}+i \varepsilon)}\right)\] \[=-\sin\mu\pi\left(M_{-}(y_{0})+M_{+}(y_{0})\mathrm{Re}\left( \frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0}+i\varepsilon)}\right)+\frac{1 }{\tan\mu\pi}M_{+}(y_{0})\mathrm{Im}\left(\frac{M_{-}(1-y_{0}+i\varepsilon)}{ M_{+}(1-y_{0}+i\varepsilon)}\right)\right).\] Once again, from Lemma A.15, we have that \[\left|\mathrm{Re}\left(\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0}+i \varepsilon)}\right)-C_{\mu}\right|+\left|\frac{1}{\tan\mu\pi}\mathrm{Im} \left(\frac{M_{-}(1-y_{0}+i\varepsilon)}{M_{+}(1-y_{0}+i\varepsilon)}\right) \right|\leq\frac{C_{\mu}}{4},\] so that we can lower bound \[|T_{1}|\geq\sin\mu\pi|M_{+}(1-y_{0}+i\varepsilon)|\left(M_{-}(y_{0})+\frac{C_{ \mu}}{2}M_{+}(y_{0})\right).\] Next, we shall see that the terms \(T_{2}\) and \(T_{3}\) are sufficiently small so that they can be absorbed by \(T_{1}\). To this end, from Lemma A.17 we have that \[|T_{2}|\leq\frac{\sin\mu\pi}{2}|M_{+}(1-y_{0}+i\varepsilon)|M_{-}(y_{0}),\] and, combined with Lemma A.15, we also have that \[|T_{3}|\leq\sin\mu\pi\frac{C_{\mu}}{4}M_{+}(y_{0})|M_{+}(1-y_{0}+i\varepsilon)|,\] for all \(\varepsilon\leq\varepsilon_{0}\) small enough. Hence, we conclude that \[|T_{1}+T_{2}+T_{3}|\geq\sin\mu\pi\left(\frac{1}{2}M_{-}(y_{0})+\frac{C_{\mu}}{4} M_{+}(y_{0})\right)|M_{+}(1-y_{0}+i\varepsilon)|\] and we lower bound \[|\mathcal{W}_{m,\varepsilon}^{+}(y_{0})|\geq\mu m\sin\mu\pi\left(2M_{-}(y_{0})+ C_{\mu}M_{+}(y_{0})\right)|M_{+}(1-y_{0}+i\varepsilon)|.\] The bounds on the Green's function are a straightforward consequence of the above lower bound on \(\mathcal{W}_{m,\varepsilon}^{+}(y_{0})\) and the comparison estimates.
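As a numerical sanity check on these lower bounds (our own sketch, with illustrative parameters), one can evaluate the Wronskian (3.6) directly with mpmath's Whittaker function, recalling that \(M_{\pm}(\zeta)=M_{0,\pm(\mu+i\nu)}(2m\zeta)\); for the value \(\beta^{2}<1/4\) chosen below, \(\nu=0\) and the parameter `gamma` equals \(\mu\).

```python
import mpmath as mp

m_wave, beta, eps = 2, 0.4, 1e-4              # beta^2 = 0.16 < 1/4
gamma = mp.sqrt(mp.mpf(1) / 4 - beta**2)      # gamma = mu here, since nu = 0

def M(sign, zeta):                            # M_{+/-}(zeta) = M_{0, +/- gamma}(2 m zeta)
    return mp.whitm(0, sign * gamma, 2 * m_wave * zeta)

def wronskian(y0):                            # formula (3.6), "+" branch
    a, b = -y0 + 1j * eps, 1.0 - y0 + 1j * eps
    return 4 * gamma * m_wave * (M(1, a) * M(-1, b) - M(-1, a) * M(1, b))

for y0 in [0.05, 0.25, 0.5, 0.75, 0.95]:
    print(f"y0 = {y0:4.2f}   |W| = {float(abs(wronskian(y0))):.4e}")
```

At these sample points one expects the modulus to stay uniformly away from zero, consistent with the lower bounds established above.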
### Estimates for \(\mathcal{G}_{m,\varepsilon}\) away from the critical layer

Throughout this section, let \(\varepsilon_{0}\) be given by Proposition 4.1 and assume that \(m>8\beta\). Hence, \(y_{0}<\frac{4\beta}{m}\) and \(y_{0}>1-\frac{4\beta}{m}\) cannot both hold simultaneously, and throughout the section we assume without loss of generality that \(y_{0}<1-\frac{4\beta}{m}\). The proofs of the following results combine an entanglement inequality inspired by [18] and the estimates from Proposition 4.1. Firstly, we obtain estimates when \(y\) is far from the critical layer, but \(z\) is still near the spectral variable \(y_{0}\).

**Lemma 4.4**.: _Let \(y_{0}\in[0,1]\) and \(0<\varepsilon\leq\varepsilon_{0}\). For all \(z\in[0,1]\) such that \(m|z-y_{0}|\leq 9\beta\) we have the following._ \[\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(\cdot,y_{0},z)\|_{L^{2}_{y}(J ^{c}_{3})}^{2}+m^{2}\|\mathcal{G}_{m,\varepsilon}^{\pm}(\cdot,y_{0},z)\|_{L^{ 2}_{y}(J^{c}_{3})}^{2}\lesssim m^{-2\mu}|z-y_{0}\pm i\varepsilon|^{1-2\mu}.\]

Proof.: Assume without loss of generality that \(y_{0}<1-\frac{3\beta}{m}\). Let \(y_{2}=y_{0}+\frac{2\beta}{m}\) and take \(\eta\in C^{1}_{p}([y_{2},1])\), the space of piecewise continuously differentiable functions. To ease notation, we denote \(h(y):=\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)\). Hence, \(h(y)\) solves \[\left(\Delta_{m}+\beta^{2}\frac{1}{(y-y_{0}+i\varepsilon)^{2}}\right)h=\delta (y-z).\] Multiplying the equation by \(\overline{h}\eta^{2}\) and integrating from \(y_{2}\) to 1, we find that \[-\overline{h}(z)\eta^{2}(z)\mathcal{H}(z-y_{2})=\int_{y_{2}}^{1}|\partial_{y} h|^{2}\eta^{2}+2\partial_{y}h\overline{h}\partial_{y}\eta\eta+m^{2}|h|^{2} \eta^{2}-\beta^{2}\frac{|h|^{2}\eta^{2}}{(y-y_{0}+i\varepsilon)^{2}}\,\mathrm{ d}y\] and thus \[|\overline{h}(z)\eta^{2}(z)\mathcal{H}(z-y_{2})|\geq\int_{y_{2}}^{1}\frac{1}{2 }|\partial_{y}h|^{2}\eta^{2}+\left(\frac{m^{2}}{2}\eta^{2}-2(\partial_{y}\eta )^{2}\right)|h|^{2}\,\mathrm{d}y,\] where we have used Young's inequality and \(m|y-y_{0}+i\varepsilon|\geq 2\beta\), for all \(y\geq y_{2}\). Here, \(\mathcal{H}\) represents the Heaviside function. Now, we shall choose \(\eta\) as follows: \[\eta(y)=\begin{cases}\frac{m}{\beta}(y-y_{2}),&y\in(y_{2},y_{2}+\frac{\beta}{ m}),\\ 1,&y\in(y_{2}+\frac{\beta}{m},1).\end{cases}\] Note that \(\eta\) is a piecewise \(C^{1}\) function which is linear in \((y_{2},y_{2}+\frac{\beta}{m})\) and constant in \((y_{2}+\frac{\beta}{m},1)\). Hence, \[|h(z)|+\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{m}}|h|^{2} \mathrm{d}y\geq\frac{1}{2}\int_{y_{2}+\frac{\beta}{m}}^{1}\left(|\partial_{y} h|^{2}+m^{2}|h|^{2}\right)\mathrm{d}y.\] Using Proposition 4.1, we can estimate \[|h(z)|+\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{m} }|h(y,y_{0},z)|^{2}\mathrm{d}y \lesssim m^{-4\mu}|z-y_{0}+i\varepsilon|^{1-2\mu}\left(1+\frac{m^{2 }}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{m}}|y-y_{0}+i\varepsilon|^{1-2 \mu}\mathrm{d}y\right)\] \[\lesssim m^{-2\mu}|z-y_{0}+i\varepsilon|^{1-2\mu}.\] Therefore, since \(y_{2}=y_{0}+\frac{2\beta}{m}\) we have the bound \[\int_{y_{0}+\frac{3\beta}{m}}^{1}\left(|\partial_{y}h|^{2}+m^{2}|h|^{2}\right) \mathrm{d}y\lesssim m^{-2\mu}|z-y_{0}+i\varepsilon|^{1-2\mu},\] and the lemma follows. 

We shall now deduce estimates for \(\partial_{y}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\) when \(y\) is still near \(y_{0}\) but \(z\) is away from the critical layer.
To this end, we shall use the symmetry of the Green's function and the following result.

**Lemma 4.5**.: _Let \(y_{0}\in[0,1]\) and \(0<\varepsilon\leq\varepsilon_{0}\). For all \(z\in[0,1]\) such that \(m|z-y_{0}|\leq 3\beta\) we have the following._ \[\|\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(\cdot,y_{0},z)\|_ {L^{2}_{y}(J^{c}_{4})}^{2}+\|\partial_{z}\mathcal{G}^{\pm}_{m, \varepsilon}(\cdot,y_{0},z)\|_{L^{2}_{y}(J^{c}_{4})}^{2}\lesssim m^ {-2\mu}|z-y_{0}\pm i\varepsilon|^{-1-2\mu}.\]

Proof.: We assume without loss of generality that \(y_{0}\leq 1-\frac{4\beta}{m}\). For any \(y>z\), we have that \(g(y):=\partial_{z}\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)\) solves \[\left(\Delta_{m}+\beta^{2}\frac{1}{(y-y_{0}+i\varepsilon)^{2}}\right)g=0,\] with \(g(1)=0\). Multiplying the equation by \(\overline{g}\eta^{2}\) and integrating from \(y_{2}=y_{0}+\frac{7\beta}{2m}>z\) to 1, we find that \[0 =\int_{y_{2}}^{1}|\partial_{y}g|^{2}\eta^{2}+2\partial_{y}g \overline{g}\partial_{y}\eta\eta+m^{2}|g|^{2}\eta^{2}-\beta^{2}\frac{|g|^{2} \eta^{2}}{(y-y_{0}+i\varepsilon)^{2}}\,\mathrm{d}y\] \[\geq\int_{y_{2}}^{1}\frac{1}{2}|\partial_{y}g|^{2}\eta^{2}+\left( \frac{m^{2}}{2}\eta^{2}-2(\partial_{y}\eta)^{2}\right)|g|^{2}\,\mathrm{d}y,\] where we have used Young's inequality and \(m|y-y_{0}|\geq 2\beta\), for all \(y\geq y_{2}\). Choosing \[\eta(y)=\begin{cases}\frac{2m}{\beta}(y-y_{2}),&y\in(y_{2},y_{2}+\frac{\beta} {2m}),\\ 1,&y\in(y_{2}+\frac{\beta}{2m},1),\end{cases}\] we get \[\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{2m}}|g|^{2}\mathrm{d} y\geq\frac{1}{2}\int_{y_{2}+\frac{\beta}{2m}}^{1}\left(|\partial_{y}g|^{2}+m^{2}|g| ^{2}\right)\mathrm{d}y.\] Using Proposition 4.1, we can estimate \[\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{2m}}|g(y, y_{0},z)|^{2}\mathrm{d}y \lesssim m^{-4\mu}|z-y_{0}+i\varepsilon|^{-1-2\mu}\frac{m^{2}}{ \beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{2m}}|y-y_{0}+i\varepsilon|^{1-2\mu }\mathrm{d}y\] \[\lesssim m^{-2\mu}|z-y_{0}+i\varepsilon|^{-1-2\mu}.\] Now, \(y_{2}+\frac{\beta}{2m}=y_{0}+\frac{4\beta}{m}\), so that \[\int_{y_{0}+\frac{4\beta}{m}}^{1}\left(|\partial_{y}g|^{2}+m^{2}|g|^{2}\right) \mathrm{d}y\lesssim m^{-2\mu}|z-y_{0}+i\varepsilon|^{-1-2\mu}.\] The proof is finished. 

The next corollary is a direct consequence of the above lemma, together with the observation that once the estimate for \(\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\) is established, the estimate for \(\partial_{y}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\) follows from the fact that, since \(\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)=\mathcal{G}^{\pm}_{m,\varepsilon }(z,y_{0},y)\), then \((\partial_{y}\mathcal{G}^{\pm}_{m,\varepsilon})(y,y_{0},z)=(\partial_{z} \mathcal{G}^{\pm}_{m,\varepsilon})(z,y_{0},y)\).

**Corollary 4.6**.: _Let \(y_{0}\in[0,1]\) and \(0<\varepsilon\leq\varepsilon_{0}\). For all \(y\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\) we have_ \[\|\partial_{y}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},\cdot)\|_{L^{2}_{z}(J^ {c}_{4})}\lesssim\frac{1}{m^{1+\mu}}|y-y_{0}\pm i\varepsilon|^{-\frac{ 1}{2}-\mu}.\]

### Proof of Theorem 2

Let \(0<\varepsilon\leq\varepsilon_{0}\leq\frac{\beta}{m}\) and assume that \(m|y-y_{0}|\leq 3\beta\). For \(m\leq 8\beta\), the theorem follows directly from Proposition 4.1.
Hence, we consider \(m>8\beta\) and note that \[\|\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2}_{z}}\leq\|\mathcal{G}_{ m,\varepsilon}^{\pm}\|_{L^{2}_{z}(J_{3})}+\|\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2 }_{z}(J_{3}^{c})},\quad\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{ 2}_{z}}\leq\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2}_{z}(J_{4} )}+\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2}_{z}(J_{4}^{c})}.\] Now, the bounds for \(\|\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2}_{z}(J_{3})}\) and \(\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2}_{z}(J_{4})}\) follow from Proposition 4.1, while the estimate for \(\|\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2}_{z}(J_{3}^{c})}\) is given in Lemma 4.4, due to the \(y,z\) symmetry of the Green's function, and the estimate for \(\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L^{2}_{z}(J_{4}^{c})}\) is given by Corollary 4.6. The theorem follows.

## 5. Bounds on the Green's function for \(\beta^{2}=1/4\)

This section studies and obtains \(L^{2}\) bounds on the Green's function for the case \(\beta^{2}=1/4\). Most of the results and proofs are analogous to the ones presented in Section 4 above, so we limit ourselves to presenting the statements we use and commenting on the main ingredients of the proofs.

**Theorem 5**.: _There exists \(\varepsilon_{0}>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0})\) and for all \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\), we have_ \[|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}}\|\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},\cdot)\|_{L^{2}_{z}}+|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}}\|\partial _{y}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},\cdot)\|_{L^{2}_{z}}\lesssim \frac{1}{m}\left(1+\big{|}\log\big{(}m|y-y_{0}\pm i\varepsilon|\big{)}\big{|}\right).\] In comparison with Theorem 2, we have a logarithmic correction to the behavior near the critical layer. The proof is omitted, as it is analogous to that of Theorem 2 once all the intermediate steps are established. The rest of this section is devoted to the proof of these steps, to be compared with the analogous ones of Section 4.

### Estimates near the critical layer

Using the analytic continuation properties from Lemma A.2, we can write the Wronskian as \[\mathcal{W}_{m,\varepsilon}^{+}(y_{0}) =\frac{2im}{\sqrt{\pi}}\big{(}M_{0}(1-y_{0}+i\varepsilon)W_{0}(y_ {0}-i\varepsilon)-W_{0}(1-y_{0}+i\varepsilon)M_{0}(y_{0}-i\varepsilon)\big{)}\] \[\quad+2mM_{0}(1-y_{0}+i\varepsilon)M_{0}(y_{0}-i\varepsilon).\] We then have the following.

**Proposition 5.1**.: _Let \(y,y_{0},z\in[0,1]\) such that \(m|y-y_{0}+i\varepsilon|\leq 10\beta\) and \(m|z-y_{0}+i\varepsilon|\leq 10\beta\). There exists \(\varepsilon_{0}>0\) such that_ \[|\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)|\lesssim|y-y_{0}+i\varepsilon|^{ \frac{1}{2}}|z-y_{0}+i\varepsilon|^{\frac{1}{2}}\left(1+\big{|}\log\big{(}m|y-y_{0}+ i\varepsilon|\big{)}\big{|}\right)\left(1+\big{|}\log\big{(}m|z-y_{0}+i\varepsilon|\big{)}\big{|}\right)\] _and_ \[|\partial_{y}\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)|\lesssim|y-y_{0}+i \varepsilon|^{-\frac{1}{2}}|z-y_{0}+i\varepsilon|^{\frac{1}{2}}\left(1+\big{|} \log\big{(}m|y-y_{0}+i\varepsilon|\big{)}\big{|}\right)\left(1+\big{|}\log\big{(}m|z-y_{0}+i \varepsilon|\big{)}\big{|}\right),\] _for all \(\varepsilon\leq\varepsilon_{0}\)._

Proof.: Assume without loss of generality that \(y\leq z\).
From the asymptotic expansions given by Lemma A.3, we have that \[|\phi_{u,m,\varepsilon}^{+}(z,y_{0})|\lesssim(2m|z-y_{0}+i\varepsilon|)^{\frac {1}{2}}\left(1+\big{|}\log\big{(}2m|z-y_{0}+i\varepsilon|\big{)}\big{|}\right)\big{(}|W_{0}(1-y_{0}+i \varepsilon)|+|M_{0}(1-y_{0}+i\varepsilon)|\big{)},\] while \[|\phi_{l,m,\varepsilon}^{+}(y,y_{0})|\lesssim(2m|y-y_{0}+i\varepsilon|)^{\frac {1}{2}}\left(1+\big{|}\log\big{(}2m|y-y_{0}+i\varepsilon|\big{)}\big{|}\right)\big{(}|W_{0}(y_{0}-i \varepsilon)|+|M_{0}(y_{0}-i\varepsilon)|\big{)}.\] The proposition follows from the estimates on the Wronskian given in the lemma below.

**Lemma 5.2**.: _Let \(y_{0}\in[0,1]\). There exist \(0<\varepsilon_{0}\leq\frac{\beta}{m}\) and \(C>0\) such that_ \[|\mathcal{W}_{m,\varepsilon}^{+}(y_{0})|\geq Cm|M_{0}(y_{0}-i\varepsilon) ||M_{0}(1-y_{0}+i\varepsilon)|,\] _for all \(\varepsilon\leq\varepsilon_{0}\)._

Proof.: The proof follows from treating the next two cases. Let \(N_{0}>0\) be given as in Lemma A.10.

\(\bullet\)**Case 1: \(m<N_{0}\).** Assume that \(y_{0}\leq\frac{1}{2}\). Then \(2my_{0}<N_{0}\) and \(m\leq 2m(1-y_{0})<2N_{0}\). Assume further that \(2my_{0}\leq\delta_{1}\) given by Lemma A.11. Then, \[\mathcal{W}^{+}_{m,\varepsilon}(y_{0})=\frac{2mi}{\sqrt{\pi}}M_{0}(1-y_{0}+i \varepsilon)W_{0}(y_{0}-i\varepsilon)\left(1-\frac{W_{0}(1-y_{0}+i\varepsilon )}{M_{0}(1-y_{0}+i\varepsilon)}\frac{M_{0}(y_{0}-i\varepsilon)}{W_{0}(y_{0}-i \varepsilon)}-i\sqrt{\pi}\frac{M_{0}(y_{0}-i\varepsilon)}{W_{0}(y_{0}-i \varepsilon)}\right)\] and from Lemma A.12 and Lemma A.11 we have \[\left|\frac{W_{0}(1-y_{0}+i\varepsilon)}{M_{0}(1-y_{0}+i\varepsilon)}\right| \leq C_{0},\quad\left|\frac{M_{0}(y_{0}-i\varepsilon)}{W_{0}(y_{0}-i \varepsilon)}\right|\leq\frac{1}{2(C_{0}+\sqrt{\pi})},\] for all \(\varepsilon\leq\varepsilon_{0}\), from which the lower bound on the Wronskian follows. Assume now that \(\delta_{1}<2my_{0}<N_{0}\); in this case we write \[\mathcal{W}^{+}_{m,\varepsilon}(y_{0})=2mM_{0}(1-y_{0}+i\varepsilon)M_{0}(y_{ 0}-i\varepsilon)\left(1+\frac{i}{\sqrt{\pi}}\left(\frac{W_{0}(y_{0}-i \varepsilon)}{M_{0}(y_{0}-i\varepsilon)}-\frac{W_{0}(1-y_{0}+i\varepsilon)}{ M_{0}(1-y_{0}+i\varepsilon)}\right)\right)\] and we further note that, for all \(\varepsilon\leq\varepsilon_{0}\), \[\left|1+\frac{i}{\sqrt{\pi}}\left(\frac{W_{0}(y_{0}-i\varepsilon) }{M_{0}(y_{0}-i\varepsilon)}-\frac{W_{0}(1-y_{0}+i\varepsilon)}{M_{0}(1-y_{0}+ i\varepsilon)}\right)\right|\] \[\qquad\geq 1-\frac{1}{\sqrt{\pi}}\left(\left|\mathrm{Im}\left( \frac{W_{0}(y_{0}-i\varepsilon)}{M_{0}(y_{0}-i\varepsilon)}\right)\right|+ \left|\mathrm{Im}\left(\frac{W_{0}(1-y_{0}+i\varepsilon)}{M_{0}(1-y_{0}+i \varepsilon)}\right)\right|\right)\] \[\qquad\geq\frac{1}{2},\] due to the estimates from Lemma A.12. The lower bound on the Wronskian follows as before.
\(\bullet\)**Case 2: \(m\geq N_{0}\).** Under the assumption that \(2m(1-y_{0})\geq N_{0}\) and that \(2m|y_{0}-i\varepsilon|\leq\delta_{1}\) we can write

\[\mathcal{W}^{+}_{m,\varepsilon}(y_{0})=\frac{2mi}{\sqrt{\pi}}M_{0}(1-y_{0}+i \varepsilon)W_{0}(y_{0}-i\varepsilon)\left(1-\frac{W_{0}(1-y_{0}+i\varepsilon )}{M_{0}(1-y_{0}+i\varepsilon)}\frac{M_{0}(y_{0}-i\varepsilon)}{W_{0}(y_{0}-i \varepsilon)}-i\sqrt{\pi}\frac{M_{0}(y_{0}-i\varepsilon)}{W_{0}(y_{0}-i \varepsilon)}\right)\]

and we have that

\[\left|\frac{W_{0}(1-y_{0}+i\varepsilon)}{M_{0}(1-y_{0}+i\varepsilon)}\right| \leq\sqrt{\pi},\quad\left|\frac{M_{0}(y_{0}-i\varepsilon)}{W_{0}(y_{0}-i \varepsilon)}\right|\leq\frac{1}{4\sqrt{\pi}},\]

from which we obtain the lower bound

\[|\mathcal{W}^{+}_{m,\varepsilon}(y_{0})|\geq\frac{m}{\sqrt{\pi}}|M_{0}(1-y_{0 }+i\varepsilon)||W_{0}(y_{0}-i\varepsilon)|.\]

Now, when \(2my_{0}\geq\delta_{1}\), we write

\[\mathcal{W}^{+}_{m,\varepsilon}(y_{0})=2mM_{0}(1-y_{0}+i\varepsilon)M_{0}(y_{ 0}-i\varepsilon)\left(1+\frac{i}{\sqrt{\pi}}\left(\frac{W_{0}(y_{0}-i \varepsilon)}{M_{0}(y_{0}-i\varepsilon)}-\frac{W_{0}(1-y_{0}+i\varepsilon)}{ M_{0}(1-y_{0}+i\varepsilon)}\right)\right)\]

and we further note that, for all \(\varepsilon\leq\varepsilon_{0}\),

\[\left|1+\frac{i}{\sqrt{\pi}}\left(\frac{W_{0}(y_{0}-i\varepsilon) }{M_{0}(y_{0}-i\varepsilon)}-\frac{W_{0}(1-y_{0}+i\varepsilon)}{M_{0}(1-y_{0} +i\varepsilon)}\right)\right|\]
\[\qquad\qquad\geq 1-\frac{1}{\sqrt{\pi}}\left(\left|\mathrm{Im} \left(\frac{W_{0}(y_{0}-i\varepsilon)}{M_{0}(y_{0}-i\varepsilon)}\right) \right|+\left|\mathrm{Im}\left(\frac{W_{0}(1-y_{0}+i\varepsilon)}{M_{0}(1-y_{0}+i\varepsilon)}\right)\right|\right)\]
\[\qquad\qquad\geq\frac{1}{2},\]

due to the estimates from Lemma A.12 and Lemma A.10. The lower bound on the Wronskian follows as before.

### Estimates for \(\mathcal{G}_{m,\varepsilon}\) away from the critical layer

Throughout this section, let \(\varepsilon_{0}\) be given by Lemma 5.2 and let \(m>8\beta\).

**Lemma 5.3**.: _Let \(y_{0}\in[0,1]\) and \(0<\varepsilon\leq\varepsilon_{0}\). For all \(z\in[0,1]\) such that \(m|z-y_{0}|\leq 9\beta\) we have_

\[\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(\cdot,y_{0},z)\|_{L^{2}_{y}(J^{c}_{3})}^{2}+m^{2}\|\mathcal{G}_{m,\varepsilon}^{\pm}(\cdot,y_{0}, z)\|_{L^{2}_{y}(J^{c}_{3})}^{2}\lesssim|z-y_{0}\pm i\varepsilon| \left(1+\big{|}\log(m|z-y_{0}\pm i\varepsilon|)\big{|}\right)^{2}.\]

Proof.: We comment on the case \(y_{0}<1-\frac{3\beta}{m}\); the proof goes in the same spirit as the one for Lemma 4.4. For \(y_{2}=y_{0}+\frac{2\beta}{m}\) and \(h(y)=\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)\), introducing a suitable cut-off function we have that

\[|h(z)|+\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{m}}|h|^{2} \mathrm{d}y\geq\frac{1}{2}\int_{y_{2}+\frac{\beta}{m}}^{1}\left(|\partial_{y} h|^{2}+m^{2}|h|^{2}\right)\mathrm{d}y.\]

Using Proposition 5.1, we estimate

\[|h(z)|+\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{m} }|h(y)|^{2}\mathrm{d}y\]
\[\lesssim\frac{m^{2}}{\beta^{2}}|z-y_{0}+i\varepsilon|\left(1+ \big{|}\log(m|z-y_{0}+i\varepsilon|)\big{|}\right)^{2}\int_{y_{2}}^{y_{2}+ \frac{\beta}{m}}|y-y_{0}+i\varepsilon|\left(1+\big{|}\log(m|y-y_{0}+i \varepsilon|)\big{|}\right)^{2}\mathrm{d}y\]
\[\quad+|z-y_{0}+i\varepsilon|\left(1+\big{|}\log(m|z-y_{0}+i \varepsilon|)\big{|}\right)^{2}\]
\[\lesssim|z-y_{0}+i\varepsilon|\left(1+\big{|}\log(m|z-y_{0}+i \varepsilon|)\big{|}\right)^{2},\]

since \(1\leq m|y-y_{0}+i\varepsilon|\leq 2\), for all \(y\in\left[y_{2},y_{2}+\frac{\beta}{m}\right]\).
Therefore, recalling \(y_{2}=y_{0}+\frac{2\beta}{m}\) we have the bound

\[\int_{y_{0}+\frac{3\beta}{m}}^{1}\left(|\partial_{y}h|^{2}+m^{2}|h|^{2}\right) \mathrm{d}y\lesssim|z-y_{0}+i\varepsilon|\left(1+\big{|}\log(m|z-y_{0}+i \varepsilon|)\big{|}\right)^{2}\]

and the proof follows.

We next provide an intermediate result towards the estimates for \(\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},\cdot)\|_{L^{2}_{z}(J^{c}_{4})}\).

**Lemma 5.4**.: _Let \(y_{0}\in[0,1]\) and \(0<\varepsilon\leq\varepsilon_{0}\). For all \(z\in[0,1]\) such that \(m|z-y_{0}|\leq 3\beta\) we have_

\[\|\partial_{y}\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(\cdot,y_{0},z)\|_ {L^{2}_{y}(J^{c}_{4})}^{2}+\|\partial_{z}\mathcal{G}_{m,\varepsilon} ^{\pm}(\cdot,y_{0},z)\|_{L^{2}_{y}(J^{c}_{4})}^{2}\lesssim|z-y_{0} \pm i\varepsilon|^{-1}\left(1+\big{|}\log(m|z-y_{0}\pm i\varepsilon|)\big{|} \right)^{2}.\]

Proof.: From the proof of Lemma 4.5, for \(g(y):=\partial_{z}\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)\) we have that

\[\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{2m}}|g|^{2}\mathrm{d} y\geq\frac{1}{2}\int_{y_{2}+\frac{\beta}{2m}}^{1}\left(|\partial_{y}g|^{2}+m^{2}|g| ^{2}\right)\mathrm{d}y.\]

Using Proposition 5.1 we estimate

\[\frac{m^{2}}{\beta^{2}}\int_{y_{2}}^{y_{2}+\frac{\beta}{2m}}|g(y)|^{2} \mathrm{d}y\lesssim|z-y_{0}+i\varepsilon|^{-1}\left(1+\big{|}\log(m|z-y_{0}+i \varepsilon|)\big{|}\right)^{2}.\]

Therefore, since \(y_{2}+\frac{\beta}{2m}=y_{0}+\frac{4\beta}{m}\) we have the bound

\[\int_{y_{0}+\frac{4\beta}{m}}^{1}\left(|\partial_{y}g|^{2}+m^{2}|g|^{2}\right) \mathrm{d}y\lesssim|z-y_{0}+i\varepsilon|^{-1}\left(1+\big{|}\log(m|z-y_{0}+i \varepsilon|)\big{|}\right)^{2},\]

and the lemma follows.

We finish the section with the estimates for \(\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},\cdot)\|_{L^{2}_{z}(J^{c}_{4})}\), which are deduced using the symmetry properties of the Green's function as in Corollary 4.6 and are given in the next result.

**Corollary 5.5**.: _Let \(y_{0}\in[0,1]\) and \(0<\varepsilon\leq\varepsilon_{0}\). For all \(y\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\) we have_

\[\|\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},\cdot)\|_{L^{2}_{z}(J_{ 4}^{c})}\lesssim\frac{1}{m}|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}}\left(1+ \big{|}\log(m|y-y_{0}\pm i\varepsilon|)\big{|}\right).\]

## 6 Contour integral reduction

In this section, we study the contour integration that is present in Dunford's formula (see (2.2))

\[\begin{pmatrix}\psi_{m}(t,y)\\ \rho_{m}(t,y)\end{pmatrix}=\frac{1}{2\pi i}\int_{\partial\Omega}\mathrm{e}^{ -imct}\mathcal{R}(c,L_{m})\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\,\mathrm{d}c,\qquad\mathcal{R}(c,L_{m}):=(c-L_{m}) ^{-1}, \tag{6.1}\]

where \(\Omega\) is any domain containing \(\sigma(L_{m})\), the spectrum of the linearized operator \(L_{m}\) in (2.1). The main goal of this section is, under suitable conditions on the initial data, to reduce the above contour integration to a much simpler integration along the essential spectrum \(\sigma_{ess}(L_{m})=[0,1]\).
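Schematically, writing \((\psi^{\pm}_{m,\varepsilon}(\cdot,y_{0}),\rho^{\pm}_{m,\varepsilon}(\cdot,y_{0}))\) for the components of the resolvent at the spectral parameter \(y_{0}\mp i\varepsilon\) (the notation used throughout the rest of the section), the reduction we are after takes the form of a wave-packet superposition over the essential spectrum,

\[\begin{pmatrix}\psi_{m}(t,y)\\ \rho_{m}(t,y)\end{pmatrix}=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\begin{pmatrix}\psi^{-}_{m,\varepsilon}(y,y_{0})-\psi^{+}_{m,\varepsilon}(y,y_{0})\\ \rho^{-}_{m,\varepsilon}(y,y_{0})-\rho^{+}_{m,\varepsilon}(y,y_{0})\end{pmatrix}\mathrm{d}y_{0};\]

this is only a schematic restatement, and the precise formulation, together with the hypotheses on the initial data under which it holds, is Proposition 6.1 below.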
As the domain of integration we take the rectangle \(\Omega=[-\beta/m,1+\beta/m]\times[-\beta/m,\beta/m]\), and we further split it into the regions

\[R_{0} =\left\{c=y_{0}+is\in\mathbb{C}:-\frac{\beta}{m}\leq y_{0}\leq 0, \,0\leq|s|\leq\frac{\beta}{m}\right\},\]
\[R_{ess} =\left\{c=y_{0}+is\in\mathbb{C}:0\leq y_{0}\leq 1,\,0\leq|s|\leq \frac{\beta}{m}\right\},\]
\[R_{1} =\left\{c=y_{0}+is\in\mathbb{C}:1\leq y_{0}\leq 1+\frac{\beta}{m}, \,0\leq|s|\leq\frac{\beta}{m}\right\},\]

so that (6.1) becomes

\[\begin{pmatrix}\psi_{m}(t,y)\\ \rho_{m}(t,y)\end{pmatrix}=\frac{1}{2\pi i}\left(\int_{\partial R_{0}}+\int_{ \partial R_{ess}}+\int_{\partial R_{1}}\right)\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\,\mathrm{d}c.\]

The decomposition of \(\Omega\) is depicted in Figure 2 below.

Figure 2. The domain of integration \(\Omega\) and the decomposition into \(R_{0}\), \(R_{ess}\) and \(R_{1}\).

The goal of the next three sections is to show the following result, which amounts to reducing our contour integration as in (2.3).

**Proposition 6.1**.: _Assume that the pair of initial data \((\omega_{m}^{0},\rho_{m}^{0})\) is orthogonal to the subspace generated by the eigenfunctions of \(L_{m}\). Then,_

\[\left\|\int_{\partial R_{0}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\mathrm{d}c \right\|_{L_{y}^{2}}+\left\|\int_{\partial R_{1}}\mathrm{e}^{-imct}\mathcal{R} (c,L_{m})\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\mathrm{d}c\right\|_{L_{y}^{2}}=0.\]

_Moreover,_

\[\frac{1}{2\pi i}\int_{\partial R_{ess}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\mathrm{d}c=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\big{(}(-y_{0}-i\varepsilon+L_{m})^{-1}\\ -(-y_{0}+i\varepsilon+L_{m})^{-1}\big{)}\begin{pmatrix}\psi_{m}^{ 0}\\ \rho_{m}^{0}\end{pmatrix}\,\mathrm{d}y_{0}.\]

The description of the spectrum in Theorem 3 will then be clear from the following three sections. As a first step towards proving Proposition 6.1 we show that \(\sigma(L_{m})\subset\Omega\).

**Lemma 6.2**.: _Let \(c\in\mathbb{C}\setminus\Omega\). Then, \(\left(c-L_{m}\right)^{-1}\) exists and_

\[\left\|(c-L_{m})^{-1}\begin{pmatrix}\psi_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\right\|_{L^{2}}\leqslant\|\omega_{m}^{0}\|_{L^{2}_ {y}}+\|\rho_{m}^{0}\|_{L^{2}_{y}}.\]

Proof.: For any \(c\in\mathbb{C}\setminus\Omega\), we note that \(\frac{1}{|y-c|}\leq\frac{m}{\beta}\), for all \(y\in[0,1]\), and we define \(\psi_{m}(y,c)\) as the unique solution, given by standard ODE theory, to

\[\Delta_{m}\psi_{m}+\beta^{2}\frac{\psi_{m}}{(y-c)^{2}}=\frac{\omega_{m}^{0}}{ y-c}-\beta^{2}\frac{\rho_{m}^{0}}{(y-c)^{2}}\]

with homogeneous boundary conditions \(\psi_{m}(0,c)=\psi_{m}(1,c)=0\). We also define

\[\rho_{m}(y,c)=\frac{1}{y-c}\left(\rho_{m}^{0}(y)+\psi_{m}(y,c)\right)\]

and it is straightforward to see that

\[(-c+L_{m})\begin{pmatrix}\psi_{m}(y,c)\\ \rho_{m}(y,c)\end{pmatrix}=\begin{pmatrix}\psi_{m}^{0}(y)\\ \rho_{m}^{0}(y)\end{pmatrix}.\]

Therefore, \((-c+L_{m})\) is invertible and the desired resolvent estimates follow from the usual energy estimates on the equation; that is, multiply the equation by \(\overline{\psi_{m}(y,c)}\), integrate by parts, and absorb the potential term.

In order to show that the contributions from \(\partial R_{0}\) and \(\partial R_{1}\) vanish, we study the resolvent operator for the cases \(\beta^{2}>1/4\), \(\beta^{2}=1/4\) and \(\beta^{2}<1/4\) separately.
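The identity displayed in the proof of Lemma 6.2 can be checked by hand once the form of \(L_{m}\) is fixed. Although (2.1) is not restated here, the kernel computations in Proposition 6.10 below suggest that, on vorticity–density pairs and with \(\psi=\Delta_{m}^{-1}\omega\), the operator acts as \(L_{m}(\omega,\rho)=\big(y\omega+\beta^{2}\rho,\,-\Delta_{m}^{-1}\omega+y\rho\big)\). Under this identification, an assumption we make here only for illustration, the verification is the following one-line sketch, with \(\omega_{m}:=\Delta_{m}\psi_{m}\):

\[(-c+L_{m})\begin{pmatrix}\omega_{m}\\ \rho_{m}\end{pmatrix}=\begin{pmatrix}(y-c)\omega_{m}+\beta^{2}\rho_{m}\\ -\psi_{m}+(y-c)\rho_{m}\end{pmatrix}=\begin{pmatrix}\omega_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix},\]

since the second component equals \(-\psi_{m}+\rho_{m}^{0}+\psi_{m}=\rho_{m}^{0}\) by the definition of \(\rho_{m}\), while the first one reduces, after dividing by \(y-c\), to the inhomogeneous Taylor-Goldstein equation displayed above.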
### Integral reduction for \(\beta^{2}>1/4\): discrete and embedded eigenvalues

The classical Miles-Howard stability criterion [17, 25] rules out the existence of unstable modes when \(\beta^{2}\geq 1/4\). That is, any eigenvalue \(c\in\mathbb{C}\) of \(L_{m}\) must have \(\mathrm{Im}(c)=0\). We next find and characterize the discrete set of real isolated eigenvalues that accumulate towards the essential spectrum, that is, towards 0 and 1. Our study involves a precise understanding of the Wronskian when \(\varepsilon=0\). For this, we denote

\[\mathcal{W}_{m}(c):=M_{+}(c)M_{-}(1+c)-M_{-}(c)M_{+}(1+c) \tag{6.2}\]

for all \(c>0\) and we note from (3.6) that \(\mathcal{W}_{m,0}^{\pm}(-c)=4i\nu m\mathcal{W}_{m}(c)\). We state the following.

**Proposition 6.3**.: _There exist sequences \(\{p_{k}\}_{k\geq 1}\) and \(\{q_{k}\}_{k\geq 1}\) of strictly positive real numbers such that \(p_{k},q_{k}\to 0\) as \(k\to\infty\) and_

\[|\mathcal{W}_{m}(p_{k})|=2|M_{+}(p_{k})||M_{+}(1+p_{k})|,\quad\mathcal{W}_{m}( q_{k})=0,\]

_for all \(k\geq 1\)._

Proof.: For any \(c>0\), from (6.2) we have that

\[\mathcal{W}_{m}(c)=-2i\mathrm{Im}\big{(}M_{-}(c)M_{+}(1+c)\big{)},\]

where further \(M_{-}(c)=\overline{M_{+}(c)}=|M_{+}(c)|\mathrm{e}^{-i\text{Arg}(M_{+}(c))}\) and \(M_{+}(1+c)=|M_{+}(1+c)|\mathrm{e}^{i\text{Arg}(M_{+}(1+c))}\). For \(x>0\), we define \(\Theta(x)=\text{Arg}(M_{+}(x))\) and we write

\[\mathcal{W}_{m}(c)=-2i|M_{+}(c)||M_{+}(1+c)|\sin\left(\Theta(1+c)-\Theta(c) \right).\]

The proposition follows if we can find two sequences \(\{p_{k}\}_{k\geq 1}\) and \(\{q_{k}\}_{k\geq 1}\) of strictly positive real numbers such that

\[\Theta(1+p_{k})-\Theta(p_{k})=\left(k+\frac{1}{2}\right)\pi,\qquad\Theta(1+ q_{k})-\Theta(q_{k})=k\pi,\]

for all \(k\geq 1\). To this end, given the Wronskian properties of the pair \(M_{+}(x)\) and \(M_{-}(x)\) from [26], we note that for all \(x>0\)

\[-2i\nu =M_{+}(x)M_{-}^{\prime}(x)-M_{+}^{\prime}(x)M_{-}(x)\]
\[=|M_{+}(x)|\mathrm{e}^{i\Theta(x)}\left(|M_{+}(x)|^{\prime} \mathrm{e}^{-i\Theta(x)}-i\Theta^{\prime}(x)|M_{+}(x)|\mathrm{e}^{-i\Theta(x)}\right)\]
\[\quad-|M_{+}(x)|\mathrm{e}^{-i\Theta(x)}\left(|M_{+}(x)|^{\prime }\mathrm{e}^{i\Theta(x)}+i\Theta^{\prime}(x)|M_{+}(x)|\mathrm{e}^{i\Theta(x)}\right)\]
\[=-2i\Theta^{\prime}(x)|M_{+}(x)|^{2},\]

and thus, \(\Theta^{\prime}(x)=\frac{\nu}{|M_{+}(x)|^{2}}>0\). Hence, for all \(c>0\) we define

\[r(c):=\Theta(1+c)-\Theta(c)=\nu\int_{c}^{1+c}\frac{1}{|M_{+}(x)|^{2}}\mathrm{ d}x.\]

Note that \(r(c)\) is continuous for all \(c>0\) and strictly decreasing; this follows from \(|M_{+}(x)|\) being strictly increasing, see Lemma A.5. Moreover, also from Lemma A.5, we have that

\[r(c)\gtrsim_{\nu}\int_{c}^{1+c}\frac{1}{x}\mathrm{d}x\gtrsim_{\nu}\ln\left( \frac{1+c}{c}\right),\]

which diverges as \(c\to 0\), while from Lemma A.6 and since \(|M_{+}(x)|\) is an increasing function of \(x\geq 0\), we have

\[r(c)\leq\frac{\nu}{|M_{+}(c)|^{2}}\lesssim\mathrm{e}^{-c}.\]

Therefore, \(r(c):(0,+\infty)\to(0,+\infty)\) is a bijection and we conclude the existence of two sequences of strictly positive real numbers \(\{p_{k}\}_{k\geq 1}\) and \(\{q_{k}\}_{k\geq 1}\) such that \(q_{k+1}<p_{k}<q_{k}\), for all \(k\geq 1\) and

\[r(p_{k})=\left(k+\frac{1}{2}\right)\pi,\qquad r(q_{k})=k\pi,\]

with the further property that \(p_{k},q_{k}\to 0\) as \(k\to\infty\).
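Concretely, since \(r\) is a continuous, strictly decreasing bijection of \((0,+\infty)\) onto itself, the two sequences are simply the preimages

\[p_{k}=r^{-1}\Big{(}\big{(}k+\tfrac{1}{2}\big{)}\pi\Big{)},\qquad q_{k}=r^{-1}(k\pi),\qquad k\geq 1,\]

so that the monotonicity of \(r^{-1}\) yields the interlacing \(q_{k+1}<p_{k}<q_{k}\), while the divergence of \(r\) at \(0^{+}\) forces \(p_{k},q_{k}\to 0\) as \(k\to\infty\).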
**Corollary 6.4**.: _There are infinitely many eigenvalues \(c_{k}:=-q_{k}<0\) of \(L_{m}\) accumulating at \(c=0\)._

Proof.: Any eigenvalue \(c<0\) of \(L_{m}\) is such that there exists a non-trivial solution \(\psi=\psi_{m}(y,c)\) to

\[\Delta_{m}\psi+\beta^{2}\frac{\psi}{(y-c)^{2}}=0\]

satisfying the boundary conditions \(\psi(0)=\psi(1)=0\). We can write such a solution as

\[\psi(y)=AM_{+}(y-c)+BM_{-}(y-c)\]

and, since \(c<0\), it is smooth. Imposing the boundary conditions, we have non-trivial coefficients \(A,B\in\mathbb{C}\) if and only if \(\mathcal{W}_{m}(-c)=M_{+}(-c)M_{-}(1-c)-M_{-}(-c)M_{+}(1-c)\) vanishes. This is the case precisely for \(c=-q_{k}\), with \(\{q_{k}\}_{k\in\mathbb{N}}\) given by Proposition 6.3. These \(c_{k}=-q_{k}\) are the discrete eigenvalues of \(L_{m}\) and they accumulate towards 0.

We shall next obtain suitable estimates on the contour integral of the resolvent. We focus on the integral along \(\partial R_{0}\); the results we show are the same for the integral along \(\partial R_{1}\). In this direction, we write

\[\int_{\partial R_{0}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\mathrm{d}c=\int_{ \partial R_{*}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\mathrm{d}c+\int_{ \partial(R_{0}\setminus R_{*})}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\mathrm{ d}c,\]

where \(R_{*}=\left\{c=y_{0}+is\in\mathbb{C}:y_{0}\in[y_{*},0]\,,\,s\in[-\varepsilon_{*}, \varepsilon_{*}]\right\}\), for some \(y_{*}<0\) and \(\varepsilon_{*}>0\) that will be determined later. More precisely, we have that

\[\left\|\int_{\partial R_{*}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m} )\mathrm{d}c\right\|_{L_{y}^{2}}\leq \left\|\int_{y_{*}}^{0}\mathrm{e}^{-imy_{0}t}\left(\mathrm{e}^{m \varepsilon_{*}t}\psi_{m,\varepsilon_{*}}^{-}(y,y_{0})-\mathrm{e}^{-m \varepsilon_{*}t}\psi_{m,\varepsilon_{*}}^{+}(y,y_{0})\right)\mathrm{d}y_{0} \right\|_{L_{y}^{2}}\]
\[+\left\|\int_{0}^{\varepsilon_{*}}\mathrm{e}^{-imy_{*}t}\left( \mathrm{e}^{mst}\psi_{m,s}^{-}(y,y_{*})+\mathrm{e}^{-mst}\psi_{m,s}^{+}(y,y_{* })\right)\mathrm{d}s\right\|_{L_{y}^{2}}\]
\[+\left\|\int_{0}^{\varepsilon_{*}}\left(\mathrm{e}^{mst}\psi_{m, s}^{-}(y,0)+\mathrm{e}^{-mst}\psi_{m,s}^{+}(y,0)\right)\mathrm{d}s\right\|_{L_{y}^ {2}}\]
\[+\left\|\int_{y_{*}}^{0}\mathrm{e}^{-imy_{0}t}\left(\mathrm{e}^{ m\varepsilon_{*}t}\rho_{m,\varepsilon_{*}}^{-}(y,y_{0})-\mathrm{e}^{-m \varepsilon_{*}t}\rho_{m,\varepsilon_{*}}^{+}(y,y_{0})\right)\mathrm{d}y_{0} \right\|_{L_{y}^{2}}\]
\[+\left\|\int_{0}^{\varepsilon_{*}}\mathrm{e}^{-imy_{*}t}\left( \mathrm{e}^{mst}\rho_{m,s}^{-}(y,y_{*})+\mathrm{e}^{-mst}\rho_{m,s}^{+}(y,y_{* })\right)\mathrm{d}s\right\|_{L_{y}^{2}}\]
\[+\left\|\int_{0}^{\varepsilon_{*}}\left(\mathrm{e}^{mst}\rho_{m,s }^{-}(y,0)+\mathrm{e}^{-mst}\rho_{m,s}^{+}(y,0)\right)\mathrm{d}s\right\|_{L_{ y}^{2}}.\]

In what follows, we obtain suitable estimates for each integral. We begin by obtaining bounds on the Green's functions \(\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{*},z)\) when \(y_{*}=-p_{k}\) for some large \(k\geq 1\) and for \(\varepsilon=\varepsilon_{*}\) small.

**Lemma 6.5**.: _Let \(\varepsilon>0\) and \(p_{k}>0\) given by Proposition 6.3 for some \(k\geq 1\). Then,_

\[|\mathcal{G}_{m,\varepsilon}^{\pm}(y,-p_{k},z)|\lesssim_{m}|y+p_{k}-i \varepsilon|^{\frac{1}{2}},\]

_uniformly for all \(y,z\in[0,1]\), for \(\frac{\varepsilon}{p_{k}}\) sufficiently small._

Proof.: We proceed similarly to the proof of Lemma 6.11.
That is, for \(y\leq z\),

\[\mathcal{G}_{m,\varepsilon}^{-}(y,-p_{k},z)=\frac{\phi_{l,m, \varepsilon}^{-}(y,-p_{k})\phi_{u,m,\varepsilon}^{-}(z,-p_{k})}{\mathcal{W}_{ m,\varepsilon}^{-}(-p_{k})}.\]

Due to the explicit solutions of the Taylor-Goldstein equation, we can find that

\[\phi_{l,m,\varepsilon}^{-}(y,-p_{k}) =M_{+}(p_{k}-i\varepsilon)M_{-}(y+p_{k}-i\varepsilon)-M_{-}(p_{k }-i\varepsilon)M_{+}(y+p_{k}-i\varepsilon)\]
\[=M_{+}(p_{k})M_{-}(y+p_{k}-i\varepsilon)-M_{-}(p_{k})M_{+}(y+p_{ k}-i\varepsilon)+R_{1}(p_{k},\varepsilon),\]

where \(|R_{1}(p_{k},\varepsilon)|\lesssim\frac{\varepsilon}{p_{k}}|M_{+}(p_{k})|\big{(} |M_{+}(y+p_{k}-i\varepsilon)|+|M_{-}(y+p_{k}-i\varepsilon)|\big{)}\). In particular,

\[|\phi_{l,m,\varepsilon}^{-}(y,-p_{k})|\lesssim\left(1+\frac{ \varepsilon}{p_{k}}\right)|M_{+}(p_{k})||y+p_{k}-i\varepsilon|^{\frac{1}{2}}.\]

On the other hand,

\[\phi_{u,m,\varepsilon}^{-}(z,-p_{k}) =M_{+}(1+p_{k})M_{-}(z+p_{k})-M_{-}(1+p_{k})M_{+}(z+p_{k})+R_{2} (p_{k},\varepsilon)\]
\[=2i\mathrm{Im}\left(M_{+}(1+p_{k})M_{-}(z+p_{k})\right)+R_{2}(p_{k},\varepsilon),\]

where \(|R_{2}(p_{k},\varepsilon)|\lesssim|M_{+}(1+p_{k})|\frac{\varepsilon}{p_{k}}\). In particular,

\[|\phi_{u,m,\varepsilon}^{-}(z,-p_{k})|\lesssim\left(1+\frac{ \varepsilon}{p_{k}}\right)|M_{+}(1+p_{k})|.\]

Let us now estimate the Wronskian. We have that

\[\mathcal{W}^{-}_{m,\varepsilon}(-p_{k}) =4i\nu m\Big{(}M_{+}(p_{k}-i\varepsilon)M_{-}(1+p_{k}-i\varepsilon)- M_{-}(p_{k}-i\varepsilon)M_{+}(1+p_{k}-i\varepsilon)\Big{)}\]
\[=4i\nu m\Big{(}M_{+}(p_{k})M_{-}(1+p_{k})-M_{-}(p_{k})M_{+}(1+p_{ k})\Big{)}+R_{3}(p_{k},\varepsilon)\]
\[=8\nu m\mathrm{Im}\Big{(}M_{-}(p_{k})M_{+}(1+p_{k})\Big{)}+R_{3}( p_{k},\varepsilon)\]
\[=8\nu m(-1)^{k}|M_{+}(p_{k})||M_{-}(1+p_{k})|+R_{3}(p_{k}, \varepsilon),\]

where \(|R_{3}(p_{k},\varepsilon)|\lesssim\nu m|M_{+}(p_{k})||M_{+}(1+p_{k})|\frac{ \varepsilon}{p_{k}}.\) In particular,

\[|\mathcal{W}^{-}_{m,\varepsilon}(-p_{k})|\geq 8\nu m|M_{+}(p_{k})||M_{+}(1+p_{k})|-|R_{3} (p_{k},\varepsilon)|\gtrsim\nu m|M_{+}(p_{k})||M_{+}(1+p_{k})|,\]

for \(\frac{\varepsilon}{p_{k}}\) small enough. The bound on \(\mathcal{G}^{-}_{m,\varepsilon}(y,-p_{k},z)\) follows directly.

Once we have the pointwise bounds on the Green's function, we are able to prove the following.

**Proposition 6.6**.: _Let \(y_{*}=-p_{k}\) be given by Proposition 6.3 for some \(k\geq 1\) and let \(\varepsilon_{*}>0\) be such that \(\frac{\varepsilon_{*}}{|y_{*}|}\) is small enough.
Then,_

\[\left\|\int_{0}^{\varepsilon_{*}}\mathrm{e}^{-imp_{k}t}\left(\mathrm{e}^{mst }\psi^{-}_{m,s}(y,y_{*})+\mathrm{e}^{-mst}\psi^{+}_{m,s}(y,y_{*})\right) \mathrm{d}s\right\|_{L^{2}_{y}}\lesssim\varepsilon_{*}\]

_and_

\[\left\|\int_{0}^{\varepsilon_{*}}\mathrm{e}^{-imp_{k}t}\left(\mathrm{e}^{mst }\rho^{-}_{m,s}(y,y_{*})+\mathrm{e}^{-mst}\rho^{+}_{m,s}(y,y_{*})\right) \mathrm{d}s\right\|_{L^{2}_{y}}\lesssim\varepsilon_{*}^{\frac{1}{2}}.\]

Proof.: Firstly, the Minkowski inequality provides

\[\left\|\int_{0}^{\varepsilon_{*}}\mathrm{e}^{-imp_{k}t}\left(\mathrm{e}^{mst }\psi^{-}_{m,s}(y,y_{*})+\mathrm{e}^{-mst}\psi^{+}_{m,s}(y,y_{*})\right) \mathrm{d}s\right\|_{L^{2}_{y}}\lesssim\sum_{\sigma\in\{+,-\}}\int_{0}^{ \varepsilon_{*}}\|\psi^{\sigma}_{m,s}(y,-p_{k})\|_{L^{2}_{y}}\mathrm{d}s\]

and we have that

\[\psi^{\pm}_{m,s}(y,y_{0})=\frac{1}{\beta^{2}}(y-y_{0}\pm is)\omega^ {0}_{m}(y)-\rho^{0}_{m}(y)+\int_{0}^{1}\mathcal{G}^{\pm}_{m,s}(y,y_{ 0},z)F^{\pm}_{m,s}(z,y_{0})\mathrm{d}z.\]

Using Cauchy-Schwarz and the uniform estimates from Lemma 6.5, we bound

\[\sum_{\sigma\in\{+,-\}}\|\psi^{\sigma}_{m,s}(y,-p_{k})\|_{L^{2}_{y}}\lesssim\| \omega^{0}_{m}\|_{L^{2}_{y}}+\|\rho^{0}_{m}\|_{L^{2}_{y}}+\sum_{\sigma\in\{+,- \}}\|\mathcal{G}^{\sigma}_{m,s}(y,-p_{k},z)\|_{L^{2}_{y}L^{2}_{z}}\lesssim_{m}1\]

and thus, integrating in \(s\) from 0 to \(\varepsilon_{*}\), we get the first part of the proposition. For the perturbed density, we recall that

\[\rho^{\pm}_{m,s}(y,y_{0})=\frac{1}{\beta^{2}}\omega^{0}_{m}(y)+\frac{1}{y-y_{0 }\pm is}\int_{0}^{1}\mathcal{G}^{\pm}_{m,s}(y,y_{0},z)F^{\pm}_{m,s}(z,y_{0}) \mathrm{d}z.\]

In particular, from Lemma 6.5 we have that

\[|\rho^{\pm}_{m,s}(y,-p_{k})|\lesssim|\omega^{0}_{m}(y)|+|y+p_{k}-is|^{-\frac{1}{2 }}\|F^{\pm}_{m,s}(z,y_{0})\|_{L^{2}_{z}}.\]

Since \(\|F^{\pm}_{m,s}(z,y_{0})\|_{L^{2}_{z}}\lesssim 1\) uniformly in \(s\in(0,\varepsilon_{*})\), we integrate in \(s\) from 0 to \(\varepsilon_{*}\) to get the desired result.

We next obtain bounds on the Green's function when the spectral parameter has non-zero imaginary part. These bounds are shown to depend both on the modulus and on the argument of the complex spectral parameter.

**Lemma 6.7**.: _Let \(y_{0}<0\) and \(\varepsilon>0\). Denote \(c=-y_{0}+i\varepsilon=r\mathrm{e}^{i\theta}\), with \(r>0\) and \(\theta\in\left(0,\frac{\pi}{2}\right)\). Then,_

\[|\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},z)|\lesssim\frac{|y-y_{0}\pm i \varepsilon|^{\frac{1}{2}}}{\sinh^{2}(\nu\theta)}\]

_and there exists \(K_{c}>0\) such that_

\[|\mathcal{G}_{m,\varepsilon}^{-}(y,y_{0},z)-\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)-K_{c}\phi_{u,m}(y)\phi_{u,m}(z)|\lesssim\frac{r^{\frac{1}{2}}}{\sinh^ {2}(\nu\theta)},\]

_uniformly for all \(y,z\in[0,1]\)._

Proof.: For \(y_{0}<0\) and \(\varepsilon>0\), we consider \(c=-y_{0}+i\varepsilon=r\mathrm{e}^{i\theta}\), with \(r>0\) and \(\theta\in\left(0,\frac{\pi}{2}\right)\). We next study \(\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)\).
For \(y\leq z\), we write

\[\mathcal{G}_{m,\varepsilon}^{+}(y,y_{0},z)=\frac{\phi_{l,m,\varepsilon}^{+}( y,y_{0})\phi_{u,m,\varepsilon}^{+}(z,y_{0})}{\mathcal{W}_{m,\varepsilon}^{+}(y _{0})}=\frac{\phi_{l,m,\varepsilon}^{+}(y,y_{0})\phi_{u,m,\varepsilon}^{+}(z, y_{0})\mathcal{W}_{m,\varepsilon}^{-}(y_{0})}{|\mathcal{W}_{m,\varepsilon}^{+}(y_{0}) |^{2}}.\]

The main difference with respect to the other estimates we have been carrying out is that now we control \(|\mathcal{W}_{m,\varepsilon}^{+}(y_{0})|^{2}\) as follows:

\[\mathcal{W}_{m,\varepsilon}^{+}(y_{0}) =4i\nu m\big{(}M_{+}(c)M_{-}(1+c)-M_{-}(c)M_{+}(1+c)\big{)}\]
\[=4i\nu m\big{(}M_{+}(c)M_{-}(1)-M_{-}(c)M_{+}(1)\big{)}+R_{1}(c),\]

with \(|R_{1}(c)|\lesssim r|M_{+}(c)||M_{+}(1)|\). For \(c=r\mathrm{e}^{i\theta}\), a detailed asymptotic analysis of \(M_{+}(c)\) and \(M_{-}(c)\) shows that

\[M_{+}(c)=r^{\frac{1}{2}}\mathrm{e}^{-\nu\theta}\mathrm{e}^{i\frac{\theta}{2} }r^{i\nu}+R_{2}(c),\quad M_{-}(c)=r^{\frac{1}{2}}\mathrm{e}^{\nu\theta} \mathrm{e}^{i\frac{\theta}{2}}r^{-i\nu}+R_{3}(c),\]

where \(|R_{2}(c)|,\,|R_{3}(c)|\lesssim r^{\frac{5}{2}}\). Hence,

\[\mathcal{W}_{m,\varepsilon}^{+}(y_{0})=4i\nu mr^{\frac{1}{2}}\mathrm{e}^{i \frac{\theta}{2}}\big{(}\mathrm{e}^{-\nu\theta}r^{i\nu}M_{-}(1)-\mathrm{e}^{ \nu\theta}r^{-i\nu}M_{+}(1)\big{)}+R_{4}(c),\]

with \(|R_{4}(c)|\leq C_{4}r^{\frac{3}{2}}|M_{+}(1)|\). In particular, for \(r\leq\frac{4\nu m\sinh(\nu\theta)}{C_{4}}\) small enough, we estimate

\[|\mathcal{W}_{m,\varepsilon}^{+}(y_{0})|\geq 4\nu mr^{\frac{1}{2}}|M_{+}(1)|\sinh( \nu\theta).\]

As expected, the bound degenerates as \(\theta\to 0^{+}\). With this lower bound we are able to prove the first part of the lemma, using the asymptotic expansions of \(M_{+}(y-y_{0}+i\varepsilon)\) and \(M_{-}(y-y_{0}+i\varepsilon)\), see Lemma A.3. Nevertheless, to obtain the second part of the lemma, we continue by estimating

\[\phi_{u,m,\varepsilon}^{+}(z,y_{0})=M_{+}(1)M_{-}(z)-M_{-}(1)M_{+}(z)+R_{5}(c )=2i\mathrm{Im}\left(M_{+}(1)M_{-}(z)\right)+R_{5}(c),\]

where \(|R_{5}(c)|\lesssim r^{\frac{1}{2}}|M_{+}(1)|\). Similarly, we have

\[\phi_{l,m,\varepsilon}^{+}(y,y_{0})\mathcal{W}_{m,\varepsilon}^{-}(y _{0})=4i\nu m\Big{(}M_{+}(c)M_{-}(y)M_{+}(\overline{c})M_{-}(1)-M_{+}(c)M_{-}( y)M_{-}(\overline{c})M_{+}(1)\\ -M_{-}(c)M_{+}(y)M_{+}(\overline{c})M_{-}(1)+M_{-}(c)M_{+}(y)M_{-}( \overline{c})M_{+}(1)\Big{)}+R_{6}(c),\]

with \(|R_{6}(c)|\lesssim r^{\frac{1}{2}}|M_{+}(c)|^{2}|M_{+}(1)|\). In fact, we can recognize

\[\phi_{l,m,\varepsilon}^{+}(y,y_{0})\mathcal{W}_{m,\varepsilon}^{-}( y_{0})=4i\nu m\Big{(}2\mathrm{Re}\big{(}M_{+}(c)M_{-}(y)M_{+}(\overline{c})M_{-}(1) \big{)}\\ -|M_{+}(c)|^{2}M_{-}(y)M_{+}(1)-|M_{-}(c)|^{2}M_{+}(y)M_{-}(1) \Big{)}+R_{6}(c).\]

Hence, we obtain

\[\phi_{l,m,\varepsilon}^{+}(y,y_{0})\phi_{u,m,\varepsilon}^{+}(z, y_{0})\mathcal{W}_{m,\varepsilon}^{-}(y_{0}) =-8\nu m\mathrm{Im}\left(M_{+}(1)M_{-}(z)\right)\Big{(}2\mathrm{Re} \big{(}M_{+}(c)M_{-}(y)M_{+}(\overline{c})M_{-}(1)\big{)}\\ -|M_{+}(c)|^{2}M_{-}(y)M_{+}(1)-|M_{-}(c)|^{2}M_{+}(y)M_{-}(1)\Big{)} +R_{7}(c),\]

where now \(|R_{7}(c)|\lesssim r^{\frac{1}{2}}|M_{+}(c)|^{2}|M_{+}(1)|^{2}\).
In particular,

\[\operatorname{Im}\left(\phi_{l,m,\varepsilon}^{+}(y,y_{0})\phi_{u,m,\varepsilon}^{+}(z,y_{0})\mathcal{W}_{m,\varepsilon}^{-}(y_{0})\right)=-2\nu m\phi_{u,m}(z)\phi_{u,m}(y)\left(|M_{+}(c)|^{2}-|M_{-}(c)|^ {2}\right)+\operatorname{Im}\left(R_{7}(c)\right)\]

and

\[\left|\phi_{l,m,\varepsilon}^{+}(y,y_{0})\phi_{u,m,\varepsilon}^{+}(z,y_{0}) \mathcal{W}_{m,\varepsilon}^{-}(y_{0})\right|\lesssim|M_{+}(c)|^{2}|M_{+}(1) |^{2}.\]

Together with the lower bound on the Wronskian, we conclude the proof.

With the above bounds, we are able to estimate the contribution of the integral along the horizontal boundary.

**Proposition 6.8**.: _For \(y_{*}<0\) small enough, let \(r_{*}\mathrm{e}^{i\theta_{*}}=-y_{*}+i\varepsilon_{*}\). We have that_

\[\left\|\int_{0}^{y_{*}}\mathrm{e}^{-imy_{0}t}\big{(}\mathrm{e}^{m\varepsilon _{*}t}\psi_{m,\varepsilon_{*}}^{-}(y,y_{0})-\mathrm{e}^{-m\varepsilon_{*}t} \psi_{m,\varepsilon_{*}}^{+}(y,y_{0})\big{)}\mathrm{d}y_{0}\right\|_{L_{y}^{ 2}}\lesssim\frac{r_{*}^{\frac{3}{2}}}{\sinh^{2}(\nu\theta_{*})}\]

_and_

\[\left\|\int_{0}^{y_{*}}\mathrm{e}^{-imy_{0}t}\big{(}\mathrm{e}^{m\varepsilon_{ *}t}\rho_{m,\varepsilon_{*}}^{-}(y,y_{0})-\mathrm{e}^{-m\varepsilon_{*}t} \rho_{m,\varepsilon_{*}}^{+}(y,y_{0})\big{)}\mathrm{d}y_{0}\right\|_{L_{y}^{ 2}}\lesssim\frac{r_{*}^{\frac{1}{2}}}{\sinh^{2}(\nu\theta_{*})}.\]

Proof.: Firstly, note that

\[\mathrm{e}^{m\varepsilon_{*}t}\psi_{m,\varepsilon_{*}}^{-}(y,y_{0 })-\mathrm{e}^{-m\varepsilon_{*}t}\psi_{m,\varepsilon_{*}}^{+}(y,y_{0}) =\mathrm{e}^{m\varepsilon_{*}t}\left(\psi_{m,\varepsilon_{*}}^{-} (y,y_{0})-\psi_{m,\varepsilon_{*}}^{+}(y,y_{0})\right)\]
\[\quad+\left(\mathrm{e}^{m\varepsilon_{*}t}-\mathrm{e}^{-m \varepsilon_{*}t}\right)\psi_{m,\varepsilon_{*}}^{+}(y,y_{0}),\]

while

\[\psi_{m,\varepsilon_{*}}^{-}(y,y_{0})-\psi_{m,\varepsilon_{*}}^{ +}(y,y_{0}) =-\frac{2i\varepsilon_{*}}{\beta^{2}}\omega_{m}^{0}+\int_{0}^{1} \left(\mathcal{G}_{m,\varepsilon_{*}}^{-}(y,y_{0},z)-\mathcal{G}_{m, \varepsilon_{*}}^{+}(y,y_{0},z)\right)F_{m}(z,0)\mathrm{d}z\]
\[\quad+\frac{i\varepsilon_{*}}{\beta^{2}}\int_{0}^{1}\left( \mathcal{G}_{m,\varepsilon_{*}}^{-}(y,y_{0},z)+\mathcal{G}_{m,\varepsilon_{*} }^{+}(y,y_{0},z)\right)\Delta_{m}\omega_{m}^{0}\mathrm{d}z.\]

Now, for \(r\mathrm{e}^{i\theta}=-y_{0}+i\varepsilon_{*}\), we use Lemma 6.7 to bound

\[\varepsilon_{*}\left\|\int_{0}^{1}\left(\mathcal{G}_{m,\varepsilon_{*}}^{-}(y,y_{0},z)+\mathcal{G}_{m,\varepsilon_{*}}^{+}(y,y_{0},z)\right)\Delta_{m} \omega_{m}^{0}\mathrm{d}z\right\|_{L_{y}^{2}}\lesssim\frac{\varepsilon_{*}}{\sinh^ {2}(\nu\theta)}\lesssim\frac{\varepsilon_{*}}{\sinh^{2}(\nu\theta_{*})}\]

and, together with the orthogonality condition of the initial data,

\[\left\|\int_{0}^{1}\left(\mathcal{G}_{m,\varepsilon_{*}}^{-}(y,y_{0},z)- \mathcal{G}_{m,\varepsilon_{*}}^{+}(y,y_{0},z)\right)F_{m}(z,0)\mathrm{d}z \right\|_{L_{y}^{2}}\lesssim\frac{r^{\frac{1}{2}}}{\sinh^{2}(\nu\theta)} \lesssim\frac{r_{*}^{\frac{1}{2}}}{\sinh^{2}(\nu\theta_{*})},\]

where \(r_{*}\mathrm{e}^{i\theta_{*}}=-y_{*}+i\varepsilon_{*}\).
With this bound, uniform in \(y_{0}\in[y_{*},0]\), we obtain

\[\int_{0}^{y_{*}}\|\psi_{m,\varepsilon_{*}}^{-}(y,y_{0})-\psi_{m,\varepsilon_{*}}^{+}(y,y_{0})\|_{L_{y}^{2}}\mathrm{d}y_{0}\lesssim\varepsilon_{*}|y_{*}|+\frac{r_{*}^{ \frac{1}{2}}}{\sinh^{2}(\nu\theta_{*})}|y_{*}|+\frac{\varepsilon_{*}}{\sinh^{2}( \nu\theta_{*})}|y_{*}|\lesssim r_{*}^{\frac{3}{2}}\frac{\cos(\theta_{*})}{ \sinh^{2}(\nu\theta_{*})}.\]

On the other hand,

\[\left\|\int_{0}^{|y_{*}|}\left(\mathrm{e}^{m\varepsilon_{*}t}-\mathrm{e}^{-m \varepsilon_{*}t}\right)\psi_{m,\varepsilon_{*}}^{+}(y,y_{0})\mathrm{d}y_{0} \right\|_{L_{y}^{2}}\lesssim\frac{\varepsilon_{*}|y_{*}|}{\sinh^{2}(\nu\theta_{*}) }\lesssim\frac{r_{*}^{2}}{\sinh^{2}(\nu\theta_{*})}.\]

For the second part of the proposition, we recall that

\[\rho_{m,\varepsilon_{*}}^{-}(y,y_{0}) -\rho_{m,\varepsilon_{*}}^{+}(y,y_{0})\]
\[=\frac{1}{y-y_{0}-i\varepsilon_{*}}\int_{0}^{1}\big{(}\mathcal{G}_{ m,\varepsilon_{*}}^{-}(y,y_{0},z)-\mathcal{G}_{m,\varepsilon_{*}}^{+}(y,y_{0},z) \big{)}F_{m}(z,y_{0})\mathrm{d}z\]
\[\quad+\frac{2i\varepsilon_{*}}{(y-y_{0})^{2}+\varepsilon_{*}^{2}} \int_{0}^{1}\mathcal{G}_{m,\varepsilon_{*}}^{+}(y,y_{0},z)F_{m}(z,y_{0}) \mathrm{d}z\]
\[\quad+\frac{i\varepsilon_{*}}{\beta}\int_{0}^{1}\left(\frac{1}{y- y_{0}-i\varepsilon_{*}}\mathcal{G}_{m,\varepsilon_{*}}^{-}(y,y_{0},z)+\frac{1}{y-y_{0} +i\varepsilon_{*}}\mathcal{G}_{m,\varepsilon_{*}}^{+}(y,y_{0},z)\right)\Delta _{m}\omega_{m}^{0}\mathrm{d}z.\]

Using the bounds of Lemma 6.7 and the orthogonality condition on the initial data, we bound

\[|\rho_{m,\varepsilon_{*}}^{-}(y,y_{0})-\rho_{m,\varepsilon_{*}}^{ +}(y,y_{0})| \lesssim\frac{1}{|y-y_{0}-i\varepsilon_{*}|}\frac{|y-y_{0}+i \varepsilon_{*}|^{\frac{1}{2}}}{\sinh^{2}(\nu\theta)}+\frac{\varepsilon_{*}}{ |y-y_{0}+i\varepsilon_{*}|^{2}}\frac{|y-y_{0}+i\varepsilon_{*}|^{\frac{1}{2}} }{\sinh^{2}(\nu\theta)}\]
\[\quad+\frac{|y-y_{0}+i\varepsilon_{*}|^{\frac{1}{2}}}{\sinh^{2}( \nu\theta)}\]
\[\lesssim\frac{1}{\sinh^{2}(\nu\theta_{*})}\frac{1}{|y-y_{0}+i \varepsilon_{*}|^{\frac{1}{2}}}.\]

Hence,

\[\int_{0}^{y_{*}}|\rho_{m,\varepsilon_{*}}^{-}(y,y_{0})-\rho_{m,\varepsilon_{*} }^{+}(y,y_{0})|\mathrm{d}y_{0}\lesssim\frac{1}{\sinh^{2}(\nu\theta_{*})}\int_ {0}^{|y_{*}|}\frac{1}{|y_{0}|^{\frac{1}{2}}}\mathrm{d}y_{0}\lesssim\frac{|y_{ *}|^{\frac{1}{2}}}{\sinh^{2}(\nu\theta_{*})}\]

and similarly

\[\left\|\int_{0}^{y_{*}}\left(\mathrm{e}^{m\varepsilon_{*}t}-\mathrm{e}^{-m \varepsilon_{*}t}\right)|\rho_{m,\varepsilon_{*}}^{+}(y,y_{0})|\mathrm{d}y_{0 }\right\|_{L_{y}^{2}}\lesssim\frac{\varepsilon_{*}|y_{*}|^{\frac{1}{2}}}{ \sinh^{2}(\nu\theta_{*})}\lesssim\frac{r_{*}^{\frac{3}{2}}}{\sinh^{2}(\nu \theta_{*})}.\]

The proof is concluded.

We combine the estimates from Proposition 6.6 and Proposition 6.8 to obtain the following result.

**Proposition 6.9**.: _For all \(\delta>0\), there exists \(\theta_{*}\in\big{(}0,\frac{\pi}{2}\big{)}\) such that, for \(r_{*}=\sinh^{8}(\nu\theta_{*})\), \(y_{*}=-r_{*}\cos(\theta_{*})\) and \(\varepsilon_{*}=r_{*}\sin(\theta_{*})\), we denote \(R_{*}:=\{c=y_{0}+is\in\mathbb{C}:y_{0}\in[y_{*},0]\,,\,s\in[-\varepsilon_{*}, \varepsilon_{*}]\}\). Then, there holds_

\[\left\|\int_{\partial R_{*}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\mathrm{d}c \right\|_{L_{y}^{2}}\leq\delta.\]

Proof.: We choose \(\theta_{*}>0\) such that \(y_{*}=-r_{*}\cos(\theta_{*})=-\sinh^{8}(\nu\theta_{*})\cos(\theta_{*})=-p_{k}\), for some \(k>0\), where \(p_{k}\) is given by Proposition 6.3.
This is possible because, for \(\theta_{*}\) small enough, \(g(\theta_{*}):=\sinh^{8}(\nu\theta_{*})\cos(\theta_{*})\) is a continuous, strictly increasing function of \(\theta_{*}\) such that \(g(0)=0\). Moreover, since \(p_{k}\to 0^{+}\) as \(k\to\infty\), we may assume \(\theta_{*}\) is sufficiently small. Hence, \(\frac{\varepsilon_{*}}{|y_{*}|}=\tan(\theta_{*})\) is sufficiently small and we use Proposition 6.6 to bound

\[\left\|\int_{0}^{\varepsilon_{*}}\mathrm{e}^{-imy_{*}t}\left(\mathrm{e}^{mst} \psi_{m,s}^{-}(y,y_{*})+\mathrm{e}^{-mst}\psi_{m,s}^{+}(y,y_{*})\right) \mathrm{d}s\right\|_{L_{y}^{2}}\lesssim\varepsilon_{*}\lesssim\sinh^{8}(\nu \theta_{*})\]

and

\[\left\|\int_{0}^{\varepsilon_{*}}\mathrm{e}^{-imy_{*}t}\left(\mathrm{e}^{mst} \rho_{m,s}^{-}(y,y_{*})+\mathrm{e}^{-mst}\rho_{m,s}^{+}(y,y_{*})\right) \mathrm{d}s\right\|_{L_{y}^{2}}\lesssim\varepsilon_{*}^{\frac{1}{2}}\lesssim \sinh^{4}(\nu\theta_{*}).\]

Now, we use Proposition 6.8 to bound

\[\left\|\int_{0}^{y_{*}}\mathrm{e}^{-imy_{0}t}\left(\mathrm{e}^{m\varepsilon_{*}t} \psi_{m,\varepsilon_{*}}^{-}(y,y_{0})-\mathrm{e}^{-m\varepsilon_{*}t}\psi_{m, \varepsilon_{*}}^{+}(y,y_{0})\right)\mathrm{d}y_{0}\right\|_{L_{y}^{2}} \lesssim\frac{r_{*}^{\frac{3}{2}}}{\sinh^{2}(\nu\theta_{*})}\lesssim\sinh^{10 }(\nu\theta_{*})\]

and

\[\left\|\int_{0}^{y_{*}}\mathrm{e}^{-imy_{0}t}\big{(}\mathrm{e}^{m\varepsilon_{* }t}\rho_{m,\varepsilon_{*}}^{-}(y,y_{0})-\mathrm{e}^{-m\varepsilon_{*}t}\rho_{ m,\varepsilon_{*}}^{+}(y,y_{0})\big{)}\mathrm{d}y_{0}\right\|_{L_{y}^{2}} \lesssim\frac{r_{*}^{\frac{1}{2}}}{\sinh^{2}(\nu\theta_{*})}\lesssim\sinh^{2} (\nu\theta_{*}).\]

Finally, we use (2.9), (2.10) and the bounds from Proposition 7.2 to estimate

\[\left\|\int_{0}^{\varepsilon_{*}}\left(\mathrm{e}^{mst}\psi_{m,s}^{-}(y,0)+ \mathrm{e}^{-mst}\psi_{m,s}^{+}(y,0)\right)\mathrm{d}s\right\|_{L_{y}^{2}} \lesssim\varepsilon_{*}\lesssim\sinh^{8}(\nu\theta_{*})\]

and

\[\left\|\int_{0}^{\varepsilon_{*}}\left(\mathrm{e}^{mst}\rho_{m,s}^{-}(y,0)+ \mathrm{e}^{-mst}\rho_{m,s}^{+}(y,0)\right)\mathrm{d}s\right\|_{L_{y}^{2}} \lesssim\varepsilon_{*}^{\frac{1}{2}}\lesssim\sinh^{4}(\nu\theta_{*}).\]

The proposition follows choosing \(\theta_{*}\) small enough (that is, \(\theta_{*}=g^{-1}(p_{k})\) for \(k>0\) sufficiently large).

We are finally in position to prove Proposition 6.1 for the case \(\beta^{2}>1/4\).

Proof of Proposition 6.1.: We shall see that \(\left\|\int_{\partial R_{0}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\mathrm{d}c \right\|_{L_{y}^{2}}\leq\delta\), for all \(\delta>0\). Indeed, given \(\delta>0\), from Proposition 6.9 there exists \(\theta_{*}\) such that \(y_{*}=-\sinh^{8}(\nu\theta_{*})\cos(\theta_{*})=-p_{k}\), for some \(k>0\) large enough, and such that

\[\left\|\int_{\partial R_{*}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\mathrm{d}c \right\|_{L_{y}^{2}}\leq\delta.\]

Now, there are finitely many isolated eigenvalues in \(R_{0}\setminus R_{*}\); they are real and lie between \(-\frac{\beta}{m}\) and \(y_{*}<0\). Moreover, \(\mathrm{e}^{-imct}\mathcal{R}(c,L_{m})\) is a holomorphic function, for all \(c\in R_{0}\setminus R_{*}\) such that \(c\neq-q_{j}\), for any \(0\leq j\leq k\).
Thus,

\[\frac{1}{2\pi i}\int_{\partial(R_{0}\setminus R_{*})}\mathrm{e}^{-imct} \mathcal{R}(c,L_{m})\mathrm{d}c=\sum_{j=0}^{k}\mathrm{e}^{imq_{j}t}\mathbb{P} _{-q_{j}}\begin{pmatrix}\omega_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}=0,\]

where \(\mathbb{P}_{-q_{j}}\begin{pmatrix}\omega_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\) denotes the \(L^{2}\)-projection of \(\begin{pmatrix}\omega_{m}^{0}\\ \rho_{m}^{0}\end{pmatrix}\) onto the generalized eigenspace \(E_{-q_{j}}\) associated to the eigenvalue \(-q_{j}\). With this, the proof is finished.

The next proposition shows that the generalized eigenspace associated to any discrete eigenvalue is, in fact, simple.

**Proposition 6.10**.: _Let \(c\in\mathbb{R}\) be a discrete eigenvalue of \(L_{m}\). Then \(\ker\left(L_{m}-c\right)^{2}=\ker\left(L_{m}-c\right)\). In particular, \(c\) is a semi-simple eigenvalue._

Proof.: Note that the pair \((\omega,\rho)\in\ker\left(L_{m}-c\right)\) if and only if

\[(y-c)\omega+\beta^{2}\rho=0,\qquad-\psi+(y-c)\rho=0,\]

where, as usual, we denote \(\psi=\Delta_{m}^{-1}\omega\). Hence, \(\rho=\frac{\psi}{y-c}\) and the equation

\[\Delta_{m}\psi+\beta^{2}\frac{\psi}{(y-c)^{2}}=0\]

characterizes the eigenfunctions of \(L_{m}\) of eigenvalue \(c\in\mathbb{R}\). Now, the pair \(\left(\omega,\rho\right)\in\ker\left(L_{m}-c\right)^{2}\) if and only if

\[(y-c)^{2}\omega-\beta^{2}\psi+2(y-c)\beta^{2}\rho =0,\]
\[-\Delta_{m}^{-1}((y-c)\omega)-(y-c)\psi+(y-c)^{2}\rho-\beta^{2} \Delta_{m}^{-1}\rho =0.\]

Obtaining \(\rho\) in terms of \(\omega\) from the first equation and plugging it into the second one, we see that \(\omega\) solves

\[0=-\Delta_{m}^{-1}((y-c)\omega)-(y-c)\psi-\frac{(y-c)^{3}}{2\beta^{2}}\omega+ \frac{y-c}{2}\psi+\frac{1}{2}\Delta_{m}^{-1}\left(\frac{1}{y-c}\left[(y-c)^{2} \omega-\beta^{2}\psi\right]\right). \tag{6.3}\]

Multiplying by \(\overline{\omega}\) and integrating by parts, we see that

\[0 =-\int_{0}^{1}\overline{\psi}(y-c)\omega-\int_{0}^{1}\overline{ \omega}(y-c)\psi-\int_{0}^{1}\frac{(y-c)^{3}}{2\beta^{2}}|\omega|^{2}+\int_{0 }^{1}\overline{\omega}\frac{y-c}{2}\psi+\int_{0}^{1}\overline{\psi}\frac{y-c }{2}\omega-\int_{0}^{1}\frac{\beta^{2}}{2}\frac{|\psi|^{2}}{y-c}\]
\[=-\frac{1}{2\beta^{2}}\int_{0}^{1}(y-c)^{3}\left(|\omega|^{2}+ \beta^{2}\frac{\overline{\psi}\omega+\overline{\omega}\psi}{(y-c)^{2}}+\beta ^{4}\frac{|\psi|^{2}}{(y-c)^{4}}\right)\]
\[=-\frac{1}{2\beta^{2}}\int_{0}^{1}(y-c)^{3}\left|\Delta_{m}\psi+ \beta^{2}\frac{\psi}{(y-c)^{2}}\right|^{2}.\]

Hence, since either \(c<0\) or \(c>1\), we conclude that the pair \((\omega,\rho)\) satisfies

\[\Delta_{m}\psi+\beta^{2}\frac{\psi}{(y-c)^{2}}=0,\]

with \(\Delta_{m}\psi=\omega\). That is, \((\omega,\rho)\in\ker\left(L_{m}-c\right)\).

We finish the subsection by proving Theorem 4 for \(\beta^{2}>1/4\). We need the following lemma, which shows that, for \(y_{0}\in\{0,1\}\), the difference \(\mathcal{G}_{m,\varepsilon}^{-}(y,y_{0},z)-\mathcal{G}_{m,\varepsilon}^{+}(y,y_ {0},z)\) approaches, as \(\varepsilon\to 0\), an \(\varepsilon\)-dependent multiple of a generalized eigenfunction of the linearized operator associated to the "embedded eigenvalue" \(c=0\). We state the result for \(y_{0}=0\), since the case \(y_{0}=1\) is analogous.

**Lemma 6.11**.: _Let \(y_{0}=0\) and let \(0<\varepsilon\ll 1\)._
_Then, there exists \(C_{\varepsilon}\in\mathbb{C}\) such that_

\[\left|\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)-\mathcal{G}_{m,\varepsilon}^{+}(y,0,z)-C_{\varepsilon}\phi_{u,m}(y)\phi_{u,m}(z)\right|\lesssim\varepsilon^{ \frac{1}{2}},\]

_where \(\phi_{u,m}\) is given in (3.8)._

Proof.: The result is trivially true for \(y=0\) and \(y=1\) because both \(\phi_{u,m}\) and \(\mathcal{G}^{\pm}_{m,\varepsilon}\) vanish there. Therefore, in the sequel we consider \(0<y<1\). Due to the complex conjugation property of the Green's function,

\[\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)-\mathcal{G}_{m,\varepsilon}^{+}(y,0,z )=2i\mathrm{Im}\left(\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)\right).\]

Assuming initially that \(y\leq z\), we have

\[2i\mathrm{Im}\left(\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)\right) =2i\mathrm{Im}\left(\frac{\phi_{l,m,\varepsilon}^{-}(y,0)\phi_{ u,m,\varepsilon}^{-}(z,0)}{\mathcal{W}_{m,\varepsilon}^{-}(0)}\right)\]
\[=\frac{2i}{|\mathcal{W}_{m,\varepsilon}^{-}(0)|^{2}}\mathrm{Im} \left(\phi_{l,m,\varepsilon}^{-}(y,0)\phi_{u,m,\varepsilon}^{-}(z,0)\mathcal{W }_{m,\varepsilon}^{+}(0)\right).\]

Due to the explicit solutions of the Taylor-Goldstein equation, we write

\[\phi_{l,m,\varepsilon}^{-}(y,0)\mathcal{W}_{m,\varepsilon}^{+}(0) =4i\nu m\Big{(}M_{+}(-i\varepsilon)M_{-}(y)M_{+}(i\varepsilon)M_{ -}(1)+M_{-}(-i\varepsilon)M_{+}(y)M_{-}(i\varepsilon)M_{+}(1)\]
\[\quad-M_{+}(-i\varepsilon)M_{-}(y)M_{-}(i\varepsilon)M_{+}(1)-M_{ -}(-i\varepsilon)M_{+}(y)M_{+}(i\varepsilon)M_{-}(1)+R_{1}(\varepsilon)\Big{)},\]

where \(|R_{1}(\varepsilon)|\lesssim_{m,\nu}\varepsilon^{\frac{1}{2}}|M_{+}(i \varepsilon)|^{2}\). Observe also that since \(\overline{M_{\pm}(\zeta)}=M_{\mp}(\overline{\zeta})\) for all \(\zeta\in\mathbb{C}\), we can write

\[\phi_{l,m,\varepsilon}^{-}(y,0)\mathcal{W}_{m,\varepsilon}^{+}(0) =4i\nu m\Big{(}2\mathrm{Re}\big{(}M_{+}(-i\varepsilon)M_{-}(y)M_{+}(i \varepsilon)M_{-}(1)\big{)}\]
\[\qquad\qquad-|M_{+}(-i\varepsilon)|^{2}M_{-}(y)M_{+}(1)-|M_{+}(i \varepsilon)|^{2}M_{+}(y)M_{-}(1)+R_{1}(\varepsilon)\Big{)}.\]

Since \(M_{+}(-i\varepsilon)=-i\mathrm{e}^{\nu\pi}M_{+}(i\varepsilon)\), we obtain

\[\phi_{l,m,\varepsilon}^{-}(y,0)\mathcal{W}_{m,\varepsilon}^{+}(0) =4i\nu m\Big{(}2\mathrm{Re}\big{(}M_{+}(-i\varepsilon)M_{-}(y)M_{+}(i \varepsilon)M_{-}(1)\big{)}\]
\[\qquad\qquad\qquad-\mathrm{e}^{2\nu\pi}|M_{+}(i\varepsilon)|^{2} M_{-}(y)M_{+}(1)-|M_{+}(i\varepsilon)|^{2}M_{+}(y)M_{-}(1)+R_{1}(\varepsilon) \Big{)}.\]

Now, \(\overline{M_{-}(y)M_{+}(1)}=M_{+}(y)M_{-}(1)\) and we further observe that

\[\mathrm{Im}\big{(}M_{+}(y)M_{-}(1)\big{)}=-\mathrm{Im}\big{(}M_{-}(y)M_{+}(1) \big{)}=-\frac{1}{2i}\big{(}M_{-}(y)M_{+}(1)-M_{+}(y)M_{-}(1)\big{)}=-\frac{1} {2i}\phi_{u,m}(y).\]

On the other hand,

\[\phi_{u,m,\varepsilon}^{-}(z,0)=M_{+}(1)M_{-}(z)-M_{-}(1)M_{+}(z)+R_{2}( \varepsilon)=2i\mathrm{Im}\left(M_{+}(1)M_{-}(z)\right)+R_{2}(\varepsilon)\]

with \(|R_{2}(\varepsilon)|\lesssim\varepsilon^{\frac{1}{2}}\).
Thus, \[\phi_{l,m,\varepsilon}^{-}(y,0) \phi_{u,m,\varepsilon}^{-}(z,0)\mathcal{W}_{m,\varepsilon}^{+}(0)\] \[=-8\nu m\mathrm{Im}\left(M_{+}(1)M_{-}(z)\right)\left(2\mathrm{Re }\big{(}M_{+}(-i\varepsilon)M_{-}(y)M_{+}(i\varepsilon)M_{-}(1)\big{)}\right.\] \[\qquad\qquad\qquad-\mathrm{e}^{2\nu\pi}|M_{+}(i\varepsilon)|^{2} M_{-}(y)M_{+}(1)-|M_{+}(i\varepsilon)|^{2}M_{+}(y)M_{-}(1)\Big{)}+R_{3}(\varepsilon)\] \[=-8\nu m\mathrm{Im}\left(M_{+}(1)M_{-}(z)\right)\left(2\mathrm{Re }\big{(}M_{+}(-i\varepsilon)M_{-}(y)M_{+}(i\varepsilon)M_{-}(1)\big{)}\right.\] \[\qquad\qquad\qquad-|M_{+}(i\varepsilon)|^{2}\left(\mathrm{e}^{2 \nu\pi}+1\right)\mathrm{Re}\left(M_{-}(y)M_{+}(1)\right)\Big{)}\] \[\quad\quad-8i\nu m\mathrm{Im}\left(M_{+}(1)M_{-}(z)\right)|M_{+}( i\varepsilon)|^{2}\left(\mathrm{e}^{2\nu\pi}-1\right)\mathrm{Im}\left(M_{-}(y)M_{+}(1) \right)+R_{3}(\varepsilon),\] where \(|R_{3}(\varepsilon)|\lesssim\varepsilon^{\frac{1}{2}}|M_{+}(i\varepsilon)|^{2}\), uniformly in \(y,z\in[0,1]\). In particular, \[\mathrm{Im}\left(\phi_{l,m,\varepsilon}^{-}(y,0)\phi_{u,m,\varepsilon}^{-}(z, 0)\mathcal{W}_{m,\varepsilon}^{+}(0)\right)=2\nu m|M_{+}(i\varepsilon)|^{2} \left(\mathrm{e}^{2\nu\pi}-1\right)\phi_{u,m}(y)\phi_{u,m}(z)+\mathrm{Im} \left(R_{3}(\varepsilon)\right).\] Moreover, due to the symmetry of the Green's function with respect to \(y\) and \(z\), we also have that \[\mathrm{Im}\left(\phi_{l,m,\varepsilon}^{-}(z,0)\phi_{u,m,\varepsilon}^{-}(y, 0)\mathcal{W}_{m,\varepsilon}^{+}(0)\right)=2\nu m|M_{+}(i\varepsilon)|^{2} \left(\mathrm{e}^{2\nu\pi}-1\right)\phi_{u,m}(y)\phi_{u,m}(z)+\mathrm{Im} \left(\widetilde{R}_{3}(\varepsilon)\right).\] Let us now estimate the modulus squared of the Wronskian, that is, \(|\mathcal{W}_{m,\varepsilon}(0)|^{2}\). We trivially have that \[|\mathcal{W}_{m,\varepsilon}(0)|^{2} =16\nu^{2}m^{2}\Big{[}|M_{+}(-i\varepsilon)|^{2}|M_{-}(1-i \varepsilon)|^{2}+|M_{+}(i\varepsilon)|^{2}|M_{-}(1+i\varepsilon)|^{2}\] \[\qquad-2\mathrm{Re}\Big{(}M_{+}(-i\varepsilon)M_{-}(1-i \varepsilon)M_{+}(i\varepsilon)M_{-}(1+i\varepsilon)\Big{)}\Big{]}\] \[=16\nu^{2}m^{2}\Big{[}\mathrm{e}^{2\nu\pi}|M_{+}(i\varepsilon)|^ {2}|M_{-}(1-i\varepsilon)|^{2}+|M_{+}(i\varepsilon)|^{2}|M_{-}(1+i\varepsilon)| ^{2}\] \[\qquad-2\mathrm{e}^{\nu\pi}|M_{+}(i\varepsilon)|^{2}|M_{-}(1-i \varepsilon)||M_{-}(1+i\varepsilon)|\cos(\theta_{\varepsilon})\Big{]}\] where \(\theta_{\varepsilon}=\mathrm{Arg}\big{(}M_{+}(-i\varepsilon)M_{-}(1-i \varepsilon)M_{+}(i\varepsilon)M_{-}(1+i\varepsilon)\big{)}\). As before, since \(M_{-}(\zeta)\) is smooth at \(\zeta=1\), we can further write \[|\mathcal{W}_{m,\varepsilon}(0)|^{2}=16\nu^{2}m^{2}|M_{+}(i\varepsilon)|^{2}|M_ {-}(1)|^{2}\left(\mathrm{e}^{2\nu\pi}-2\mathrm{e}^{\nu\pi}\cos(\theta_{ \varepsilon})+1\right)+R_{4}(\varepsilon)\] where \(|R_{4}(\varepsilon)|\lesssim|\varepsilon|^{\frac{1}{2}}|M_{+}(i\varepsilon)|^{2}\). 
With this, we are able to write

\[\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)-\mathcal{G}_{m,\varepsilon}^ {+}(y,0,z) =\frac{2i}{|\mathcal{W}_{m,\varepsilon}^{-}(0)|^{2}}\mathrm{Im} \left(\phi_{l,m,\varepsilon}^{-}(y,0)\phi_{u,m,\varepsilon}^{-}(z,0)\mathcal{W }_{m,\varepsilon}^{+}(0)\right)\]
\[=-\frac{i}{4\nu m}\frac{|M_{+}(i\varepsilon)|^{2}\left(\mathrm{e }^{2\nu\pi}-1\right)\phi_{u,m}(y)\phi_{u,m}(z)}{|M_{+}(i\varepsilon)|^{2}|M_{-}(1)|^ {2}\left(\mathrm{e}^{2\nu\pi}-2\mathrm{e}^{\nu\pi}\cos(\theta_{\varepsilon})+1 \right)+R_{4}(\varepsilon)}\]
\[\quad+\frac{i}{8\nu^{2}m^{2}}\frac{\mathrm{Im}\left(R_{3}( \varepsilon)\right)}{|M_{+}(i\varepsilon)|^{2}|M_{-}(1)|^{2}\left(\mathrm{e} ^{2\nu\pi}-2\mathrm{e}^{\nu\pi}\cos(\theta_{\varepsilon})+1\right)+R_{4}( \varepsilon)}.\]

The lemma follows with the choice

\[C_{\varepsilon}:=-\frac{i}{4\nu m}\frac{\mathrm{e}^{2\nu\pi}-1}{|M_{-}(1)|^{2} \left(\mathrm{e}^{2\nu\pi}-2\mathrm{e}^{\nu\pi}\cos(\theta_{\varepsilon})+1 \right)},\]

recalling that \(|\mathrm{Im}\left(R_{3}(\varepsilon)\right)|\lesssim\varepsilon^{\frac{1}{2}}| M_{+}(i\varepsilon)|^{2}\).

**Remark 6.12**.: Note that \(C_{\varepsilon}\) is bounded but \(\lim_{\varepsilon\to 0}C_{\varepsilon}\) does not exist. Indeed, as can be seen from the asymptotic expansions of Lemma A.3, we have that \(\theta_{\varepsilon}=2\nu\log(\varepsilon)+2\text{Arg}(M_{-}(1))+O(\varepsilon)\), as \(\varepsilon\to 0\). Thus, \(\theta_{\varepsilon}\) diverges to \(-\infty\) and \(\cos(\theta_{\varepsilon})\) does not converge. Hence, assumption (H) becomes necessary in order to have a well-defined pointwise limiting absorption principle for \(y_{0}=0\).

We are now in position to prove Theorem 4 for \(\beta^{2}>1/4\).

Proof of Theorem 4.: For \(y_{0}=0\) we have that

\[\psi_{m,\varepsilon}^{-}(y,y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{ 0}) =-\frac{2i\varepsilon}{\beta^{2}}\omega_{m}^{0}+\int_{0}^{1}\left( \mathcal{G}_{m,\varepsilon}^{-}(y,0,z)-\mathcal{G}_{m,\varepsilon}^{+}(y,0,z) \right)F_{m}(z,0)\mathrm{d}z\]
\[\quad+\frac{i\varepsilon}{\beta^{2}}\int_{0}^{1}\left(\mathcal{G }_{m,\varepsilon}^{-}(y,0,z)+\mathcal{G}_{m,\varepsilon}^{+}(y,0,z)\right) \Delta_{m}\omega_{m}^{0}\mathrm{d}z.\]

Since \(\omega_{m}^{0}\in H_{y}^{1}\), the first term vanishes easily, while the third term also tends to zero as \(\varepsilon\to 0\), after a direct application of the Cauchy-Schwarz inequality and the facts that \(\|\mathcal{G}_{m,\varepsilon}^{\pm}\|_{L_{z}^{2}}\) is uniformly bounded in \(\varepsilon\) due to Theorem 2 and that \(\omega_{m}^{0}\in H_{y}^{2}\). As for the second term, we invoke Lemma 6.11, together with the orthogonality assumption on the initial data (which makes the projection onto \(\phi_{u,m}\) vanish), to show that

\[\left|\int_{0}^{1}(\mathcal{G}_{m,\varepsilon}^{-}-\mathcal{G}_{m,\varepsilon}^{+})F_{m}(z,0)\mathrm{d}z\right| \leq\left|C_{\varepsilon}\phi_{u,m}(y)\int_{0}^{1}\phi_{u,m}(z)F_ {m}(z,0)\mathrm{d}z\right|\]
\[+\left|\int_{0}^{1}(\mathcal{G}_{m,\varepsilon}^{-}-\mathcal{G}_{ m,\varepsilon}^{+}-C_{\varepsilon}\phi_{u,m}(y)\phi_{u,m}(z))F_{m}(z,0)\mathrm{d}z\right|\]
\[\lesssim\varepsilon^{\frac{1}{2}}\int_{0}^{1}|F_{m}(z,0)|\mathrm{d}z\]
\[\lesssim\varepsilon^{\frac{1}{2}}\left(\|\rho_{m}^{0}\|_{H_{y}^{2} }+\|\omega_{m}^{0}\|_{H_{y}^{2}}\right),\]

which vanishes as \(\varepsilon\to 0\). The proof for \(y_{0}=1\) follows similarly from the analogue of Lemma 6.11 at \(y_{0}=1\). We thus omit the details.
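Before turning to the remaining cases, let us make the failure of convergence in Remark 6.12 concrete with explicit test sequences; this is a sketch in which \(a\in[0,2\pi)\) is a free parameter and the \(O(\varepsilon_{j})\) correction from Lemma A.3 is negligible:

\[\varepsilon_{j}:=\exp\left(\frac{a-2\text{Arg}(M_{-}(1))-2\pi j}{2\nu}\right)\longrightarrow 0\quad(j\to\infty),\qquad\theta_{\varepsilon_{j}}=a-2\pi j+O(\varepsilon_{j}),\qquad\cos(\theta_{\varepsilon_{j}})\longrightarrow\cos(a).\]

Different choices of \(a\) thus produce different limit points of \(C_{\varepsilon}\), so no pointwise limit can exist without further assumptions on the data.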
### Integral reduction for \(\beta^{2}<1/4\): no discrete eigenvalues

Thanks to the Hardy inequality [15]

\[\int_{0}^{1}\left(\frac{1}{x}\int_{0}^{x}f(t)\mathrm{d}t\right)^{2}\mathrm{d}x \leq 4\int_{0}^{1}|f(t)|^{2}\mathrm{d}t,\qquad f\in L^{2}(0,1), \tag{6.4}\]

we are able to prove \(H^{1}\) bounds for the generalized stream functions \(\psi_{m,\varepsilon}^{\pm}(y,y_{0})\) that are uniform in \(\varepsilon>0\).

**Proposition 6.13**.: _Let \(0\leq\varepsilon\leq 1\). Then,_

\[\|\partial_{y}\psi^{\pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}}^{2}+m^{2}\|\psi^{\pm }_{m,\varepsilon}(y,y_{0})\|_{L^{2}}^{2}\lesssim\|\omega^{0}_{m}\|_{H^{2}}^{2}+ \|\rho^{0}_{m}\|_{H^{2}}^{2}. \tag{6.5}\]

_Moreover,_

\[\int_{0}^{1}\frac{\varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2}}| \varphi^{\pm}_{m,\varepsilon}|^{2}\mathrm{d}y\lesssim\|\omega^{0}_{m}\|_{H^{2 }}^{2}+\|\rho^{0}_{m}\|_{H^{2}}^{2},\]

_and_

\[\|\rho^{\pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}}^{2}\lesssim\|\omega^{0}_{m}\| _{H^{2}}^{2}+\|\rho^{0}_{m}\|_{H^{2}}^{2}.\]

_If we further assume that \(|-y_{0}\pm i\varepsilon|\geq c_{0}\), for some \(c_{0}>0\), then_

\[\|\partial_{y}\psi^{\pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}}^{2}+m^{2}\|\psi^{ \pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}}^{2}\lesssim\frac{1}{c_{0}^{2}}\|\omega ^{0}_{m}\|_{L^{2}}^{2}+\frac{1}{c_{0}^{4}}\|\rho^{0}_{m}\|_{L^{2}}^{2}\]

_and_

\[\|\rho^{\pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}}^{2}\lesssim\frac{1}{c_{0}^{2} }\|\omega^{0}_{m}\|_{L^{2}}^{2}+\frac{1}{c_{0}^{4}}\|\rho^{0}_{m}\|_{L^{2}}^{2}.\]

_In particular, \(c=-y_{0}\pm i\varepsilon\) belongs to the resolvent set of the operator \(L_{m}\)._

Proof.: Multiplying (2.11) by \(\overline{\varphi^{\pm}_{m,\varepsilon}(y,y_{0})}\) and integrating by parts, we obtain

\[\int_{0}^{1}\left(|\partial_{y}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|^{2}+m^ {2}|\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|^{2}-\beta^{2}\frac{| \varphi^{\pm}_{m,\varepsilon}(y,y_{0})|^{2}}{(y-y_{0}\pm i\varepsilon)^{2}} \right)\mathrm{d}y=-\int_{0}^{1}F^{\pm}_{m,\varepsilon}(y,y_{0})\overline{\varphi^{\pm }_{m,\varepsilon}(y,y_{0})}\mathrm{d}y.\]

Assume now that \(y_{0}\leq 0\) (the case \(y_{0}\geq 1\) is analogous) and observe that, thanks to the Hardy inequality (6.4) and \(\varphi^{\pm}_{m,\varepsilon}(0,y_{0})=0\),

\[\left|\beta^{2}\int_{0}^{1}\frac{|\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|^{2}}{(y-y_{0}\pm i\varepsilon)^{2}}\mathrm{d}y\right|\leq\beta^{2} \int_{0}^{1}\frac{|\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|^{2}}{y^{2}}\mathrm{ d}y \leq\beta^{2}\int_{0}^{1}\left(\frac{1}{y}\int_{0}^{y}|\partial_{y} \varphi^{\pm}_{m,\varepsilon}(y^{\prime},y_{0})|\mathrm{d}y^{\prime}\right)^{ 2}\mathrm{d}y\]
\[\leq 4\beta^{2}\int_{0}^{1}|\partial_{y}\varphi^{\pm}_{m, \varepsilon}(y,y_{0})|^{2}\mathrm{d}y.\]

Therefore, we conclude that

\[(1-4\beta^{2})\int_{0}^{1}|\partial_{y}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})| ^{2}\mathrm{d}y+m^{2}\int_{0}^{1}|\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|^{2} \mathrm{d}y\lesssim\frac{1}{m^{2}}\int_{0}^{1}|F^{\pm}_{m,\varepsilon}(y,y_{0} )|^{2}\mathrm{d}y.\]

Thus (6.5) follows from (2.9), the observation that \(4\beta^{2}<1\) and the bound \(\|F^{\pm}_{m,\varepsilon}(y,y_{0})\|_{L^{2}_{y}}^{2}\lesssim\|\omega^{0}_{m}\| _{H^{2}}^{2}+\|\rho^{0}_{m}\|_{H^{2}}^{2}\).
For the second statement, we take the real and imaginary parts of (2.11), for which we get

\[\Delta_{m}\mathrm{Re}(\varphi^{\pm}_{m,\varepsilon})+\frac{\beta^{2}}{((y-y_{0})^{2} +\varepsilon^{2})^{2}}\left(((y-y_{0})^{2}-\varepsilon^{2})\mathrm{Re}(\varphi ^{\pm}_{m,\varepsilon})\pm 2\varepsilon(y-y_{0})\mathrm{Im}(\varphi^{\pm}_{m, \varepsilon})\right)=\mathrm{Re}(F^{\pm}_{m,\varepsilon})\]

and

\[\Delta_{m}\mathrm{Im}(\varphi^{\pm}_{m,\varepsilon})+\frac{\beta^{2}}{((y-y_{0})^{2}+ \varepsilon^{2})^{2}}\left(((y-y_{0})^{2}-\varepsilon^{2})\mathrm{Im}( \varphi^{\pm}_{m,\varepsilon})\mp 2\varepsilon(y-y_{0})\mathrm{Re}(\varphi^{\pm}_{m, \varepsilon})\right)=\mathrm{Im}(F^{\pm}_{m,\varepsilon}).\]

Cross multiplying the equations by \(\mathrm{Im}(\varphi^{\pm}_{m,\varepsilon})\) and \(\mathrm{Re}(\varphi^{\pm}_{m,\varepsilon})\), respectively, subtracting them and integrating, we obtain

\[\pm\int_{0}^{1}\frac{2\beta^{2}\varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2 }}|\varphi^{\pm}_{m,\varepsilon}|^{2}\mathrm{d}y=\int_{0}^{1}\mathrm{Im}( \varphi^{\pm}_{m,\varepsilon})\mathrm{Re}(F^{\pm}_{m,\varepsilon})-\mathrm{ Re}(\varphi^{\pm}_{m,\varepsilon})\mathrm{Im}(F^{\pm}_{m, \varepsilon})\mathrm{d}y,\]

so that

\[\int_{0}^{1}\frac{\varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2}}| \varphi^{\pm}_{m,\varepsilon}|^{2}\mathrm{d}y\lesssim\int_{0}^{1}|\varphi^{\pm}_{m,\varepsilon}|^{2}+|F^{\pm}_{m,\varepsilon}|^{2}\mathrm{d}y\lesssim\|\omega^{0}_{m }\|_{H^{2}}^{2}+\|\rho^{0}_{m}\|_{H^{2}}^{2}.\]

The third statement of the proposition follows from the density formula (2.10), the Hardy-type inequality and the uniform bounds from the first statement of the proposition. The proof is finished.

From the arguments of the proof, one can directly obtain the following result.

**Corollary 6.14**.: _Let \(y_{0}\leq 0\) or \(y_{0}\geq 1\). Then, \(y_{0}+ic\) is not an eigenvalue of \(L_{m}\), for any \(c\in\mathbb{R}\)._

With the \(\varepsilon\)-uniform \(H_{0}^{1}\) bounds for \(\psi_{m,\varepsilon}^{\pm}(y,y_{0})\) at hand, we are now in position to establish the following.

**Proposition 6.15**.: _We have that_

\[\lim_{\varepsilon\to 0}\left\|\int_{-\frac{\beta}{m}}^{0}\mathrm{e}^{- imy_{0}t}\big{(}\mathrm{e}^{m\varepsilon t}\psi_{m,\varepsilon}^{-}(y,y_{0})- \mathrm{e}^{-m\varepsilon t}\psi_{m,\varepsilon}^{+}(y,y_{0})\big{)} \mathrm{d}y_{0}\right\|_{L_{y}^{2}}=0\]

_and_

\[\lim_{\varepsilon\to 0}\left\|\int_{-\frac{\beta}{m}}^{0}\mathrm{e}^{- imy_{0}t}\big{(}\mathrm{e}^{m\varepsilon t}\rho_{m,\varepsilon}^{-}(y,y_{0})- \mathrm{e}^{-m\varepsilon t}\rho_{m,\varepsilon}^{+}(y,y_{0})\big{)} \mathrm{d}y_{0}\right\|_{L_{y}^{2}}=0.\]

Proof.: Let us denote \(\psi_{m,\varepsilon}(y,y_{0})=\psi_{m,\varepsilon}^{+}(y,y_{0})-\psi_{m, \varepsilon}^{-}(y,y_{0})\).
Using (2.9), we have

\[\psi_{m,\varepsilon}(y,y_{0})=\frac{2i\varepsilon}{\beta^{2}}\omega_{m}^{0}(y )+\varphi_{m,\varepsilon}^{+}(y,y_{0})-\varphi_{m,\varepsilon}^{-}(y,y_{0})\]

and we further denote \(\varphi_{m,\varepsilon}(y,y_{0})=\varphi_{m,\varepsilon}^{+}(y,y_{0})-\varphi _{m,\varepsilon}^{-}(y,y_{0})\), which solves

\[\left(\partial_{y}^{2}-m^{2}+\beta^{2}\frac{1}{(y-y_{0}+i\varepsilon)^{2}} \right)\varphi_{m,\varepsilon}(y,y_{0})=\beta^{2}\frac{4i\varepsilon(y-y_{0}) }{((y-y_{0})^{2}+\varepsilon^{2})^{2}}\varphi_{m,\varepsilon}^{-}(y,y_{0})- \frac{2i\varepsilon}{\beta^{2}}\Delta_{m}\omega_{m}^{0}(y).\]

Multiplying by \(\overline{\varphi_{m,\varepsilon}(y,y_{0})}\), integrating by parts and proceeding as before, we see

\[(1-4\beta^{2})\int_{0}^{1}|\partial_{y}\varphi_{m,\varepsilon}|^{2}\mathrm{d }y+m^{2}\int_{0}^{1}|\varphi_{m,\varepsilon}|^{2}\mathrm{d}y\lesssim\frac{2 \varepsilon}{\beta^{2}}\|\omega_{m}^{0}\|_{H^{2}}^{2}+\beta^{2}\int_{0}^{1} \frac{4\varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2}}|\varphi_{m, \varepsilon}^{-}||\varphi_{m,\varepsilon}|\mathrm{d}y.\]

Moreover, using Young's inequality we can bound

\[\beta^{2}\int_{0}^{1}\frac{4\varepsilon(y-y_{0})}{((y-y_{0})^{2 }+\varepsilon^{2})^{2}}|\varphi_{m,\varepsilon}^{-}||\varphi_{m,\varepsilon} |\mathrm{d}y \leq\beta^{2}\int_{0}^{1}\frac{2\varepsilon(y-y_{0})}{((y-y_{0}) ^{2}+\varepsilon^{2})^{2}}\left(c_{0}^{2}|\varphi_{m,\varepsilon}|^{2}+\frac{1 }{c_{0}^{2}}|\varphi_{m,\varepsilon}^{-}|^{2}\right)\mathrm{d}y\]
\[\leq c_{0}^{2}\beta^{2}\int_{0}^{1}\frac{1}{y^{2}}|\varphi_{m, \varepsilon}|^{2}\mathrm{d}y+\frac{\beta^{2}}{c_{0}^{2}}\int_{0}^{1}\frac{2 \varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2}}|\varphi_{m, \varepsilon}^{-}|^{2}\mathrm{d}y\]
\[\leq 4c_{0}^{2}\beta^{2}\int_{0}^{1}|\partial_{y}\varphi_{m, \varepsilon}|^{2}\mathrm{d}y+\frac{\beta^{2}}{c_{0}^{2}}\int_{0}^{1}\frac{2 \varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2}}|\varphi_{m, \varepsilon}^{-}|^{2}\mathrm{d}y.\]

Therefore, absorbing the derivative term into the left-hand side for some \(c_{0}\) small enough, we obtain

\[\int_{0}^{1}|\partial_{y}\varphi_{m,\varepsilon}|^{2}\mathrm{d}y+m^{2}\int_{ 0}^{1}|\varphi_{m,\varepsilon}|^{2}\mathrm{d}y\lesssim\varepsilon\|\omega_{m} ^{0}\|_{H^{2}}^{2}+\int_{0}^{1}\frac{2\varepsilon(y-y_{0})}{((y-y_{0})^{2}+ \varepsilon^{2})^{2}}|\varphi_{m,\varepsilon}^{-}|^{2}\mathrm{d}y
\tag{6.6}\]

Given the uniform bounds in \(\varepsilon>0\) from Proposition 6.13, we have that

\[\lim_{\varepsilon\to 0}\left\|\int_{-\frac{\beta}{m}}^{0}\mathrm{e}^{- imy_{0}t}\big{(}\mathrm{e}^{m\varepsilon t}\psi_{m,\varepsilon}^{-}(y,y_{0})- \mathrm{e}^{-m\varepsilon t}\psi_{m,\varepsilon}^{+}(y,y_{0})\big{)}\mathrm{d}y _{0}\right\|_{L_{y}^{2}}\]
\[\quad=\lim_{\varepsilon\to 0}\left\|\int_{-\frac{\beta}{m}}^{0} \mathrm{e}^{-imy_{0}t}\mathrm{e}^{m\varepsilon t}\big{(}\psi_{m,\varepsilon}^{-} (y,y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{0})\big{)}\mathrm{d}y_{0}\right\|_{L_{y }^{2}}\]
\[\quad\leq\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0}\left(\frac{2 \varepsilon}{\beta^{2}}\|\omega_{m}^{0}\|_{L_{y}^{2}}+\|\varphi_{m,\varepsilon}(y,y_{0} )\|_{L_{y}^{2}}\right)\mathrm{d}y_{0}\]
\[\quad=\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0}\|\varphi_{m, \varepsilon}(y,y_{0})\|_{L_{y}^{2}}\,\mathrm{d}y_{0}.\]

Now, note that with (6.6) we can estimate

\[\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0}\|\varphi_{m, \varepsilon}(y,y_{0})\|_{L^{2}_{y}}\,\mathrm{d}y_{0} \lesssim\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0}\left(\int_{0}^{1} \frac{2\varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2}}|\varphi_{m, \varepsilon}^{-}|^{2}\mathrm{d}y\right)^{\frac{1}{2}}\mathrm{d}y_{0}\]
\[\lesssim\lim_{\varepsilon\to 0}\varepsilon^{\frac{1}{2}}\int_{- \frac{\beta}{m}}^{0}\left(\int_{0}^{1}\frac{1}{(y-y_{0})^{2}}\mathrm{d}y \right)^{\frac{1}{2}}\mathrm{d}y_{0},\]

where we have used the pointwise bound \(|\varphi_{m,\varepsilon}^{-}(y,y_{0})|^{2}\lesssim y\), obtained from Proposition 6.13. The conclusion follows. For the density statement, we recall that

\[\rho_{m,\varepsilon}^{-}(y,y_{0})-\rho_{m,\varepsilon}^{+}(y,y_{0})=\frac{ \varphi_{m,\varepsilon}(y,y_{0})}{y-y_{0}-i\varepsilon}+\frac{2i\varepsilon} {(y-y_{0})^{2}+\varepsilon^{2}}\varphi_{m,\varepsilon}^{+}(y,y_{0}),\]

from which, together with Proposition 6.13, we deduce that

\[\lim_{\varepsilon\to 0}\left\|\int_{-\frac{\beta}{m}}^{0} \mathrm{e}^{-imy_{0}t}\big{(}\mathrm{e}^{m\varepsilon t}\rho_{m,\varepsilon }^{-}(y,y_{0})-\mathrm{e}^{-m\varepsilon t}\rho_{m,\varepsilon}^{+ }(y,y_{0})\big{)}\mathrm{d}y_{0}\right\|_{L^{2}_{y}}\]
\[\quad\leq\lim_{\varepsilon\to 0}\left\|\int_{-\frac{\beta}{m}}^{0} \mathrm{e}^{-imy_{0}t}\mathrm{e}^{m\varepsilon t}\big{(}\rho_{m,\varepsilon }^{-}(y,y_{0})-\rho_{m,\varepsilon}^{+}(y,y_{0})\big{)}\mathrm{d}y_{0 }\right\|_{L^{2}_{y}}\]
\[\quad\lesssim\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0} \left(\left\|\frac{\varphi_{m,\varepsilon}(y,y_{0})}{y-y_{0}-i\varepsilon} \right\|_{L^{2}_{y}}+\left\|\frac{2i\varepsilon\varphi_{m,\varepsilon}^{+}(y, y_{0})}{(y-y_{0})^{2}+\varepsilon^{2}}\right\|_{L^{2}_{y}}\right)\mathrm{d}y_{0}.\]

Using the Hardy inequality (6.4), the estimates from (6.6) and the above arguments, we have

\[\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0}\left\|\frac{\varphi_{m, \varepsilon}(y,y_{0})}{y-y_{0}-i\varepsilon}\right\|_{L^{2}_{y}}\mathrm{d}y_{0 }\lesssim\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0}\left\|\partial_{y} \varphi_{m,\varepsilon}(y,y_{0})\right\|_{L^{2}_{y}}\mathrm{d}y_{0}=0.\]

On the other hand, thanks to the bounds from Proposition 6.13, we also have

\[\lim_{\varepsilon\to 0}\int_{-\frac{\beta}{m}}^{0}\left\|\frac{2i \varepsilon\varphi_{m,\varepsilon}^{+}(y,y_{0})}{(y-y_{0})^{2}+\varepsilon^{2 }}\right\|_{L^{2}_{y}}\mathrm{d}y_{0}\leq\lim_{\varepsilon\to 0} \varepsilon^{\frac{1}{2}}\int_{-\frac{\beta}{m}}^{0}\left\|\frac{2\varepsilon^{\frac{1}{2}}(y-y_{0})^{\frac{1}{2}}\varphi_{m,\varepsilon}^{+}(y, y_{0})}{(y-y_{0})^{2}+\varepsilon^{2}}\right\|_{L^{2}_{y}}\frac{1}{(-y_{0})^{ \frac{1}{2}}}\mathrm{d}y_{0}=0.\]

With this, the proof is finished.

We next show that the contribution from the vertical boundaries of the contour integral is also negligible.

**Proposition 6.16**.: _Let \(y_{0}\in\left[-\frac{\beta}{m},0\right]\). We have that_

\[\lim_{\varepsilon\to 0}\left\|\int_{0}^{\varepsilon}\mathrm{e}^{-imy_{0}t} \big{(}\mathrm{e}^{mst}\psi_{m,s}^{-}(y,y_{0})+\mathrm{e}^{-mst}\psi_{m,s} ^{+}(y,y_{0})\big{)}\mathrm{d}s\right\|_{L^{2}_{y}}=0\]

_and_

\[\lim_{\varepsilon\to 0}\left\|\int_{0}^{\varepsilon}\mathrm{e}^{-imy_{0}t} \big{(}\mathrm{e}^{mst}\rho_{m,s}^{-}(y,y_{0})+\mathrm{e}^{-mst} \rho_{m,s}^{+}(y,y_{0})\big{)}\mathrm{d}s\right\|_{L^{2}_{y}}=0.\]

Proof.: The statement follows from the Minkowski inequality and the fact that

\[\lim_{\varepsilon\to 0}\int_{0}^{\varepsilon}\left\|\psi_{m,s}^{-}(y,y_{0}) \right\|_{L^{2}_{y}}+\left\|\psi_{m,s}^{+}(y,y_{0})\right\|_{L^{2}_{y}}+\left\| \rho_{m,s}^{-}(y,y_{0})\right\|_{L^{2}_{y}}+\left\|\rho_{m,s}^{+}(y,y_{0}) \right\|_{L^{2}_{y}}\mathrm{d}s=0,\]

due to the uniform bounds in \(s\in[0,\varepsilon]\) of these quantities from Proposition 6.13.

We are now in position to carry out the proof of Proposition 6.1 for the case \(\beta^{2}<1/4\).

Proof of Proposition 6.1.: Since the resolvent \(\mathcal{R}(c,L_{m})\) is well defined for all \(c\in\mathbb{C}\) with \(\mathrm{Re}(c)\leq 0\) and \(|c|\geq c_{0}\), for some \(c_{0}>0\) (see Proposition 6.13), we can reduce the contour integral to the boundary of the set \(R_{\varepsilon}:=\{c=y_{0}+is\in\mathbb{C}:y_{0}\in[-\beta/m,0]\,,\,s\in[- \varepsilon,\varepsilon]\}\). Now, Proposition 6.15 and Proposition 6.16 show that the integral along \(\partial R_{\varepsilon}\) is negligible as \(\varepsilon\to 0\). The proposition follows.

We finish the subsection with the proof of Theorem 4 for \(\beta^{2}<1/4\), which is a direct consequence of the following lemma.

**Lemma 6.17**.: _There exists \(\varepsilon_{0}>0\) such that_

\[\sup_{y,z\in[0,1]}\left|\mathcal{G}_{m,\varepsilon}^{-}(y,y_{0},z)-\mathcal{G }_{m,\varepsilon}^{+}(y,y_{0},z)\right|\lesssim\varepsilon^{2\mu}+\varepsilon ^{\frac{1}{2}-\mu},\qquad y_{0}\in\{0,1\},\]

_for all \(\varepsilon\leq\varepsilon_{0}\)._

Proof.: We take \(y_{0}=0\); the other case is analogous. The argument is similar to the one presented for Lemma 6.11. As before, we need to understand

\[\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)-\mathcal{G}_{m,\varepsilon}^{+}(y,0,z) =2i\mathrm{Im}\left(\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)\right).\]

For \(y\leq z\), we have

\[2i\mathrm{Im}\left(\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)\right)=\frac{2i}{| \mathcal{W}_{m,\varepsilon}^{-}(0)|^{2}}\mathrm{Im}\left(\phi_{l,m,\varepsilon }^{-}(y,0)\phi_{u,m,\varepsilon}^{-}(z,0)\mathcal{W}_{m,\varepsilon}^{+}(0) \right).\]

Due to the explicit solutions of the Taylor-Goldstein equation, we can find that

\[\phi_{l,m,\varepsilon}^{-}(y,0) \mathcal{W}_{m,\varepsilon}^{+}(0)\]
\[=4\mu m\Big{(}M_{+}(-i\varepsilon)M_{-}(y)M_{+}(i\varepsilon)M_{ -}(1)+M_{-}(-i\varepsilon)M_{+}(y)M_{-}(i\varepsilon)M_{+}(1)\]
\[\qquad\qquad-M_{+}(-i\varepsilon)M_{-}(y)M_{-}(i\varepsilon)M_{ +}(1)-M_{-}(-i\varepsilon)M_{+}(y)M_{+}(i\varepsilon)M_{-}(1)+R_{1}( \varepsilon)\Big{)},\]

where \(|R_{1}(\varepsilon)|\lesssim_{m,\mu}\varepsilon^{\frac{1}{2}-\mu}|M_{-}(i \varepsilon)|^{2}\).
Moreover, since now \(\overline{M_{\pm}(\zeta)}=M_{\pm}(\overline{\zeta})\) for all \(\zeta\in\mathbb{C}\), we can write \[\phi_{l,m,\varepsilon}^{-}(y,0)\mathcal{W}_{m,\varepsilon}^{+}(0) =4\mu m\Big{(}|M_{+}(i\varepsilon)|^{2}M_{-}(y)M_{-}(1)+|M_{-}(i \varepsilon)|^{2}M_{+}(y)M_{+}(1)\] \[\qquad-M_{+}(-i\varepsilon)M_{-}(i\varepsilon)M_{-}(y)M_{+}(1)-M_ {+}(i\varepsilon)M_{-}(-i\varepsilon)M_{+}(y)M_{-}(1)+R_{1}(\varepsilon) \Big{)}.\] On the other hand, \[\phi_{u,m,\varepsilon}^{-}(z,0)=M_{+}(1)M_{-}(z)-M_{-}(1)M_{+}(z)+R_{2}( \varepsilon)=\phi_{u}(z)+R_{2}(\varepsilon)\] with \(|R_{2}(\varepsilon)|\lesssim\varepsilon^{\frac{1}{2}-\mu}\). Thus, \[\phi_{l,m,\varepsilon}^{-}(y,0) \phi_{u,m,\varepsilon}^{-}(z,0)\mathcal{W}_{m,\varepsilon}^{+}(0)\] \[=-4\mu m\phi_{u}(z)\Big{(}M_{+}(-i\varepsilon)M_{-}(i\varepsilon )M_{-}(y)M_{+}(1)+M_{+}(i\varepsilon)M_{-}(-i\varepsilon)M_{+}(y)M_{-}(1) \Big{)}\] \[\qquad+4\mu m\phi_{u}(z)\Big{(}|M_{+}(i\varepsilon)|^{2}M_{-}(y) M_{-}(1)+|M_{-}(i\varepsilon)|^{2}M_{+}(y)M_{+}(1)\Big{)}+R_{3}(\varepsilon),\] where \(|R_{3}(\varepsilon)|\lesssim\mu m\varepsilon^{\frac{1}{2}-\mu}|M_{-}(i \varepsilon)|^{2}\), uniformly in \(y,z\in[0,1]\). In particular, since \(M_{\pm}(y)\in\mathbb{R}\), for all \(y\in[0,1]\), we have \[\mathrm{Im}\left(\phi_{l,m,\varepsilon}^{-}(y,0)\phi_{u,m,\varepsilon}^{-}(z, 0)\mathcal{W}_{m,\varepsilon}^{+}(0)\right)=-4\mu m\phi_{u,m}(z)\phi_{u,m}(y) \mathrm{Im}\left(M_{+}(-i\varepsilon)M_{-}(i\varepsilon)\right)+\mathrm{Im} \left(R_{3}(\varepsilon)\right).\] Moreover, due to the symmetry of the Green's function with respect to \(y\) and \(z\), we also have that \[\mathrm{Im}\left(\phi_{l,m,\varepsilon}^{-}(z,0)\phi_{u,m,\varepsilon}^{-}(y,0 )\mathcal{W}_{m,\varepsilon}^{+}(0)\right)=-4\mu m\phi_{u,m}(z)\phi_{u,m}(y) \mathrm{Im}\left(M_{+}(-i\varepsilon)M_{-}(i\varepsilon)\right)+\mathrm{Im} \left(\widetilde{R}_{3}(\varepsilon)\right).\] For the Wronskian, we have from (3.6) that \[\left|\mathcal{W}^{+}_{m,\varepsilon}(0)\right|=4\mu m|M_{-}(i\varepsilon)||M_{+} (1)|\left|1-\frac{M_{+}(i\varepsilon)}{M_{-}(i\varepsilon)}\frac{M_{-}(1)}{M_{+ }(1)}+R_{4}(\varepsilon)\right|.\] where \(|R_{4}(\varepsilon)|\lesssim\varepsilon^{\frac{1}{2}-\mu}\). In particular, for \(\varepsilon\leq\varepsilon_{0}\) small enough we have from Lemma A.16 that \[\left|\mathcal{W}^{+}_{m,\varepsilon}(0)\right|\geq 2\mu m|M_{-}(i\varepsilon)||M_ {+}(1)|.\] Therefore, \[\left|\mathcal{G}^{-}_{m,\varepsilon}(y,0,z)-\mathcal{G}^{+}_{m, \varepsilon}(y,0,z)\right| =\frac{2}{|\mathcal{W}^{+}_{m,\varepsilon}(0)|^{2}}\left| \mathrm{Im}\left(\phi^{-}_{l,m,\varepsilon}(y,0)\phi^{-}_{u,m,\varepsilon}(z,0)\mathcal{W}^{+}_{m,\varepsilon}(0)\right)\right|\] \[\leq\frac{2}{\mu m}\frac{|\phi_{u,m}(z)\phi_{u,m}(y)\mathrm{Im} \left(M_{+}(-i\varepsilon)M_{-}(i\varepsilon)\right)|}{|M_{-}(i\varepsilon)| ^{2}|M_{+}(1)|^{2}}+R_{5}(\varepsilon)\] \[\lesssim\varepsilon^{2\mu}+\varepsilon^{\frac{1}{2}-\mu},\] and the lemma follows. ### Integral reduction for \(\beta^{2}=1/4\) The special case in which \(\beta^{2}=1/4\) is critical in the sense that the Hardy inequality (6.4) may saturate and thus the derivative bounds in Proposition 6.13 are no longer uniform in \(\varepsilon>0\). Still, we are able to prove the following result. **Proposition 6.18**.: _Let \(y_{0}\leq 0\) and \(0<\varepsilon\leq 1\). 
Then,_ \[\frac{\varepsilon^{2}}{1+\varepsilon^{2}}\|\partial_{y}\psi^{\pm}_{m, \varepsilon}(\cdot,y_{0})\|^{2}_{L^{2}}+m^{2}\|\psi^{\pm}_{m,\varepsilon}( \cdot,y_{0})\|^{2}_{L^{2}}\lesssim\|\omega^{0}_{m}\|^{2}_{H^{2}}+\|\rho^{0}_{ m}\|^{2}_{H^{2}}.\] _Moreover,_ \[\int_{0}^{1}\frac{\varepsilon(y-y_{0})}{((y-y_{0})^{2}+\varepsilon^{2})^{2}}| \varphi^{\pm}_{m,\varepsilon}|^{2}\mathrm{d}y\lesssim\|\omega^{0}_{m}\|^{2}_ {H^{2}}+\|\rho^{0}_{m}\|^{2}_{H^{2}}.\] _If we further assume that \(|-y_{0}\pm i\varepsilon|\geq c_{0}\), for some \(c_{0}>0\), then_ \[\frac{c_{0}^{2}}{1+c_{0}^{2}}\|\partial_{y}\psi^{\pm}_{m,\varepsilon}(y,y_{0} )\|^{2}_{L^{2}}+m^{2}\|\psi^{\pm}_{m,\varepsilon}(y,y_{0})\|^{2}_{L^{2}} \lesssim\frac{1}{c_{0}^{2}}\|\omega^{0}_{m}\|^{2}_{L^{2}}+\frac{1}{c_{0}^{4}} \|\rho^{0}_{m}\|^{2}_{L^{2}}.\] _In particular, \(c=-y_{0}\pm i\varepsilon\) belongs to the resolvent set of \(L_{m}\)._ Proof.: The proof is similar to the one for Proposition 6.13. Here, since \(\beta^{2}=1/4\) we estimate \[\frac{1}{4}\int_{0}^{1}\frac{|\psi^{\pm}_{m,\varepsilon}|^{2}}{|y-y_{0}\pm i \varepsilon|^{2}}\mathrm{d}y\leq\frac{1}{1+c_{0}^{2}}\int_{0}^{1}\frac{|\psi^{ \pm}_{m,\varepsilon}|^{2}}{4y^{2}}\mathrm{d}y\leq\frac{1}{1+c_{0}^{2}}\int_{0} ^{1}|\partial_{y}\psi_{m,\varepsilon}|^{2}\mathrm{d}y,\] which can be absorbed by \(\int_{0}^{1}|\partial_{y}\psi_{m,\varepsilon}|^{2}\mathrm{d}y\), thus producing the desired \(H^{1}\) estimates. The estimate on the \(L^{2}\) norm of the derivative degenerates as \(\varepsilon\) becomes small. We may lose pointwise bounds on the solution, and for this reason we investigate more thoroughly the Green's function \(\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\) when \(-1\ll y_{0}\leq 0\). In particular, we have that **Proposition 6.19**.: _Let \(y,z\in[0,1]\). There exists \(\delta>0\) such that_ \[\left|\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)\right|\lesssim|y-y_{0}+i \varepsilon|^{\frac{1}{2}}|z-y_{0}+i\varepsilon|^{\frac{1}{2}}\left(1+\big{|} \log m|y-y_{0}+i\varepsilon|\big{|}\right)\left(1+\big{|}\log m|z-y_{0}+i \varepsilon|\big{|}\right)\] _and_ \[\left|\partial_{y}\mathcal{G}^{+}_{m,\varepsilon}(y,y_{0},z)\right|\lesssim|y-y _{0}+i\varepsilon|^{-\frac{1}{2}}|z-y_{0}+i\varepsilon|^{\frac{1}{2}}\left(1+ \big{|}\log m|y-y_{0}+i\varepsilon|\big{|}\right)\left(1+\big{|}\log m|z-y_{0 }+i\varepsilon|\big{|}\right)\] _for all \(y_{0}<0\) and \(\varepsilon>0\) with \(|-y_{0}\pm i\varepsilon|\leq\delta\)._ We remark that the hidden implicit constant may depend on \(m\), but for our purposes this is unimportant. Proof.: The proof follows the same steps as the one for Proposition 5.1. We shall obtain suitable estimates on the Wrosnkian. Now, since \(y_{0}<0\), we recall \[\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0}):=\frac{2m}{\sqrt{\pi}}\Big{(}W_{0}(-y_ {0}\pm i\varepsilon)M_{0}(1-y_{0}\pm i\varepsilon)-M_{0}(-y_{0}\pm i \varepsilon)W_{0}(1-y_{0}\pm i\varepsilon)\Big{)}.\] Using Lemma A.11 and Lemma A.12, there exists \(C>0\) and \(\delta>0\) such that \[\left|\frac{W_{0}(1-y_{0}\pm i\varepsilon)}{M_{0}(1-y_{0}\pm i\varepsilon)} \right|\leq C,\quad\left|\frac{M_{0}(-y_{0}\pm i\varepsilon)}{W_{0}(-y_{0}\pm i \varepsilon)}\right|\leq\frac{1}{2C},\] for all \(|-y_{0}\pm i\varepsilon|\leq\delta\). 
Hence, we can lower bound \[\left|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})\right|\geq\frac{m}{\sqrt{\pi}} \left|W_{0}(-y_{0}\pm i\varepsilon)\right|\left|M_{0}(1-y_{0}\pm i\varepsilon)\right|\] and the proposition follows from the asymptotic expansions of the homogeneous solutions that conform the Green's function. With the above asymptotics at hand, we are now able to prove the following result. **Proposition 6.20**.: _Let \(\delta>0\) be given by Proposition 6.19 and let \(y_{0}<0\) such that \(|y_{0}|\leq\frac{\delta}{2}\). We have that_ \[\left\|\int_{-\frac{\delta}{2}}^{0}\mathrm{e}^{-imyot}\big{(}\mathrm{e}^{m \varepsilon t}\psi_{m,\varepsilon}^{-}(y,y_{0})-\mathrm{e}^{-m\varepsilon t} \psi_{m,\varepsilon}^{+}(y,y_{0})\big{)}\mathrm{d}y_{0}\right\|_{L_{y}^{2}} \lesssim\varepsilon^{\frac{1}{2}}\] _and also_ \[\left\|\int_{-\frac{\delta}{2}}^{0}\mathrm{e}^{-imyot}\big{(}\mathrm{e}^{m \varepsilon t}\rho_{m,\varepsilon}^{-}(y,y_{0})-\mathrm{e}^{-m\varepsilon t} \rho_{m,\varepsilon}^{+}(y,y_{0})\big{)}\mathrm{d}y_{0}\right\|_{L_{y}^{2}} \lesssim\varepsilon^{\frac{1}{2}},\] _for all \(\varepsilon>0\) such that \(|-y_{0}+i\varepsilon|\leq\delta\)._ Proof.: Following the same strategy as in the proof of Proposition 6.15, we see that \(\varphi_{m,\varepsilon}(y,y_{0})\) satisfies \[m^{2}\|\varphi_{m,\varepsilon}\|_{L^{2}}^{2}\lesssim\varepsilon\|\omega_{m}^{ 0}\|_{H^{2}}^{2}+\int_{0}^{1}\frac{2\varepsilon(y-y_{0})}{((y-y_{0})^{2}+ \varepsilon^{2})^{2}}\left(|\varphi_{m,\varepsilon}^{-}|^{2}+|\varphi_{m, \varepsilon}^{+}|^{2}\right)\mathrm{d}y.\] In particular, using the asymptotic bounds from Proposition 6.19 we can estimate \[\int_{0}^{1}\frac{2\varepsilon(y-y_{0})}{((y-y_{0})^{2}+ \varepsilon^{2})^{2}}\left(|\varphi_{m,\varepsilon}^{-}|^{2}+|\varphi_{m, \varepsilon}^{+}|^{2}\right)\mathrm{d}y \lesssim\int_{0}^{1}\frac{\varepsilon(y-y_{0})}{|y-y_{0}+i \varepsilon|^{3}}\left(1+|\log|y-y_{0}+i\varepsilon||\right)^{2}\mathrm{d}y\] \[\lesssim\int_{0}^{1}\frac{\varepsilon(y-y_{0})}{|y-y_{0}+i \varepsilon|^{\frac{7}{2}}}\mathrm{d}y\] \[\lesssim\varepsilon\int_{0}^{1}\frac{1}{(y-y_{0})^{\frac{5}{2}}} \mathrm{d}y\] \[\lesssim\varepsilon\left(1+(-y_{0})^{-\frac{3}{2}}\right).\] We conclude the first part of the proof upon noting that \[\int_{-\frac{\delta}{2}}^{0}\|\varphi_{m,\varepsilon}(y,y_{0})\|_{L_{y}^{2}} \,\mathrm{d}y_{0}\lesssim\varepsilon^{\frac{1}{2}}\|\omega_{m}^{0}\|_{H^{2}}+ \varepsilon^{\frac{1}{2}}\int_{-\frac{\delta}{2}}^{0}\left(1+(-y_{0})^{-\frac{ 3}{2}}\right)^{\frac{1}{2}}\mathrm{d}y_{0}\lesssim\varepsilon^{\frac{1}{2}}.\] For the second part of the proposition, from (2.10) we have \[\rho_{m,\varepsilon}^{-}(y,y_{0})-\rho_{m,\varepsilon}^{+}(y,y_{0})=\frac{ \varphi_{m,\varepsilon}(y,y_{0})}{y-y_{0}-i\varepsilon}+\frac{2i\varepsilon}{ (y-y_{0})^{2}+\varepsilon^{2}}\varphi_{m,\varepsilon}^{+}(y,y_{0})\] and we write \[\varphi_{m,\varepsilon}(y,y_{0})=\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{-}(y,y_ {0},z)\left(\frac{i\varepsilon(z-y_{0})}{((z-y_{0})^{2}+\varepsilon^{2})^{2}} \varphi_{m,\varepsilon}^{-}(z,y_{0})-8i\varepsilon\Delta_{m}\omega_{m}^{0}(z) \right)\mathrm{d}z.\] In particular, using Proposition 6.18 and Proposition 6.19 we estimate \[\left|\int_{0}^{1}\mathcal{G}_{m,\varepsilon}^{-}(y,y_{0},z)\frac {i\varepsilon(z-y_{0})}{((z-y_{0})^{2}+\varepsilon^{2})^{2}}\varphi_{m, \varepsilon}^{-}(z,y_{0})\mathrm{d}z\right|\\ \lesssim\varepsilon^{\frac{1}{2}}\left(\int_{0}^{1}\frac{(z-y_{0 })}{((z-y_{0})^{2}+\varepsilon^{2})^{2}}|\mathcal{G}_{m,\varepsilon}^{-}(y,y_{ 
0},z)|^{2}\mathrm{d}z\right)^{\frac{1}{2}}\\ \lesssim\varepsilon^{\frac{1}{2}}|y-y_{0}-i\varepsilon|^{\frac{1 }{2}}(1+|\log|y-y_{0}-i\varepsilon||)\left(\int_{0}^{1}\frac{(z-y_{0})}{|z-y_ {0}-i\varepsilon|^{3}}(1+|\log|z-y_{0}-i\varepsilon||)^{2}\mathrm{d}z\right)^ {\frac{1}{2}}\\ \lesssim\varepsilon^{\frac{1}{2}}|y-y_{0}-i\varepsilon|^{\frac{1 }{2}}(1+|\log|y-y_{0}-i\varepsilon||)\left(\int_{0}^{1}\frac{1}{(z-y_{0})^{2} +\frac{1}{4}}\mathrm{d}z\right)^{\frac{1}{2}}\\ \lesssim\varepsilon^{\frac{1}{2}}|y-y_{0}-i\varepsilon|^{\frac{1 }{2}}(1+|\log|y-y_{0}-i\varepsilon||)\left(1+(-y_{0})^{-\frac{1}{2}-\frac{1}{ 8}}\right).\] With this pointwise bound, we obtain \[\left\|\frac{\varphi_{m,\varepsilon}(y,y_{0})}{y-y_{0}-i\varepsilon} \right\|_{L^{2}_{y}} \lesssim\varepsilon^{\frac{1}{2}}\left(1+(-y_{0})^{-\frac{1}{2}- \frac{1}{8}}\right)\left(\int_{0}^{1}|y-y_{0}-i\varepsilon|^{-1}(1+|\log|y-y_{ 0}-i\varepsilon||)^{2}\mathrm{d}y\right)^{\frac{1}{2}}\] \[\lesssim\varepsilon^{\frac{1}{2}}\left(1+(-y_{0})^{-\frac{1}{2}- \frac{1}{8}}\right)\left(\int_{0}^{1}|y-y_{0}-i\varepsilon|^{-1-\frac{1}{4}} \mathrm{d}y\right)^{\frac{1}{2}}\] \[\lesssim\varepsilon^{\frac{1}{2}}\left(1+(-y_{0})^{-\frac{3}{4}}\right)\] and thus \[\int_{-\frac{\varepsilon}{2}}^{0}\left\|\frac{\varphi_{m,\varepsilon}(y,y_{0}) }{y-y_{0}-i\varepsilon}\right\|_{L^{2}_{y}}\mathrm{d}y_{0}\lesssim\varepsilon ^{\frac{1}{2}}.\] On the other hand, from the bounds obtained in Proposition 6.18, we have \[\int_{-\frac{\varepsilon}{2}}^{0}\left\|\frac{2i\varepsilon}{(y-y_{0})^{2}+ \varepsilon^{2}}\varphi_{m,\varepsilon}^{+}(y,y_{0})\right\|_{L^{2}_{y}} \mathrm{d}y_{0}\lesssim\varepsilon^{\frac{1}{2}}\int_{-\frac{\varepsilon}{2}}^ {0}\frac{1}{(-y_{0})^{\frac{1}{2}}}\left\|\frac{\varepsilon^{\frac{1}{2}}(y-y _{0})^{\frac{1}{2}}}{(y-y_{0})^{2}+\varepsilon^{2}}\varphi_{m,\varepsilon}^{+} (y,y_{0})\right\|_{L^{2}_{y}}\mathrm{d}y_{0}\lesssim\varepsilon^{\frac{1}{2}},\] and the proof is concluded. Similarly, the contribution from the resolvent integral along the vertical boundaries of the contour is also negligible. **Proposition 6.21**.: _Let \(y_{0}\in[-\beta/m,0]\). We have that_ \[\left\|\int_{0}^{\varepsilon}\mathrm{e}^{-imy_{0}t}\big{(}\mathrm{e}^{mst} \psi_{m,s}^{-}(y,y_{0})+\mathrm{e}^{-ms_{*}t}\psi_{m,s}^{+}(y,y_{0})\big{)} \mathrm{d}s\right\|_{L^{2}_{y}}\lesssim\varepsilon\] _and_ \[\left\|\int_{0}^{\varepsilon}\mathrm{e}^{-imy_{0}t}\big{(}\mathrm{e}^{mst} \rho_{m,s}^{-}(y,y_{0})+\mathrm{e}^{-mst}\rho_{m,s}^{+}(y,y_{0})\big{)} \mathrm{d}s\right\|_{L^{2}_{y}}\lesssim\varepsilon^{\frac{1}{4}}.\] Proof.: The first part concerning the stream-functions \(\psi^{\pm}_{m,\varepsilon}(y,y_{0})\) is a direct consequence of the uniform \(L^{2}\) bounds of \(\psi^{\pm}_{m,\varepsilon}(y,y_{0})\) obtained in Proposition 6.18. As for the density statement, we use (2.10); thanks to the asymptotic bounds from Proposition 6.19 we further observe that \[\int_{0}^{\varepsilon}\left\|\frac{\varphi^{\pm}_{m,\varepsilon}( y,y_{0})}{y-y_{0}\pm is}\right\|_{L^{2}_{y}}\mathrm{d}s \lesssim\int_{0}^{\varepsilon}\left(\int_{0}^{1}|y-y_{0}\pm is|^{-1}(1+| \log|y-y_{0}-i\varepsilon||)^{2}\mathrm{d}y\right)^{\frac{1}{2}}\mathrm{d}s\] \[\lesssim\int_{0}^{\varepsilon}\left(\int_{0}^{1}|y-y_{0}\pm is|^{ -\frac{3}{2}}\mathrm{d}y\right)^{\frac{1}{2}}\mathrm{d}s\lesssim\int_{0}^{ \varepsilon}|s|^{-\frac{3}{4}}\mathrm{d}s.\] With the above estimate, the bound follows swiftly. We are now in position to prove Proposition 6.1 for the special case \(\beta^{2}=1/4\). 
Proof of Proposition 6.1.: Let \(\delta>0\) be given by Proposition 6.19. For all \(\varepsilon<\frac{\delta}{2}\), we introduce the rectangular region \(R_{\varepsilon}:=\{c=y_{0}+is\in\mathbb{C}:y_{0}\in[-\delta/2,0]\,,\,s\in[- \varepsilon,\varepsilon]\}\). From Proposition 6.20 and Proposition 6.21 we conclude that \[\left\|\int_{\partial R_{\varepsilon}}\mathrm{e}^{-imct}\mathcal{R}(c,L_{m}) \mathrm{d}c\right\|_{L^{2}_{y}}\lesssim\varepsilon^{\frac{1}{4}},\qquad\left\| \int_{\partial(R\setminus R_{\varepsilon})}\mathrm{e}^{-imct}\mathcal{R}(c,L_{ m})\mathrm{d}c\right\|_{L^{2}_{y}}=0,\] because any \(c\in R\setminus R_{\varepsilon}\) belongs to the resolvent set of the operator \(L_{m}\). Indeed, any \(c\in R\setminus R_{\varepsilon}\) is such that \(\mathrm{Re}(c)\leq 0\), \(|c|\geq\frac{\delta}{2}\), and we can see from Proposition 6.18 that \(\mathcal{R}(c,L_{m})\) is invertible. Finally, in order to prove Theorem 4 for \(\beta^{2}=1/4\), we state and prove the following key Lemma, from which the Theorem easily follows. **Lemma 6.22**.: _Let \(y_{0}=0\) and \(y,z\in[0,1]\). Then, there exists \(\varepsilon_{0}>0\) such that_ \[\sup_{y,z\in[0,1]}\left|\mathcal{G}^{-}_{m,\varepsilon}(y,0,z)-\mathcal{G}^{+} _{m,\varepsilon}(y,0,z)\right|\lesssim\frac{1}{\log\left(\frac{4}{\varepsilon }\right)}+\varepsilon^{\frac{1}{4}},\] _for all \(\varepsilon\leq\varepsilon_{0}\)._ Proof.: We have \(\mathcal{G}^{-}_{m,\varepsilon}(y,0,z)-\mathcal{G}^{+}_{m,\varepsilon}(y,0,z)= 2i\mathrm{Im}\left(\mathcal{G}^{-}_{m,\varepsilon}(y,0,z)\right)\) and for \(y\leq z\), \[2i\mathrm{Im}\left(\mathcal{G}^{-}_{m,\varepsilon}(y,0,z)\right)=\frac{2i}{| \mathcal{W}^{-}_{m,\varepsilon}(0)|^{2}}\mathrm{Im}\left(\phi^{-}_{l,m, \varepsilon}(y,0)\phi^{-}_{u,m,\varepsilon}(z,0)\mathcal{W}^{+}_{m,\varepsilon }(0)\right).\] Now, using Proposition 3.3, Lemma A.4 and Lemma A.11, \[\phi^{-}_{l,m,\varepsilon}(y,0) \mathcal{W}^{+}_{m,\varepsilon}(0)\] \[=\frac{2m}{\sqrt{\pi}}\Big{(}|W_{0}(i\varepsilon)|^{2}M_{0}(y)M _{0}(1)+|M_{0}(i\varepsilon)|^{2}W_{0}(y)W_{0}(1)\] \[\qquad\qquad-W_{0}(i\varepsilon)M_{0}(-i\varepsilon)W_{0}(y)M_{0 }(1)-W_{0}(-i\varepsilon)M_{0}(i\varepsilon)M_{0}(y)W_{0}(1)+R_{1}(\varepsilon )\Big{)},\] where \(|R_{1}(\varepsilon)|\lesssim_{m,\mu}\varepsilon^{\frac{1}{4}}\!\!|W_{0}(i \varepsilon)|^{2}.\) Similarly, \[\phi^{-}_{u,m,\varepsilon}(z,0)=W_{0}(1)M_{0}(z)-M_{0}(1)W_{0}(z)+R_{2}( \varepsilon)=:\phi_{u,m}(z)+R_{2}(\varepsilon)\] with \(|R_{2}(\varepsilon)|\lesssim\varepsilon^{\frac{1}{4}}\). Thus, \[\phi^{-}_{l,m,\varepsilon}(y,0) \phi^{-}_{u,m,\varepsilon}(z,0)\mathcal{W}^{+}_{m,\varepsilon}(0)\] \[=-\frac{2m}{\sqrt{\pi}}\phi_{u,m}(z)\Big{(}W_{0}(-i\varepsilon)M _{0}(i\varepsilon)M_{0}(y)W_{0}(1)+W_{0}(i\varepsilon)M_{0}(-i\varepsilon)W_{0 }(y)M_{0}(1)\Big{)}\] \[+\frac{2m}{\sqrt{\pi}}\phi_{u,m}(z)\Big{(}|W_{0}(i\varepsilon)|^ {2}M_{0}(y)M_{0}(1)+|M_{0}(i\varepsilon)|^{2}W_{0}(y)W_{0}(1)\Big{)}+R_{3}( \varepsilon),\] where \(|R_{3}(\varepsilon)|\lesssim m\varepsilon^{\frac{1}{4}}|W_{0}(i\varepsilon)|^{2}\), uniformly in \(y,z\in[0,1]\). 
In particular, since \(M_{0}(y)\in\mathbb{R}\) and \(W_{0}(y)\in\mathbb{R}\), for all \(y\in[0,1]\), we have \[\mathrm{Im}\left(\phi_{l,m,\varepsilon}^{-}(y,0)\phi_{u,m,\varepsilon}^{-}(z, 0)\mathcal{W}_{m,\varepsilon}^{+}(0)\right)=-\frac{2m}{\sqrt{\pi}}\phi_{u,m}(z )\phi_{u,m}(y)\mathrm{Im}\left(W_{0}(-i\varepsilon)M_{0}(i\varepsilon)\right)+ \mathrm{Im}\left(R_{3}(\varepsilon)\right).\] Due to symmetry, we also have \[\mathrm{Im}\left(\phi_{l,m,\varepsilon}^{-}(z,0)\phi_{u,m,\varepsilon}^{-}(y, 0)\mathcal{W}_{m,\varepsilon}^{+}(0)\right)=-\frac{2m}{\sqrt{\pi}}\phi_{u,m}( z)\phi_{u,m}(y)\mathrm{Im}\left(W_{0}(-i\varepsilon)M_{0}(i\varepsilon)\right)+ \mathrm{Im}\left(\widetilde{R}_{3}(\varepsilon)\right).\] For the Wronskian, we have from (3.14) that \[\left|\mathcal{W}_{m,\varepsilon}^{+}(0)\right|=\frac{2m}{\sqrt{\pi}}|W_{0}(i \varepsilon)||M_{0}(1)|\left|1-\frac{M_{0}(i\varepsilon)}{W_{0}(i\varepsilon )}\frac{W_{0}(1)}{M_{0}(1)}+R_{4}(\varepsilon)\right|.\] where \(|R_{4}(\varepsilon)|\lesssim\varepsilon\). In particular, for \(\varepsilon\leq\varepsilon_{0}\) small enough we have from Lemma A.11 that \[\left|\mathcal{W}_{m,\varepsilon}^{+}(0)\right|\geq\frac{m}{\sqrt{\pi}}|W_{0} (i\varepsilon)||M_{0}(1)|.\] Therefore, \[\left|\mathcal{G}_{m,\varepsilon}^{-}(y,0,z)-\mathcal{G}_{m, \varepsilon}^{+}(y,0,z)\right| =\frac{2}{|\mathcal{W}_{m,\varepsilon}^{+}(0)|^{2}}\left| \mathrm{Im}\left(\phi_{l,m,\varepsilon}^{-}(y,0)\phi_{u,m,\varepsilon}^{-}(z, 0)\mathcal{W}_{m,\varepsilon}^{+}(0)\right)\right|\] \[\leq\frac{4\sqrt{\pi}}{m}\frac{|\phi_{u,m}(z)\phi_{u,m}(y) \mathrm{Im}\left(W_{0}(-i\varepsilon)M_{0}(i\varepsilon)\right)|}{|W_{0}(i \varepsilon)|^{2}|M_{0}(1)|^{2}}+R_{5}(\varepsilon)\] \[\lesssim\left|\frac{M_{0}(i\varepsilon)}{W_{0}(i\varepsilon)} \right|+\varepsilon^{\frac{1}{4}},\] and the conclusion follows from Lemma A.11. ## 7. Bounds on solutions to the inhomogeneous Taylor-Goldstein equation This section provides bounds for solutions \(\Phi_{m,\varepsilon}\) to the inhomogeneous Taylor-Goldstein equation (TGf) with boundary conditions \(\Phi_{m,\varepsilon}(0,y_{0})=\Phi_{m,\varepsilon}(1,y_{0})=0\). The following lemma relates regions of the interval \((0,1)\) that are far away from a fixed \(y_{0}\in[0,1]\) to nearby regions of \(y_{0}\). **Lemma 7.1**.: _Let \(y_{0}\in[0,1]\), \(n\geq 1\) and \(\Phi_{m,\varepsilon}\) be the solution to (TGf). Then, we have that_ \[\|\partial_{y}\Phi_{m,\varepsilon}\|_{L^{2}_{y}(J^{c}_{3})}^{2}+m^{2}\|\Phi_{ m,\varepsilon}\|_{L^{2}_{y}(J^{c}_{3})}^{2}\lesssim m^{2}\|\Phi_{m,\varepsilon}\|_{L^{2}_ {y}(J^{c}_{2}\cap J_{3})}^{2}+\frac{1}{m^{2}}\|f\|_{L^{2}_{y}(J^{c}_{2})}^{2}.\] Proof.: For \(y_{n}=y_{0}+\frac{n\beta}{m}\), the lemma follows from the energy inequality \[\frac{1}{2}\int_{y_{0}}^{1}\left[|\partial_{y}\Phi_{m,\varepsilon}|^{2}+m^{2}| \Phi_{m,\varepsilon}|^{2}\right]\mathrm{d}y\leq\frac{m^{2}}{\beta^{2}}\int_{y _{2}}^{y_{3}}|\Phi_{m,\varepsilon}|^{2}\mathrm{d}y+\int_{y_{2}}^{1}|f||\Phi_{m, \varepsilon}|\mathrm{d}y,\] and Young's inequality to absorb the potential term. We omit the details. With the above lemma we are in position to provide bounds on the solution to (TGf). **Proposition 7.2**.: _Let \(\Phi_{m,\varepsilon}\) be the solution to (TGf). 
Then_ * _If_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}\neq 1/4\)_, then_ \[|y-y_{0}+i\varepsilon|^{-\frac{1}{2}+\mu}|\Phi_{m,\varepsilon}(y,y_{0})|+|y-y_ {0}+i\varepsilon|^{\frac{1}{2}+\mu}|\partial_{y}\Phi_{m,\varepsilon}(y,y_{0})| \lesssim\frac{1}{m^{1+\mu}}\|f\|_{L^{2}_{y}}.\] * _If_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}=1/4\)_, then_ \[|y-y_{0}+i\varepsilon|^{-\frac{1}{2}}|\Phi_{m,\varepsilon}(y,y_{0})|+|y-y_{0}+i \varepsilon|^{\frac{1}{2}}|\partial_{y}\Phi_{m,\varepsilon}(y,y_{0})| \lesssim\frac{1}{m}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right) \big{|}\right)\|f\|_{L^{2}_{y}}.\] * _If_ \(m|y-y_{0}|\geq 3\beta\) _then_ \[m\|\Phi_{m,\varepsilon}(y,y_{0})\|_{L^{2}_{y}(J^{\varepsilon}_{3})}+\|\partial_{y }\Phi_{m,\varepsilon}(y,y_{0})\|_{L^{2}_{y}(J^{\varepsilon}_{3})}\lesssim\frac{ 1}{m}\|f\|_{L^{2}_{y}}\] _and_ \[|\partial_{y}\Phi_{m,\varepsilon}(y,y_{0})|\lesssim\|f\|_{L^{2}_{y}}.\] Proof.: The first part is a straightforward application of the bounds on the Green's function from Theorem 2 and the Cauchy-Schwartz inequality, once we write \(\Phi_{m,\varepsilon}(y,y_{0})=\int_{0}^{1}\mathcal{G}^{+}_{m,\varepsilon}(y,y _{0},z)f(z,y_{0})\mathrm{d}z.\) The second part of the proposition follows from the first part, which gives \(m\|\Phi_{m,\varepsilon}\|_{L^{2}_{y}(J^{\varepsilon}_{3}\cap J_{3})}\lesssim \frac{1}{m}\|f\|_{L^{2}_{y}}\) and Lemma 7.1. For the pointwise bound, assume without loss of generality that \(y_{0}+\frac{3\beta}{m}<y\leq 1\). Then, let \(y_{3}=y_{0}+\frac{3\beta}{m}\) and write \[\partial_{y}\Phi_{m,\varepsilon}(y,y_{0})=\partial_{y}\Phi_{m,\varepsilon}(y_ {3},y_{0})+\int_{y_{3}}^{y}\left[\left(m^{2}-\beta^{2}\frac{1}{(y^{\prime}-y_{ 0}+i\varepsilon)^{2}}\right)\Phi_{m,\varepsilon}(y^{\prime},y_{0})+f(y^{ \prime})\right]\mathrm{d}y^{\prime}.\] Now, \(|y_{3}-y_{0}|=\frac{3\beta}{m}\) so that we estimate \(|\partial_{y}\Phi_{m,\varepsilon}(y_{3},y_{0})|\lesssim\frac{1}{m^{1+\mu}} \left|\frac{\beta}{m}\right|^{-\frac{1}{2}-\mu}\lesssim m^{-\frac{1}{2}}\). Similarly, we use the second part of the proposition to estimate the remaining terms in \(L^{2}_{y}(J^{\varepsilon}_{3})\) and obtain the desired conclusion. ## 8. Boundary terms estimates The purpose of this section is to obtain estimates on the boundary terms that appear in the expressions for \(\partial_{y_{0}}\psi^{\pm}_{m,\varepsilon}(y,y_{0})\) and other related derivatives. We begin by recording the following results, which will be used throughout the entire section. **Proposition 8.1**.: _Let \(\beta^{2}\neq 1/4\). 
There exists \(\varepsilon_{0}>0\) such that for all \(y,y_{0}\in[0,1]\) with \(m|y-y_{0}|\leq 3\beta\) there holds_ \[|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}-\mu}|\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},0)|+|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}+\mu}|\partial _{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},0)|\lesssim m^{ \frac{1}{2}-\mu}\frac{1}{|M_{-}(y_{0}\mp i\varepsilon))|},\] _and_ \[|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}-\mu}|\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},1)|+|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}+\mu}| \partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},1)|\lesssim m ^{\frac{1}{2}-\mu}\frac{1}{|M_{-}(1-y_{0}\pm i\varepsilon))|},\] _for all \(0\leq\varepsilon\leq\varepsilon_{0}\)._ Proof.: For \(z=0\), note that we have the explicit expression \[\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},0)=\frac{M_{+}(1-y_{0} \pm i\varepsilon)M_{-}(y-y_{0}\pm i\varepsilon)-M_{-}(1-y_{0}\pm i\varepsilon) M_{+}(y-y_{0}\pm i\varepsilon)}{M_{+}(1-y_{0}\pm i\varepsilon)M_{-}(-y_{0}\pm i \varepsilon)-M_{-}(1-y_{0}\pm i\varepsilon)M_{+}(-y_{0}\pm i\varepsilon)},\] so that \[\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},0)=2m\frac{M _{+}(1-y_{0}\pm i\varepsilon)M^{\prime}_{-}(y-y_{0}\pm i\varepsilon)-M_{-}(1-y _{0}\pm i\varepsilon)M^{\prime}_{+}(y-y_{0}\pm i\varepsilon)}{M_{+}(1-y_{0} \pm i\varepsilon)M_{-}(-y_{0}\pm i\varepsilon)-M_{-}(1-y_{0}\pm i\varepsilon)M _{+}(-y_{0}\pm i\varepsilon)}.\] If \(m|y-y_{0}|\leq 3\beta\), we use the bounds on the Wronskian from Proposition 4.1. For \(\beta^{2}>1/4\), the conclusion is straightforward. For \(\beta^{2}<1/4\), we take a closer look to the Wronskian estimates obtained on the proof of Proposition 4.3. The bounds are a consequence of Lemma A.5, A.15-A.17. The argument for \(z=1\) is similar, we omit the details. **Proposition 8.2**.: _Let \(\beta^{2}=1/4\). There exists \(\varepsilon_{0}>0\) such that for all \(y,y_{0}\in[0,1]\) with \(m|y-y_{0}|\leq 3\beta\) there holds_ \[|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}}|\partial_{z}\mathcal{G}^{\pm}_{m, \varepsilon}(y,y_{0},0)|+|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}}|\partial_{y} \partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},0)|\lesssim m^{\frac{1}{2 }}\frac{1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}}{|M_{0}(y_ {0}\mp i\varepsilon))|},\] and_ \[|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}}|\partial_{2}\mathcal{G}_{m,\varepsilon}^ {\pm}(y,y_{0},1)|+|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}}|\partial_{y}\partial_ {2}\mathcal{G}_{m,\varepsilon}^{\pm}(y,y_{0},1)|\lesssim m^{\frac{1}{2}}\frac{1 +\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}}{|M_{0}(1-y_{0}\mp i \varepsilon))|},\] _for all \(0\leq\varepsilon\leq\varepsilon_{0}\)._ Proof.: Since \(m|y-y_{0}|\leq 3\beta\), the proof follows the same ideas to show Proposition 5.1, with the help of Lemma A.5, A.10-A.12, we omit the details. ### Estimates for first order boundary terms This subsection is devoted to obtain estimates on \[\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},z)=\partial_{2}\mathcal{G}_{m, \varepsilon}^{\pm}(y,y_{0},z)\partial_{z}\varphi_{m,\varepsilon}^{\pm}(z,y_{ 0})\] for \(z=0\) and \(z=1\) under the assumption that \(m|y-y_{0}|\leq 3\beta\). In what follows, we shall argue for \(z=0\), the statements and proofs for \(z=1\) are similar and we thus omit them. We begin by providing bounds for \(\partial_{z}\varphi_{m,\varepsilon}^{\pm}(0,y_{0})\). 
**Proposition 8.3**.: _Let \(y_{0}\in[0,1]\), we have the following._ * _If_ \(my_{0}\leq 3\beta\)_, then_ \(|\partial_{y}\varphi_{m,\varepsilon}^{\pm}(0,y_{0})|\lesssim m^{-\frac{1}{2}} Q_{0,m}\)_._ * _If_ \(my_{0}\geq 3\beta\)_, then_ \(|\partial_{y}\varphi_{m,\varepsilon}^{\pm}(0,y_{0})|\lesssim Q_{0,m}\)_._ For the proof, we assume that \(y_{0}<1/2\). Otherwise, the proposition follows from Proposition 7.2. Note that from (2.8) and (2.12), there holds \[\partial_{y}\varphi_{m,\varepsilon}^{\pm}(0,y_{0}) =\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0 },z)F_{m,\varepsilon}^{\pm}(z,y_{0})\mathrm{d}z\] \[=\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{ 0},z)\left(F_{m}(z)+\frac{y_{0}\mp i\varepsilon}{\beta^{2}}\Delta_{m}\omega_ {m}^{0}(z)\right)\mathrm{d}z,\] Further observe that, due to (H) we have \[\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0 },z)F_{m}^{\pm}(z,0)\mathrm{d}z =-4(\mu+i\nu)m\int_{0}^{1}\frac{\phi_{u,m,\varepsilon}^{\pm}(z,y _{0})}{\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})}F_{m}(z,0)\mathrm{d}z\] \[=-4(\mu+i\nu)m\int_{0}^{1}\frac{\phi_{u,m,\varepsilon}^{\pm}(z,y _{0})-\phi_{u,m}(z)}{\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})}F_{m}(z,0) \mathrm{d}z\] and we define \[f_{m,\varepsilon}^{\pm}(z,y_{0}):=\phi_{u,m,\varepsilon}^{\pm}(z,y_{0})-\phi_ {u,m}(z).\] #### 8.1.1 Estimates on \(f_{m,\varepsilon}^{\pm}\) for \(\beta^{2}\neq 1/4\) From the explicit formulas (3.4) and (3.8), we have \[f_{m,\varepsilon}^{\pm}(z,y_{0}) =M_{+}(1-y_{0}\pm i\varepsilon)M_{-}(z-y_{0}\pm i\varepsilon)-M_{ +}(1)M_{-}(z)\] \[\quad-M_{-}(1-y_{0}\pm i\varepsilon)M_{+}(z-y_{0}\pm i\varepsilon )+M_{-}(1)M_{+}(z)\] and we can obtain the next result. **Proposition 8.4**.: _Let \(z,y_{0}\in[0,1]\) such that \(my_{0}\leq 3\beta\) and \(mz\leq 6\beta\). Let \(0\leq\varepsilon\leq\min\Big{(}\frac{\beta}{m},\frac{1}{2m}\Big{)}\). Then,_ \[|f_{m,\varepsilon}(z,y_{0})|\lesssim m^{\frac{1}{2}-\mu}|y_{0}\pm i\varepsilon |^{\frac{1}{2}-\mu}|M_{+}(1-y_{0}\pm i\varepsilon)|.\] _In particular, \(\|f_{m,\varepsilon}\|_{L_{y}^{2}(J)}\lesssim m^{-\mu}|y_{0}\pm i\varepsilon|^{ \frac{1}{2}-\mu}|M_{+}(1-y_{0}\pm i\varepsilon)|\)._ Proof.: We shall assume \(\beta^{2}<1/4\), the case \(\beta^{2}>1/4\) is analogous and easier. We write \[M_{+}(1-y_{0}\pm i\varepsilon)M_{-}(z-y_{0}\pm i\varepsilon)-M_{ +}(1)M_{-}(z) =M_{+}(1-y_{0}\pm i\varepsilon)\Big{(}M_{-}(z-y_{0}\pm i \varepsilon)-M_{-}(z)\Big{)}\] \[\quad+M_{-}(z)\Big{(}M_{+}(1-y_{0}\pm i\varepsilon)-M_{+}(1) \Big{)}\] and \[M_{-}(1-y_{0}\pm i\varepsilon)M_{+}(z-y_{0}\pm i\varepsilon)-M_{-}(1 )M_{+}(z) =M_{-}(1-y_{0}\pm i\varepsilon)\Big{(}M_{+}(z-y_{0}\pm i \varepsilon)-M_{+}(z)\Big{)}\] \[\quad+M_{+}(z)\Big{(}M_{-}(1-y_{0}\pm i\varepsilon)-M_{-}(1)\Big{)}.\] Firstly, we estimate \[M_{+}(1-y_{0}\pm i\varepsilon)-M_{+}(1)=\int_{0}^{1}\frac{\mathrm{d}}{ \mathrm{d}s}M_{+}(1+s(-y_{0}\pm i\varepsilon))\mathrm{d}s=(-y_{0}\pm i \varepsilon)\int_{0}^{1}M_{+}^{\prime}(1+s(-y_{0}\pm i\varepsilon))\mathrm{d}s\] and we divide our argument as follows. Let \(N_{\mu,0}\) be given as in Lemma A.15. 
For \(m\leq N_{\mu,0}\), we use Lemma A.3 and the fact that \(y_{0}\leq 1/2\) to bound \[|M_{+}(1-y_{0}\pm i\varepsilon)-M_{+}(1)| \lesssim m^{\frac{1}{2}+\mu}|y_{0}\pm i\varepsilon|\int_{0}^{1} \frac{\mathrm{d}s}{|1+s(-y_{0}\pm i\varepsilon)|^{\frac{1}{2}-\mu}}\] \[\lesssim m^{\frac{1}{2}+\mu}|y_{0}\pm i\varepsilon|\] \[\lesssim|y_{0}\pm i\varepsilon||M_{+}(1-y_{0}\pm i\varepsilon)|.\] In the last inequality, we have used Lemma A.5, A.17 and A.15. Similarly, \[|M_{-}(1-y_{0}\pm i\varepsilon)-M_{-}(1)|\lesssim|y_{0}\pm i\varepsilon||M_{-} (1-y_{0}\pm i\varepsilon)|\lesssim|y_{0}\pm i\varepsilon||M_{+}(1-y_{0}\pm i \varepsilon)|,\] where we have used Lemma A.5 and Lemma A.17 to deduce \(|M_{-}(1-y_{0}\pm i\varepsilon)|\lesssim|M_{+}(1-y_{0}\pm i\varepsilon)|\). For \(m\geq N_{\mu,0}\), we claim that \[\left|\frac{M_{+}^{\prime}(1+s(-y_{0}\pm i\varepsilon))}{M_{+}(1-y_{0}\pm i \varepsilon)}\right|\lesssim 1.\] Indeed, this follows from \[\left|\frac{M_{+}^{\prime}(1+s(-y_{0}\pm i\varepsilon))}{M_{+}(1-y_{0}\pm i \varepsilon)}\right|=\left|\frac{M_{+}^{\prime}(1+s(-y_{0}\pm i\varepsilon))}{ M_{+}(1+s(-y_{0}\pm i\varepsilon))}\right|\left|\frac{M_{+}(1+s(-y_{0}\pm i \varepsilon))}{M_{+}(1-y_{0}\pm i\varepsilon)}\right|\] and the corresponding bounds from Lemma A.6 since \(2m(1-y_{0})\geq m\geq N_{\mu,0}\). Hence, we have \[|M_{+}(1-y_{0}\pm i\varepsilon)-M_{+}(1)| \leq 2m|y_{0}\pm i\varepsilon|\int_{0}^{1}\left|M_{+}^{\prime}(1+s(- y_{0}\pm i\varepsilon))\right|\mathrm{d}s\] \[\lesssim m|y_{0}\pm i\varepsilon|\left|M_{+}(1-y_{0}\pm i \varepsilon)\right|.\] Similarly, we also have \[|M_{-}(1-y_{0}\pm i\varepsilon)-M_{-}(1)|\lesssim m|y_{0}\pm i\varepsilon| \left|M_{-}(1-y_{0}\pm i\varepsilon)\right|\lesssim m|y_{0}\pm i\varepsilon| \left|M_{+}(1-y_{0}\pm i\varepsilon)\right|,\] where we have used Lemma A.15 to deduce \(|M_{-}(1-y_{0}\pm i\varepsilon)|\lesssim|M_{+}(1-y_{0}\pm i\varepsilon)|\). We next turn our attention to the bounds for \(M_{-}(z-y_{0}\pm i\varepsilon)-M_{-}(z)\). As before, we consider two cases. \(\bullet\)**Case 1.** For \(2y_{0}\leq z\) we estimate \[M_{-}(z-y_{0}\pm i\varepsilon)-M_{-}(z)=\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{ d}s}M_{-}(z+s(-y_{0}\pm i\varepsilon))\mathrm{d}s=(-y_{0}\pm i\varepsilon) \int_{0}^{1}M_{-}^{\prime}(z+s(-y_{0}\pm i\varepsilon))\mathrm{d}s.\] From Lemma A.3, \(M_{-}^{\prime}(\zeta)\lesssim\zeta^{-\frac{1}{2}-\mu}m^{\frac{1}{2}-\mu}\), and since \(2y_{0}\leq z\), we have that \(s|y_{0}\pm i\varepsilon|\leq|z+s(-y_{0}\pm i\varepsilon)|\), for all \(s\in(0,1)\). Thus, \[|M_{-}(z-y_{0}\pm i\varepsilon)-M_{-}(z)| \lesssim m^{\frac{1}{2}-\mu}|y_{0}\pm i\varepsilon|\int_{0}^{1} \frac{\mathrm{d}s}{|z+s(-y_{0}\pm i\varepsilon)|^{\frac{1}{2}+\mu}}\] \[\lesssim m^{\frac{1}{2}-\mu}|y_{0}\pm i\varepsilon|\int_{0}^{1} \frac{\mathrm{d}s}{|s(y_{0}\pm i\varepsilon)|^{\frac{1}{2}+\mu}}\] \[\lesssim m^{\frac{1}{2}-\mu}|y_{0}\pm i\varepsilon|^{\frac{1}{2}- \mu}.\] \(\bullet\) **Case 2.** For \(z\leq 2y_{0}\), we directly estimate using Lemma A.3, that is, \[|M_{-}(z-y_{0}\pm i\varepsilon)-M_{-}(z)|\leq|M_{-}(z-y_{0}\pm i \varepsilon)|+|M_{-}(z)| \lesssim m^{\frac{1}{2}-\mu}\left(|z-y_{0}\pm i\varepsilon|^{\frac{ 1}{2}-\mu}+|z|^{\frac{1}{2}-\mu}\right)\] \[\lesssim m^{\frac{1}{2}-\mu}|y_{0}\pm i\varepsilon|^{\frac{1}{2}- \mu}.\] From this localised estimates, we are able to obtain bounds on \(f_{m,\varepsilon}(z,y_{0})\) for \(mz\geq 6\beta\). For this, we first deduce useful estimates on \(\phi_{u,m}^{\pm}(z)\). 
**Lemma 8.5**.: _The function \(\phi_{u,m}(z)=M_{+}(1)M_{-}(z)-M_{-}(1)M_{+}(z)\) satisfies_ \[\Delta_{m}\phi_{u,m}^{\pm}(z)+\beta^{2}\frac{\phi_{u,m}^{\pm}(z)}{z^{2}}=0, \qquad\phi_{u,m}^{\pm}(1)=0.\] _For \(J_{6}=\{z\in[0,1]:mz\leq 6\beta\}\) and \(J_{6}^{c}=[0,1]\setminus J_{6}\), it is such that_ \[\|\phi_{u,m}\|_{L^{\infty}(J_{6})}\lesssim m^{\frac{1}{2}-\mu}|z|^{\frac{1}{2 }-\mu}|M_{+}(1)|,\quad\|\phi_{u,m}\|_{L_{y}^{2}(J_{6})}\lesssim m^{-\frac{1}{2 }}|M_{+}(1)|\] _and_ \[\|\partial_{z}\phi_{u,m}\|_{L_{y}^{2}(J_{6}^{c})}+m\|\phi_{u,m}\|_{L_{y}^{2}(J_ {6}^{c})}\lesssim m^{\frac{1}{2}}|M_{+}(1)|.\] Proof.: The statements for \(\|\phi_{u,m}\|_{L^{\infty}(J_{6})}\) and \(\|\phi_{u,m}\|_{L_{y}^{2}(J_{6})}\) follow from the asymptotic expansions for small argument given by Lemma A.3. The integral estimates follow from the \(\|\phi_{u,m}\|_{L_{y}^{2}(J_{6})}\) bounds using Lemma 7.1. The following proposition obtains \(L^{2}\) bounds on \(f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\) from the localized bounds of Proposition 8.4 and the above lemma. **Proposition 8.6**.: _We have that_ \[\|f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\|_{L^{2}}\lesssim m^{-\frac{1}{2}}(m|y _{0}-i\varepsilon|)^{\frac{1}{2}-\mu}|M_{+}(1-y_{0}+i\varepsilon)|.\] Proof.: It is straightforward to see that \(f_{m,\varepsilon}^{\pm}(z,y_{0})\) solves \[\Delta_{m}f_{m,\varepsilon}^{\pm}+\beta^{2}\frac{f_{m,\varepsilon}^{\pm}}{(z-y _{0}\pm i\varepsilon)^{2}}=\beta^{2}(-y_{0}\pm i\varepsilon)\left(\frac{2}{z (z-y_{0}\pm i\varepsilon)^{2}}+\frac{-y_{0}\pm i\varepsilon}{z^{2}(z-y_{0} \pm i\varepsilon)^{2}}\right)\phi_{u,m}\] and \(f_{m,\varepsilon}^{\pm}(1,y_{0})=0\). Hence, using the same strategy from Lemma 4.4, we have that \[\begin{split}\frac{1}{2}\int_{\frac{6\beta}{m}}^{1}|\partial_{z}f _{m,\varepsilon}^{\pm}|^{2}+m^{2}|f_{m,\varepsilon}^{\pm}|^{2}\mathrm{d}z& \leq\frac{m^{2}}{\beta^{2}}\int_{\frac{5\beta}{m}}^{\frac{6\beta}{m }}|f_{m,\varepsilon}^{\pm}|^{2}\mathrm{d}z\\ &+\beta^{2}|-y_{0}\pm i\varepsilon|\int_{\frac{5\beta}{m}}^{1} \left(\frac{2}{z}+\frac{|-y_{0}\pm i\varepsilon|}{z^{2}}\right)\frac{|\phi_{u, m}(z)|f_{m,\varepsilon}^{\pm}(z)|}{|z-y_{0}\pm i\varepsilon|^{2}}\mathrm{d}z.\end{split} \tag{8.1}\] Now, from Proposition 8.4, we have \[\frac{m^{2}}{\beta^{2}}\int_{\frac{5\beta}{m}}^{\frac{6\beta}{m}}|f_{m, \varepsilon}^{\pm}|^{2}\mathrm{d}z\lesssim\frac{m}{\beta}\left(m|y_{0}-i \varepsilon|\right)^{1-2\mu}|M_{+}(1-y_{0}+i\varepsilon)|^{2},\] while we write \[\beta^{2}|-y_{0}\pm i\varepsilon|^{2}\int_{\frac{5\beta}{m}}^{1}\frac{|\phi_{u,m}(z)|f_{m,\varepsilon}^{\pm}(z)|}{z^{2}|z-y_{0}\pm i\varepsilon|^{2}}\mathrm{ d}z=\beta^{2}|-y_{0}\pm i\varepsilon|^{2}\left(\int_{\frac{5\beta}{m}}^{\frac{6\beta}{m }}+\int_{\frac{6\beta}{m}}^{1}\right)\frac{|\phi_{u,m}(z)|f_{m,\varepsilon}^{ \pm}(z)|}{z^{2}|z-y_{0}\pm i\varepsilon|^{2}}\mathrm{d}z.\] For example, with the bounds of Proposition 8.4 and Lemma 8.5, and the fact that \(z\geq\frac{5\beta}{m}\) and \(y_{0}\leq\frac{3\beta}{m}\), we have \(|z-y_{0}\pm i\varepsilon|^{-2}\lesssim m^{2}\) and \[\beta^{2}|-y_{0}\pm i\varepsilon|^{2}\int_{\frac{5\beta}{m}}^{\frac {6\beta}{m}}\frac{|\phi_{u,m}(z)|f_{m,\varepsilon}^{\pm}(z)|}{z^{2}|z-y_{0}\pm i \varepsilon|^{2}}\mathrm{d}z \lesssim m^{2}|y_{0}-i\varepsilon|^{2}\|f_{m,\varepsilon}^{\pm} \|_{L^{\infty}(J)}|M_{+}(1)|^{2}\int_{y_{2}}^{y_{2}+\frac{\beta}{m}}m^{\frac{1 }{2}-\mu}z^{-\frac{3}{2}-\mu}\mathrm{d}z\] \[\lesssim m^{\frac{7}{2}-\mu}|y_{0}\pm i\varepsilon|^{\frac{5}{2}- \mu}|M_{+}(1)||M_{+}(1-y_{0}\pm i\varepsilon)|.\] On the 
other hand, Young's inequality and Lemma 8.5 gives \[\beta^{2}|-y_{0}\pm i\varepsilon|^{2}\int_{\frac{6\beta}{m}}^{1}\frac{|\phi_{ u,m}(z)|f_{m,\varepsilon}^{\pm}(z)|}{z^{2}|z-y_{0}\pm i\varepsilon|^{2}} \mathrm{d}z\leq\frac{m^{2}}{8}\int_{y_{2}+\frac{\beta}{m}}^{1}|f_{m, \varepsilon}^{\pm}(z)|^{2}\mathrm{d}z+Cm^{5}|y_{0}-i\varepsilon|^{4}|M_{+}(1 )|^{2},\] for some \(C>0\) large enough. Similarly, we bound \[\beta^{2}|-y_{0}\pm i\varepsilon|\int_{\frac{5\beta}{m}}^{\frac{6\beta}{m}} \frac{|\phi_{u,m}(z)|f_{m,\varepsilon}^{\pm}(z)|}{z|z-y_{0}\pm i\varepsilon|^{ 2}}\mathrm{d}z\lesssim m^{\frac{5}{2}-\mu}|y_{0}\pm i\varepsilon|^{\frac{3}{2 }-\mu}|M_{+}(1)||M_{+}(1-y_{0}\pm i\varepsilon)|\] and \[\beta^{2}|-y_{0}\pm i\varepsilon|\int_{\frac{6\beta}{m}}^{1}\frac{|\phi_{u,m} (z)|f_{m,\varepsilon}^{\pm}(z)|}{z|z-y_{0}\pm i\varepsilon|^{2}}\mathrm{d}z \leq\frac{m^{2}}{8}\int_{\frac{6\beta}{m}}^{1}|f_{m,\varepsilon}^{\pm}(z)|^{2 }\mathrm{d}z+Cm^{3}|y_{0}-i\varepsilon|^{2}|M_{+}(1)|^{2},\] for some \(C>0\) large enough. Hence, we absorb the potential term on the left hand side of (8.1) and conclude that \[\frac{1}{4}\int_{y_{2}+\frac{\beta}{m}}^{1}|\partial_{z}f_{m,\varepsilon}^{\pm }|^{2}+m^{2}|f_{m,\varepsilon}^{\pm}|^{2}\mathrm{d}z\lesssim m(m|y_{0}-i \varepsilon|)^{1-2\mu}|M_{+}(1-y_{0}+i\varepsilon)|^{2}.\] and the lemma follows. #### 8.1.2. Estimates on \(f_{m,\varepsilon}^{\pm}\) for \(\beta^{2}=1/4\) From the explicit formulas (3.12) and (3.15), we now have \[f_{m,\varepsilon}^{\pm}(z,y_{0}) =W_{0}(1-y_{0}\pm i\varepsilon)M_{0}(z-y_{0}\pm i\varepsilon)-W_ {0}(1)M_{0}(z)\] \[\quad-M_{0}(1-y_{0}\pm i\varepsilon)W_{0}(z-y_{0}\pm i\varepsilon) +M_{0}(1)W_{0}(z)\] from which we obtain the following result. **Proposition 8.7**.: _Let \(z,y_{0}\in[0,1]\) such that \(my_{0}\leq 3\beta\) and \(mz\leq 6\beta\). Let \(0\leq\varepsilon\leq\min\left(\frac{\beta}{m},\frac{1}{2m}\right)\). Then,_ \[|f_{m,\varepsilon}(z,y_{0})|\lesssim(m|y_{0}\pm i\varepsilon|)^{\frac{1}{2}} \left(1+\big{|}\log\left(my_{0}\right)\big{|}\right)|M_{0}(1-y_{0}\pm i \varepsilon)|.\] _In particular, \(\|f_{m,\varepsilon}\|_{L^{2}_{y}(J)}\lesssim|y_{0}\pm i\varepsilon|^{\frac{1}{2 }}\left(1+\big{|}\log\left(m|y_{0}\pm i\varepsilon|\right)\big{|}\right)|M_{0}( 1-y_{0}\pm i\varepsilon)|.\)_ Proof.: We write \[W_{0}(1-y_{0}\pm i\varepsilon)M_{0}(z-y_{0}\pm i\varepsilon)-W_ {0}(1)M_{0}(z) =W_{0}(1-y_{0}\pm i\varepsilon)\Big{(}M_{0}(z-y_{0}\pm i \varepsilon)-M_{0}(z)\Big{)}\] \[\quad+M_{0}(z)\Big{(}W_{0}(1-y_{0}\pm i\varepsilon)-W_{0}(1) \Big{)}\] We shall now estimate the differences involving the Whittaker function \(W_{0}\), the estimates for the differences involving \(M_{0}\) follow similarly as for the case \(\beta^{2}\neq 1/4\) and they are \[|M_{0}(z-y_{0}\pm i\varepsilon)-M_{0}(z)|\lesssim m^{\frac{1}{2}}|y_{0}\pm i \varepsilon|^{\frac{1}{2}},\quad|M_{0}(1-y_{0}\pm i\varepsilon)-M_{0}(1)| \lesssim m|y_{0}\pm i\varepsilon||M_{0}(1-y_{0}\pm i\varepsilon)|.\] Firstly, we estimate \[W_{0}(1-y_{0}\pm i\varepsilon)-W_{0}(1)=\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{d }s}W_{0}(1+s(-y_{0}\pm i\varepsilon))\mathrm{d}s=2m(-y_{0}\pm i\varepsilon) \int_{0}^{1}W_{0}^{\prime}(1+s(-y_{0}\pm i\varepsilon))\mathrm{d}s\] and we divide our argument as follows. Let \(N_{\mu,0}\) be given as in Lemma A.15. 
For \(m\leq N_{\mu,0}\), we use Lemma A.4 and the fact that \(y_{0}\leq\frac{1}{2}\) to bound \[|W_{0}(1-y_{0}\pm i\varepsilon)-W_{0}(1)| \lesssim m^{\frac{1}{2}}|y_{0}\pm i\varepsilon|\int_{0}^{1}\frac{1 +\big{|}\log m|1+s(-y_{0}\pm i\varepsilon)|\big{|}}{|1+s(-y_{0}\pm i \varepsilon)|^{\frac{1}{2}}}\mathrm{d}s\] \[\lesssim m^{\frac{1}{2}}|y_{0}\pm i\varepsilon|\] \[\lesssim|y_{0}\pm i\varepsilon||M_{0}(1-y_{0}\pm i\varepsilon)|.\] In the last inequality, we have used Lemma A.5, A.17 and A.15. For \(m\geq N_{\mu,0}\), we claim that \[\left|\frac{W_{0}^{\prime}(1+s(-y_{0}\pm i\varepsilon))}{M_{0}(1-y_{0}\pm i \varepsilon)}\right|\lesssim 1.\] Indeed, this follows from \[\left|\frac{W_{0}^{\prime}(1+s(-y_{0}\pm i\varepsilon))}{M_{0}(1-y_{0}\pm i \varepsilon)}\right|=\left|\frac{W_{0}^{\prime}(1+s(-y_{0}\pm i\varepsilon))} {W_{0}(1+s(-y_{0}\pm i\varepsilon))}\right|\left|\frac{W_{0}(1+s(-y_{0}\pm i \varepsilon))}{W_{0}(1-y_{0}\pm i\varepsilon)}\right|\left|\frac{W_{0}(1-y_{0 }\pm i\varepsilon)}{M_{0}(1-y_{0}\pm i\varepsilon)}\right|\] and the corresponding bounds from Lemma A.6 since \(2m(1-y_{0})\geq m\geq N_{\mu,0}\). Hence, we have \[|W_{0}(1-y_{0}\pm i\varepsilon)-W_{0}(1)| \leq 2m|y_{0}\pm i\varepsilon|\int_{0}^{1}\left|W_{0}^{\prime}(1+ s(-y_{0}\pm i\varepsilon))\right|\mathrm{d}s\] \[\lesssim m|y_{0}\pm i\varepsilon|\left|M_{0}(1-y_{0}\pm i \varepsilon)\right|.\] We next turn our attention to the bounds for \(W_{0}(z-y_{0}\pm i\varepsilon)-W_{0}(z)\). As before, we consider two cases. \(\bullet\)**Case 1.** For \(2y_{0}\leq z\) we estimate \[W_{0}(z-y_{0}\pm i\varepsilon)-W_{0}(z)=\int_{0}^{1}\frac{\mathrm{d}}{\mathrm{ d}s}W_{0}(z+s(-y_{0}\pm i\varepsilon))\mathrm{d}s=(-y_{0}\pm i\varepsilon) \int_{0}^{1}W_{0}^{\prime}(z+s(-y_{0}\pm i\varepsilon))\mathrm{d}s.\] From Lemma A.4, \(W_{0}^{\prime}(\zeta)\lesssim m^{\frac{1}{2}}\zeta^{-\frac{1}{2}}\left(1+ \big{|}\log\left(m|\zeta|\right)\big{|}\right)\), and since \(2y_{0}\leq z\), we have that \(s|y_{0}\pm i\varepsilon|\leq|z+s(-y_{0}\pm i\varepsilon)|\), for all \(s\in(0,1)\). Thus, \[|W_{0}(z-y_{0}\pm i\varepsilon)-W_{0}(z)| \lesssim m^{\frac{1}{2}}|y_{0}\pm i\varepsilon|\int_{0}^{1}\frac {1+\big{|}\log\left(m|z+s(-y_{0}\pm i\varepsilon)|\right)\big{|}}{|z+s(-y_{0} \pm i\varepsilon)|^{\frac{1}{2}}}\mathrm{d}s\] \[\lesssim m^{\frac{1}{2}}|y_{0}\pm i\varepsilon|^{\frac{1}{2}}\int _{0}^{1}\frac{1+\big{|}\log\left(ms|y_{0}\pm i\varepsilon|\right)\big{|}}{s^{ \frac{1}{2}}}\mathrm{d}s\] \[\lesssim\left(m|y_{0}\pm i\varepsilon|\right)^{\frac{1}{2}}\left( 1+\big{|}\log\left(m|y_{0}\pm i\varepsilon|\right)\big{|}\right).\] \(\bullet\)**Case 2.** For \(z\leq 2y_{0}\), we directly estimate using Lemma A.4, that is, \[|M_{-}(z-y_{0}\pm i\varepsilon)-M_{-}(z)| \leq|M_{-}(z-y_{0}\pm i\varepsilon)|+|M_{-}(z)|\] \[\lesssim m^{\frac{1}{2}}|z-y_{0}\pm i\varepsilon|^{\frac{1}{2}} \left(1+\big{|}\log\left(m|z-y_{0}\pm i\varepsilon|\right)\big{|}\right)\] \[\quad+m^{\frac{1}{2}}z^{\frac{1}{2}}\left(1+\big{|}\log\left(m|y_ {0}\pm i\varepsilon|\right)\big{|}\right)\] \[\lesssim(m|y_{0}\pm i\varepsilon|)^{\frac{1}{2}}\left(1+\big{|} \log\left(m|y_{0}\pm i\varepsilon|\right)\big{|}\right).\] From this localised estimates, we are able to obtain bounds on \(f_{m,\varepsilon}(z,y_{0})\) for \(mz\geq 6\beta\). For this, we first deduce useful estimates on \(\phi_{u,m}^{\pm}(z)\). 
**Lemma 8.8**.: _The function \(\phi_{u,m}(z)=W_{0}(1)M_{0}(z)-M_{0}(1)W_{0}(z)\) satisfies_ \[\Delta_{m}\phi_{u,m}^{\pm}(z)+\beta^{2}\frac{\phi_{u,m}^{\pm}(z)}{z^{2}}=0, \qquad\phi_{u,m}^{\pm}(1)=0.\] _For \(J_{6}=\{z\in[0,1]:mz\leq 6\beta\}\) and \(J_{6}^{c}=[0,1]\setminus J_{6}\), it is such that_ \[\|\phi_{u,m}\|_{L^{\infty}(J)}\lesssim(mz)^{\frac{1}{2}}\left(1+\big{|}\log{(mz )}\big{|}\right)|M_{0}(1)|,\quad\|\phi_{u,m}\|_{L_{y}^{2}(J)}\lesssim m^{- \frac{1}{2}}|M_{0}(1)|\] _and_ \[\|\partial_{z}\phi_{u,m}\|_{L_{y}^{2}(J^{c})}+m\|\phi_{u,m}\|_{L_{y}^{2}(J^{c}) }\lesssim m^{\frac{1}{2}}|M_{+}(1)|\] Proof.: The statement for \(\|\phi_{u,m}\|_{L^{\infty}(J_{6})}\) follows from the asymptotic expansions for small argument given by Lemma A.3. For the integral estimates estimate, note that the change of variables \(u=mz\) provides \[\|\phi_{u,m}\|_{L_{y}^{2}(J_{6})}^{2}\lesssim\int_{0}^{\frac{6 \beta}{m}}(mz)\left(1+\big{|}\log{(mz)}\big{|}\right)^{2}|M_{0}(1)|^{2}\mathrm{ d}z =\frac{|M_{0}(1)|^{2}}{m}\int_{0}^{6\beta}\eta\left(1+|\log{(\eta) }|\right)^{2}\mathrm{d}\eta\] \[\lesssim\frac{|M_{0}(1)|^{2}}{m}.\] The result follows using Lemma 7.1. The following proposition obtains \(L^{2}(0,1)\) bounds on \(f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\) from the localized bounds of Proposition 8.7 and the above Lemma. We omit its proof due to its similarity to the one for Proposition 8.6. **Proposition 8.9**.: _We have that_ \[\|f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\|_{L^{2}(0,1)}\lesssim m^{-\frac{1}{2} }(m|y_{0}-i\varepsilon|)^{\frac{1}{2}}\left(1+\big{|}\log{(m|y_{0}\pm i \varepsilon|)}\big{|}\right)|M_{0}(1-y_{0}\pm i\varepsilon)|\] We are now able to compare \(\|f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\|_{L^{2}(0,1)}\) and the Wronskian \(|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})|\). **Lemma 8.10**.: _Let \(y_{0}\in[0,1]\) such that \(my_{0}\leq 3\beta\). There exists \(\varepsilon_{0}>0\) such that_ \[\frac{\|f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\|_{L^{2}(0,1)}}{|\mathcal{W}_{m, \varepsilon}^{\pm}(y_{0})|}\lesssim m^{-\frac{3}{2}}.\] Proof.: Let \(N_{0}>0\) be given by Lemma A.10, \(\delta_{1}>0\) be given by Lemma A.11 and \(\delta_{2}>0\) be given by Lemma A.13. From Lemma 5.2, there holds the following, \(\bullet\)**Case 1.** For \(m\leq N_{0}\) and \(2m|y_{0}\pm i\varepsilon|\leq\delta_{1}\), we have \[|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})|\gtrsim m|M_{0}(1-y_{0}\pm i \varepsilon)||W_{0}(y_{0}\mp i\varepsilon)|.\] where further \[\left|\frac{M_{0}(y_{0}\pm i\varepsilon)}{W_{0}(y_{0}\pm i\varepsilon)}\right| \leq\frac{1}{2\sqrt{\pi}}.\] Now, from Lemma A.13, if \(\delta_{1}\leq\delta_{2}\), then, \[|2m(y_{0}\pm i\varepsilon)|^{\frac{1}{2}}\left(1+\big{|}\log{(2m|y_{0}\pm i \varepsilon|)}\big{|}\right)\lesssim|W_{0}(y_{0}\pm i\varepsilon)|,\] and the conclusion follows. On the other hand, for \(\delta_{2}\leq 2m|y_{0}\pm i\varepsilon|\leq\delta_{1}\), we have that \[m\frac{\|f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\|_{L^{2}}}{|\mathcal{W}_{m, \varepsilon}^{\pm}(y_{0})|}\lesssim m^{-\frac{1}{2}}\frac{(m|y_{0}\pm i \varepsilon|)^{\frac{1}{2}}}{M_{0}(y_{0}\pm i\varepsilon)}\lesssim m^{-\frac{1 }{2}},\] due to Lemma A.14. \(\bullet\)**Case 2.** For \(m\leq N_{0}\) and \(\delta_{1}\leq 2my_{0}\leq N_{0}\), we have now \[|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})|\gtrsim m|M_{0}(1-y_{0}\pm i \varepsilon)||M_{0}(y_{0}\mp i\varepsilon)|,\] the conclusion follows using Lemma A.14 and the fact that \(\left(1+\big{|}\log{|2m(y_{0}\pm i\varepsilon)|}\big{|}\right)\lesssim 1\). 
\(\bullet\)**Case 3.** For \(m\geq N_{0}\), and \(2m|y_{0}\pm i\varepsilon|\leq\delta_{1}\), we have \[|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})|\gtrsim m|M_{0}(1-y_{0}\pm i \varepsilon)||W_{0}(y_{0}\mp i\varepsilon)|.\] and also \[\left|\frac{M_{0}(y_{0}\pm i\varepsilon)}{W_{0}(y_{0}\pm i\varepsilon)}\right|\leq \frac{1}{2\sqrt{\pi}},\] we proceed as in Case 1, we omit the details. \(\bullet\)**Case 4.** For \(m\geq N_{0}\) and \(\delta_{1}\leq 2my_{0}\leq N_{0}\), we have \[|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})|\gtrsim m|M_{0}(1-y_{0}\pm i \varepsilon)||M_{0}(y_{0}\mp i\varepsilon)|,\] we proceed as in Case 2, we omit the details. We are now in position to prove Proposition 8.3. Proof of Proposition 8.3.: For \(my_{0}\geq 3\beta\) we appeal to Proposition 7.2 to obtain the desired bound. On the other hand, for \(my_{0}\leq 3\beta\) let us recall that we can write \[\partial_{y}\varphi_{m,\varepsilon}^{\pm}(0,y_{0})=\int_{0}^{1}\partial_{y} \mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0},z)\left(F_{m}(z)+\frac{y_{0}\mp i \varepsilon}{\beta^{2}}\Delta_{m}\omega_{m}^{0}(z)\right)\mathrm{d}z\] For \(\beta^{2}\neq 1/4\), it is straightforward to see from Proposition 7.2 that \[\left|\frac{y_{0}\mp i\varepsilon}{\beta^{2}}\int_{0}^{1}\partial_{y} \mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0},z)\Delta_{m}\omega_{m}^{0}(z) \mathrm{d}z\right|\lesssim\frac{1}{m^{1+\mu}}|y_{0}\pm i\varepsilon|^{\frac{1 }{2}-\mu}\|\omega_{m}^{0}\|_{H_{y}^{2}},\] while, thanks to Proposition 8.6, the lower bounds on the Wronskian from Proposition 4.3 and Lemma A.16 we bound \[\left|\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm} (0,y_{0},z)F_{m}^{\pm}(z,0)\mathrm{d}z\right| =\left|4(\mu+i\nu)m\int_{0}^{1}\frac{f_{m,\varepsilon}^{\pm}(z,y_ {0})}{\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})}F_{m}^{\pm}(z,0)\mathrm{d}z\right|\] \[\lesssim m\frac{\|f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\|_{L_{2}^ {2}}}{|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})|}\|F_{m}^{\pm}(z,0)\|_{L_{z}^ {2}}\] \[\lesssim m^{-\frac{1}{2}}\|F_{m}\|_{L^{2}}.\] Similarly, for \(\beta^{2}=1/4\), using again Proposition 7.2, \[\left|\frac{y_{0}\mp i\varepsilon}{\beta^{2}}\int_{0}^{1}\partial _{y}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0},z)\Delta_{m}\omega_{m}^{0}(z) \mathrm{d}z\right| \lesssim\frac{1}{m}|y_{0}\pm i\varepsilon|^{\frac{1}{2}}\left(1+ \left|\log\left(m|y_{0}\pm i\varepsilon|\right)\right.\right|\right)\|\omega_ {m}^{0}\|_{H_{y}^{2}}\] \[\lesssim m^{-\frac{3}{2}}\|\omega_{m}^{0}\|_{H_{y}^{2}},\] while, thanks to Lemma 8.10 we have \[\left|\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm} (0,y_{0},z)F_{m}^{\pm}(z,0)\mathrm{d}z\right| =\left|4(\mu+i\nu)m\int_{0}^{1}\frac{f_{m,\varepsilon}^{\pm}(z, y_{0})}{\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})}F_{m}^{\pm}(z,0)\mathrm{d}z\right|\] \[\lesssim m\frac{\|f_{m,\varepsilon}^{\pm}(\cdot,y_{0})\|_{L_{z}^ {2}}}{|\mathcal{W}_{m,\varepsilon}^{\pm}(y_{0})|}\|F_{m}^{\pm}(z,0)\|_{L_{z}^ {2}}\] \[\lesssim m^{-\frac{1}{2}}\left\|F_{m}\right\|_{L^{2}}.\] With this, the proof is finished. We next provide pointwise localized bounds on \(\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},0)\). **Proposition 8.11**.: _Let \(\beta^{2}\neq 1/4\) and \(0\leq\varepsilon\leq\varepsilon_{0}\). Let \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\). 
Then,_ * _If_ \(my_{0}\leq 3\beta\)_, we have_ \[\left|\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},0)\right|\lesssim m^{-\frac{1 }{2}}y_{0}^{-\frac{1}{2}+\mu}|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}-\mu}Q_{0, m}.\] * _If_ \(my_{0}\geq 3\beta\)_, we have_ \[\left|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},0)\right|\lesssim(m|y-y_{0}\pm i \varepsilon|)^{\frac{1}{2}-\mu}Q_{0,m}.\] **Proposition 8.12**.: _Let \(\beta^{2}=1/4\) and \(0\leq\varepsilon\leq\varepsilon_{0}.\) Let \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\). Then,_ \[\left|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},0)\right|\lesssim m^{-\frac{ 1}{2}}y_{0}^{-\frac{1}{2}}|y-y_{0}\pm i\varepsilon|^{\frac{1}{2}}\left(1+\big{|} \log{(m|y-y_{0}\pm i\varepsilon|)}\big{|}\right)Q_{0,m}.\] * _If_ \(my_{0}\geq 3\beta\)_, we have_ \[\left|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},0)\right|\lesssim(m|y-y_{0} \pm i\varepsilon|)^{\frac{1}{2}}\left(1+\big{|}\log{(m|y-y_{0}\pm i\varepsilon |)}\big{|}\right)Q_{0,m}.\] With the above pointwise bounds, one deduces the following integral estimates for all \(\beta^{2}>0\). **Corollary 8.13**.: _Let \(y_{0}\in[0,1]\). Then,_ * _If_ \(my_{0}\leq 3\beta\)_, we have_ \[\|\mathcal{B}^{\pm}_{m,\varepsilon}(\cdot,y_{0},0)\|_{L^{2}_{y}(J^{ \varepsilon}_{m}\cap J_{3})}\lesssim m^{-\frac{3}{2}}y_{0}^{-\frac{1}{2}}Q_{0,m}.\] * _If_ \(my_{0}\geq 3\beta\)_, we have_ \[\|\mathcal{B}^{\pm}_{m,\varepsilon}(\cdot,y_{0},0)\|_{L^{2}_{y}(J^{ \varepsilon}_{m}\cap J_{3})}\lesssim m^{-\frac{1}{2}}Q_{0,m}.\] The two propositions are a consequence of Propositions 8.1, 8.2, the lower bounds from Lemma A.9, A.14, A.18 and the pointwise estimates on \(\partial_{y}\varphi^{\pm}_{m,\varepsilon}(0,y_{0})\) from Proposition 8.3. ### Boundary pointwise estimates on Green's Function's derivatives This subsection estimates derivatives of the Green's function \(\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\) evaluated at the boundary values \(y,z\in\{0,1\}\). **Lemma 8.14**.: _We have that for \(my_{0}\geq 3\beta\),_ \[|\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(0,y_{0},0)|\lesssim m\] _while for \(my_{0}\leq 3\beta\),_ \[|\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(0,y_{0},0)|\lesssim \frac{1}{y_{0}}.\] Proof.: For \(\beta^{2}>1/4\), it follows from the proof of Proposition 4.1 and Lemma A.9. For \(\beta^{2}=1/4\), it follows from Lemma 5.2, Lemma A.13 and Lemma A.14. For \(b^{2}<1/4\), it follows from the proof of Proposition 4.3 and Lemma A.18. **Lemma 8.15**.: _For \(m\geq 6\beta\), we have_ * _If_ \(my_{0}\leq 3\beta\)_, then_ \(m(1-y_{0})\geq 3\beta\) _and_ \[|\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(1,y_{0},0)|\lesssim m ^{\frac{1}{2}+\mu}y_{0}^{-\frac{1}{2}+\mu}.\] * _If_ \(m(1-y_{0})\leq 3\beta\)_, then_ \(my_{0}\geq 3\beta\) _and_ \[|\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(1,y_{0},0)|\lesssim m.\] _On the other hand, for \(m\leq 6\beta\), we have that_ * _If_ \(y_{0}\leq\frac{1}{2}\)_, then_ \[|\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(1,y_{0},0)|\lesssim y _{0}^{-\frac{1}{2}+\mu}.\] * _If_ \(1-y_{0}\leq\frac{1}{2}\)_, then_ \[|\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(1,y_{0},0)|\lesssim(1 -y_{0})^{-\frac{1}{2}+\mu}.\] Proof.: It is straightforward the lower bounds on the Wronskian and Lemmas A.9, A.18, A.13, for \(\beta^{2}>1/4\), \(\beta^{2}=1/4\) and \(\beta^{2}<1/4\), respectively. 
**Lemma 8.16**.: _The same bounds as in Lemma 8.15 hold for \(|\partial_{y}\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0},1)|\)._ **Lemma 8.17**.: _We have that for \(m(1-y_{0})\geq 3\beta\),_ \[|\partial_{y}\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(1,y_{0},1)|\lesssim m\] _while for \(m(1-y_{0})\leq 3\beta\),_ \[|\partial_{y}\partial_{z}\mathcal{G}_{m,\varepsilon}^{\pm}(1,y_{0},1)|\lesssim \frac{1}{1-y_{0}}.\] ### Estimates for second order boundary terms In what follows, we shall consider only the case \(m\geq 6\beta\), since the setting \(m\leq 6\beta\) is analogous and easier. With the pointwise derivatives bounds obtained in the four previous lemmas, we are now in position to estimate \[\left(\partial_{y}+\partial_{y_{0}}\right)^{2}\varphi_{m, \varepsilon}^{\pm}(y,y_{0}) =-\frac{2}{\beta^{2}}\partial_{y}\omega_{m}^{0}(y)-F_{m, \varepsilon}^{\pm}(y,y_{0})+2\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}( y,y_{0},z)\Big{]}_{z=0}^{z=1}\] \[\quad+2\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm} (y,y_{0},z)\left(\partial_{z}F_{m,\varepsilon}^{\pm}(z,y_{0})+\partial_{y_{0 }}F_{m,\varepsilon}^{\pm}(z,y_{0})\right)\mathrm{d}z\] for both \(y=0\) and \(y=1\). For simplicity we only discuss the case \(y=0\); the results and proofs are the same for the case \(y=1\). **Proposition 8.18**.: _Let \(m\geq 6\beta\) and \(y_{0}\in[0,1]\). Then, we have that_ * _For_ \(my_{0}\leq 3\beta\) _and_ \(\beta^{2}\neq 1/4\)_,_ * _For_ \(my_{0}\leq 3\beta\) _and_ \(\beta^{2}=1/4\)_,_ * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\leq 3\beta\)_,_ * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\geq 3\beta\)_,_ * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\geq 3\beta\)_,_ * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\geq 3\beta\)_,_ \[|\left(\partial_{y}+\partial_{y_{0}}\right)^{2}\varphi_{m,\varepsilon}^{\pm} (y,y_{0})|\lesssim mQ_{1,m}.\] Proof.: For \(y=0\), we can estimate \[|\partial_{y}\omega_{m}^{0}(0)|+|F_{m,\varepsilon}^{\pm}(0,y_{0})|\lesssim Q _{1,m}\] thanks to the Sobolev embedding. On the other hand, from Proposition 7.2, for \(my_{0}\leq 3\beta\) and \(\beta^{2}\neq 1/4\) we have \[\left|\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0},z) \left(\partial_{z}F_{m,\varepsilon}^{\pm}(z,y_{0})+\partial_{y_{0}}F_{m, \varepsilon}^{\pm}(z,y_{0})\right)\mathrm{d}z\right|\lesssim\frac{1}{m^{1+\mu }}|y_{0}\pm i\varepsilon|^{-\frac{1}{2}-\mu}Q_{1,m},\] while for \(my_{0}\leq 3\beta\) and \(\beta^{2}=1/4\) we have \[\left|\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0},z) \left(\partial_{z}+\partial_{y_{0}}\right)F_{m,\varepsilon}^{\pm}(z,y_{0}) \mathrm{d}z\right|\lesssim\frac{1}{m}|y_{0}\pm i\varepsilon|^{-\frac{1}{2}} \left(1+\big{|}\log\left(m|y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{1,m},\] whereas for \(my_{0}\geq 3\beta\), we have \[\left|\int_{0}^{1}\partial_{y}\mathcal{G}_{m,\varepsilon}^{\pm}(0,y_{0},z) \left(\partial_{z}+\partial_{y_{0}}\right)F_{m,\varepsilon}^{\pm}(z,y_{0}) \mathrm{d}z\right|\lesssim Q_{1,m}.\] Now, for the solid boundary terms \(\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},z)\Big{|}_{z=0}^{z=1}\), we shall use Proposition 8.3 as well as Lemmas 8.14-8.17. 
Indeed, for \(\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(0,y_{0},0)=\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(0,y_{0},0)\partial_{y}\varphi^{\pm}_{m,\varepsilon}(0,y_{0})\), Lemma 8.14 provides * For \(my_{0}\leq 3\beta\), we have that \(|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(0,y_{0},0)|\lesssim m^{-\frac{1}{2}}y_{0}^{-1}Q_{0,m}\). * For \(my_{0}\geq 3\beta\), we have that \(|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(0,y_{0},0)|\lesssim mQ_{0,m}\). Similarly, for \(\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(0,y_{0},1)=\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(0,y_{0},1)\partial_{y}\varphi^{\pm}_{m,\varepsilon}(1,y_{0})\), we have from Lemma 8.16 that * For \(my_{0}\leq 3\beta\), then \(|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(0,y_{0},1)|\lesssim m^{\frac{1}{2}+\mu}y_{0}^{-\frac{1}{2}+\mu}Q_{0,m}\). * For \(my_{0}\geq 3\beta\), we further distinguish * For \(m(1-y_{0})\leq 3\beta\), then \(|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(0,y_{0},1)|\lesssim m^{\mu}(1-y_{0})^{-\frac{1}{2}+\mu}Q_{0,m}\). * For \(m(1-y_{0})\geq 3\beta\), then \(|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(0,y_{0},1)|\lesssim mQ_{0,m}\). As a result, for \(my_{0}\leq 3\beta\) and \(\beta^{2}\neq 1/4\) we have that \[|\left(\partial_{y}+\partial_{y_{0}}\right)^{2}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim Q_{1,m}+\frac{1}{m^{1+\mu}}y_{0}^{-\frac{1}{2}-\mu}Q_{1,m}+m^{-\frac{1}{2}}y_{0}^{-1}Q_{0,m}+m^{\frac{1}{2}+\mu}y_{0}^{-\frac{1}{2}+\mu}Q_{0,m}\lesssim\left(1+\frac{1}{m^{1+\mu}}y_{0}^{-\frac{1}{2}-\mu}+m^{-\frac{1}{2}}y_{0}^{-1}+m^{\frac{1}{2}}y_{0}^{-\frac{1}{2}}\right)Q_{1,m},\] while for \(my_{0}\leq 3\beta\) and \(\beta^{2}=1/4\) we have that \[|\left(\partial_{y}+\partial_{y_{0}}\right)^{2}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim Q_{1,m}+\frac{1}{m}y_{0}^{-\frac{1}{2}}\left(1+\big{|}\log\left(my_{0}\right)\big{|}\right)Q_{1,m}+m^{-\frac{1}{2}}y_{0}^{-1}Q_{0,m}+m^{\frac{1}{2}}y_{0}^{-\frac{1}{2}}Q_{0,m}\lesssim\left(1+\frac{1}{m}y_{0}^{-\frac{1}{2}}\left(1+\big{|}\log\left(my_{0}\right)\big{|}\right)+m^{-\frac{1}{2}}y_{0}^{-1}+m^{\frac{1}{2}}y_{0}^{-\frac{1}{2}}\right)Q_{1,m}.\] Similarly, for \(my_{0}\geq 3\beta\) and \(m(1-y_{0})\leq 3\beta\) we conclude \[|\left(\partial_{y}+\partial_{y_{0}}\right)^{2}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim Q_{1,m}+mQ_{0,m}+(1-y_{0})^{-\frac{1}{2}}Q_{0,m}\lesssim\left(m+(1-y_{0})^{-\frac{1}{2}}\right)Q_{1,m},\] whereas for \(my_{0}\geq 3\beta\) and \(m(1-y_{0})\geq 3\beta\) we obtain \[|\left(\partial_{y}+\partial_{y_{0}}\right)^{2}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim Q_{1,m}+mQ_{0,m}\lesssim mQ_{1,m}\] and the proof is finished. We next present estimates for \[\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(y,y_{0},z)=\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\left(\partial_{y}+\partial_{y_{0}}\right)^{2}\varphi^{\pm}_{m,\varepsilon}(y,y_{0}),\] at \(z=0\). As before, we only obtain these bounds under the assumption that \(m|y-y_{0}|\leq 3\beta\). We state them for \(z=0\); the result for \(z=1\) is the same and thus we omit the details. The next two Propositions are a direct consequence of Propositions 8.1, 8.2 and 8.18, as well as Lemmas A.9, A.13, A.14 and A.18, depending on \(\beta^{2}\). **Proposition 8.19**.: _Let \(\beta^{2}\neq 1/4\) and \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\)._
Then,_ * _For_ \(my_{0}\leq 3\beta\)_,_ \[|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(y,y_{0},0)|\lesssim m^{-\frac{1}{2}}y_{0}^{-\frac{3}{2}}(m|y-y_{0}\pm i\varepsilon|)^{\frac{1}{2}-\mu}Q_{1,m},\] * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\leq 3\beta\)_,_ \[|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(y,y_{0},0)|\lesssim\left(m+(1-y_{0})^{-\frac{1}{2}}\right)(m|y-y_{0}\pm i\varepsilon|)^{\frac{1}{2}-\mu}Q_{1,m},\] * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\geq 3\beta\)_,_ \[|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(y,y_{0},0)|\lesssim m(m|y-y_{0}\pm i\varepsilon|)^{\frac{1}{2}-\mu}Q_{1,m}.\] **Proposition 8.20**.: _Let \(\beta^{2}=1/4\) and \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\). Then,_ * _For_ \(my_{0}\leq 3\beta\)_,_ \[|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(y,y_{0},0)|\lesssim m^{-\frac{1}{2}}y_{0}^{-\frac{3}{2}}(m|y-y_{0}\pm i\varepsilon|)^{\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{1,m},\] * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\leq 3\beta\)_,_ \[|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(y,y_{0},0)|\lesssim\left(m+(1-y_{0})^{-\frac{1}{2}}\right)(m|y-y_{0}\pm i\varepsilon|)^{\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{1,m},\] * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\geq 3\beta\)_,_ \[|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(y,y_{0},0)|\lesssim m(m|y-y_{0}\pm i\varepsilon|)^{\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{1,m}.\] We upgrade the above pointwise estimates to integral bounds for \(y\in[0,1]\) such that \(2\beta\leq m|y-y_{0}|\leq 3\beta\), which will be useful later on. **Corollary 8.21**.: _Let \(0\leq\varepsilon\leq\varepsilon_{0}\leq\frac{\beta}{m}\). Then,_ * _For_ \(my_{0}\leq 3\beta\)_,_ \[\|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(\cdot,y_{0},0)\|_{L^{2}_{y}(J^{\varepsilon}_{2}\cap J_{3})}\lesssim m^{-1}y_{0}^{-\frac{3}{2}}Q_{1,m},\] * _For_ \(my_{0}\geq 3\beta\) _and_ \(m(1-y_{0})\leq 3\beta\)_,_ \[\|\widetilde{\mathcal{B}^{\pm}_{m,\varepsilon}}(\cdot,y_{0},0)\|_{L^{2}_{y}(J^{\varepsilon}_{2}\cap J_{3})}\lesssim m^{-\frac{1}{2}}Q_{1,m}.\] We finish the section with estimates for \[\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},z)=\partial_{y}\partial_{z}\mathcal{G}^{\pm}_{m,\varepsilon}(y,y_{0},z)\partial_{z}\varphi^{\pm}_{m,\varepsilon}(z,y_{0})\] for \(z=0\) and \(z=1\) under the localizing assumption that \(m|y-y_{0}|\leq 3\beta\). The next two results follow directly from Proposition 8.1, Proposition 8.2 and Proposition 8.3. **Proposition 8.22**.: _Let \(\beta^{2}\neq 1/4\). Let \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\). Then,_ * _For_ \(my_{0}\leq 3\beta\)_, we have that_ \[|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},0)|\lesssim m^{-\frac{1}{2}}y_{0}^{-\frac{1}{2}+\mu}|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}-\mu}Q_{0,m}.\] * _For_ \(my_{0}\geq 3\beta\)_, we have that_ \[|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},0)|\lesssim m^{\frac{1}{2}-\mu}|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}-\mu}Q_{0,m}.\] **Proposition 8.23**.: _Let \(\beta^{2}=1/4\) and \(y,y_{0}\in[0,1]\) such that \(m|y-y_{0}|\leq 3\beta\)._
Then,_ * _For_ \(my_{0}\leq 3\beta\)_, we have that_ \[|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},0)|\lesssim m^{-\frac{1}{2}}y_{0}^{-\frac{1}{2}}|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{0,m}.\] * _For_ \(my_{0}\geq 3\beta\)_, we have that_ \[|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},0)|\lesssim m^{\frac{1}{2}}|y-y_{0}\pm i\varepsilon|^{-\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{0,m}.\] Finally, we state the integral bounds that are deduced from the above estimates. **Corollary 8.24**.: _Let \(0\leq\varepsilon\leq\varepsilon_{0}\). Then,_ * _For_ \(my_{0}\leq 3\beta\)_, we have that_ \[\|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(\cdot,y_{0},0)\|_{L^{2}_{y}(J^{\varepsilon}_{2}\cap J_{3})}\lesssim(my_{0})^{-\frac{1}{2}}Q_{0,m}.\] * _For_ \(my_{0}\geq 3\beta\)_, we have that_ \[\|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(\cdot,y_{0},0)\|_{L^{2}_{y}(J^{\varepsilon}_{2}\cap J_{3})}\lesssim m^{\frac{1}{2}}Q_{0,m}.\] ## 9. Estimates for the Generalized Stream-functions This section is devoted to obtaining estimates for the generalized stream-functions \(\psi^{\pm}_{m,\varepsilon}(y,y_{0})\) and densities \(\rho^{\pm}_{m,\varepsilon}(y,y_{0})\), as well as for some of their derivatives. Moreover, we define \[\widetilde{\psi_{m}}(y,y_{0}):=\lim_{\varepsilon\to 0}\psi^{-}_{m,\varepsilon}(y,y_{0})-\psi^{+}_{m,\varepsilon}(y,y_{0}),\] and similarly \[\widetilde{\rho_{m}}(y,y_{0}):=\lim_{\varepsilon\to 0}\rho^{-}_{m,\varepsilon}(y,y_{0})-\rho^{+}_{m,\varepsilon}(y,y_{0}).\] We state the following proposition regarding estimates for \(\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})\) and \(\partial^{2}_{y,y_{0}}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})\), from which one obtains the corresponding estimates for \(\partial_{y_{0}}\widetilde{\psi_{m}}(y,y_{0})\) and \(\partial^{2}_{y,y_{0}}\widetilde{\psi_{m}}(y,y_{0})\), respectively.
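**Remark** (our addition). The transfer of the \(\varphi^{\pm}_{m,\varepsilon}\)-bounds to the jumps \(\widetilde{\psi_{m}}\) and \(\widetilde{\rho_{m}}\) is simply the triangle inequality applied to estimates that are uniform in \(\varepsilon\): assuming, as the statements below indicate, that \(\psi^{\pm}_{m,\varepsilon}\) obeys the same \(\varepsilon\)-uniform bounds as \(\varphi^{\pm}_{m,\varepsilon}\), one has, schematically, \[|\partial_{y_{0}}\widetilde{\psi_{m}}(y,y_{0})|\leq\limsup_{\varepsilon\to 0}\Big{(}|\partial_{y_{0}}\psi^{-}_{m,\varepsilon}(y,y_{0})|+|\partial_{y_{0}}\psi^{+}_{m,\varepsilon}(y,y_{0})|\Big{)},\] and likewise for \(\widetilde{\rho_{m}}\) and for higher derivatives. This is why the propositions of this section record bounds for \(\varphi^{\pm}_{m,\varepsilon}\) and then note that they also apply to the corresponding jumps.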
**Proposition 9.1**.: _The following holds true._ * _For_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}\neq 1/4\)_, we have that_ \[|\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu}Q_{1,m}+\sum_{\sigma=0,1}|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|,\] _and_ \[|\partial^{2}_{y,y_{0}}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{3}{2}-\mu}Q_{1,m}+\sum_{\sigma=0,1}|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|,\] _where the bounds for_ \(|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|\) _and_ \(|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|\) _for_ \(\sigma=0,1\) _are given in Propositions_ 8.11 _and_ 8.22_, respectively._ * _For_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}=1/4\)_, we have that_ \[|\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim\frac{1}{m}|y-y_{0}|^{-\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{1,m}+\sum_{\sigma=0,1}|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|,\] _and_ \[|\partial^{2}_{y,y_{0}}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim\frac{1}{m}|y-y_{0}|^{-\frac{3}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{1,m}+\sum_{\sigma=0,1}|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|,\] _where the bounds for_ \(|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|\) _and_ \(|\partial_{y}\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|\) _for_ \(\sigma=0,1\) _are given in Propositions_ 8.12 _and_ 8.23_, respectively._ * _For_ \(m|y-y_{0}|\geq 3\beta\)_, we have that_ \[\|\partial_{y}\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}\|^{2}_{L^{2}_{y}(J^{\varepsilon}_{3})}+m^{2}\|\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}\|^{2}_{L^{2}_{y}(J^{\varepsilon}_{3})}\lesssim Q^{2}_{1,m}+m^{2}\sum_{\sigma=0,1}\|\mathcal{B}^{\pm}_{m,\varepsilon}(\cdot,y_{0},\sigma)\|^{2}_{L^{2}_{y}(J^{\varepsilon}_{2}\cap J_{3})},\] _where the bounds for_ \(\|\mathcal{B}^{\pm}_{m,\varepsilon}(\cdot,y_{0},\sigma)\|_{L^{2}_{y}(J^{\varepsilon}_{2}\cap J_{3})}\) _are given in Corollary_ 8.13_._ _In particular, these bounds also apply to \(\partial_{y_{0}}\widetilde{\psi_{m}}(y,y_{0})\) and \(\partial^{2}_{y,y_{0}}\widetilde{\psi_{m}}(y,y_{0})\)._ Proof.: Both \((i)\) and \((ii)\) follow from Proposition 3.5 and Proposition 7.2. As for \((iii)\), we argue assuming that \(\beta^{2}\neq 1/4\). Taking a \(\partial_{y_{0}}\) derivative in (2.11), we see that \[\text{TG}^{\pm}_{m,\varepsilon}\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}=\partial_{y_{0}}F^{\pm}_{m,\varepsilon}-2\beta^{2}\frac{1}{(y-y_{0}\pm i\varepsilon)^{3}}\varphi^{\pm}_{m,\varepsilon}.\] In order to use Lemma 7.1, we need to control \(\big{\|}\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}\big{\|}_{L^{2}_{y}(J^{\varepsilon}_{2}\cap J_{3})}\) and \(\big{\|}\frac{1}{(y-y_{0}\pm i\varepsilon)^{3}}\varphi^{\pm}_{m,\varepsilon}\big{\|}_{L^{2}_{y}(J^{\varepsilon}_{2})}\).
We begin by estimating \[\int_{y_{0}+\frac{2\beta}{m}}^{y_{0}+\frac{3\beta}{m}}|\partial_{y_{0}}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|^{2}\mathrm{d}y\lesssim\sum_{\sigma=0,1}\int_{y_{0}+\frac{2\beta}{m}}^{y_{0}+\frac{3\beta}{m}}|\mathcal{B}^{\pm}_{m,\varepsilon}(y,y_{0},\sigma)|^{2}\mathrm{d}y+Q^{2}_{1,m}\int_{y_{0}+\frac{2\beta}{m}}^{y_{0}+\frac{3\beta}{m}}\frac{1}{m^{2+2\mu}}|y-y_{0}|^{-1-2\mu}\mathrm{d}y.\] Now, for \(\beta^{2}<1/4\) we have \(\mu\neq 0\) and \[Q_{1,m}^{2}\int_{y_{0}+2\frac{\beta}{m}}^{y_{0}+3\frac{\beta}{m}}\frac{1}{m^{2+2\mu}}|y-y_{0}|^{-1-2\mu}\mathrm{d}y\lesssim\frac{1}{m^{2}}Q_{1,m}^{2},\] while for \(\beta^{2}>1/4\), we have \(\mu=0\) and therefore the bound still becomes \[Q_{1,m}^{2}\int_{y_{0}+2\frac{\beta}{m}}^{y_{0}+3\frac{\beta}{m}}\frac{1}{m^{2}}|y-y_{0}|^{-1}\mathrm{d}y\lesssim\frac{1}{m^{2}}Q_{1,m}^{2}\left(\log\left(\frac{3\beta}{m}\right)-\log\left(\frac{2\beta}{m}\right)\right)\lesssim\frac{1}{m^{2}}Q_{1,m}^{2}.\] Therefore, we conclude that \[\left\|\partial_{y_{0}}\varphi_{m,\varepsilon}^{\pm}\right\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\lesssim\frac{1}{m}Q_{1,m}+\sum_{\sigma=0,1}\|\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}.\] On the other hand, we use Proposition 7.2 applied to \(\varphi_{m,\varepsilon}^{\pm}(y,y_{0})\) to estimate \[\left\|\frac{1}{(y-y_{0}\pm i\varepsilon)^{3}}\varphi_{m,\varepsilon}^{\pm}\right\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}^{2}\lesssim\int_{2\beta\leq m|y-y_{0}|\leq 3\beta}\frac{1}{m^{2+2\mu}}|y-y_{0}\pm i\varepsilon|^{-5-2\mu}Q_{0,m}^{2}\mathrm{d}y\lesssim m^{2}Q_{0,m}^{2}\] and \[\left\|\frac{1}{(y-y_{0}\pm i\varepsilon)^{3}}\varphi_{m,\varepsilon}^{\pm}\right\|_{L_{y}^{2}(J_{2}^{\varepsilon})}^{2}\lesssim m^{6}\|\varphi_{m,\varepsilon}^{\pm}\|_{L_{y}^{2}(J_{2}^{\varepsilon})}^{2}\lesssim m^{2}Q_{0,m}^{2}.\] The result follows from applying Lemma 7.1. The next proposition gives bounds on \(\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}\) and therefore also on \(\partial_{y_{0}}^{2}\widetilde{\psi_{m}}(y,y_{0})\).
**Proposition 9.2**.: _The following holds true._ * _For_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}\neq 1/4\)_, we have that_ \[|\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}(y,y_{0})|\lesssim\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{3}{2}-\mu}Q_{2,m}+\sum_{\sigma=0,1}\Big{(}|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|+|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(y,y_{0},\sigma)|\Big{)},\] _where the bounds for_ \(|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|\) _and_ \(|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(y,y_{0},\sigma)|\) _are given in Propositions_ 8.22 _and_ 8.19_, respectively._ * _For_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}=1/4\)_, we have that_ \[|\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}(y,y_{0})|\lesssim\frac{1}{m}|y-y_{0}|^{-\frac{3}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{2,m}+\sum_{\sigma=0,1}\Big{(}|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|+|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(y,y_{0},\sigma)|\Big{)},\] _where the bounds for_ \(|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|\) _and_ \(|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(y,y_{0},\sigma)|\) _are given in Propositions_ 8.23 _and_ 8.20_, respectively._ * _For_ \(m|y-y_{0}|\geq 3\beta\)_, we have that_ \[\|\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}\|_{L_{y}^{2}(J_{3}^{\varepsilon})}\lesssim Q_{2,m}+\sum_{\sigma=0,1}\Big{(}\|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}+\|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\Big{)}+m\sum_{\sigma=0,1}\|\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})},\] _where the estimates for_ \(\|\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\)_,_ \(\|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\) _and_ \(\|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\) _are given in Corollaries_ 8.13_,_ 8.24 _and_ 8.21_, respectively._ _In particular, these bounds also apply to \(\partial_{y_{0}}^{2}\widetilde{\psi_{m}}(y,y_{0})\)._ Proof.: The first two statements of the proposition follow from Proposition 3.5 and Proposition 7.2. For the third part of the proposition, we argue for \(\beta^{2}\neq 1/4\). Taking \(\partial_{y_{0}}^{2}\) derivatives in (2.11), we see that \(\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}(y,y_{0})\) solves \[\operatorname{TG}_{m,\varepsilon}^{\pm}\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}=\partial_{y_{0}}^{2}F_{m,\varepsilon}^{\pm}-4\beta^{2}\frac{\partial_{y_{0}}\varphi_{m,\varepsilon}^{\pm}}{(y-y_{0}\pm i\varepsilon)^{3}}-6\beta^{2}\frac{\varphi_{m,\varepsilon}^{\pm}}{(y-y_{0}\pm i\varepsilon)^{4}}.\] In order to use Lemma 7.1, we need to bound \(\|\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\), as well as \(\|\frac{\partial_{y_{0}}\varphi_{m,\varepsilon}^{\pm}}{(y-y_{0}\pm i\varepsilon)^{3}}\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\) and \(\|\frac{\varphi_{m,\varepsilon}^{\pm}}{(y-y_{0}\pm i\varepsilon)^{4}}\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\).
We estimate \[\int_{2\beta\leq m|y-y_{0}|\leq 3\beta}|\partial_{y_{0}}^{2}\varphi_{m,\varepsilon}^{\pm}(y,y_{0})|^{2}\mathrm{d}y\lesssim Q_{2,m}\int_{2\beta\leq m|y-y_{0}|\leq 3\beta}\frac{1}{m^{2+2\mu}}|y-y_{0}|^{-3-2\mu}\mathrm{d}y+\sum_{\sigma=0,1}\Big{(}\|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}+\|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\Big{)}\lesssim Q_{2,m}+\sum_{\sigma=0,1}\Big{(}\|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}+\|\widetilde{\mathcal{B}_{m,\varepsilon}^{\pm}}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\Big{)}.\] Similarly, from Proposition 9.1 we have that \[\left\|\frac{\partial_{y_{0}}\varphi_{m,\varepsilon}^{\pm}}{(y-y_{0}\pm i\varepsilon)^{3}}\right\|_{L_{y}^{2}(J_{2}^{\varepsilon})}\lesssim m^{3}\|\partial_{y_{0}}\varphi_{m,\varepsilon}^{\pm}\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\lesssim m^{2}Q_{1,m}+m^{3}\sum_{\sigma=0,1}\|\mathcal{B}_{m,\varepsilon}^{\pm}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})},\] while using Proposition 7.2 we obtain \[\left\|\frac{\varphi_{m,\varepsilon}^{\pm}}{(y-y_{0}\pm i\varepsilon)^{4}}\right\|_{L_{y}^{2}(J_{2}^{\varepsilon})}\lesssim m^{4}\|\varphi_{m,\varepsilon}^{\pm}\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\lesssim m^{2}Q_{0,m}.\] With this, the proof is complete. We finish the subsection by providing the estimates for \(\widetilde{\rho_{m}}\) and \(\partial_{y_{0}}\widetilde{\rho_{m}}\). **Proposition 9.3**.: _The following holds true._ * _For_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}\neq 1/4\)_, we have that_ \[|\widetilde{\rho_{m}}(y,y_{0})|\lesssim\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu}Q_{0,m}\] _and_ \[|\partial_{y_{0}}\widetilde{\rho_{m}}(y,y_{0})|\lesssim\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{3}{2}-\mu}Q_{1,m}+\sup_{0\leq\varepsilon\leq\varepsilon_{0}}\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}|y-y_{0}+\kappa i\varepsilon|^{-1}|\mathcal{B}_{m,\varepsilon}^{\kappa}(y,y_{0},\sigma)|,\] _where the bounds for_ \(|\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|\) _for_ \(\sigma=0,1\) _are given in Proposition_ 8.11_._ * _For_ \(m|y-y_{0}|\leq 3\beta\) _and_ \(\beta^{2}=1/4\)_, we have that_ \[|\widetilde{\rho_{m}}(y,y_{0})|\lesssim\frac{1}{m}|y-y_{0}|^{-\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{0,m}\] _and_ \[|\partial_{y_{0}}\widetilde{\rho_{m}}(y,y_{0})|\lesssim\frac{1}{m}|y-y_{0}|^{-\frac{3}{2}}\left(1+\big{|}\log\left(m|y-y_{0}\pm i\varepsilon|\right)\big{|}\right)Q_{1,m}+\sup_{0\leq\varepsilon\leq\varepsilon_{0}}\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}|y-y_{0}+\kappa i\varepsilon|^{-1}|\mathcal{B}_{m,\varepsilon}^{\kappa}(y,y_{0},\sigma)|,\] _where the bounds for \(|\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|\) for \(\sigma=0,1\) are given in Proposition 8.12._ * _For_ \(m|y-y_{0}|\geq 3\beta\)_, we have that_ \[\|\widetilde{\rho_{m}}\|_{L_{y}^{2}(J_{3}^{\varepsilon})}\lesssim\frac{1}{m}Q_{0,m}\] _and_ \[\|\partial_{y_{0}}\widetilde{\rho_{m}}\|_{L_{y}^{2}(J_{3}^{\varepsilon})}\lesssim Q_{1,m}+m\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\|\mathcal{B}_{m,\varepsilon}^{\kappa}(\cdot,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}.\] Proof.: The bounds follow directly from Proposition 3.6, Proposition 7.2 and Proposition 9.1.
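**Remark** (heuristic summary; our addition). Near the critical layer, i.e. for \(m|y-y_{0}|\leq 3\beta\) and \(\beta^{2}\neq 1/4\), the estimates of Propositions 7.2, 9.1 and 9.2 can be remembered as follows: each \(\partial_{y_{0}}\) derivative costs one factor of \(|y-y_{0}|^{-1}\), plus boundary contributions, \[|\partial_{y_{0}}^{j}\varphi^{\pm}_{m,\varepsilon}(y,y_{0})|\lesssim\frac{1}{m^{1+\mu}}|y-y_{0}|^{\frac{1}{2}-\mu-j}Q_{j,m}+(\text{boundary terms}),\qquad j=0,1,2,\] while away from the critical layer each derivative costs at most a factor of \(m\). This counting is what drives the time-decay rates obtained in the next section.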
## 10. Time-decay estimates This section is devoted to the proof of the time-decay rates for the stream function \(\psi_{m}(t,y)\), its derivative \(\partial_{y}\psi_{m}(t,y)\) and the density \(\rho_{m}(t,y)\). Let us recall that we can write \[\psi_{m}(t,y)=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\left(\psi_{m}^{-}(y,y_{0})-\psi_{m}^{+}(y,y_{0})\right)\mathrm{d}y_{0},\] and \[\rho_{m}(t,y)=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\left(\rho_{m}^{-}(y,y_{0})-\rho_{m}^{+}(y,y_{0})\right)\mathrm{d}y_{0}.\] A simple integration by parts provides \[\psi_{m}(t,y)=-\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\left[\mathrm{e}^{-imy_{0}t}\left(\psi_{m,\varepsilon}^{-}(y,y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\right]_{y_{0}=0}^{y_{0}=1}+\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\left(\partial_{y_{0}}\psi_{m,\varepsilon}^{-}(y,y_{0})-\partial_{y_{0}}\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}=\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\left(\partial_{y_{0}}\psi_{m,\varepsilon}^{-}(y,y_{0})-\partial_{y_{0}}\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0},\] where we use Theorem 4 to show that the boundary terms associated to the spectral boundary vanish. Throughout the entire section, let us consider \(\beta^{2}\neq 1/4\), unless we state otherwise. We begin proving the following result. **Proposition 10.1**.: _Let \(t\geq 1\). Then,_ \[\|\psi_{m}(t)\|_{L_{y}^{2}}\lesssim m^{-\frac{3}{2}}t^{-\frac{3}{2}+\mu}Q_{2,m}.\] Proof.: We write \[\psi_{m}(t,y)=\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\int_{0}^{1}\mathrm{e}^{-imy_{0}t}\left(\partial_{y_{0}}\psi_{m,\varepsilon}^{-}(y,y_{0})-\partial_{y_{0}}\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}.\] Let us denote \(\delta_{0}:=\min\left(\frac{3\beta}{m},\frac{1}{2}\right)\) and let \(\delta\in\left(0,\frac{\delta_{0}}{2}\right)\). In particular, we note that \(m\delta\leq 3\beta\), so that \(m\delta\) remains bounded. We shall first show the decay rates for \(\|\psi_{m}(t)\|_{L_{y}^{2}(\delta,1-\delta)}\) and then for \(\|\psi_{m}(t)\|_{L_{y}^{2}(0,\delta)}\) and \(\|\psi_{m}(t)\|_{L_{y}^{2}(1-\delta,1)}\). \(\bullet\)**Step 1.** For \(y\in(\delta,1-\delta)\), we write \[\psi_{m}(t,y)=\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\left(\int_{0}^{y-\frac{\delta}{2}}+\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}+\int_{y+\frac{\delta}{2}}^{1}\right)\mathrm{e}^{-imy_{0}t}\left(\partial_{y_{0}}\psi_{m,\varepsilon}^{-}(y,y_{0})-\partial_{y_{0}}\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}=\mathcal{T}_{1}+\mathcal{T}_{2}+\mathcal{T}_{3},\] and we begin by estimating \(\mathcal{T}_{2}\).
There, we have that \(|y-y_{0}|\leq\frac{\delta}{2}\leq\frac{\delta_{0}}{4}\) and we can use the bounds from Proposition 9.1 to bound \[|\mathcal{T}_{2}|\lesssim\frac{1}{mt}\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu}Q_{1,m}\mathrm{d}y_{0}+\frac{1}{mt}\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}|\mathcal{B}_{m,\varepsilon}^{\kappa}(y,y_{0},\sigma)|\mathrm{d}y_{0}=\mathcal{T}_{2,1}+\mathcal{T}_{2,2}.\] We can integrate directly to obtain \[|\mathcal{T}_{2,1}|\lesssim\frac{1}{m^{2+\mu}t}\delta^{\frac{1}{2}-\mu}Q_{1,m}.\] For \(\mathcal{T}_{2,2}\), for \(\sigma=0\) we decompose \[\frac{1}{mt}\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}|\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|\mathrm{d}y_{0}=\frac{1}{mt}\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}\left(\chi_{y_{0}\leq\frac{3\beta}{m}}+\chi_{y_{0}>\frac{3\beta}{m}}\right)|\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|\mathrm{d}y_{0}=\mathcal{T}_{2,2,1}+\mathcal{T}_{2,2,2}.\] We use Proposition 8.11 to compute \[\mathcal{T}_{2,2,1}\lesssim\frac{1}{mt}\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}m^{-1+\mu}y_{0}^{-\frac{1}{2}+\mu}Q_{0,m}\mathrm{d}y_{0}\lesssim\frac{1}{m^{2-\mu}t}\delta^{\frac{1}{2}+\mu}Q_{0,m}\] and \[\mathcal{T}_{2,2,2}\lesssim\frac{1}{mt}\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}Q_{0,m}\mathrm{d}y_{0}\lesssim\frac{1}{mt}\delta Q_{0,m}.\] The bounds for the terms of \(\mathcal{T}_{2,2}\) for \(\sigma=1\) are the same; we omit the details. We summarize these estimates into \[\|\mathcal{T}_{2}\|_{L_{y}^{2}(\delta,1-\delta)}\lesssim\frac{1}{mt}\left(m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}-\mu}+m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}+\mu}+\delta\right)Q_{1,m}.\] We shall next estimate \(\mathcal{T}_{1}\); the bounds for \(\mathcal{T}_{3}\) are the same and the arguments to prove them are identical. For \(\mathcal{T}_{1}\), note that we can further integrate by parts, \[\mathcal{T}_{1}=-\frac{1}{2\pi i}\frac{1}{m^{2}t^{2}}\lim_{\varepsilon\to 0}\left[\mathrm{e}^{-imy_{0}t}\partial_{y_{0}}\left(\psi_{m,\varepsilon}^{-}(y,y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\right]_{y_{0}=\frac{\delta}{2}}^{y_{0}=y-\frac{\delta}{2}}+\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\int_{0}^{\frac{\delta}{2}}\mathrm{e}^{-imy_{0}t}\partial_{y_{0}}\left(\psi_{m,\varepsilon}^{-}(y,y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}+\frac{1}{2\pi i}\frac{1}{m^{2}t^{2}}\lim_{\varepsilon\to 0}\int_{\frac{\delta}{2}}^{y-\frac{\delta}{2}}\mathrm{e}^{-imy_{0}t}\partial_{y_{0}}^{2}\left(\psi_{m,\varepsilon}^{-}(y,y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}=\mathcal{T}_{1,1}+\mathcal{T}_{1,2}+\mathcal{T}_{1,3}.\] We shall treat each \(\mathcal{T}_{1,i}\), for \(i=1,2,3\), separately. \(\diamond\)_Estimates for \(\mathcal{T}_{1,1}\)._ For the boundary terms of \(\mathcal{T}_{1,1}\), consider first \(y_{0}=y-\frac{\delta}{2}\).
Then, \(|y-y_{0}|=\frac{\delta}{2}\leq\frac{\delta_{0}}{4}\), so that from Proposition 9.1 we have \[\frac{1}{m^{2}t^{2}}\left|\partial_{y_{0}}\widetilde{\psi_{m}}\left(y,y-\tfrac {\delta}{2}\right)\right|\lesssim\frac{1}{m^{2}t^{2}}\left(\frac{1}{m^{1+\mu} }\delta^{-\frac{1}{2}-\mu}Q_{1,m}+\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}} \left|\mathcal{B}_{m,\varepsilon}^{\kappa}\left(y,y-\tfrac{\delta}{2},\sigma \right)\right|\right).\] Now, from Proposition 8.11 we have \[\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\left|\mathcal{B}_{m,\varepsilon}^{ \kappa}\left(y,y-\tfrac{\delta}{2},\sigma\right)\right|\lesssim\left(1+m^{- \frac{1}{2}}(m\delta)^{-\frac{1}{2}+\mu}\right)Q_{0,m},\] since \(y\in(\delta,1-\delta)\) ensures \(y-\frac{\delta}{2}>\frac{\delta}{2}\). For the boundary term \(\mathcal{T}_{1,1}\) associated to \(y_{0}=\frac{\delta}{2}\), since \(y\in(\delta,1-\delta)\), we have that \(1-\frac{\delta}{2}\geq y-y_{0}\geq\frac{\delta}{2}\). Hence, for those \(y\in(\delta,1-\delta)\) such that \(m|y-y_{0}|\leq 3\beta\), we use Proposition 9.1 to pointwise estimate \[\frac{1}{m^{2}t^{2}}\left|\partial_{y_{0}}\widetilde{\psi_{m}}\left(y,\tfrac{ \delta}{2}\right)\right|\lesssim\frac{1}{m^{2}t^{2}}\left(\frac{1}{m^{1+\mu}} \delta^{-\frac{1}{2}-\mu}Q_{1,m}+\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}} \left|\mathcal{B}_{m,\varepsilon}^{\kappa}\left(y,\tfrac{\delta}{2},\sigma \right)\right|\right),\] where we further have from Proposition 8.11 that \[\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\|\mathcal{B}_{m,\varepsilon}^{\pm} \left(y,\tfrac{\delta}{2},0\right)\|_{L_{y}^{2}(J_{3})}\lesssim\left(m^{- \frac{3}{2}}\delta^{-\frac{1}{2}}+m^{-\frac{1}{2}}\right)Q_{0,m}.\] Next, for those \(y\in(\delta,1-\delta)\) such that \(m|y-y_{0}|\geq 3\beta\) we can directly estimate in \(L_{y}^{2}\) using Proposition 9.1 to deduce that \[\frac{1}{m^{2}t^{2}}\left\|\partial_{y_{0}}\widetilde{\psi_{m}}\left(y,\tfrac {\delta}{2}\right)\right\|_{L_{y}^{2}(J_{3}^{\varepsilon})}\lesssim\frac{1}{m ^{2}t^{2}}\left(\frac{1}{m}Q_{1,m}+\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}} \|\mathcal{B}_{m,\varepsilon}^{\kappa}\left(y,\tfrac{\delta}{2},\sigma\right) \|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\right),\] while from Corollary 8.13 we are able to bound \[\|\mathcal{B}_{m,\varepsilon}^{\pm}\left(y,\tfrac{\delta}{2},0\right)\|_{L_{y }^{2}(J_{2}^{\varepsilon}\cap J_{3})}\lesssim m^{-\frac{3}{2}}\delta^{-\frac {1}{2}}Q_{0,m},\quad\|\mathcal{B}_{m,\varepsilon}^{\pm}\left(y,\tfrac{\delta} {2},1\right)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\lesssim m^{-\frac{1 }{2}}Q_{0,m}.\] Therefore, we have \[\|\mathcal{T}_{1,1}\|_{L_{y}^{2}(\delta,1-\delta)}\lesssim\frac{1}{m^{2}t^{2} }\left(1+m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}+m^{-\frac{1}{2}}(m\delta )^{-\frac{1}{2}+\mu}+m^{-\frac{3}{2}}\delta^{-\frac{1}{2}}+m^{-\frac{1}{2}} \right)Q_{0,m}\] This concludes the analysis of \(\mathcal{T}_{1,1}\). 
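Before continuing with \(\mathcal{T}_{1,2}\), let us record the mechanism at play (a heuristic of ours, not part of the argument): each integration by parts in \(y_{0}\) gains a factor \((mt)^{-1}\) but costs one \(\partial_{y_{0}}\) derivative of \(\widetilde{\psi_{m}}\), and by Propositions 9.1 and 9.2 each such derivative worsens the near-critical singularity by a factor of \(|y-y_{0}|^{-1}\). Schematically, up to \(m\)-dependent constants, \[\frac{1}{(mt)^{j}}\big{|}\partial_{y_{0}}^{j}\widetilde{\psi_{m}}(y,y_{0})\big{|}\sim\frac{1}{(mt)^{j}}|y-y_{0}|^{\frac{1}{2}-\mu-j}Q_{j,m}.\] This is why the proof excises a window of width \(\delta\) around \(y_{0}=y\), integrates by parts only away from it, and eventually optimizes at \(\delta\simeq(mt)^{-1}\), where the gain and the loss balance.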
\(\diamond\)_Estimates for \(\mathcal{T}_{1,2}\)._ We begin by splitting \[|\mathcal{T}_{1,2}|\lesssim\frac{1}{mt}\int_{0}^{\frac{\delta}{2}}\left(\chi_ {m|y-y_{0}|\leq 3\beta}+\chi_{m|y-y_{0}|>3\beta}\right)|\partial_{y_{0}} \widetilde{\psi_{m}}(y,y_{0})|\mathrm{d}y_{0}=\mathcal{T}_{1,2,1}+\mathcal{T}_ {1,2,2}\] We use Proposition 9.1 to estimate \[|\mathcal{T}_{1,2,1}|\lesssim\frac{1}{mt}\int_{0}^{\frac{\delta}{2}}\left( \frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu}Q_{1,m}+\chi_{m|y-y_{0}|\leq 3 \beta}\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}|\mathcal{B}_{m,\varepsilon}^{ \pm}(y,y_{0},\sigma)|\right)\mathrm{d}y_{0}\] Now, since \(y\in(\delta,1-\delta)\) and \(y_{0}\leq\frac{\delta}{2}\), we have \(|y-y_{0}|\geq\frac{\delta}{2}\). Hence, \[\int_{0}^{\frac{\delta}{2}}\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu}Q_{ 1,m}\mathrm{d}y_{0}\lesssim\frac{1}{m^{1+\mu}}\delta^{\frac{1}{2}-\mu}Q_{1,m}.\] Moreover, Proposition 8.11 provides \[\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}} \left\|\int_{0}^{\frac{\delta}{2}}\chi_{m|y-y_{0}|\leq 3\beta}| \mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)|\mathrm{d}y_{0}\right\|_{L_{y }^{2}(\delta,1-\delta)}\] \[\lesssim\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\int_{0}^{\frac {\delta}{2}}\left\|\mathcal{B}_{m,\varepsilon}^{\pm}(y,y_{0},\sigma)\right\|_{L _{y}^{2}(J_{3})}\mathrm{d}y_{0}\] \[\lesssim m^{-\frac{1}{2}}\delta^{\frac{1}{2}}\left(m^{-1}+\delta^ {\frac{1}{2}}\right)Q_{0,m}.\] As a result, we are able to bound \[\|\mathcal{T}_{1,2,1}\|_{L_{y}^{2}(\delta,1-\delta)}\lesssim\frac{1}{mt} \left(m^{-1-\mu}\delta^{\frac{1}{2}-\mu}+m^{-\frac{3}{2}}\delta^{\frac{1}{2}}+ m^{-\frac{1}{2}}\delta\right)Q_{1,m}.\] We again use Proposition 9.1 and Corollary 8.13 to estimate \[\|\mathcal{T}_{1,2,2}\|_{L^{2}_{y}(\delta,1-\delta)} \lesssim\frac{1}{mt}\int_{0}^{\frac{\delta}{2}}\|\partial_{y_{0}} \widetilde{\psi_{m}}(y,y_{0})\|_{L^{2}_{y}(J^{c}_{3})}\mathrm{d}y_{0}\] \[\lesssim\frac{1}{mt}\int_{0}^{\frac{\delta}{2}}\left(\frac{1}{m}Q _{1,m}+\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\|\mathcal{B}^{\kappa}_{m, \varepsilon}\left(y,\tfrac{\delta}{2},\sigma\right)\|_{L^{2}_{y}(J^{c}_{2} \cap J_{3})}\right)\mathrm{d}y_{0}\] \[\lesssim\frac{1}{mt}\left(m^{-1}\delta+m^{-\frac{3}{2}}\delta^{ \frac{1}{2}}+m^{-\frac{1}{2}}\delta\right)Q_{1,m}\] so that we can conclude \[\|\mathcal{T}_{1,2}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\frac{1}{mt}\left(m ^{-1-\mu}\delta^{\frac{1}{2}-\mu}+m^{-1}\delta+m^{-\frac{3}{2}}\delta^{\frac{ 1}{2}}+m^{-\frac{1}{2}}\delta\right)Q_{1,m}.\] \(\diamond\) _Estimates for \(\mathcal{T}_{1,3}\)._ We shall split again \[|\mathcal{T}_{1,3}|\lesssim\frac{1}{m^{2}t^{2}}\int_{\frac{\delta}{2}}^{y- \frac{\delta}{2}}\left(\chi_{m|y-y_{0}|\leq 3\beta}+\chi_{m|y-y_{0}|>3\beta} \right)|\partial_{y_{0}}^{2}\widetilde{\psi_{m}}(y,y_{0})|\mathrm{d}y_{0}= \mathcal{T}_{1,3,1}+\mathcal{T}_{1,3,2}.\] Now, we use Proposition 9.2 to estimate \[|\mathcal{T}_{1,3,1}| \lesssim\frac{1}{m^{2}t^{2}}\int_{\frac{\delta}{2}}^{y-\frac{ \delta}{2}}\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{3}{2}-\mu}Q_{2,m}\mathrm{d}y_{0}\] \[\quad+\frac{1}{m^{2}t^{2}}\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\} }\int_{\frac{\delta}{2}}^{y-\frac{\delta}{2}}\chi_{m|y-y_{0}|\leq 3\beta} \Big{(}|\partial_{y}\mathcal{B}^{\kappa}_{m,\varepsilon}(y,y_{0},\sigma)|+| \widetilde{\mathcal{B}^{\kappa}_{m,\varepsilon}}(y,y_{0},\sigma)|\Big{)} \mathrm{d}y_{0}.\] Clearly, since \(y\in(\delta,1-\delta)\), we have that \[\int_{\frac{\delta}{2}}^{y-\frac{\delta}{2}}\frac{1}{m^{1+\mu}}|y-y_{0}|^{- 
\frac{3}{2}-\mu}Q_{2,m}\mathrm{d}y_{0}\lesssim\frac{1}{m^{1+\mu}}\left[(y-y_{0})^{-\frac{1}{2}-\mu}\right]_{y_{0}=\frac{\delta}{2}}^{y_{0}=y-\frac{\delta}{2}}Q_{2,m}\lesssim\frac{1}{m^{1+\mu}}\delta^{-\frac{1}{2}-\mu}Q_{2,m}.\] Similarly, Proposition 8.19 provides \[\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\left\|\int_{\frac{\delta}{2}}^{y-\frac{\delta}{2}}\chi_{m|y-y_{0}|\leq 3\beta}|\widetilde{\mathcal{B}^{\kappa}_{m,\varepsilon}}(y,y_{0},\sigma)|\mathrm{d}y_{0}\right\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\int_{\frac{\delta}{2}}^{1-\frac{\delta}{2}}\left\|\widetilde{\mathcal{B}^{\kappa}_{m,\varepsilon}}(y,y_{0},\sigma)\right\|_{L^{2}_{y}(J_{3})}\mathrm{d}y_{0}\lesssim\left(m^{-1}\delta^{-\frac{1}{2}}+m^{-\frac{1}{2}}\delta^{\frac{1}{2}}+m^{\frac{1}{2}}\right)Q_{1,m},\] while Proposition 8.22 gives \[\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\int_{\frac{\delta}{2}}^{y-\frac{\delta}{2}}\left|\partial_{y}\mathcal{B}^{\kappa}_{m,\varepsilon}(y,y_{0},\sigma)\right|\mathrm{d}y_{0}\lesssim\left(m^{\frac{1}{2}-\mu}+(m\delta)^{\frac{1}{2}-\mu}+m^{-\frac{1}{2}}\delta^{-\frac{1}{2}+\mu}\right)Q_{0,m}.\] Therefore, we have \[\|\mathcal{T}_{1,3,1}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\frac{1}{m^{2}t^{2}}\left(m^{\frac{1}{2}}+m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}+m(m\delta)^{\frac{1}{2}-\mu}+(m\delta)^{-\frac{1}{2}}\right)Q_{2,m}.\] For \(\mathcal{T}_{1,3,2}\), we use the Minkowski inequality, Proposition 9.2 and Corollaries 8.13, 8.21 and 8.24 to estimate \[\|\mathcal{T}_{1,3,2}\|_{L_{y}^{2}(\delta,1-\delta)}\lesssim\frac{1}{m^{2}t^{2}}\int_{\frac{\delta}{2}}^{1-\frac{\delta}{2}}\left(Q_{2,m}+m\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\|\mathcal{B}_{m,\varepsilon}^{\kappa}(y,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\right)\mathrm{d}y_{0}+\frac{1}{m^{2}t^{2}}\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\int_{\frac{\delta}{2}}^{1-\frac{\delta}{2}}\|\partial_{y}\mathcal{B}_{m,\varepsilon}^{\kappa}(y,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\mathrm{d}y_{0}+\frac{1}{m^{2}t^{2}}\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\int_{\frac{\delta}{2}}^{1-\frac{\delta}{2}}\|\widetilde{\mathcal{B}_{m,\varepsilon}^{\kappa}}(y,y_{0},\sigma)\|_{L_{y}^{2}(J_{2}^{\varepsilon}\cap J_{3})}\mathrm{d}y_{0}\lesssim\frac{1}{m^{2}t^{2}}\left(m^{\frac{1}{2}}+m^{-\frac{1}{2}}\delta^{\frac{1}{2}}+m^{-1}\delta^{-\frac{1}{2}}\right)Q_{2,m}.\] Hence, we conclude that \[\|\mathcal{T}_{1,3}\|_{L_{y}^{2}(\delta,1-\delta)}\lesssim\frac{1}{m^{2}t^{2}}\left(m^{\frac{1}{2}}+m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}+m(m\delta)^{\frac{1}{2}-\mu}+m^{-\frac{1}{2}}\delta^{\frac{1}{2}}\right)Q_{2,m}\] and thus \[\|\mathcal{T}_{1}\|_{L_{y}^{2}(\delta,1-\delta)}\lesssim\frac{1}{mt}m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}}Q_{2,m}+\frac{1}{m^{2}t^{2}}\left(m^{\frac{1}{2}}+m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}\right)Q_{2,m}.\] In particular, gathering the estimates for \(\mathcal{T}_{2}\) and \(\mathcal{T}_{1,i}\), for \(i=1,2,3\), we obtain \[\|\psi_{m}(t,y)\|_{L_{y}^{2}(\delta,1-\delta)}\lesssim\frac{1}{m^{2}t}(m\delta)^{\frac{1}{2}-\mu}Q_{2,m}+\frac{1}{m^{2}t^{2}}\left(m^{\frac{1}{2}}+m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}\right)Q_{2,m}.\] \(\bullet\)**Step 2.** For \(y\in(0,\delta)\), we have that \[\psi_{m}(t,y)=\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\left(\int_{0}^{y+\frac{\delta}{2}}+\int_{y+\frac{\delta}{2}}^{1}\right)\mathrm{e}^{-imy_{0}t}\left(\partial_{y_{0}}\psi_{m}^{-}(y,y_{0})-\partial_{y_{0}}\psi_{m}^{+}(y,y_{0})\right)\mathrm{d}y_{0}=\widetilde{\mathcal{T}}_{1}+\widetilde{\mathcal{T}}_{2}.\] One can see that the bounds for \(\widetilde{\mathcal{T}}_{2}\) here are the same as the ones for \(\mathcal{T}_{3}\); since the procedure to obtain them is also the same, we omit the details. On the other hand, for \(\widetilde{\mathcal{T}}_{1}\) we argue as follows. Note that for \(0\leq y_{0}\leq y+\frac{\delta}{2}\), we have that \(|y-y_{0}|\leq\delta\leq\frac{3\beta}{m}\) and therefore we have from Proposition 9.1, \[|\widetilde{\mathcal{T}}_{1}|\lesssim\frac{1}{mt}\int_{0}^{y+\frac{\delta}{2}}\left(\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu}Q_{1,m}+\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\left|\mathcal{B}_{m,\varepsilon}^{\kappa}(y,y_{0},\sigma)\right|\right)\mathrm{d}y_{0}.\] Since \(y\in(0,\delta)\), we trivially have that \[\frac{1}{mt}\int_{0}^{y+\frac{\delta}{2}}\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu}Q_{1,m}\mathrm{d}y_{0}\lesssim\frac{1}{mt}m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}-\mu}Q_{1,m}.\] Similarly, using the bounds from Proposition 8.11, \[\sum_{\sigma=0,1}\sum_{\kappa\in\{+,-\}}\int_{0}^{y+\frac{\delta}{2}}\left|\mathcal{B}_{m,\varepsilon}^{\kappa}(y,y_{0},\sigma)\right|\mathrm{d}y_{0}\lesssim\int_{0}^{y+\frac{\delta}{2}}\left(1+m^{-1+\mu}y_{0}^{-\frac{1}{2}+\mu}\right)Q_{0,m}\,\mathrm{d}y_{0}\lesssim\left(y+\tfrac{\delta}{2}+m^{-1+\mu}\left(y+\tfrac{\delta}{2}\right)^{\frac{1}{2}+\mu}\right)Q_{0,m}.\] As a result, we compute \[\|\widetilde{\mathcal{T}}_{1}\|_{L_{y}^{2}(0,\delta)}\lesssim\frac{1}{mt}\left(m^{-2}(m\delta)^{1-\mu}+\delta^{\frac{3}{2}}+m^{-2}(m\delta)^{1+\mu}\right)Q_{1,m}\lesssim\frac{1}{mt}m^{-\frac{3}{2}}(m\delta)^{1-\mu}Q_{1,m},\] and thus we obtain \[\|\psi_{m}(t)\|_{L_{y}^{2}(0,\delta)}\lesssim\frac{1}{mt}m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}}Q_{2,m}+\frac{1}{m^{2}t^{2}}\left(m^{\frac{1}{2}}+m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}\right)Q_{2,m}.\]
\(\bullet\)**Step 1.** For \(y\in(\delta,1-\delta)\) we shall write \[\partial_{y}\psi_{m}(t,y) =\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\left(\int_{0}^{y- \frac{\delta}{2}}+\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}+\int_{y+\frac {\delta}{2}}^{1}\right)e^{-imy_{0}t}\left(\partial_{y}\psi_{m,\varepsilon}^{-} (y,y_{0})-\partial_{y}\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}\] \[=\mathcal{I}_{1}+\mathcal{I}_{2}+\mathcal{I}_{3}\] We begin by using Proposition 7.2 to bound \[\|\mathcal{I}_{2}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\left\|\int_{y-\frac{ \delta}{2}}^{y+\frac{\delta}{2}}\frac{1}{m^{1+\mu}}|y-y_{0}|^{-\frac{1}{2}-\mu }\mathrm{d}y_{0}\right\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim m^{-\frac{3}{2} }(m\delta)^{\frac{1}{2}-\mu}Q_{0,m}.\] As before, for \(\mathcal{I}_{1}\) we split it into \[\mathcal{I}_{1} =-\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\left[e^{- imy_{0}t}\left(\partial_{y}\psi_{m,\varepsilon}^{-}(y,y_{0})-\partial_{y} \psi_{m,\varepsilon}^{+}(y,y_{0})\right)\right]_{y_{0}=\frac{\delta}{2}}^{y_{0} =y-\frac{\delta}{2}}\] \[\quad+\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{0}^{\frac{ \delta}{2}}\mathrm{e}^{-imy_{0}t}\partial_{y}\left(\psi_{m,\varepsilon}^{-}(y, y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}\] \[\quad+\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\int_{ \frac{\delta}{2}}^{y-\frac{\delta}{2}}\mathrm{e}^{-imy_{0}t}\partial_{y,y_{0}} ^{2}\left(\psi_{m,\varepsilon}^{-}(y,y_{0})-\psi_{m,\varepsilon}^{+}(y,y_{0}) \right)\mathrm{d}y_{0}\] \[=\mathcal{I}_{1,1}+\mathcal{I}_{1,2}+\mathcal{I}_{1,3}.\] From Proposition 7.2 we see that \[\|\mathcal{I}_{1,1}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\frac{1}{mt}m^{- \frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}Q_{0,m},\quad\|\mathcal{I}_{1,2}\|_{L^{ 2}_{y}(\delta,1-\delta)}\lesssim m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}-\mu}Q _{0,m}.\] Similarly, from Proposition 9.1, we obtain \[\|\mathcal{I}_{1,3}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\frac{1}{mt}\left( m^{\frac{1}{2}}+(m\delta)^{-\frac{1}{2}-\mu}\right)Q_{1,m}.\] The bounds for \(\mathcal{I}_{3}\) are the same as the ones for \(\mathcal{I}_{1}\), we omit the details. Recovering all terms, we conclude that \[\|\partial_{y}\psi_{m}(t)\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim m^{-\frac{3}{ 2}}(m\delta)^{\frac{1}{2}-\mu}Q_{0,m}+\frac{1}{mt}\left(m^{\frac{1}{2}}+(m \delta)^{-\frac{1}{2}-\mu}\right)Q_{1,m}.\] \(\bullet\)**Step 2.** For \(y\in(0,\delta)\), we shall split now \[\partial_{y}\psi_{m}(t,y)=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\left(\int_{0}^{y+ \frac{\delta}{2}}+\int_{y+\frac{\delta}{2}}^{1}\right)e^{-imy_{0}t}\left( \partial_{y}\psi_{m}^{-}(y,y_{0})-\partial_{y}\psi_{m}^{+}(y,y_{0})\right) \mathrm{d}y_{0}=\widetilde{\mathcal{I}}_{1}+\widetilde{\mathcal{I}}_{2}.\] As before, the bound for \(\widetilde{\mathcal{I}}_{2}\) is the same as the bound for \(\mathcal{I}_{3}\). For \(\widetilde{\mathcal{I}}_{1}\), note that \(|y-y_{0}|\leq\delta\leq\frac{3\beta}{m}\) so that we shall use Proposition 7.2 to find that \[\|\widetilde{\mathcal{I}}_{1}\|_{L^{2}_{y}(0,\delta)}\lesssim m^{-2}(m\delta)^{ 1-\mu}Q_{0,m}.\] Gathering the previous bound, we obtain \[\|\partial_{y}\psi_{m}(t)\|_{L^{2}_{y}}\lesssim m^{-\frac{3}{2}}(m\delta)^{ \frac{1}{2}-\mu}Q_{0,m}+\frac{1}{mt}\left(m^{\frac{1}{2}}+(m\delta)^{-\frac{1} {2}-\mu}\right)Q_{1,m}.\] As before, the conclusion follows for \(\delta=\frac{c_{0}}{mt}\), with \(c_{0}=\frac{1}{1000}\min\left(\beta,1\right)\). We next obtain the decay rates for the perturbed density. 
**Proposition 10.3**.: _Let \(t\geq 1\). Then,_ \[\|\rho_{m}(t)\|_{L^{2}_{y}}\lesssim m^{-\frac{1}{2}}t^{-\frac{1}{2}+\mu}Q_{1,m}.\] Proof.: The proof also follows the same strategy as that of Proposition 10.1; we just present the main ideas and bounds. \(\bullet\)**Step 1.** For \(y\in(\delta,1-\delta)\) we write \[\rho_{m}(t,y)=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\left(\int_{0}^{y-\frac{\delta}{2}}+\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}+\int_{y+\frac{\delta}{2}}^{1}\right)\mathrm{e}^{-imy_{0}t}\left(\rho_{m,\varepsilon}^{-}(y,y_{0})-\rho_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}=\mathcal{S}_{1}+\mathcal{S}_{2}+\mathcal{S}_{3}.\] We use Proposition 9.3 to bound \[\|\mathcal{S}_{2}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}-\mu}Q_{0,m}.\] As before, the bounds for \(\mathcal{S}_{3}\) and \(\mathcal{S}_{1}\), and the manner of obtaining them, are analogous; we just comment on \(\mathcal{S}_{1}\), which we split as follows. \[\mathcal{S}_{1}=-\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\left[\mathrm{e}^{-imy_{0}t}\left(\rho_{m,\varepsilon}^{-}(y,y_{0})-\rho_{m,\varepsilon}^{+}(y,y_{0})\right)\right]_{y_{0}=\frac{\delta}{2}}^{y_{0}=y-\frac{\delta}{2}}+\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\int_{0}^{\frac{\delta}{2}}\mathrm{e}^{-imy_{0}t}\left(\rho_{m,\varepsilon}^{-}(y,y_{0})-\rho_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}+\frac{1}{2\pi i}\frac{1}{imt}\lim_{\varepsilon\to 0}\int_{\frac{\delta}{2}}^{y-\frac{\delta}{2}}\mathrm{e}^{-imy_{0}t}\partial_{y_{0}}\left(\rho_{m,\varepsilon}^{-}(y,y_{0})-\rho_{m,\varepsilon}^{+}(y,y_{0})\right)\mathrm{d}y_{0}=\mathcal{S}_{1,1}+\mathcal{S}_{1,2}+\mathcal{S}_{1,3}.\] From Proposition 9.3 we easily deduce \[\|\mathcal{S}_{1,1}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\frac{1}{mt}m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}-\mu}Q_{0,m},\quad\|\mathcal{S}_{1,2}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}-\mu}Q_{0,m}.\] On the other hand, Proposition 9.3 also yields \[\|\mathcal{S}_{1,3}\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim\frac{1}{mt}\left(m^{\frac{1}{2}}+(m\delta)^{-\frac{1}{2}-\mu}\right)Q_{1,m}.\] Gathering the bounds, we get \[\|\rho_{m}(t)\|_{L^{2}_{y}(\delta,1-\delta)}\lesssim m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}-\mu}Q_{0,m}+\frac{1}{mt}\left(m^{\frac{1}{2}}+(m\delta)^{-\frac{1}{2}-\mu}\right)Q_{1,m}.\] \(\bullet\)**Step 2.** For \(y\in(0,\delta)\) we shall now consider \[\rho_{m}(t,y)=\frac{1}{2\pi i}\lim_{\varepsilon\to 0}\left(\int_{0}^{y+\frac{\delta}{2}}+\int_{y+\frac{\delta}{2}}^{1}\right)\mathrm{e}^{-imy_{0}t}\left(\rho_{m}^{-}(y,y_{0})-\rho_{m}^{+}(y,y_{0})\right)\mathrm{d}y_{0}=\widetilde{\mathcal{S}}_{1}+\widetilde{\mathcal{S}}_{2}.\] The bounds for \(\widetilde{\mathcal{S}_{2}}\) are the same as the ones for \(\mathcal{S}_{3}\); we thus focus on \(\widetilde{\mathcal{S}_{1}}\). From Proposition 9.3, we see that \[\|\widetilde{\mathcal{S}_{1}}\|_{L^{2}_{y}(0,\delta)}\lesssim m^{-2}(m\delta)^{1-\mu}Q_{0,m}.\] With this, it follows that \[\|\rho_{m}(t)\|_{L^{2}_{y}(0,1)}\lesssim m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}-\mu}Q_{0,m}+\frac{1}{mt}\left(m^{\frac{1}{2}}+(m\delta)^{-\frac{1}{2}-\mu}\right)Q_{1,m},\] and thus the Proposition is proved once we choose \(\delta=\frac{c_{0}}{mt}\), with \(c_{0}=\frac{1}{1000}\min\left(\beta,1\right)\). We next prove the inviscid damping decay estimates for the case \(\beta^{2}=1/4\). The precise bounds are recorded in the following proposition.
**Proposition 10.4**.: _Let \(t\geq 1\). Then,_ \[\|\psi_{m}(t)\|_{L^{2}_{y}}\lesssim m^{-\frac{3}{2}}t^{-\frac{3}{2}}(1+\log(t))\left(\|\rho_{m}^{0}\|_{H^{4}_{y}}+\|\omega_{m}^{0}\|_{H^{4}_{y}}\right),\] \[\|\partial_{y}\psi_{m}(t)\|_{L^{2}_{y}}\lesssim m^{-\frac{1}{2}}t^{-\frac{1}{2}}(1+\log(t))\left(\|\rho_{m}^{0}\|_{H^{3}_{y}}+\|\omega_{m}^{0}\|_{H^{3}_{y}}\right),\] \[\|\rho_{m}(t)\|_{L^{2}_{y}}\lesssim m^{-\frac{1}{2}}t^{-\frac{1}{2}}(1+\log(t))\left(\|\rho_{m}^{0}\|_{H^{3}_{y}}+\|\omega_{m}^{0}\|_{H^{3}_{y}}\right).\] Proof.: The proof follows along the same lines as in the case \(\beta^{2}\neq 1/4\); the only difference is the logarithmic singularity present in the bounds of several quantities. For this, we note that for \(m\delta<1\), \[\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}\frac{1}{m}|y-y_{0}|^{-\frac{1}{2}}\left(1+\big{|}\log\left(m|y-y_{0}|\right)\big{|}\right)\mathrm{d}y_{0}\lesssim m^{-\frac{3}{2}}\int_{-m\frac{\delta}{2}}^{m\frac{\delta}{2}}|\eta|^{-\frac{1}{2}}(1-\log|\eta|)\mathrm{d}\eta\lesssim m^{-\frac{3}{2}}(m\delta)^{\frac{1}{2}}\left(1+\big{|}\log\left(m\delta\right)\big{|}\right).\] Here, we have used that, for \(0<m\delta\leq 1\), \[-\int_{0}^{m\delta}\eta^{-\frac{1}{2}}\log(\eta)\mathrm{d}\eta=-\int_{0}^{m\delta}2\partial_{\eta}(\eta^{\frac{1}{2}})\log(\eta)\mathrm{d}\eta=\left[-2\eta^{\frac{1}{2}}\log(\eta)\right]_{\eta=0}^{\eta=m\delta}+2\int_{0}^{m\delta}\eta^{-\frac{1}{2}}\mathrm{d}\eta\lesssim(m\delta)^{\frac{1}{2}}\left(1+\big{|}\log\left(m\delta\right)\big{|}\right).\] The same argument also yields \[\int_{y-\frac{\delta}{2}}^{y+\frac{\delta}{2}}\frac{1}{m}|y-y_{0}|^{-\frac{3}{2}}\left(1+\big{|}\log\left(m|y-y_{0}|\right)\big{|}\right)\mathrm{d}y_{0}\lesssim m^{-\frac{1}{2}}(m\delta)^{-\frac{1}{2}}\left(1+\big{|}\log\left(m\delta\right)\big{|}\right).\] With this, the result follows thanks to the estimates obtained in Propositions 9.1-9.3; we omit the details. Finally, Theorem 1 is a direct consequence of Propositions 10.1-10.4 together with the Parseval identity. ## Appendix A Properties of the Whittaker functions Here we state and prove some properties of the Whittaker functions that are used throughout the paper; we refer to [26] for a complete description of the Whittaker functions. ### Basic definitions and asymptotic expansions For \(\gamma,\zeta\in\mathbb{C}\), the Whittaker function \(M_{0,\gamma}(\zeta)\) is given by \[M_{0,\gamma}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{\frac{1}{2}+\gamma}M\left(\tfrac{1}{2}+\gamma,1+2\gamma,\zeta\right),\quad M(a,b,\zeta)=\sum_{s=0}^{\infty}\frac{(a)_{s}}{(b)_{s}s!}\zeta^{s},\] where \((a)_{s}=a(a+1)(a+2)\ldots(a+s-1)\) denotes the Pochhammer symbol. For \(\gamma=0\), we also introduce \[W_{0,0}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\sqrt{\frac{\zeta}{\pi}}\sum_{s=0}^{\infty}\frac{\left(\tfrac{1}{2}\right)_{s}}{(s!)^{2}}\zeta^{s}\left(2\frac{\Gamma^{\prime}(1+s)}{\Gamma(1+s)}-\frac{\Gamma^{\prime}(\tfrac{1}{2}+s)}{\Gamma(\tfrac{1}{2}+s)}-\log(\zeta)\right),\] where \(\Gamma(x)\) denotes the Gamma function. We recall that \(\mu=\operatorname{Re}\left(\sqrt{1/4-\beta^{2}}\right)\) and \(\nu=\operatorname{Im}\left(\sqrt{1/4-\beta^{2}}\right)\), and set \(\gamma=\mu+i\nu\). We begin by recording some basic properties regarding complex conjugation for \(M_{0,\gamma}(\zeta)\), which can be deduced from the series definitions of \(M_{0,\gamma}\) and \(W_{0,0}\).
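As a quick numerical illustration of the series definition above and of the conjugation identity in Lemma A.1 below, here is a short sketch (our addition, not part of the paper) using the `mpmath` library; the helper `M0` and the sampled values of \(\gamma\) and \(\zeta\) are our own illustrative choices.

```python
# Numerical sanity check (illustrative only; not part of the paper) of the
# series definition of the Whittaker function M_{0,gamma} and of the
# conjugation identity of Lemma A.1(i). Requires mpmath.
import mpmath as mp

mp.mp.dps = 30  # working precision in decimal digits


def M0(gamma, z):
    """M_{0,gamma}(z) = e^{-z/2} * z^{1/2+gamma} * M(1/2+gamma, 1+2*gamma, z)."""
    half = mp.mpf(1) / 2
    return mp.exp(-z / 2) * z ** (half + gamma) * mp.hyp1f1(half + gamma, 1 + 2 * gamma, z)


z = mp.mpc("0.3", "0.2")
tol = mp.mpf(10) ** (-15)

# Agreement with mpmath's built-in Whittaker M function, whitm(k, m, z) = M_{k,m}(z),
# sampling both regimes: gamma = mu real (beta^2 < 1/4) and gamma = i*nu (beta^2 > 1/4).
for gamma in (mp.mpf("0.25"), mp.mpc(0, "0.7")):
    assert mp.almosteq(M0(gamma, z), mp.whitm(0, gamma, z), rel_eps=tol)

# Lemma A.1 (i): for beta^2 > 1/4, M_{0,i*nu}(z) equals the conjugate of
# M_{0,-i*nu}(conj(z)), since conjugation flips both the exponent and the series.
nu = mp.mpf("0.7")
lhs = M0(mp.mpc(0, nu), z)
rhs = mp.conj(M0(mp.mpc(0, -nu), mp.conj(z)))
assert mp.almosteq(lhs, rhs, rel_eps=tol)

print("series definition and Lemma A.1(i) verified numerically at z =", z)
```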
**Lemma A.1**.: _We have the following_ * _For_ \(\beta^{2}>1/4\)_, then_ \(M_{0,i\nu}(\zeta)=\overline{M_{0,-i\nu}\left(\overline{\zeta}\right)}\)_._ * _For_ \(\beta^{2}\leq 1/4\)_, then_ \(M_{0,\mu}(\zeta)=\overline{M_{0,\mu}\left(\overline{\zeta}\right)}\)_. Additionally, for_ \(x\in\mathbb{R}\) _then_ \(M_{0,\mu}(x),W_{0,0}(x)\in\mathbb{R}\)_._ We next state an analytic continuation property, which is key in studying the Wronskian of the Green's function and is directly determined by the analytic continuation of the non-entire term of \(M_{0,\gamma}(\zeta)\), which is \(\zeta^{\frac{1}{2}+\gamma}\), for \(\zeta\in\mathbb{C}\). **Lemma A.2** ([26]).: _Let \(\beta^{2}>0\). Then_ \[M_{0,\gamma}(\zeta\mathrm{e}^{\pm\pi i})=\pm i\mathrm{e}^{\pm\gamma\pi i}M_{0,\gamma}(\zeta),\quad W_{0,0}(\zeta\mathrm{e}^{\pm i\pi})=\sqrt{\pi}M_{0,0}(\zeta)\pm iW_{0,0}(\zeta)\] _for all \(\zeta\in\mathbb{C}\)._ The next result gives a precise description of the asymptotic expansion of \(M_{\pm}(\zeta)\) and its derivatives, for \(\zeta\) in a bounded domain. **Lemma A.3**.: _Let \(\zeta\in\mathbb{C}\). Let \(B_{R}\subset\mathbb{C}\) denote the closed ball of radius \(R>0\) centered at the origin. Then,_ \[M_{0,\pm\gamma}(\zeta)=\zeta^{\frac{1}{2}\pm\gamma}\mathcal{E}_{0,\pm\gamma}(\zeta),\quad M_{0,\pm\gamma}^{\prime}(\zeta)=\zeta^{-\frac{1}{2}\pm\gamma}\mathcal{E}_{1,\pm\gamma}(\zeta),\] _where \(\mathcal{E}_{j,\pm\gamma}\in L^{\infty}(B_{R})\) and \(\|\mathcal{E}_{j,\pm\gamma}\|_{L^{\infty}(B_{R})}\lesssim_{\gamma,R}1\), for all \(j\in\{0,1,2\}\)._ _Moreover, for \(R_{m}:=\frac{R}{2m}\) and \(M_{\pm}(\zeta)=M_{0,\pm\gamma}(2m\zeta)\), let \(B_{R_{m}}\subset\mathbb{C}\) denote the closed ball of radius \(R_{m}\) centered at the origin. We have that_ \[M_{\pm}(\zeta)=\zeta^{\frac{1}{2}\pm\gamma}\mathcal{E}_{m,0,\pm\gamma}(\zeta),\quad M_{\pm}^{\prime}(\zeta)=\zeta^{-\frac{1}{2}\pm\gamma}\mathcal{E}_{m,1,\pm\gamma}(\zeta),\] _where \(\mathcal{E}_{m,j,\pm\gamma}\in L^{\infty}(B_{R_{m}})\) and \(\|\mathcal{E}_{m,j,\pm\gamma}\|_{L^{\infty}(B_{R_{m}})}\lesssim_{\gamma}(2m)^{\frac{1}{2}\pm\mu}\), for all \(j\in\{0,1,2\}\)._ Proof.: Firstly, from [26] we know that \[M_{0,\pm\gamma}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{\frac{1}{2}\pm\gamma}M\left(\frac{1}{2}\pm\gamma,1\pm 2\gamma,\zeta\right)=\zeta^{\frac{1}{2}\pm\gamma}\mathcal{E}_{0,\pm\gamma}(\zeta),\] where \(\mathcal{E}_{0,\pm\gamma}(\zeta)\) is entire and \(\|\mathcal{E}_{0,\pm\gamma}\|_{L^{\infty}(B_{R})}\lesssim_{\gamma,R}1\). On the other hand, note that \[M_{0,\pm\gamma}^{\prime}(\zeta)=-\frac{1}{2}M_{0,\pm\gamma}(\zeta)+\left(\frac{1}{2}\pm\gamma\right)\frac{M_{0,\pm\gamma}(\zeta)}{\zeta}+\frac{1}{2}\zeta^{-\frac{1}{2}}M_{-\frac{1}{2},\frac{1}{2}\pm\gamma}(\zeta),\] (A.1) where further \[M_{-\frac{1}{2},\frac{1}{2}\pm\gamma}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{1\pm\gamma}M\left(\frac{3}{2}\pm\gamma,2\pm 2\gamma,\zeta\right)=\zeta^{1\pm\gamma}\mathcal{H}_{\pm\gamma}(\zeta),\] with \(\mathcal{H}_{\pm\gamma}(\zeta)\) entire and thus uniformly bounded in \(B_{R}\). Hence, \[M^{\prime}_{0,\pm\gamma}(\zeta)=\zeta^{-\frac{1}{2}\pm\gamma}\left(\left(\frac{1}{2}\pm\gamma\right)\mathcal{E}_{0,\pm\gamma}(\zeta)+\frac{1}{2}\zeta\left(\mathcal{H}_{\pm\gamma}(\zeta)-\mathcal{E}_{0,\pm\gamma}(\zeta)\right)\right)=\zeta^{-\frac{1}{2}\pm\gamma}\mathcal{E}_{1,\pm\gamma}(\zeta),\] with \(\|\mathcal{E}_{1,\pm\gamma}(\zeta)\|_{L^{\infty}(B_{R})}\lesssim_{\gamma,R}1\).
The formulas and bounds for \(M_{\pm}(\zeta)=M_{0,\pm\gamma}(2m\zeta)\) and its derivatives follow from those for \(M_{0,\pm\gamma}\), the chain rule and the observation that \(2m\zeta\in B_{R}\) provided that \(\zeta\in B_{R_{m}}\). **Lemma A.4**.: _Let \(\beta^{2}=1/4\) and \(\zeta\in\mathbb{C}\). Let \(B_{R}\subset\mathbb{C}\) denote the closed ball of radius \(R>0\) centered at the origin. Then,_ \[W_{0,0}(\zeta)=\zeta^{\frac{1}{2}}\big{(}\mathcal{E}_{0,1}(\zeta)-\log(\zeta)\mathcal{E}_{0,2}(\zeta)\big{)},\quad W^{\prime}_{0,0}(\zeta)=\zeta^{-\frac{1}{2}}\big{(}\mathcal{E}_{1,1}(\zeta)-\log(\zeta)\mathcal{E}_{1,2}(\zeta)\big{)},\] _where \(\mathcal{E}_{j,k}(\zeta)\) are entire functions in \(\mathbb{C}\) and \(\|\mathcal{E}_{j,k}\|_{L^{\infty}(B_{R})}\lesssim 1\), for \(j=0,1\) and \(k=1,2\)._ Proof.: We begin with noting that \(W_{0,0}(2\zeta)=\sqrt{\frac{2\zeta}{\pi}}K_{0}(\zeta)\), where \(K_{0}(\cdot)\) is the modified Bessel function of second kind of order 0. Moreover, we have that \[K_{0}(\zeta)=-\left(\ln\left(\frac{\zeta}{2}\right)+\varsigma\right)I_{0}(\zeta)+2\sum_{k=1}^{\infty}\frac{I_{2k}(\zeta)}{k},\] where \[I_{2k}(\zeta)=\left(\frac{\zeta}{2}\right)^{2k}\sum_{j\geq 0}\frac{\left(\frac{\zeta^{2}}{4}\right)^{j}}{j!(2k+j)!}.\] Here, \(I_{j}(\zeta)\) denotes the modified Bessel function of first kind of order \(j\in\mathbb{N}\). In particular, one observes that \(|I_{2k}(\zeta)|\leq I_{2k}(|\zeta|)\). Additionally, since \(\cosh(|\zeta|)=I_{0}(|\zeta|)+2\sum_{k=1}^{\infty}I_{2k}(|\zeta|)\), see [26], we can bound \[\left|2\sum_{k=1}^{\infty}\frac{I_{2k}(\zeta)}{k}\right|\leq 2\sum_{k=1}^{\infty}I_{2k}(|\zeta|)=\cosh(|\zeta|)-I_{0}(|\zeta|)<\cosh(|\zeta|).\] Therefore, since \(I_{j}(\zeta)\) is analytic in \(\mathbb{C}\) for all \(j\in\mathbb{N}\) and \(\frac{1}{2}\zeta\in B_{R}\), we can write \[W_{0,0}(\zeta)=\zeta^{\frac{1}{2}}\big{(}\mathcal{E}_{0,1}(\zeta)-\log(\zeta)\mathcal{E}_{0,2}(\zeta)\big{)},\] where \[\mathcal{E}_{0,1}(\zeta)=\left(\log(2)-\varsigma\right)I_{0}(\zeta)+2\sum_{k=1}^{+\infty}\frac{I_{2k}(\zeta)}{k},\quad\mathcal{E}_{0,2}(\zeta)=I_{0}(\zeta)\] and they are such that \(\|\mathcal{E}_{0,j}(\zeta)\|_{L^{\infty}(B_{R})}\lesssim 1\), for \(j=1,2\). For \(W^{\prime}_{0,0}(\zeta)\), note that \(W^{\prime}_{0,0}(\zeta)=\frac{1}{2\sqrt{\pi\zeta}}\left(K_{0}(\zeta/2)+\zeta K^{\prime}_{0}(\zeta/2)\right)\). As before, we can write \[K^{\prime}_{0}(\zeta)=-K_{1}(\zeta)=-\left[\frac{1}{\zeta}I_{0}(\zeta)+\left(\log\left(\frac{1}{2}\zeta\right)+\varsigma-1\right)I_{1}(\zeta)-\sum_{k\geq 1}\frac{(1+2k)}{k(1+k)}I_{1+2k}(\zeta)\right].\] Since \(\sinh(\zeta)=I_{1}(\zeta)+2\sum_{k\geq 1}I_{1+2k}(\zeta)\), confer [26], we bound \[\left|\sum_{k\geq 1}\frac{(1+2k)}{k(1+k)}I_{1+2k}(\zeta)\right|\leq\sinh(|\zeta|)-I_{1}(|\zeta|)\leq\sinh(|\zeta|),\] and we conclude the existence of two entire functions \(\mathcal{E}_{1,1}(\zeta)\) and \(\mathcal{E}_{1,2}(\zeta)\) such that \(\|\mathcal{E}_{1,j}(\zeta)\|_{L^{\infty}(B_{R})}\lesssim 1\), for \(j=1,2\), and for which \[W^{\prime}_{0,0}(\zeta)=\zeta^{-\frac{1}{2}}\big{(}\mathcal{E}_{1,1}(\zeta)-\log(\zeta)\mathcal{E}_{1,2}(\zeta)\big{)}.\] ### Lower bounds for Whittaker functions The next lemma shows that there are no zeroes of \(M_{+}(x)\), for any \(x\in(0,\infty)\). **Lemma A.5**.: _Let \(x>0\)._
We have the following._ * _For_ \(\beta^{2}\leq 1/4\)_, then_ \(M_{0,\mu}(x)\) _is monotone increasing and_ \[M_{0,\mu}(x)>x^{\frac{1}{2}+\mu},\quad M\left(\tfrac{1}{2}+\mu,1+2\mu,x\right) \geq e^{\frac{1}{2}x}.\] * _For_ \(\beta^{2}>1/4\)_, then_ \(|M_{0,i\nu}(x)|\) _is monotone increasing and_ \[x|\Gamma(1+i\nu)|^{2}\frac{\sinh(\nu\pi)}{\nu\pi}\leq|M_{0,i\nu}(x)|^{2}\leq x \cosh(x)|\Gamma(1+i\nu)|^{2}\frac{\sinh(\nu\pi)}{\nu\pi},\] _with also_ \[\big{|}M\left(\tfrac{1}{2}+i\nu,1+2i\nu,x\right)\big{|}\geq\mathrm{e}^{\frac{ 1}{2}x}|\Gamma(1+i\nu)|\sqrt{\frac{\sinh(\nu\pi)}{\nu\pi}}.\] Proof.: From [26], we have \[M_{0,\gamma}\left(2x\right)=2^{2\gamma+\frac{1}{2}}\Gamma\left(1+\gamma \right)\sqrt{x}I_{\gamma}\left(x\right).\] For \(\beta^{2}\leq 1/4\), we have \(\gamma=\mu\) and the conclusion is straightforward, since we can use the power series representation of \(I_{\mu}(x)\) to obtain \[M_{0,\mu}(2x)>(2x)^{\frac{1}{2}+\mu}.\] On the other hand, for \(\beta^{2}>1/4\), we have \(\gamma=i\nu\) and \[M_{0,i\nu}\left(2x\right)=2^{2i\nu+\frac{1}{2}}\Gamma\left(1+i\nu\right)\sqrt{ x}I_{i\nu}\left(x\right).\] Therefore, \[|M_{0,i\nu}(2x)|^{2}=2x|\Gamma(1+i\nu)|^{2}I_{i\nu}(x)I_{-i\nu}(x)=2x|\Gamma(1 +i\nu)|^{2}\frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}I_{0}(2x\cos\theta)\cosh(2\nu \theta)\mathrm{d}\theta\] The upper and lower bound follow from the fact that \(1\leq I_{0}(x)\leq\cosh(x)\), for all \(x\geq 0\). See [26] for the product formula for \(I_{i\nu}(x)I_{-i\nu}(x)\). ### Growth bounds and comparison estimates for \(\beta^{2}>1/4\) In this subsection we treat the case \(\beta^{2}>1/4\), so that \(\mu=0\) and \(\nu=\sqrt{\beta^{2}-1/4}\). **Lemma A.6**.: _Denote \(a:=\frac{1}{2}+i\nu\) and \(b:=2a\). Then, there exists \(C>0\) and \(N_{\nu,0}>0\) such that_ \[\mathrm{e}^{-\frac{1}{8}\nu\pi}\mathrm{e}^{\frac{1}{2}\mathrm{Re}\zeta}\leq \left|\frac{\Gamma(a)}{\Gamma(b)}M_{+}(\zeta)\right|\leq\mathrm{e}^{\frac{1}{8 }\nu\pi}\mathrm{e}^{\frac{1}{2}\mathrm{Re}\zeta},\] _and_ \[\left|\frac{M_{\pm}^{\prime}(\zeta)}{M_{\pm}(\zeta)}-\frac{1}{2}\right|\leq \frac{1}{4},\] _for all \(\mathrm{Re}\zeta\geq N_{\nu,0}\)._ Proof.: Let \(\zeta\in\mathbb{C}\). We recall that \[M_{+}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{a}M(a,b,\zeta)=\mathrm{e}^ {-\frac{1}{2}\zeta}\zeta^{a}\frac{\Gamma(b)}{\Gamma(a)}\left(\mathrm{e}^{-i \pi a}U(a,b,\zeta)+\mathrm{e}^{i\pi a}\mathrm{e}^{\zeta}U(a,b,\mathrm{e}^{i \pi}\zeta)\right).\] Moreover, we have that \(U(a,b,\zeta)=\zeta^{-a}+\mathcal{E}_{1}(\zeta)\), where further \[|\zeta^{a}\mathcal{E}_{1}(\zeta)|\leq\frac{2\beta^{2}}{|\zeta|}\mathrm{e}^{ \frac{2\beta^{2}}{|\zeta|}}.\] In the sequel, we write \(x:=\frac{2\beta^{2}}{|\zeta|}\). Therefore, we can write \[M_{+}(\zeta)=\frac{\Gamma(b)}{\Gamma(a)}\mathrm{e}^{\frac{1}{2}\zeta}\left(\left[ 1+(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)\right]+ \mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a}\left[1+\zeta^{a}\mathcal{E}_{1}(\zeta )\right]\right)\] (A.2) We shall focus on obtaining upper and lower bound estimates for \[\left|\left[1+(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi} \zeta)\right]+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a}\left[1+\zeta^{a} \mathcal{E}_{1}(\zeta)\right]\right|\] when \(\mathrm{Re}\zeta\) is large. To this end, we note that \(|\zeta^{a}\mathcal{E}_{1}(\zeta)|\leq 2x\), for \(x\leq\frac{1}{2}\). Moreover, \[|1+\zeta^{a}\mathcal{E}_{1}(\zeta)|\leq\mathrm{e}^{\nu\frac{\pi}{16}},\] provided that \(x\leq\frac{1}{2}\min\left\{1,\mathrm{e}^{\nu\frac{\pi}{16}}-1\right\}\). 
Similarly, we also have that \[1+\mathrm{e}^{-\mathrm{Re}\zeta}\mathrm{e}^{\nu\pi}\leq\mathrm{e}^{\nu\frac{\pi}{16}},\] for all \(\mathrm{Re}\zeta>\nu\pi-\log(\mathrm{e}^{\frac{\nu\pi}{16}}-1)\). Hence, \[\left|\left[1+(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)\right]+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a}\left[1+\zeta^{a}\mathcal{E}_{1}(\zeta)\right]\right|\leq\mathrm{e}^{\frac{\nu\pi}{8}}.\] On the other hand, for \(x\leq\min\left\{\frac{1}{4}\left(1-\mathrm{e}^{-\frac{1}{8}\nu\pi}\right),\frac{1}{2}\right\}\), we have that \[|\zeta^{a}\mathcal{E}_{1}(\zeta)|\leq\frac{1}{2}\left(1-\mathrm{e}^{-\nu\frac{\pi}{8}}\right),\] and also \[\left|\mathrm{e}^{-\zeta}\mathrm{e}^{\nu\pi}\left(1+\zeta^{a}\mathcal{E}_{1}(\zeta)\right)\right|\leq\frac{1}{2}\left(1-\mathrm{e}^{-\nu\frac{\pi}{8}}\right),\] provided that \(\mathrm{Re}\zeta>\nu\pi+\log\left(\frac{4}{1-\mathrm{e}^{-\frac{1}{8}\nu\pi}}\right)\). Therefore, we can lower bound \[\left|\left[1+(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)\right]+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a}\left[1+\zeta^{a}\mathcal{E}_{1}(\zeta)\right]\right|\geq\mathrm{e}^{-\frac{\nu\pi}{8}}.\] We choose \(N_{\nu,0}>0\) so that all the above conditions are satisfied when \(\mathrm{Re}\zeta\geq N_{\nu,0}\). For the second part of the lemma, we take a \(\frac{\mathrm{d}}{\mathrm{d}\zeta}\) derivative in (A.2) to obtain \[M_{+}^{\prime}(\zeta) =\frac{1}{2}M_{+}(\zeta)+\frac{\Gamma(b)}{\Gamma(a)}\mathrm{e}^{\frac{1}{2}\zeta}\left(\frac{a}{\zeta}(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{i\pi}(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}^{\prime}(\mathrm{e}^{i\pi}\zeta)\right)\] \[\quad+\frac{\Gamma(b)}{\Gamma(a)}\mathrm{e}^{-\frac{1}{2}\zeta}\mathrm{e}^{i\pi a}\left(\frac{a}{\zeta}\zeta^{a}\mathcal{E}_{1}(\zeta)+\zeta^{a}\mathcal{E}_{1}^{\prime}(\zeta)-1-\zeta^{a}\mathcal{E}_{1}(\zeta)\right).\] Since \(|\zeta^{a}\mathcal{E}_{1}(\zeta)|\leq\frac{2\beta^{2}}{|\zeta|}\mathrm{e}^{\frac{2\beta^{2}}{|\zeta|}}\) and \(|\zeta^{a}\mathcal{E}_{1}^{\prime}(\zeta)|\leq\frac{4\beta^{2}}{|\zeta|}\mathrm{e}^{\frac{2\beta^{2}}{|\zeta|}}\), confer [26], we find that \[\left|\frac{a}{\zeta}(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{i\pi}(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}^{\prime}(\mathrm{e}^{i\pi}\zeta)\right|\leq\left(\frac{|a|}{|\zeta|}+2\right)\frac{2\beta^{2}}{|\zeta|}\mathrm{e}^{\frac{2\beta^{2}}{|\zeta|}}\leq 6x,\] and \[\left|\mathrm{e}^{i\pi a}\left(\frac{a}{\zeta}\zeta^{a}\mathcal{E}_{1}(\zeta)+\zeta^{a}\mathcal{E}_{1}^{\prime}(\zeta)-1-\zeta^{a}\mathcal{E}_{1}(\zeta)\right)\right|\leq\left(1+\left(\frac{|a|}{|\zeta|}+3\right)\frac{2\beta^{2}}{|\zeta|}\mathrm{e}^{\frac{2\beta^{2}}{|\zeta|}}\right)\leq 5,\] for \(|\zeta|\geq|a|\) and \(x\leq\frac{1}{2}\).
Therefore, \[\left|\frac{M_{+}^{\prime}(\zeta)}{M_{+}(\zeta)}-\frac{1}{2}\right| \leq\left|\frac{\frac{a}{\zeta}(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{i\pi}(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}^{\prime}(\mathrm{e}^{i\pi}\zeta)}{1+(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a}\left(1+\zeta^{a}\mathcal{E}_{1}(\zeta)\right)}\right|\] \[\quad+\mathrm{e}^{-\mathrm{Re}(\zeta)}\left|\frac{\mathrm{e}^{i\pi a}\left(\frac{a}{\zeta}\zeta^{a}\mathcal{E}_{1}(\zeta)+\zeta^{a}\mathcal{E}_{1}^{\prime}(\zeta)-1-\zeta^{a}\mathcal{E}_{1}(\zeta)\right)}{1+(\mathrm{e}^{i\pi}\zeta)^{a}\mathcal{E}_{1}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a}\left(1+\zeta^{a}\mathcal{E}_{1}(\zeta)\right)}\right|,\] which can be made arbitrarily small due to the previous bounds for \(\mathrm{Re}(\zeta)\) sufficiently large. **Lemma A.7**.: _Let \(y_{0}\in[0,1]\) such that \(2my_{0}\leq N_{\nu,0}\). Then, there exists \(\varepsilon_{0}>0\) such that_ \[\left|\frac{M_{+}(y_{0}-i\varepsilon)}{M_{+}(y_{0}+i\varepsilon)}\right|\leq\mathrm{e}^{\frac{5}{4}\nu\pi},\] _for all \(\varepsilon\leq\varepsilon_{0}\)._ Proof.: Let \(\theta=\arg(y_{0}-i\varepsilon)\in\left[-\frac{\pi}{2},0\right]\). Recall that for \(a=\frac{1}{2}+i\nu\) and \(b=2a\), for \(\zeta\in\mathbb{C}\), \[M_{+}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{a}M(a,b,\zeta).\] Therefore, we can estimate \[\left|\frac{M_{+}(y_{0}-i\varepsilon)}{M_{+}(y_{0}+i\varepsilon)}\right|=\left|\mathrm{e}^{-i\varepsilon}\mathrm{e}^{i\theta}\mathrm{e}^{-2\nu\theta}\frac{M(a,b,2m(y_{0}-i\varepsilon))}{M(a,b,2m(y_{0}+i\varepsilon))}\right|\leq\mathrm{e}^{\nu\pi}\left|\frac{M(a,b,2m(y_{0}-i\varepsilon))}{M(a,b,2m(y_{0}+i\varepsilon))}\right|.\] Now, since \(\frac{\mathrm{d}}{\mathrm{d}\zeta}M(a,b,\zeta)=\frac{1}{2}M(a+1,b+1,\zeta)\), which is entire in \(\zeta\in\mathbb{C}\), we have that \[M(a,b,2m(y_{0}+i\varepsilon))=M(a,b,2my_{0})+\int_{0}^{\varepsilon}imM(a+1,b+1,2m(y_{0}+is))\mathrm{d}s.\] We can further bound the error term by noting that \(|2m(y_{0}+is)|\leq N_{\nu,0}+10\beta^{2}\), for all \(|s|\leq|\varepsilon|\). As a result, there exists \(C_{\nu}\) such that \(|M(a+1,b+1,2m(y_{0}+is))|\leq C_{\nu}\), for all \(|s|\leq|\varepsilon|\). Therefore, \[\left|\int_{0}^{\varepsilon}imM(a+1,b+1,2m(y_{0}+is))\mathrm{d}s\right| \leq|M(a,b,2my_{0})|\frac{C_{\nu}m|\varepsilon|}{|M(a,b,2my_{0})|}\] \[\leq C_{\nu}|M(a,b,2my_{0})|\mathrm{e}^{-my_{0}}\sqrt{\frac{\nu\pi\cosh\nu\pi}{\sinh\nu\pi}}m|\varepsilon|\] \[\leq(1-\mathrm{e}^{-\frac{1}{8}\nu\pi})|M(a,b,2my_{0})|,\] for all \(0\leq|\varepsilon|\leq\varepsilon_{0}=\frac{1-\mathrm{e}^{-\frac{1}{8}\nu\pi}}{C_{\nu}\,m}\sqrt{\frac{\sinh\nu\pi}{\nu\pi\cosh\nu\pi}}\). Consequently, we have that \[\left|\frac{M(a,b,2m(y_{0}-i\varepsilon))}{M(a,b,2m(y_{0}+i\varepsilon))}\right|\leq\mathrm{e}^{\frac{1}{4}\nu\pi}.\] **Lemma A.8**.: _Let \(N_{\nu,0}\) be given as above and \(N_{\nu,1}>0\). Let \(\sigma\in\{+,-\}\)._
If \(N_{\nu,1}<N_{\nu,0}\), then there exists \(\varepsilon_{0}>0\) such that_ \[|M_{\sigma}(y_{0})|\mathrm{e}^{-\frac{1}{8}\nu\pi}\leq|M_{\sigma}(y_{0}+i\varepsilon)|\leq|M_{\sigma}(y_{0})|\mathrm{e}^{\frac{1}{8}\nu\pi},\] _for all \(y_{0}\in[0,1]\) such that \(N_{\nu,1}\leq 2my_{0}\leq N_{\nu,0}\), and all \(0\leq|\varepsilon|\leq\varepsilon_{0}\)._ Proof.: The result follows from the Fundamental Theorem of Calculus, the asymptotic expansions of \(M_{\sigma}\) and \(M_{\sigma}^{\prime}\) for small arguments, and the lower bounds on \(|M_{\sigma}|\) from Lemma A.5. More precisely, assume without loss of generality that \(0\leq\varepsilon\) and note that \[M_{\sigma}(y_{0}+i\varepsilon)=M_{\sigma}(y_{0})+\int_{0}^{\varepsilon}\frac{\mathrm{d}}{\mathrm{d}s}M_{\sigma}(y_{0}+is)\mathrm{d}s.\] Thanks to the asymptotic expansions for small arguments we next estimate \[\left|\int_{0}^{\varepsilon}\frac{\mathrm{d}}{\mathrm{d}s}M_{\sigma}(y_{0}+is)\mathrm{d}s\right|\leq\int_{0}^{\varepsilon}|M_{\sigma}^{\prime}(y_{0}+is)|\mathrm{d}s\leq C_{\nu}\int_{0}^{\varepsilon}\frac{(2m)^{\frac{1}{2}}}{|y_{0}+is|^{\frac{1}{2}}}\mathrm{d}s\leq C_{\nu}(2m\varepsilon)(2my_{0})^{-\frac{1}{2}}.\] Using the lower bound \((2my_{0})^{\frac{1}{2}}\leq\sqrt{\frac{\nu\pi\cosh\nu\pi}{\sinh\nu\pi}}|M_{\sigma}(y_{0})|\), we have that \[\left|\int_{0}^{\varepsilon}\frac{\mathrm{d}}{\mathrm{d}s}M_{\sigma}(y_{0}+is)\mathrm{d}s\right|\leq C_{\nu}|M_{\sigma}(y_{0})|\frac{\varepsilon}{y_{0}}\leq C_{\nu}|M_{\sigma}(y_{0})|N_{\nu,1}^{-1}2m\varepsilon.\] We now choose \(\varepsilon_{0}=\frac{N_{\nu,1}}{2mC_{\nu}}(1-\mathrm{e}^{-\frac{1}{8}\nu\pi})\). The conclusion of the lemma follows swiftly for all \(\varepsilon\leq\varepsilon_{0}\). **Lemma A.9**.: _Let \(y_{0}\in[0,1]\) and \(0\leq\varepsilon\leq\frac{\beta}{m}\). Then,_ * _If_ \(my_{0}\leq 3\beta\)_, there exists_ \(\varepsilon_{0}>0\) _such that_ \[(m|y_{0}+i\varepsilon|)^{\frac{1}{2}}\lesssim|M_{\pm}(y_{0}+i\varepsilon)|\] _for all_ \(0\leq\varepsilon\leq\varepsilon_{0}\)_._ * _If_ \(my_{0}\geq 3\beta\)_,_ \[1\lesssim|M_{\pm}(y_{0}+i\varepsilon)|.\] Proof.: For the first part of the Lemma, recall that \(M_{+}(\zeta)=e^{-\frac{1}{2}\zeta}\zeta^{a}M\left(a,b,\zeta\right)\); the lemma follows once we obtain lower bounds on \(e^{-\frac{1}{2}\zeta}M\left(a,b,\zeta\right)\). For this, note that since \(\frac{\mathrm{d}}{\mathrm{d}\zeta}M(a,b,\zeta)=\frac{1}{2}M(a+1,b+1,\zeta)\), which is entire in \(\zeta\in\mathbb{C}\), we have that \[M(a,b,2m(y_{0}+i\varepsilon))=M(a,b,2my_{0})+\int_{0}^{\varepsilon}imM(a+1,b+1,2m(y_{0}+is))\mathrm{d}s.\] We further bound the error term by noting that \(|2m(y_{0}+is)|\leq 10\beta\), for all \(|s|\leq|\varepsilon|\). As a result, there exists \(C>0\) such that \(|M(a+1,b+1,2m(y_{0}+is))|\leq C\), for all \(|s|\leq|\varepsilon|\). Therefore, using the lower bounds on \(|M(a,b,2my_{0})|\) from Lemma A.5, \[\left|\int_{0}^{\varepsilon}imM(a+1,b+1,2m(y_{0}+is))\mathrm{d}s\right| \leq|M(a,b,2my_{0})|\frac{Cm|\varepsilon|}{|M(a,b,2my_{0})|}\] \[\leq C|M(a,b,2my_{0})||\Gamma(1+i\nu)|\sqrt{\frac{\nu\pi}{\sinh\nu\pi}}m|\varepsilon|.\] In particular, there exists \(\varepsilon_{0}>0\) such that for all \(0\leq\varepsilon\leq\varepsilon_{0}\), \[e^{-my_{0}}|M(a,b,2m(y_{0}+i\varepsilon))| \geq e^{-my_{0}}|M(a,b,2my_{0})|\left(1-Cm\frac{\varepsilon}{|M(a,b,2my_{0})|}\right)\] \[\geq\frac{1}{2}\frac{1}{|\Gamma(1+i\nu)|}\sqrt{\frac{\sinh(\nu\pi)}{\nu\pi}},\] and the first part of the lemma follows.
As for the second statement, it is a direct consequence of Lemma A.6 and the fact that \(|M_{\pm}(\cdot)|\) is bounded below on the relevant compact domains (being entire and, by Lemma A.5, non-vanishing on \((0,\infty)\)). ### Growth bounds and comparison estimates for \(\beta^{2}=1/4\) **Lemma A.10**.: _Let \(\beta^{2}=1/4\) and let \(\mu=\sqrt{1/4-\beta^{2}}\). Denote \(a:=\frac{1}{2}\) and \(b:=2a=1\). Then, there exists \(N_{0}>0\) such that_ \[\left|\frac{W_{0,0}(\zeta)}{M_{0,0}(\zeta)}\right|\leq 2\sqrt{\pi}\mathrm{e}^{-\mathrm{Re}\zeta},\quad\left|\frac{W_{0,0}^{\prime}(\zeta)}{W_{0,0}(\zeta)}+\frac{1}{2}\right|\leq\frac{1}{4},\quad\frac{1}{2}e^{-\frac{1}{2}\mathrm{Re}\zeta}\leq|W_{0,0}(\zeta)|\leq\frac{3}{2}\mathrm{e}^{-\frac{1}{2}\mathrm{Re}\zeta}\] _for all \(\mathrm{Re}\zeta\geq N_{0}\)._ Proof.: Let \(\zeta\in\mathbb{C}\). We recall that \[M_{0,0}(\zeta) =\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{\frac{1}{2}}M(1/2,1,\zeta)\] \[=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{1/2}\frac{\Gamma(1)}{\Gamma(1/2)}\left(-iU(1/2,1,\zeta)+i\mathrm{e}^{\zeta}U(1/2,1,\mathrm{e}^{i\pi}\zeta)\right),\] while also \[W_{0,0}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{\frac{1}{2}}U(1/2,1,\zeta).\] Thus, we have that \[\frac{W_{0,0}(\zeta)}{M_{0,0}(\zeta)}=-i\sqrt{\pi}\frac{U(1/2,1,\zeta)}{\mathrm{e}^{\zeta}U(1/2,1,\mathrm{e}^{i\pi}\zeta)-U(1/2,1,\zeta)}=-i\sqrt{\pi}\frac{1}{\mathrm{e}^{\zeta}\frac{U(1/2,1,\mathrm{e}^{i\pi}\zeta)}{U(1/2,1,\zeta)}-1}.\] Now, we also recall that \(U(1/2,1,\zeta)=\zeta^{-\frac{1}{2}}\left(1+\zeta^{\frac{1}{2}}\mathcal{E}_{1}(\zeta)\right)\), with \(|\zeta^{\frac{1}{2}}\mathcal{E}_{1}(\zeta)|\leq\frac{1}{2|\zeta|}\mathrm{e}^{\frac{1}{2|\zeta|}}\). Therefore, we have the lower bound \[\left|\frac{U(1/2,1,\mathrm{e}^{i\pi}\zeta)}{U(1/2,1,\zeta)}\right|\geq\frac{3}{4},\] for \(|\zeta|\) sufficiently large. Moreover, \(\frac{3}{4}\mathrm{e}^{\mathrm{Re}\zeta}-1\geq\frac{1}{2}\mathrm{e}^{\mathrm{Re}\zeta}\), for all \(\mathrm{Re}\zeta\geq 2\ln 2\). The desired conclusion follows. For the second part of the Lemma, since \(W_{0,0}(\zeta)=e^{-\frac{1}{2}\zeta}\zeta^{\frac{1}{2}}\left(\zeta^{-\frac{1}{2}}+\mathcal{E}_{1}(\zeta)\right)\), we note that \[W_{0,0}^{\prime}(\zeta)=-\frac{1}{2}W_{0,0}(\zeta)+\frac{1}{2}\frac{W_{0,0}(\zeta)}{\zeta}+e^{-\frac{1}{2}\zeta}\left(-\frac{1}{2\zeta}+\zeta^{\frac{1}{2}}\mathcal{E}_{1}^{\prime}(\zeta)\right),\] where we recall that \(\left|\zeta^{\frac{1}{2}}\mathcal{E}_{1}^{\prime}(\zeta)\right|\leq\frac{1}{4|\zeta|}e^{\frac{1}{2|\zeta|}}\leq x\), for \(x=\frac{1}{2|\zeta|}\leq\frac{1}{2}.\) Hence, \[\left|\frac{W_{0,0}^{\prime}(\zeta)}{W_{0,0}(\zeta)}+\frac{1}{2}\right|\leq x+2x\left|\frac{1}{1+\zeta^{\frac{1}{2}}\mathcal{E}_{1}(\zeta)}\right|\leq 2x\leq\frac{1}{4},\] for all \(x\leq\frac{1}{8}\). For the third statement of the Lemma, note that \(W_{0,0}(\zeta)=e^{-\frac{1}{2}\zeta}\left(1+\zeta^{\frac{1}{2}}\mathcal{E}_{1}(\zeta)\right)\); the conclusion follows for \(|\zeta|\) large enough so that \(\left|\zeta^{\frac{1}{2}}\mathcal{E}_{1}(\zeta)\right|\leq\frac{1}{2}\). **Lemma A.11**.: _Let \(\beta^{2}=1/4\). Denote \(a:=\frac{1}{2}\) and \(b:=2a=1\). Then, for all \(\epsilon>0\) there exists \(\delta_{0}>0\) such that_ \[\left|\frac{M_{0,0}(\zeta)}{W_{0,0}(\zeta)}\right|\leq\epsilon,\] _for all \(\zeta\in\mathbb{C}\) such that \(|\zeta|\leq\delta_{0}\)._ Proof.: We use the functional relation between the Whittaker functions and the modified Bessel functions in order to extract the correct asymptotic behaviour of the functions near the origin and estimate the quotient precisely.
In this direction, recall that \[M_{0,0}(2\zeta)=\sqrt{2\zeta}I_{0}(\zeta),\quad W_{0,0}(2\zeta)=\sqrt{\frac{2\zeta}{\pi}}K_{0}(\zeta),\] where \(I_{0}(\zeta)\) and \(K_{0}(\zeta)\) denote the modified Bessel functions of order 0. Moreover, we have that \[K_{0}(\zeta)=-\left(\ln\left(\frac{\zeta}{2}\right)+\varsigma\right)I_{0}(\zeta)+2\sum_{k=1}^{\infty}\frac{I_{2k}(\zeta)}{k},\] where \[I_{2k}(\zeta)=\left(\frac{\zeta}{2}\right)^{2k}\sum_{j\geq 0}\frac{\left(\frac{\zeta^{2}}{4}\right)^{j}}{j!(2k+j)!}.\] In particular, one observes that \(|I_{2k}(\zeta)|\leq I_{2k}(|\zeta|)\). Moreover, under the observation that \((2k+j)!\geq(2k)!j!\), we can bound \[\left|2\sum_{k=1}^{\infty}\frac{I_{2k}(\zeta)}{k}\right|\leq 2\sum_{k=1}^{\infty}I_{0}(|\zeta|)\frac{\left(\frac{1}{2}|\zeta|\right)^{2k}}{k(2k)!}\leq 2I_{0}(|\zeta|)\left(\cosh\frac{1}{2}|\zeta|-1\right).\] With this, together with the fact that \(I_{0}(\cdot)\) is analytic in \(\mathbb{C}\) and \(I_{0}(\zeta)\to 1\) when \(\zeta\to 0\), we have that \[|K_{0}(\zeta)|\geq-\frac{1}{2}\ln\left(\frac{1}{2}|\zeta|\right),\] for \(|\zeta|\) sufficiently small. The conclusion follows, since for \(|\zeta|\) sufficiently small we have \[\left|\frac{M_{0,0}(\zeta)}{W_{0,0}(\zeta)}\right|\leq-\frac{3}{\ln\left(\frac{1}{4}|\zeta|\right)}.\] **Lemma A.12**.: _Let \(\beta^{2}=1/4\) and let \(y_{0}\in[0,1]\) such that \(N_{2}\leq 2my_{0}\leq N_{1}\). Then, for all \(\epsilon>0\) there exists \(\varepsilon_{0}>0\) such that_ \[\left|\frac{W_{0}(y_{0}-i\varepsilon)}{M_{0}(y_{0}-i\varepsilon)}-\frac{W_{0}(y_{0})}{M_{0}(y_{0})}\right|\leq\epsilon,\quad\left|\frac{W_{0}(y_{0}-i\varepsilon)}{M_{0}(y_{0}-i\varepsilon)}\right|\leq C\] _for all \(\varepsilon\leq\varepsilon_{0}\) and some \(C>0\). In particular,_ \[\left|\operatorname{Im}\left(\frac{W_{0}(y_{0}-i\varepsilon)}{M_{0}(y_{0}-i\varepsilon)}\right)\right|\leq\epsilon.\] Proof.: It follows from the continuity of the functions involved, plus the fact that \(M_{0,0}(x)\) does not vanish and \(W_{0,0}(x)\) is bounded, for any \(x>0\) such that \(0<N_{2}\leq x\leq N_{1}<\infty\). **Lemma A.13**.: _There exists \(\delta_{2}>0\) such that_ \[|\zeta|^{\frac{1}{2}}\left(1+\big{|}\log|\zeta|\big{|}\right)\lesssim|W_{0,0}(\zeta)|,\] _for all \(|\zeta|\leq\delta_{2}\)._ Proof.: Recall that \(W_{0,0}(\zeta)=\sqrt{\frac{\zeta}{\pi}}K_{0}(\zeta/2)\) and the fact that \(|K_{0}(\zeta)|\geq-\frac{1}{2}\log\left(\frac{|\zeta|}{2}\right)\) for \(\zeta\to 0\). Then, \[|W_{0,0}(\zeta)|\geq\frac{1}{2\sqrt{\pi}}|\zeta|^{\frac{1}{2}}\big{|}\log|\zeta|-\log 4\big{|}\geq\frac{1}{20\sqrt{\pi}}|\zeta|^{\frac{1}{2}}\big{|}\log|\zeta|\big{|}\geq\frac{1}{40\sqrt{\pi}}|\zeta|^{\frac{1}{2}}\left(1+\big{|}\log|\zeta|\big{|}\right),\] for \(|\zeta|\) sufficiently small. **Lemma A.14**.: _Let \(y_{0}\in[0,1]\) and \(0\leq\varepsilon\leq\frac{\beta}{m}\). Then,_ * _If_ \(my_{0}\leq 3\beta\)_, there exists_ \(\varepsilon_{0}>0\) _such that_ \[(m|y_{0}+i\varepsilon|)^{\frac{1}{2}}\lesssim|M_{0}(y_{0}+i\varepsilon)|\] _for all_ \(0\leq\varepsilon\leq\varepsilon_{0}\)_._ * _If_ \(my_{0}\geq 3\beta\)_,_ \[1\lesssim|M_{0}(y_{0}+i\varepsilon)|.\] Proof.: The proof uses the ideas from Lemma A.9 together with the bounds from Lemma A.15. We omit the details. ### Growth bounds and comparison estimates for \(\beta^{2}<1/4\) In this subsection we consider the case \(\beta^{2}<1/4\), for which \(\mu=\sqrt{1/4-\beta^{2}}\) with \(\mu\in\left(0,\frac{1}{2}\right)\) and \(\nu=0\).
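As an aside, the classical modified-Bessel series for \(K_{0}\) invoked in the proofs of Lemmas A.4 and A.11 above is easy to check numerically. The following minimal sketch (ours, assuming a Python environment with numpy and scipy; it plays no role in the proofs) verifies the identity:

```python
# Numerical sanity check (illustrative only) of the series
#   K_0(z) = -(ln(z/2) + gamma_E) I_0(z) + 2 * sum_{k>=1} I_{2k}(z)/k
# used in the proofs of Lemmas A.4 and A.11.
import numpy as np
from scipy.special import iv, kv  # modified Bessel functions I_v and K_v

z = 1.7
series = -(np.log(z / 2.0) + np.euler_gamma) * iv(0, z)
series += 2.0 * sum(iv(2 * k, z) / k for k in range(1, 40))
print(series, kv(0, z))  # the two printed values agree to machine precision
```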
**Lemma A.15**.: _Denote \(a_{\pm}:=\frac{1}{2}\pm\mu\) and \(b_{\pm}:=2a_{\pm}\). Then,_ \[\lim_{\operatorname{Re}(\zeta)\to+\infty}\left|\frac{\Gamma(a_{+})}{\Gamma(b_{+})}\mathrm{e}^{-\frac{1}{2}\zeta}M_{+}(\zeta)\right|=1.\] _Moreover, let \(C_{\mu}=2^{-4\mu}\frac{\Gamma(1-\mu)}{\Gamma(1+\mu)}\). There exists \(N_{\mu,0}>0\) such that_ \[\left|\frac{M_{-}(\zeta)}{M_{+}(\zeta)}-2^{-4\mu}\frac{\Gamma(1-\mu)}{\Gamma(1+\mu)}\right|\leq\min\left(5,\frac{\tan\mu\pi}{1+\tan\mu\pi}\right)\frac{C_{\mu}}{4},\] _for all \(\mathrm{Re}\zeta\geq N_{\mu,0}\)._ Proof.: Let \(\zeta\in\mathbb{C}\) and \(\delta>0\). We recall that \[M_{\pm}(\zeta) =\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{a_{\pm}}M(a_{\pm},b_{\pm},\zeta)\] \[=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{a_{\pm}}\frac{\Gamma(b_{\pm})}{\Gamma(a_{\pm})}\left(\mathrm{e}^{-i\pi a_{\pm}}U(a_{\pm},b_{\pm},\zeta)+\mathrm{e}^{i\pi a_{\pm}}\mathrm{e}^{\zeta}U(a_{\pm},b_{\pm},\mathrm{e}^{i\pi}\zeta)\right).\] Moreover, we have that \(U(a_{\pm},b_{\pm},\zeta)=\zeta^{-a_{\pm}}+\mathcal{E}_{\pm}(\zeta)\), where further \[|\zeta^{a_{\pm}}\mathcal{E}_{\pm}(\zeta)|\leq\frac{2\beta^{2}}{|\zeta|}\mathrm{e}^{\frac{2\beta^{2}}{|\zeta|}}.\] In the sequel, we write \(x:=\frac{2\beta^{2}}{|\zeta|}\). Therefore, we can write \[M_{\pm}(\zeta)=\frac{\Gamma(b_{\pm})}{\Gamma(a_{\pm})}\mathrm{e}^{\frac{1}{2}\zeta}\left(\left[1+(\mathrm{e}^{i\pi}\zeta)^{a_{\pm}}\mathcal{E}_{\pm}(\mathrm{e}^{i\pi}\zeta)\right]+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a_{\pm}}\left[1+\zeta^{a_{\pm}}\mathcal{E}_{\pm}(\zeta)\right]\right).\] Now, since \(b_{\pm}=2a_{\pm}\), we have that \[\frac{\Gamma(b_{\pm})}{\Gamma(a_{\pm})}=\pi^{-\frac{1}{2}}2^{2a_{\pm}-1}\Gamma\left(a_{\pm}+\frac{1}{2}\right),\] confer, [26]. Therefore, \[\frac{\Gamma(b_{-})/\Gamma(a_{-})}{\Gamma(b_{+})/\Gamma(a_{+})}=2^{-4\mu}\frac{\Gamma(1-\mu)}{\Gamma(1+\mu)}\] and we note that \[\frac{M_{-}(\zeta)}{M_{+}(\zeta)}=2^{-4\mu}\frac{\Gamma(1-\mu)}{\Gamma(1+\mu)}\frac{1+(\mathrm{e}^{i\pi}\zeta)^{a_{-}}\mathcal{E}_{-}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a_{-}}\left[1+\zeta^{a_{-}}\mathcal{E}_{-}(\zeta)\right]}{1+(\mathrm{e}^{i\pi}\zeta)^{a_{+}}\mathcal{E}_{+}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a_{+}}\left[1+\zeta^{a_{+}}\mathcal{E}_{+}(\zeta)\right]}.\] Moreover, we observe that \(|\zeta^{a_{\pm}}\mathcal{E}_{\pm}(\zeta)|\leq\frac{4\beta^{2}}{|\zeta|}\), for \(|\zeta|\geq 4\beta^{2}\). Hence, for any \(\delta>0\), \[\left|(\mathrm{e}^{i\pi}\zeta)^{a_{\pm}}\mathcal{E}_{\pm}(\mathrm{e}^{i\pi}\zeta)+\mathrm{e}^{-\zeta}\mathrm{e}^{-i\pi a_{\pm}}\left(1+\zeta^{a_{\pm}}\mathcal{E}_{\pm}(\zeta)\right)\right|\leq\delta\] provided that \(\mathrm{Re}\zeta>N_{\mu,0}\) for some \(N_{\mu,0}>0\). **Lemma A.16**.: _Denote \(a_{\pm}:=\frac{1}{2}\pm\mu\) and \(b_{\pm}:=2a_{\pm}\). Then,_ \[\lim_{\zeta\to 0}\frac{M_{+}(\zeta)}{M_{-}(\zeta)}=0.\] _Therefore, there exists \(\delta_{\mu,1}>0\) such that_ \[\left|\frac{M_{+}(\zeta)}{M_{-}(\zeta)}\right|\leq\min\left(\frac{1}{5M(a_{-},b_{-},2N_{\mu,0})},\frac{1}{3C_{\mu}}\right)\] _for all \(\zeta\in\mathbb{C}\) such that \(|\zeta|\leq\delta_{\mu,1}\)._ Proof.: We recall once again that \[M_{\pm}(\zeta)=\mathrm{e}^{-\frac{1}{2}\zeta}\zeta^{a_{\pm}}M(a_{\pm},b_{\pm},\zeta).\] Hence, we directly compute \[\frac{M_{+}(\zeta)}{M_{-}(\zeta)}=\zeta^{2\mu}\frac{M(a_{+},b_{+},\zeta)}{M(a_{-},b_{-},\zeta)}.\] Since \(M(a_{\pm},b_{\pm},\zeta)\to 1\) for \(\zeta\to 0\), and \(2\mu>0\), the conclusion follows for \(|\zeta|\) small enough.
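The two limiting behaviours established in Lemmas A.15 and A.16 can also be probed numerically. The sketch below (ours, not part of the proofs; it assumes mpmath is available and identifies \(M_{\pm}(x)\) with the Whittaker functions \(M_{0,\pm\mu}(x)\)) illustrates the large-argument limit \(M_{-}/M_{+}\to C_{\mu}\) and the vanishing of \(M_{+}/M_{-}\) at the origin:

```python
# Illustrative numerical probe (not a proof) of Lemmas A.15-A.16 using
# mpmath's Whittaker function whitm(k, m, z).
from mpmath import mp, whitm, gamma, mpf

mp.dps = 30
mu = mpf("0.3")                                       # any mu in (0, 1/2)
C_mu = 2**(-4 * mu) * gamma(1 - mu) / gamma(1 + mu)

for x in (mpf(5), mpf(20), mpf(60)):
    print(x, whitm(0, -mu, x) / whitm(0, mu, x), C_mu)  # ratio -> C_mu

for x in (mpf("1e-2"), mpf("1e-5")):
    print(x, whitm(0, mu, x) / whitm(0, -mu, x))        # -> 0 like x^(2 mu)
```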
**Lemma A.17**.: _Denote \(a_{\pm}:=\frac{1}{2}\pm\mu\) and \(b_{\pm}:=2a_{\pm}\). Let \(y_{0}\in[0,1]\) such that \(N_{\mu,1}\leq 2my_{0}\leq N_{\mu,0}\), for some \(N_{\mu,1}>0\). Then, there exists \(\varepsilon_{0}>0\) such that_ \[\left|\frac{M_{\pm}(y_{0}+i\varepsilon)}{M_{\pm}(y_{0})}-1\right|\leq\frac{\sin\mu\pi}{5},\] _for all \(0<|\varepsilon|\leq\varepsilon_{0}\)._ Proof.: Assume without loss of generality that \(\varepsilon>0\). Then, \[M_{\pm}(y_{0}+i\varepsilon)=M_{\pm}(y_{0})+\int_{0}^{\varepsilon}\frac{\mathrm{d}}{\mathrm{d}s}M_{\pm}(y_{0}+is)\mathrm{d}s.\] Thanks to the asymptotic expansions for small arguments we next estimate \[\left|\int_{0}^{\varepsilon}\frac{\mathrm{d}}{\mathrm{d}s}M_{\pm}(y_{0}+is)\mathrm{d}s\right|\leq\int_{0}^{\varepsilon}|M_{\pm}^{\prime}(y_{0}+is)|\mathrm{d}s\leq C_{\mu}\int_{0}^{\varepsilon}\frac{(2m)^{\frac{1}{2}\pm\mu}}{|y_{0}+is|^{\frac{1}{2}\mp\mu}}\mathrm{d}s\leq C_{\mu}(2m\varepsilon)(2my_{0})^{-\frac{1}{2}}.\] Using the lower bound \((2my_{0})^{\frac{1}{2}\pm\mu}\leq M_{\pm}(y_{0})\), we have that \[\left|\int_{0}^{\varepsilon}\frac{\mathrm{d}}{\mathrm{d}s}M_{\pm}(y_{0}+is)\mathrm{d}s\right|\leq C_{\mu}M_{\pm}(y_{0})\frac{\varepsilon}{y_{0}}\leq C_{\mu}M_{\pm}(y_{0})N_{\mu,1}^{-1}2m\varepsilon.\] Hence, \[M_{\pm}(y_{0})\left(1-C_{\mu}\frac{2m\varepsilon}{N_{\mu,1}}\right)\leq|M_{\pm}(y_{0}+i\varepsilon)|\leq M_{\pm}(y_{0})\left(1+C_{\mu}\frac{2m\varepsilon}{N_{\mu,1}}\right)\] and now choose \(\varepsilon_{0}>0\) sufficiently small, so that the conclusion of the lemma follows swiftly for all \(\varepsilon\leq\varepsilon_{0}\). **Lemma A.18**.: _Let \(y_{0}\in[0,1]\) and \(0\leq\varepsilon\leq\frac{\beta}{m}\). Then,_ * _If_ \(my_{0}\leq 3\beta\)_, there exists_ \(\varepsilon_{0}>0\) _such that_ \[(m|y_{0}+i\varepsilon|)^{\frac{1}{2}\pm\mu}\lesssim|M_{\pm}(y_{0}+i\varepsilon)|\] _for all_ \(0\leq\varepsilon\leq\varepsilon_{0}\)_._ * _If_ \(my_{0}\geq 3\beta\)_,_ \[1\lesssim|M_{\pm}(y_{0}+i\varepsilon)|.\] Proof.: The proof uses the ideas from Lemma A.9 together with the bounds from Lemma A.15. We omit the details. ## Acknowledgments The research of MCZ was partially supported by the Royal Society URF\(\backslash\)R1\(\backslash\)191492 and EPSRC Horizon Europe Guarantee EP/X020886/1.
2309.07776
Disentangling the Entangled Linkages of Relative Magnetic Helicity
Magnetic helicity, $H$, measures magnetic linkages in a volume. The early theoretical development of helicity focused on magnetically closed systems in $\mathcal{V}$ bounded by $\mathcal{S}$. For magnetically closed systems, $\mathcal{V}\in\mathbb{R}^3=\mathcal{V}+\mathcal{V}^*$, no magnetic flux threads the boundary, $\hat{\boldsymbol{n}}\cdot\boldsymbol{B}|_\mathcal{S}=0$. Berger and Field (1984) and Finn and Antonsen (1985) extended the definition of helicity to relative helicity, $\mathcal{H}$, for magnetically open systems where magnetic flux may thread the boundary. Berger (1999,2003) expressed this relative helicity as two gauge invariant terms that describe the self helicity of magnetic field that closes inside $\mathcal{V}$ and the mutual helicity between the magnetic field that threads the boundary $\mathcal{S}$ and the magnetic field that closes inside $\mathcal{V}$. The total magnetic field that permeates $\mathcal{V}$ entangles magnetic fields that are produced by current sources $\boldsymbol{J}$ in $\mathcal{V}$ with magnetic fields that are produced by current sources $\boldsymbol{J}^*$ in $\mathcal{V}^*$. Building on this fact, we extend Berger's expressions for relative magnetic helicity to eight gauge invariant quantities that simultaneously characterize both of these self and mutual helicities and attribute their origins to currents $\boldsymbol{J}$ in $\mathcal{V}$ and/or $\boldsymbol{J}^*$ in $\mathcal{V}^*$, thereby disentangling the domain of origin for these entangled linkages. We arrange these eight terms into novel expressions for internal and external helicity (self) and internal-external helicity (mutual) based on their domain of origin. The implications of these linkages for interpreting magnetic energy is discussed and new boundary observables are proposed for tracking the evolution of the field that threads the boundary.
Peter W. Schuck, Mark G. Linton
2023-09-14T15:07:58Z
http://arxiv.org/abs/2309.07776v1
# Disentangling the Entangled Linkages of Relative Magnetic Helicity ###### Abstract Magnetic helicity, \(H\), measures magnetic linkages in a volume. The early theoretical development of helicity focused on magnetically closed systems in \(\mathcal{V}\) bounded by \(\partial\mathcal{V}\). For magnetically closed systems, \(\mathcal{V}\in\mathbb{R}^{3}=\mathcal{V}+\mathcal{V}^{*}\), no magnetic flux threads the boundary, \(\hat{\mathbf{n}}\cdot\mathbf{B}|_{\partial\mathcal{V}}=0\). Berger & Field (1984) and Finn & Antonsen (1985) extended the definition of helicity to relative helicity, \(\mathcal{H}\), for magnetically open systems where magnetic flux may thread the boundary. Berger (1999, 2003) expressed this relative helicity as two gauge invariant terms that describe the self helicity of magnetic field that closes inside \(\mathcal{V}\) and the mutual helicity between the magnetic field that threads the boundary \(\partial\mathcal{V}\) and the magnetic field that closes inside \(\mathcal{V}\). The total magnetic field that permeates \(\mathcal{V}\) entangles magnetic fields that are produced by current sources \(\mathbf{J}\) in \(\mathcal{V}\) with magnetic fields that are produced by current sources \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). Building on this fact, we extend Berger's expressions for relative magnetic helicity to eight gauge invariant quantities that simultaneously characterize both of these self and mutual helicities and attribute their origins to currents \(\mathbf{J}\) in \(\mathcal{V}\) and/or \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\), thereby disentangling the domain of origin for these entangled linkages. We arrange these eight terms into novel expressions for internal and external helicity (self) and internal-external helicity (mutual) based on their domain of origin. The implications of these linkages for interpreting magnetic energy are discussed and new boundary observables are proposed for tracking the evolution of the field that threads the boundary. + Footnote †: This is the Accepted Manuscript version of an article accepted for publication in the Astrophysical Journal. IOP Publishing Ltd is not responsible for any errors or omissions in this version of the manuscript or any version derived from it. This Accepted Manuscript is published under a CC BY license. ## 1 Introduction Magnetic helicity is an important astrophysical quantity for understanding dynamos (Moffatt, 1978), the emergence of large scale magnetic fields in the primordial universe (Field & Carroll, 2000; Brandenburg, 2006), galactic jets (Koenigl & Choudhuri, 1985), the structure of stars (Schrijver & Zwaan, 2000; Brandenburg, 2020), stellar eruptive phenomena (Berger, 1984), and coronal heating (Heyvaerts & Priest, 1984). The concept of helicity has its mathematical origins in linkages with Gauss (1867), Calugareanu (1959), and White (1969) and vortex motion with Thomson (1868). There have been five major developments in understanding magnetic helicity. First, Woltjer (1958) proved that magnetic helicity is preserved in ideal magnetically closed plasma systems and that a linear force-free magnetic configuration represents the absolute minimum energy state for a magnetically closed plasma with a prescribed magnetic helicity. Second, Taylor (1974, 1986) conjectured that magnetic helicity was preserved under turbulent reconnection, thus providing a pathway for plasma to relax to a linear force-free Woltjer state. Third, Frisch et al.
(1975) demonstrated that helicity can inverse cascade in the spectral domain to the largest scales accessible to the system, producing large scale magnetic fields. Fourth, Berger & Field (1984) and Finn & Antonsen (1985) extended the definition of magnetic helicity to magnetically open systems by introducing a reference magnetic field that matches the 'open' flux threading the boundary surface \(\partial\mathcal{V}\) of the volume of interest \(\mathcal{V}\)--the so-called "relative magnetic helicity." Fifth, Berger & Field (1984) also showed that the evolution of this relative magnetic helicity for an ideal plasma could be determined from boundary observables. Further refinements on these five major developments have since been made. Berger (1984) adapted Taylor's conjecture to the relative helicity of open systems, arguing that the relative helicity is preserved during solar flares. Berger (1999, 2003) later partitioned the relative magnetic helicity into two further gauge invariant topological quantities: the 'self' helicity representing the linkages of the magnetic field that closes in \(\mathcal{V}\) that we term the "closed-closed helicity" and the 'mutual' helicity representing the linkages between the open magnetic field that threads the boundary and the magnetic field that closes inside \(\mathcal{V}\) that we term the "open-closed helicity." We have modified this terminology because 'self' and/or 'mutual' helicity have a variety of meanings in the literature in terms of isolated flux tubes (Berger and Field, 1984; Berger, 1984, 1985; Demoulin et al., 2006), relative helicity of distributed fields in a volume \(\mathcal{V}\) (Berger, 1999, 2003), relative helicity in multiple disjoint subdomains \(\mathcal{V}=\cup_{i=1}^{N}\mathcal{V}_{i}\) (Longcope and Malanushenko, 2008), winding helicity in subdomains (Candelaresi et al., 2021), etc. Recently, Schuck and Antiochos (2019) recast the helicity transport across the boundary in Berger and Field (1984) in a manifestly gauge invariant way and proved that the instantaneous time rate of change of relative helicity was independent of the instantaneous time rate of change of the flux threading the boundary \(\partial\mathcal{V}\). The magnetic helicity is \[H\equiv\int\limits_{\mathcal{V}}d^{3}x\,\mathbf{A}\cdot\mathbf{B}=\int\limits_{\mathcal{V}}d^{3}x\,\mathbf{A}\cdot\mathbf{\nabla}\mathbf{\times}\mathbf{A},\] (1a) where \(\mathbf{A}\) is the vector potential and \(\mathbf{B}=\mathbf{\nabla}\mathbf{\times}\mathbf{A}\) (1b) is the magnetic field. Magnetic helicity is challenging to quantify because \(\mathbf{A}\) itself is not directly observable and thus there is gauge freedom in specifying the vector potential \(\mathbf{A}\) that determines \(\mathbf{B}\) through Equation (1b). Thus, under a local gauge transformation\({}^{1}\) \(\mathbf{A}\to\mathbf{A}+\mathbf{\nabla}\Lambda\), the magnetic field remains unchanged, but the helicity becomes (see for example Schuck and Antiochos, 2019) \[H\to H-\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot(\Lambda\,\mathbf{B})\,,\] (2) where \(\hat{\mathbf{n}}\) is the normal pointing into \(\mathcal{V}\) on \(\partial\mathcal{V}\). Footnote 1: The local gauge symmetry of Maxwell’s equations implies a conserved Noether current by Emmy Noether’s second theorem (1918). However, these currents do not generally correspond to physical observables as these currents are not themselves gauge invariant (Karatas and Kowalski, 1990).
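Equation (2) is straightforward to verify numerically on a toy configuration. The sketch below (our illustration, not from the paper; it assumes Python with numpy, takes a uniform field \(B_{0}\hat{\mathbf{z}}\) on the unit cube, and uses the arbitrary gauge function \(\Lambda=xyz^{2}\)) confirms that the change in \(H\) computed as a volume integral of \(\mathbf{\nabla}\Lambda\cdot\mathbf{B}\) matches the boundary term with \(\hat{\mathbf{n}}\) pointing into \(\mathcal{V}\):

```python
# Toy numerical check (illustrative only) of Equation (2):
# under A -> A + grad(Lambda), dH = int_V grad(Lambda).B dV, which equals
# -oint dS n_hat.(Lambda B) with n_hat pointing INTO V.  Here V = [0,1]^3,
# B = B0 z_hat and Lambda = x*y*z^2, so the exact answer is B0/4.
import numpy as np

n = 120
h = 1.0 / n
c = (np.arange(n) + 0.5) * h              # cell-centered grid on [0, 1]
X, Y, Z = np.meshgrid(c, c, c, indexing="ij")
B0 = 2.0

# Volume form: grad(Lambda).B = B0 * d(Lambda)/dz = 2*B0*x*y*z
dH_vol = np.sum(2.0 * B0 * X * Y * Z) * h**3

# Surface form: Lambda vanishes on z = 0; on z = 1 the inward normal is
# -z_hat, so the contribution of -n_hat.(Lambda B) there is +x*y*B0.
Xf, Yf = np.meshgrid(c, c, indexing="ij")
dH_surf = np.sum(Xf * Yf * B0) * h**2

print(dH_vol, dH_surf, B0 / 4.0)          # all three are ~0.5
```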
The gauge non-invariance of the magnetic helicity \(H\) is closely related to the flux threading the bounding surface \(\partial\mathcal{V}\). This flux is often misattributed to 'exterior linkage' similar to the way the potential field \(\mathbf{P}\) is often confused with 'external linkage' (see pp 30-31 in Blackman, 2014). Consider the Cartesian \((\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{z}})\) geometry of Figure 1 where \(\mathcal{V}=(x,y,z>0)\) corresponds to the domain of interest bounded by \(\partial\mathcal{V}\) at \(z=0\). Let this domain \(\mathcal{V}\) contain a line current \(I\,\hat{\mathbf{y}}\) at \(x/a=0\) and \(z/a=1\). Figure 1(a) shows the physical current source indicated by the red dot and contours of the vector potential tracing the magnetic field lines. The color scale represents \(B_{z}\) at \(z=0\) (the normal component at \(\partial\mathcal{V}\)) with red being positive and blue negative. _All of the physical current sources in this example are contained inside the volume of interest \(\mathcal{V}\)!_ Using the generalized "method of images" (Thomson, 1845; Hammond, 1960), any magnetic field \(\mathbf{B}\) in \(\mathcal{V}\) may be decomposed into two components \(\mathbf{B}=\mathbf{P}+\mathbf{B}_{\rm cl}\): a magnetic field \(\mathbf{P}\) that is potential in \(\mathcal{V}\) and threads the boundary and a magnetic field \(\mathbf{B}_{\rm cl}\) that closes on itself in \(\mathcal{V}\), i.e., \(\hat{\mathbf{n}}\cdot\mathbf{B}_{\rm cl}|_{\partial\mathcal{V}}=0\). Figure 1: The entangled physical origins of (a) the magnetic field \(\mathbf{B}=\mathbf{P}+\mathbf{B}_{\rm cl}\) for \(z>0\) when it is decomposed into the fields (b) \(\mathbf{P}\) which thread the boundary at \(z=0\) and are potential for \(z>0\) and (c) \(\mathbf{B}_{\rm cl}\) which close on themselves for \(z>0\). A red dot indicates a line current, \(I\,\hat{\mathbf{y}}\), directed away from the observer and a blue dot indicates a line current, \(-I\,\hat{\mathbf{y}}\), towards the observer. The black lines are contours of the vector potential that trace magnetic field lines. The color scale along \(z=0\) corresponds to the vertical magnetic field component with red/blue corresponding to up/down. All three magnetic fields, \(\mathbf{B}\), \(\mathbf{P}\), and \(\mathbf{B}_{\rm cl}\), are _produced_ by a physical current \(I\,\hat{\mathbf{y}}\) at \(x/a=0;z/a=1\), but \(\mathbf{P}\) is _represented_ by an image current \(I\,\hat{\mathbf{y}}\) at \(x/a=0;z/a=-1\), while \(\mathbf{B}_{\rm cl}\) is _represented_ by a physical current \(I\,\hat{\mathbf{y}}\) at \(x/a=0;z/a=1\) and an image current \(-I\,\hat{\mathbf{y}}\) at \(x/a=0;z/a=-1\). For example, one can always find a mathematically unique potential field \(\mathbf{P}\) in \(\mathcal{V}\) corresponding to the normal component of \(\mathbf{B}\) on the surface \(z=0\), i.e., corresponding to the flux threading this bounding surface. This potential field, shown in Figure 1(b), is represented by an image current \(I\,\hat{\mathbf{y}}\) _outside_ the volume of interest \(\mathcal{V}\) at \(x/a=0\) and \(z/a=-1\) in \(\mathcal{V}^{*}\), as \(\mathbf{\nabla}\mathbf{\times}\mathbf{P}\) must be zero inside \(\mathcal{V}\). Thus, the representation of flux threading the boundary by a potential field \(\mathbf{P}\) misattributes the origin of this flux to a current source outside the volume of interest \(\mathcal{V}\) (Schuck et al., 2022).
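This image construction is simple enough to verify numerically. The following sketch (our illustration, not the authors' code; the overall normalization is fixed by setting \(2I/c=1\)) evaluates the in-plane fields of the physical current and its images on the boundary \(z=0\) and confirms that \(\mathbf{P}\) carries all of the boundary flux while \(\mathbf{B}_{\rm cl}\) is purely tangential there:

```python
# Numerical check (illustrative only) of the image decomposition in Figure 1;
# units are chosen so that 2I/c = 1.
import numpy as np

def line_current_B(x, z, x0, z0, strength=1.0):
    """In-plane (Bx, Bz) of a line current along y_hat at (x0, z0);
    B is proportional to (r_hat x y_hat)/r, i.e. Bx = -rz/r^2, Bz = rx/r^2."""
    rx, rz = x - x0, z - z0
    r2 = rx**2 + rz**2
    return -strength * rz / r2, strength * rx / r2

a = 1.0
x = np.linspace(-4.0, 4.0, 81)

Bx, Bz = line_current_B(x, 0.0, 0.0, +a)   # total field B (source in V)
Px, Pz = line_current_B(x, 0.0, 0.0, -a)   # P: same-sign image current
Bclx, Bclz = Bx - Px, Bz - Pz              # B_cl: physical + opposite image

print(np.max(np.abs(Bz - Pz)))   # ~0: P carries all the flux through z = 0
print(np.max(np.abs(Bclz)))      # ~0: n_hat . B_cl = 0 on the boundary
print(np.max(np.abs(Bclx)))      # nonzero: B_cl is purely tangential at z = 0
```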
Similarly, the magnetic component that closes in \(\mathcal{V}\), defined as \(\mathbf{B}_{\text{cl}}\equiv\mathbf{B}-\mathbf{P}\) and shown in Figure 1(c), is represented by two anti-parallel currents: one corresponds to the physical current \(I\,\hat{\mathbf{y}}\) in \(\mathcal{V}\) and the other corresponds to its image current \(-I\,\hat{\mathbf{y}}\). The image current is symmetrically placed across the boundary \(\partial\mathcal{V}\) to ensure \(\hat{\mathbf{n}}\cdot\mathbf{B}_{\text{cl}}|_{\partial\mathcal{V}}=0\) at \(z=0\) by construction. While mathematically \(\mathbf{P}\) is an 'external linkage,' its _physical origin_ is _internal_ to \(\mathcal{V}\)! The superposition of \(\mathbf{P}\) and \(\mathbf{B}_{\text{cl}}\) recovers the total magnetic field because the image currents in \(\mathcal{V}^{*}\) cancel and all that remains is the physical current source inside \(\mathcal{V}\). However, the decomposition for this example results in the _apparent non-sequitur_ that the potential field \(\mathbf{P}\) is curl free \(\mathbf{\nabla}\mathbf{\times}\mathbf{P}=0\) in \(\mathcal{V}\) but indeed physically produced by currents in \(\mathcal{V}\)! This example shows how easily the origins of magnetic fields can be confused by expressing them in forms that are mathematically convenient, for example, for calculating relative helicity. Yet the origins of these fields are of critical importance for understanding cause and effect, and so a means for tracking these origins while simultaneously calculating global quantities such as the relative helicity or magnetic energy is key for a complete understanding of dynamical astrophysical phenomena. The primary purpose of this paper is to disentangle the linkages that originate with internal and external current sources in relative helicity for open systems. This work is organized as follows: §2 establishes the framework for attributing magnetic fields to electric current sources, §3 briefly discusses helicity in _magnetically closed systems_, §4 reviews the concepts of relative helicity for _magnetically open systems_, §5 extends relative helicity to simultaneously characterize the open-open and open-closed helicities as well as the domains of origin of the linked magnetic fields and develops novel expressions for internal and external relative helicity and internal-external relative helicity based on the domain of origin of the magnetic field in currents, §6 describes some of the implications of this work for the concept of free energy and §7 discusses the implications of these results for theory and observation. ## 2 The Attribution of Magnetic Fields to Internal and External Current Sources The attribution of magnetic fields to physical current sources is necessary to fully understand cause and effect, the linkages of helicity, and changes in magnetic energy within a volume of interest \(\mathcal{V}\). In classical electromagnetic theory, currents create magnetic fields.
This statement is inherent in the Biot-Savart law (Biot and Savart, 1820) written in continuous form \[\mathbf{B}\left(t,\mathbf{x}\right)=\frac{1}{c}\,\mathbf{\nabla}\mathbf{\times}\int\limits_{\mathbb{R}^{3}}d^{3}x^{\prime}\,\frac{\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}=\frac{1}{c}\,\int\limits_{\mathbb{R}^{3}}d^{3}x^{\prime}\,\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)\mathbf{\times}\,\frac{\mathbf{x}-\mathbf{x}^{\prime}}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|^{3}}\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{3}\] as a convolution with spatial moments of the free space Green's function, where \(c\) is the speed of light. In the right-most expression there are no spatial or temporal derivatives operating on the source \(\mathbf{J}\). Green's functions form the basis for understanding cause and effect in physics. Loosely speaking, the Green's function propagates a "cause" at \(\mathbf{x}^{\prime}\) to an "effect" at \(\mathbf{x}\). This is how nature works despite the practice in MHD analysis of substituting \(\mathbf{J}\Longrightarrow c\,\mathbf{\nabla}\mathbf{\times}\mathbf{B}/\left(4\,\pi\right)\) into the \(\mathbf{J}\mathbf{\times}\mathbf{B}\) force to eliminate any explicit reference to \(\mathbf{J}\) in MHD. Physically, the current \(\mathbf{J}\) is manifestly the _source_ of the magnetic vorticity. The Biot-Savart law provides attribution of a current element at \(\mathbf{x}^{\prime}\) to the magnetic field at the location \(\mathbf{x}\). In the pre-Maxwell formulation of electrodynamics, the magnetic field \(\mathbf{B}\) at \(\mathbf{x}\) depends on currents at all other points in the universe \(\mathbb{R}^{3}\). Realistically, this universe dynamically corresponds to \(R\ll c\,\Delta t\). This has deep implications--the magnetic field is a non-local field despite the fact that it is often conceptually treated as a local object in MHD. Changes in \(\mathbf{B}\left(t,\mathbf{x}\right)\) imply changes in \(\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)\) somewhere else! While Equation (3) is intuitive, it is nearly impossible to apply in practice because access to complete information about all currents in the entire universe \(\mathbf{x}\in\mathbb{R}^{3}\) is rare. Rather, in most cases, knowledge is limited to currents and magnetic fields in a volume \(\mathcal{V}\) bounded by a surface \(\partial\mathcal{V}\). Consider a simply connected _internal_ volume \(\mathcal{V}\) bounded by closed surface \(\partial\mathcal{V}\) and an _external_ domain denoted \(\mathcal{V}^{*}\) such that \(\mathbb{R}^{3}=\mathcal{V}+\mathcal{V}^{*}\). Suppose that both domains contain corresponding current systems \(\mathbf{J}\) and \(\mathbf{J}^{*}\).
By the electromagnetic superposition principle, the total magnetic field in Equation (3) is then \[\mathbf{B}\left(t,\mathbf{x}\right)=\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)+\mathbf{ B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{4a}\] with \[\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)=\overbrace{\frac{1}{c}\mathbf{ \nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\frac{\mathbf{J}\left(t,\mathbf{x }^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}}^{\text{Internal Sources}}\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{4b}\] \[\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=\overbrace{ \frac{1}{c}\mathbf{\nabla}\times\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\, \frac{\mathbf{J}^{*}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime} \right|}}^{\text{External Sources}}\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{4c}\] where the total field is comprised of two _integrants_: one produced by _internal_ sources, \(\mathbf{J}\) in \(\mathcal{V}\), and one produced by _external_ sources, \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). Both integrants (4b) and (4c) are continuous vector fields for \(\mathbf{x}\in\mathbb{R}^{3}\). If \(\mathbf{J}\) and \(\mathbf{B}\) are completely known in \(\mathcal{V}\) then \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) can be computed directly by convolution and \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) in \(\mathcal{V}\) may be computed from Equation (4a). Analogously, if \(\mathbf{B}\) in \(\mathcal{V}\) is known and \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) can be estimated then \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) in \(\mathcal{V}\) may be computed from Equation (4a). Below we show that \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) in \(\mathcal{V}\) and \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) in \(\mathcal{V}^{*}\) may be computed from \(\mathbf{B}\) in \(\mathcal{V}\cup\partial\mathcal{V}\)_without_ performing computationally expensive Biot-Savart convolution integrals by leveraging the powerful fundamental theorem of vector calculus. For the remainder of the paper we include the source \(\mathbf{J}\) or \(\mathbf{J}^{*}\) as an argument to the vector field when the source of the magnetic field is of interest. For example, \(\mathbf{P}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)\) are, respectively, the potential magnetic field and the magnetic field that closes in \(\mathcal{V}\) determined from the magnetic field \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) produced by currents \(\mathbf{J}\) in \(\mathcal{V}\). Correspondingly \(\mathbf{P}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) are, respectively, the potential magnetic field and the magnetic field that closes in \(\mathcal{V}\) determined from the magnetic field \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) produced by currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). And finally, \(\mathbf{B}\left(t,\mathbf{x}\right)\) without the argument of current represents the total magnetic field at \(t\) and \(\mathbf{x}\) in Equation (4a). 
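As a concrete illustration of Equation (4b), a thin current loop reduces the Biot-Savart volume integral to a sum over line segments. The following schematic discretization (ours, not the authors' code; Gaussian units with \(I=c=1\)) recovers the textbook on-axis value \(B_{z}=2\pi I/(cR_{0})\) at the center of a loop of radius \(R_{0}\):

```python
# Schematic discretized Biot-Savart sum (Gaussian units, I = c = 1) for the
# internal-source field of Equation (4b), tested on a thin circular loop.
import numpy as np

def biot_savart(obs, mids, dls, I=1.0, c=1.0):
    """Sum (I/c) * dl x R / |R|^3 over segments, with R = obs - segment."""
    R = obs - mids
    r3 = np.linalg.norm(R, axis=1, keepdims=True) ** 3
    return (I / c) * np.sum(np.cross(dls, R) / r3, axis=0)

R0 = 0.5                                   # loop radius, centered in [0,1]^3
phi = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
pts = np.stack([0.5 + R0 * np.cos(phi), 0.5 + R0 * np.sin(phi),
                np.full_like(phi, 0.5)], axis=1)
dls = np.roll(pts, -1, axis=0) - pts       # segment vectors dl
mids = 0.5 * (pts + np.roll(pts, -1, axis=0))

print(biot_savart(np.array([0.5, 0.5, 0.5]), mids, dls))
print(2.0 * np.pi / R0)   # analytic on-axis value 2*pi*I/(c*R0)
```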
### The Fundamental Theorem of Vector Calculus: The Helmholtz Decomposition Consider the fundamental theorem of vector calculus (the Helmholtz Decomposition, HD) for a vector field \(\mathbf{F}\left(\mathbf{x}\right)\) in \(\mathcal{V}\)(Morse & Feshbach, 1953; Gui & Dou, 2007; Kustepeli, 2016) \[\mathbf{\mathcal{F}}_{\mathrm{HD}}\left(\mathbf{x}\right)=\mathbf{\nabla}\times\mathbf{A} \left(\mathbf{x}\right)-\mathbf{\nabla}\Psi\left(\mathbf{x}\right),\] (5a) where \[\mathbf{A}\left(\mathbf{x}\right)= \frac{1}{4\,\pi}\,\left[\int\limits_{\mathcal{V}}\!\!d^{3}x^{\prime }\frac{\mathbf{\nabla}^{\prime}\times\mathbf{F}\left(\mathbf{x}^{\prime}\right)}{\left| \mathbf{x}-\mathbf{x}^{\prime}\right|}+\oint\limits_{\partial\mathcal{V}}dS^{\prime} \,\frac{\hat{\mathbf{n}}^{\prime}\times\mathbf{F}\left(\mathbf{x}^{\prime}\right)}{\left| \mathbf{x}-\mathbf{x}^{\prime}\right|}\right], \tag{5b}\] \[\Psi\left(\mathbf{x}\right)= \frac{1}{4\,\pi}\,\left[\int\limits_{\mathcal{V}}\!\!d^{3}x^{ \prime}\frac{\mathbf{\nabla}^{\prime}\cdot\mathbf{F}\left(\mathbf{x}^{\prime}\right)}{ \left|\mathbf{x}-\mathbf{x}^{\prime}\right|}+\oint\limits_{\partial\mathcal{V}}dS^{ \prime}\,\frac{\hat{\mathbf{n}}^{\prime}\cdot\mathbf{F}\left(\mathbf{x}^{\prime}\right)}{ \left|\mathbf{x}-\mathbf{x}^{\prime}\right|}\right], \tag{5c}\] where \(\hat{\mathbf{n}}\) points into \(\mathcal{V}\). There is a jump discontinuity in the value of the surface integrals as the _observation point_\(\mathbf{x}\) passes from \(\mathcal{V}\) to \(\mathcal{V}^{*}\) across a smooth surface \(\partial\mathcal{V}\) producing \[\mathbf{\mathcal{F}}_{\mathrm{HD}}\left(\mathbf{x}\right)=\left\{\begin{array}{ll} \mathbf{F}\left(\mathbf{x}\right)&\mathbf{x}\in\mathcal{V},\\ \mathbf{F}\left(\mathbf{x}\right)/2&\mathbf{x}\in\partial\mathcal{V},\\ 0&\mathbf{x}\in\mathcal{V}^{*}.\end{array}\right. \tag{5d}\] The HD is a mathematical reconstruction theorem. It is ignorant of electromagnetic theory and does not inherently preserve physical properties of the field \(\mathbf{F}\left(\mathbf{x}\right)\) across the boundary \(\partial\mathcal{V}\). For example, if \(\mathbf{F}\) is solenoidal for \(\mathbf{x}\in\mathbb{R}^{3}\), then generally Equation (5d) will not maintain this property, e.g., continuity of \(\hat{\mathbf{n}}\cdot\mathbf{F}\), across \(\partial\mathcal{V}\). Furthermore, its value on a smooth surface converges to half the value just inside the boundary, which is an inconvenient property for astrophysical problems that involve physics in notional surfaces between domains, such as a photosphere. This motivates the alternative definition (see for example Kempka et al., 1996) \[\alpha\left(\mathbf{x}\right)\mathbf{\mathfrak{F}}_{\mathrm{HD}}\left(\mathbf{x}\right)=\mathbf{ \nabla}\times\mathbf{A}\left(\mathbf{x}\right)-\mathbf{\nabla}\Psi\left(\mathbf{x}\right), \tag{6a}\] \[\alpha\left(\mathbf{x}\right)=\frac{\chi\left(\mathbf{x}\right)}{4\,\pi}=\left\{\begin{array}[ ]{ll}1&\mathbf{x}\in\mathcal{V}\\ 1/2\,\,\,\text{smooth surfaces}\\ 1/4\,\,\,\text{edges of}\,\,\mathcal{V}\\ 1/8\,\,\,\text{vertices of}\,\,\mathcal{V}\\ 0&\mathbf{x}\in\mathcal{V}^{*}\end{array}\right., \tag{6b}\] where \(\chi\left(\mathbf{x}\right)\) is the local internal solid angle of the _principal volume_ at the observation point on \(\partial\mathcal{V}\)(Kellogg, 1929; Courant and Hilbert, 1989, 1989, 1991). 
The factor \(\alpha\left(\mathbf{x}\right)\) is a constant, and therefore continuous and differentiable, on the open sets \(\mathbf{x}\in\mathcal{V}\) and \(\mathbf{x}\in\mathcal{V}^{*}\) which do not contain \(\mathbf{x}\in\partial\mathcal{V}\). The factor \(\alpha\left(\mathbf{x}\right)\) takes on other values when \(\mathbf{x}\) lies in the boundary \(\partial\mathcal{V}\) because the principal volume of the observation point projects into both domains \(\mathbf{x}\in\mathcal{V}\) and \(\mathbf{x}\in\mathcal{V}^{*}\). On smooth boundaries \(\partial\mathcal{V}\) with well-defined tangent surfaces, \(\alpha=1/2\), i.e., half the principal volume lies in \(\mathcal{V}\) and half in \(\mathcal{V}^{*}\). By analogy, for a cube, which is smooth almost everywhere, \(\alpha=1/2\) on faces, \(\alpha=1/4\) on edges, and \(\alpha=1/8\) at vertices (of a cuboid) and of course \(\alpha=1\) for \(\mathbf{x}\in\mathcal{V}\) and \(\alpha=0\) for \(\mathbf{x}\in\mathcal{V}^{*}\), consistent with the projections of the fractions of the principal volumes into \(\mathcal{V}\). The \(\alpha\left(\mathbf{x}\right)\) on the left of Equation (6a) ensures that the surface values of \(\mathfrak{F}_{\text{HD}}\left(\mathbf{x}\right)\) are continuous from within the volume \(\mathcal{V}\) as defined by the one-sided limiting process \[\lim_{\mathbf{x}\in\mathcal{V}\rightarrow\mathbf{x}\in\partial\mathcal{V}}\mathfrak{F}_{\text{HD}}\left(\mathbf{x}\right)=\mathbf{F}\left(\mathbf{x}\right). \tag{6c}\] Consequently \[\mathfrak{F}_{\text{HD}}\left(\mathbf{x}\right)=\left\{\begin{array}{ll}\mathbf{F}\left(\mathbf{x}\right)&\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V},\\ \text{Arbitrary}&\mathbf{x}\in\mathcal{V}^{*}.\end{array}\right. \tag{6d}\] \(\mathfrak{F}_{\text{HD}}\left(\mathbf{x}\right)\) is arbitrary in \(\mathcal{V}^{*}\) because \(\alpha\left(\mathbf{x}\right)=0\) on the left-hand side of Equation (6a) for \(\mathbf{x}\in\mathcal{V}^{*}\). Thus, \(\mathfrak{F}_{\text{HD}}\left(\mathbf{x}\right)\) can formally be defined in \(\mathcal{V}^{*}\) to properly preserve physical properties of \(\mathbf{F}\left(\mathbf{x}\right)\) across \(\partial\mathcal{V}\). For a solenoidal field, the divergence term in (5c) may be ignored and the expression \(\mathfrak{F}_{\text{HD}}\left(\mathbf{x}\right)\) for the magnetic field becomes \[\alpha\left(\mathbf{x}\right)\,\mathbf{B}_{\text{HD}}\left(t,\mathbf{x}\right)=\frac{1}{4\,\pi}\,\mathbf{\nabla}\mathbf{\times}\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\frac{\mathbf{\nabla}^{\prime}\mathbf{\times}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}+\frac{1}{4\,\pi}\,\left[\mathbf{\nabla}\mathbf{\times}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\mathbf{\times}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}-\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\cdot\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}\right]\qquad\mathbf{x}\in\mathbb{R}^{3}. \tag{7}\] As mentioned above, this does not constrain \(\mathbf{B}_{\text{HD}}\) in the external universe where \(\alpha\left(\mathbf{x}\right)=0\).
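The reconstruction and extinction encoded in Equations (5d) and (7) can be demonstrated numerically for a uniform solenoidal field, for which the volume curl term vanishes. In the sketch below (ours; the gradients have been moved inside the surface integrals so that, with \(\mathbf{R}=\mathbf{x}-\mathbf{x}^{\prime}\) and inward \(\hat{\mathbf{n}}\), the surface contribution is \(\frac{1}{4\pi}\oint dS^{\prime}\,[(\hat{\mathbf{n}}^{\prime}\mathbf{\times}\mathbf{F})\mathbf{\times}\mathbf{R}/R^{3}+(\hat{\mathbf{n}}^{\prime}\cdot\mathbf{F})\,\mathbf{R}/R^{3}]\)), the surface convolution returns \(\mathbf{F}\) at an interior point and extinguishes it at an exterior point:

```python
# Numerical demonstration (illustrative only) of the jump in Equation (5d)
# for F = z_hat on V = [0,1]^3: the surface convolution alone returns F
# inside V and 0 outside V (the volume curl term vanishes for uniform F).
import numpy as np

def cube_faces(n):
    """Midpoints, inward unit normals and area element of the faces of V."""
    s = (np.arange(n) + 0.5) / n
    u, v = [a.ravel() for a in np.meshgrid(s, s, indexing="ij")]
    pts, nrm = [], []
    for axis in range(3):
        for side, sign in ((0.0, 1.0), (1.0, -1.0)):   # inward normals
            p = np.zeros((u.size, 3))
            p[:, axis] = side
            p[:, (axis + 1) % 3] = u
            p[:, (axis + 2) % 3] = v
            nv = np.zeros(3)
            nv[axis] = sign
            pts.append(p)
            nrm.append(np.tile(nv, (u.size, 1)))
    return np.vstack(pts), np.vstack(nrm), 1.0 / n**2

def surface_term(x, pts, nrm, dS, F):
    R = x - pts
    r3 = np.linalg.norm(R, axis=1, keepdims=True) ** 3
    t = np.cross(np.cross(nrm, F), R) + np.sum(nrm * F, axis=1,
                                               keepdims=True) * R
    return np.sum(t / r3, axis=0) * dS / (4.0 * np.pi)

pts, nrm, dS = cube_faces(80)
F = np.tile([0.0, 0.0, 1.0], (pts.shape[0], 1))   # uniform field samples

print(surface_term(np.array([0.3, 0.6, 0.5]), pts, nrm, dS, F))  # ~ z_hat
print(surface_term(np.array([1.8, 0.5, 0.5]), pts, nrm, dS, F))  # ~ 0
```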
Strictly speaking, if there is flux threading the boundary \(\partial\mathcal{V}\) then the magnetic field determined by Equation (7) for \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\) should be formally matched to a potential field in the external universe \(\mathbf{x}\in\mathcal{V}^{*}\) to preserve the solenoidal property of \(\mathbf{B}\) across \(\partial\mathcal{V}\), i.e., as discussed in relation to Equation (6d) above for \(\mathbf{x}\in\mathcal{V}^{*}\). However, practically speaking, this matching procedure is usually unnecessary as we are often interested in reconstructing (i) \(\mathbf{B}\) for \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\) or determining (ii) \(\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) for \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\) or (iii) \(\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) for \(\mathbf{x}\in\mathbb{R}^{3}\) as discussed below. ### Linking Magnetic Fields to their Current Sources If the _net_ displacement current is ignorable, then Ampere's law \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}=\frac{4\,\pi}{c}\,\mathbf{J}, \tag{8}\] may be substituted into the volume integral to produce \[\alpha\left(\mathbf{x}\right)\,\mathbf{B}_{\text{HD}}\left(t,\mathbf{x}\right)=\frac{1}{c} \,\mathbf{\nabla}\mathbf{\times}\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\mathbf{J} \left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}+\frac{1}{4\,\pi}\, \left[\mathbf{\nabla}\mathbf{\times}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\, \frac{\hat{\mathbf{n}}^{\prime}\mathbf{\times}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{| \mathbf{x}-\mathbf{x}^{\prime}|}-\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{ \prime}\,\frac{\hat{\mathbf{n}}^{\prime}\cdot\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{| \mathbf{x}-\mathbf{x}^{\prime}|}\right]\qquad\mathbf{x}\in\mathbb{R}^{3}. \tag{9}\] Equation (4a) unambiguously associates the Biot-Savart integrals over current systems \(\mathbf{J}\) and \(\mathbf{J}^{*}\) to their corresponding magnetic field components \(\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\), establishing cause and effect. This pre-Maxwell equation also implies that the magnetic field \(\mathbf{B}\left(t,\mathbf{x}\right)\) at any location contains entangled magnetic contributions from both internal \(\mathbf{J}\) and external \(\mathbf{J}^{*}\) current systems. Thus, the surface integrals in Equation (9) _implicitly_ also contain entangled magnetic contributions from both internal \(\mathbf{J}\) and external \(\mathbf{J}^{*}\) current systems. As shown below, these contributions separate cleanly when \(\mathbf{x}\in\mathcal{V}\) or \(\mathbf{x}\in\mathcal{V}^{*}\) but are entangled when the observation point is in the boundary \(\mathbf{x}\in\partial\mathcal{V}\). Since the factor \(\alpha\left(\mathbf{x}\right)\) is chosen to enforce continuity of the HD from \(\mathcal{V}\) to \(\partial\mathcal{V}\), as in Equation (6c), the discussion of (9) is divided logically into two domains \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\) and \(\mathbf{x}\in\mathcal{V}^{*}\). 
For \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\), Equation (9) becomes \[\mathbf{B}\left(t,\mathbf{x}\right)=\frac{1}{\alpha\left(\mathbf{x}\right)\,c}\,\mathbf{\nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}+\frac{1}{4\,\pi\,\alpha\left(\mathbf{x}\right)}\,\left[\mathbf{\nabla}\times\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\times}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}-\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\cdot}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}\right]\qquad\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}, \tag{10}\] which addresses the reconstruction in item (i) above. Equations (4a) and (10) are equivalent in the intersection of their domain of validity \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\). This equivalence will be used to establish the formal correspondence between \(\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) in Equations (4b)-(4c) and the HD in Equation (10). To establish the correspondence between internal and external sources in Equation (4a) and terms in (10), the Biot-Savart magnetic field produced by _internal sources_ in the volume \(\mathcal{V}\) from Equation (4b) is added to and subtracted from Equation (10) to produce for \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\) \[\mathbf{B}\left(t,\mathbf{x}\right)= \underbrace{\frac{1}{c}\,\mathbf{\nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}}_{\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)} \tag{11}\] \[+\underbrace{\frac{1-\alpha\left(\mathbf{x}\right)}{\alpha\left(\mathbf{x}\right)\,c}\,\mathbf{\nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}+\frac{1}{4\,\pi\,\alpha\left(\mathbf{x}\right)}\,\left[\mathbf{\nabla}\times\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\times}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}-\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\cdot}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}\right]}_{\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)},\] where the terms are now grouped according to their physical interpretation. This resolves items (ii) and (iii) for \(\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}\). However, as discussed below, there are more efficient computational expressions for \(\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) when the bounding surface is excluded, i.e., \(\mathbf{x}\in\mathcal{V}\) or \(\mathbf{x}\in\mathcal{V}^{*}\).
Note that the formal appearance of the integrand due to internal sources proportional to \(\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) under "External Sources" in Equation (11) is a consequence of the entanglement of internal and external sources of \(\mathbf{B}\) in evaluation of the surface integrals _when the observation point is in the boundary \(\mathbf{x}\in\partial\mathcal{V}\)_. The surface integrals depend on the total magnetic field \(\mathbf{B}\left(t,\mathbf{x}\right)\) which implicitly contains entangled magnetic contributions from internal and external current sources. If we exclude the observation points in the surface, then \(\alpha\left(\mathbf{x}\right)=1\) and, for observation points in the volume of interest, Equation (11) reduces to the intuitive form \[\mathbf{B}\left(t,\mathbf{x}\right)=\underbrace{\frac{1}{c}\,\mathbf{\nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}}_{\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right)}+\underbrace{\frac{1}{4\,\pi}\,\left[\mathbf{\nabla}\times\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\times}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}-\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\cdot}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}\right]}_{\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)}\qquad\mathbf{x}\in\mathcal{V}. \tag{12}\] The surface integrals now provide an efficient expression for the magnetic field in \(\mathcal{V}\) produced by external sources \[\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)= \mathbf{B}\left(t,\mathbf{x}\right)-\mathbf{B}_{\text{BS}}\left(\mathbf{J};t,\mathbf{x}\right), \tag{13a}\] \[= \frac{1}{4\,\pi}\,\left[\mathbf{\nabla}\times\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\times}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}-\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\,\mathbf{\cdot}\,\mathbf{B}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}\right]=\overbrace{\frac{1}{c}\,\mathbf{\nabla}\times\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\frac{\mathbf{J}^{*}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}}^{\text{External Sources}}\qquad\mathbf{x}\in\mathcal{V}. \tag{13b}\] This establishes \(\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)\) for \(\boldsymbol{x}\in\mathcal{V}\) by surface convolution alone. This expression may be subtracted from the total field \(\boldsymbol{B}\) to provide an expression for the internal sources by surface convolution that is equivalent to the Biot-Savart law for internal sources \[\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J};t,\boldsymbol{x}\right)=\boldsymbol{B}\left(t,\boldsymbol{x}\right)-\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)=\overbrace{\frac{1}{c}\,\boldsymbol{\nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\boldsymbol{J}\left(t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}}^{\text{Internal Sources}}\qquad\boldsymbol{x}\in\mathcal{V}.
\tag{13c}\] Analogously, if we consider observation points in the external universe then \(\alpha\left(\boldsymbol{x}\right)=0\) and the surface terms _extinguish_ the internal terms as Equation (9) becomes \[0=\underbrace{\overbrace{\frac{1}{c}\boldsymbol{\nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\boldsymbol{J}\left(t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}}^{\text{Internal Sources}}}_{\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J};t,\boldsymbol{x}\right)}+\underbrace{\frac{1}{4\,\pi}\,\left[\boldsymbol{\nabla}\times\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\times\boldsymbol{B}\left(t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}-\boldsymbol{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\cdot\boldsymbol{B}\left(t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\right]}_{-\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J};t,\boldsymbol{x}\right)}\qquad\boldsymbol{x}\in\mathcal{V}^{*},\] (14a) where \[\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J};t,\boldsymbol{x}\right)=-\frac{1}{4\,\pi}\,\left[\boldsymbol{\nabla}\times\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\times\boldsymbol{B}\left(t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}-\boldsymbol{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\cdot\boldsymbol{B}\left(t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\right]=\overbrace{\frac{1}{c}\,\boldsymbol{\nabla}\times\int\limits_{\mathcal{V}}d^{3}x^{\prime}\frac{\boldsymbol{J}\left(t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}}^{\text{Internal Sources}}\qquad\boldsymbol{x}\in\mathcal{V}^{*}. \tag{14b}\] Equations (13b) and (14b) manifestly show that the surface integrals contain contributions to the magnetic field from both internal and external currents and that these contributions separate out cleanly for observation points \(\boldsymbol{x}\in\mathcal{V}\) or \(\boldsymbol{x}\in\mathcal{V}^{*}\) but are entangled for \(\boldsymbol{x}\in\partial\mathcal{V}\). Equations (13b)-(13c) and (14b) establish \(\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J};t,\boldsymbol{x}\right)\) for \(\boldsymbol{x}\in\mathcal{V}\cup\mathcal{V}^{*}\) in item (iii) by surface convolution alone. Note that even if \(\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J};t,\boldsymbol{x}\right)\) or \(\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)\) are required on \(\partial\mathcal{V}\), this computation necessitates the evaluation of the Biot-Savart convolution only for surface points \(\boldsymbol{x}\in\partial\mathcal{V}\). Furthermore, there are other more direct techniques for separating magnetic fields into \(\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J};t,\boldsymbol{x}\right)\) or \(\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)\) on a closed smooth surface (see Schuck et al., 2022, for a technique applicable to a spherical boundary). Recently, Leake et al. (In Prep., 2023) have developed a tool for applying the HD in Equations (6a)-(6b) and (7) for astrophysical MHD simulations. Having established this framework for the attribution of magnetic fields to their origin in internal and external current sources, we now turn our attention to the implications of this causality for magnetic helicity and magnetic energy.
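To make the surface-convolution expressions concrete, the following minimal numerical sketch evaluates Equation (13b) for a toy configuration in which all currents are external: \(\mathcal{V}\) is taken to be the unit ball, and the only source is an infinite line current parallel to \(\hat{\mathbf{z}}\) piercing the plane \(z=0\) at \((3,0)\), entirely in \(\mathcal{V}^{*}\). The geometry, discretization, current amplitude, and all function names are illustrative assumptions, not part of the formal development; the normal \(\hat{\mathbf{n}}\) is taken to point into \(\mathcal{V}\), consistent with the convention stated for Equation (45) below.

```python
import numpy as np

MU = 2.0  # stands in for 2 I / c of the external line current (Gaussian units)

def b_line(x, wire_xy=(3.0, 0.0)):
    """Total field of the external line current: B = (2I/c) phi_hat / rho."""
    rho_x = x[..., 0] - wire_xy[0]
    rho_y = x[..., 1] - wire_xy[1]
    rho2 = rho_x**2 + rho_y**2
    b = np.zeros(x.shape)
    b[..., 0] = -MU * rho_y / rho2
    b[..., 1] = MU * rho_x / rho2
    return b

# Midpoint-rule quadrature nodes on the boundary dV (the unit sphere)
nth, nph = 200, 400
th = (np.arange(nth) + 0.5) * np.pi / nth
ph = (np.arange(nph) + 0.5) * 2.0 * np.pi / nph
TH, PH = np.meshgrid(th, ph, indexing="ij")
xs = np.stack([np.sin(TH) * np.cos(PH),
               np.sin(TH) * np.sin(PH), np.cos(TH)], axis=-1)
nrm = -xs  # n-hat points INTO V, matching the convention of Eq. (45)
dS = np.sin(TH) * (np.pi / nth) * (2.0 * np.pi / nph)
Bs = b_line(xs)  # boundary data: the total field B sampled on dV

def surface_integrals(x):
    """The two surface convolutions of Eq. (13b) at an interior point x."""
    r = np.linalg.norm(x - xs, axis=-1)
    C = np.sum(np.cross(nrm, Bs) / r[..., None] * dS[..., None], axis=(0, 1))
    D = np.sum(np.sum(nrm * Bs, axis=-1) / r * dS)
    return C, D

def b_external(x, h=1e-3):
    """B_BS(J*; t, x) = (1/4 pi)[curl C - grad D] via central differences."""
    e, Cp, Cm, Dp, Dm = np.eye(3), [], [], [], []
    for i in range(3):
        c, d = surface_integrals(x + h * e[i]); Cp.append(c); Dp.append(d)
        c, d = surface_integrals(x - h * e[i]); Cm.append(c); Dm.append(d)
    dC = [(Cp[i] - Cm[i]) / (2.0 * h) for i in range(3)]  # dC[i][j] = dC_j/dx_i
    curl = np.array([dC[1][2] - dC[2][1], dC[2][0] - dC[0][2],
                     dC[0][1] - dC[1][0]])
    grad = np.array([(Dp[i] - Dm[i]) / (2.0 * h) for i in range(3)])
    return (curl - grad) / (4.0 * np.pi)

x0 = np.array([0.2, -0.1, 0.3])
print(b_external(x0))  # approximates b_line(x0)
print(b_line(x0))
```

In this configuration there are no internal currents, so Equation (13a) gives \(\mathbf{B}_{\text{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=\mathbf{B}\left(t,\mathbf{x}\right)\) in \(\mathcal{V}\), which the surface convolution reproduces without any knowledge of \(\mathbf{J}^{*}\) itself.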
## 3 Helicity for magnetically 'closed' systems

A magnetically 'closed' system has no magnetic flux threading the boundary \(\partial\mathcal{V}\) anywhere, i.e., \(\left.\boldsymbol{B}\cdot\hat{\boldsymbol{n}}\right|_{\partial\mathcal{V}}=0\). As demonstrated below, even a magnetically closed system with \(\hat{\boldsymbol{n}}\cdot\boldsymbol{B}|_{\partial\mathcal{V}}=0\) is not completely electrodynamically isolated from the external universe \(\mathcal{V}^{*}\). For an ideal plasma, the evolution of the vector potential in the incomplete Gibbs gauge2 is determined by Footnote 2: This gauge condition is referenced as the "Gibbs," "Weyl," "Hamiltonian," and "temporal" gauge in the literature (Gibbs, 1896; Przeszowski et al., 1996; Jackson, 2002). \[\frac{\partial\boldsymbol{A}}{\partial t}=\boldsymbol{v}\boldsymbol{\times}\boldsymbol{B}, \tag{15}\] where \(\boldsymbol{v}\) is the plasma velocity. Woltjer (1958) showed that the magnetic helicity is invariant \[\frac{dH}{dt}=\frac{\partial}{\partial t}\int\limits_{\mathcal{V}}d^{3}x\,\boldsymbol{A}\cdot\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{A}=\oint\limits_{\partial\mathcal{V}}dS\,\hat{\boldsymbol{n}}\cdot\boldsymbol{A}\boldsymbol{\times}\,\frac{\partial\boldsymbol{A}}{\partial t}=0, \tag{16}\] in a closed system, stating: "The surface integral vanishes because we consider a closed system. For then the motions inside the system may not affect the vector potential outside, and, as the vector potential is continuous, even when surface currents are present, \(\partial\mathbf{A}/\partial t\) must vanish at the surface of the system." The physical implications of these boundary conditions are discussed in more detail in Appendix A. However, we touch on some obvious points here. First, \(\partial\mathbf{A}/\partial t|_{\partial\mathcal{V}}=0\) implies that \(\hat{\mathbf{n}}\cdot\partial\mathbf{B}/\partial t|_{\partial\mathcal{V}}=0\) but does not imply either \(\hat{\mathbf{n}}\cdot\mathbf{J}|_{\partial\mathcal{V}}=0\) or \(\hat{\mathbf{n}}\cdot\partial\mathbf{J}/\partial t|_{\partial\mathcal{V}}=0\)--the two domains may share time-dependent current systems that pass through \(\partial\mathcal{V}\).3 Footnote 3: Consider the Cartesian example with \(\partial\mathcal{V}\) defined as \(z=0\) with \(\hat{\mathbf{n}}=\hat{\mathbf{z}}\) \[\partial_{t}\mathbf{A}=\mathbf{\nabla}\mathbf{\times}\left(\partial_{t}\psi\,\hat{\mathbf{z}}\right)+\partial_{t}A_{z}\,\hat{\mathbf{z}}+\mathbf{\nabla}_{\perp}\partial_{t}\phi\qquad\text{with }\partial_{t}\mathbf{A}|_{\partial\mathcal{V}}=0\iff\partial_{t}\psi|_{\partial\mathcal{V}}=\text{constant, }\partial_{t}A_{z}|_{\partial\mathcal{V}}=0\text{ and }\partial_{t}\phi|_{\partial\mathcal{V}}=\text{constant}\] and \(\mathbf{\nabla}_{\perp}\equiv\hat{\mathbf{x}}\,\partial_{x}+\hat{\mathbf{y}}\,\partial_{y}\) and where we have used the short-hand \(\partial_{x}=\partial/\partial x\) in this footnote. Then \[\hat{\mathbf{z}}\cdot\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\partial_{t}\mathbf{A}|_{\partial\mathcal{V}}=\nabla_{\perp}^{2}\,\partial_{z}\partial_{t}\phi|_{\partial\mathcal{V}}\neq 0,\] which is not required to be zero. Note that strictly speaking the vector potential must also satisfy \(\mathbf{B}\cdot\partial_{t}\mathbf{A}=0\), i.e., Equation (15). For example, consider the case where there is a cylindrically symmetric vertical current \(I\,\hat{\mathbf{z}}\) passing through the domain, generating an azimuthal \(B_{\theta}\) in the domain, with no other magnetic field.
This field has no linkages, and so no helicity. Even if the current amplitude is changed, the system remains in a zero helicity state. Second, hidden in Equation (16) is the assumption of gauge invariance which requires \(\hat{\mathbf{n}}\cdot\mathbf{B}|_{\partial\mathcal{V}}=0\), e.g., \(\hat{\mathbf{n}}\times\mathbf{A}|_{\partial\mathcal{V}}=0\) to within a gauge transformation (see Appendix A). This is a stronger assumption than \(\hat{\mathbf{n}}\cdot\partial\mathbf{B}/\partial t|_{\partial\mathcal{V}}=0\). Third, within the volume of interest \(\mathcal{V}\), the magnetic field produced by "surface currents" is electrodynamically indistinguishable from the magnetic field produced by external currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). This last point suggests that, even with the mathematical boundary conditions imposed by Woltjer (1958), under special, albeit contrived, conditions the motions in \(\mathcal{V}^{*}\) can affect the magnetic field and plasma in \(\mathcal{V}\). For example, consider the pedagogical system \(\mathcal{V}\) and \(\mathcal{V}^{*}\) bounded by \(\partial\mathcal{V}\) at \(z=0\) shown in Figure 2. The vector potential of this system is given by \[A_{y}\left(t,x,z\right)=-\frac{I\left(t\right)}{c}\,\log\left[\frac{x^{2}+(z-a)^{2}}{x^{2}+(z+a)^{2}}\right], \tag{17}\] where \(I(t)\) is the current in the two thin, oppositely directed current channels at \(x/a=0\) and \(z/a=\pm 1\). The normal component, \(B_{z}\), which is a superposition of these two current sources, has been contrived to precisely cancel at \(z=0\), and \(A_{y}(t,x,z=0)=0\) and thus \(\mathcal{V}\) is a magnetically closed system by the mathematical boundary condition \(\partial\mathbf{A}/\partial t|_{\partial\mathcal{V}}=0\) in Woltjer (1958).
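The contrivance in Equation (17) is easy to verify symbolically. The short sketch below, an illustrative aid using sympy rather than part of the derivation, differentiates Equation (17) to confirm that the normal component \(B_{z}=\partial A_{y}/\partial x\) vanishes identically on \(z=0\) while the tangential component \(B_{x}=-\partial A_{y}/\partial z\) does not.

```python
import sympy as sp

x, z, a, I, c = sp.symbols("x z a I c", positive=True)
A_y = -(I / c) * sp.log((x**2 + (z - a)**2) / (x**2 + (z + a)**2))  # Eq. (17)

# For A = A_y(x, z) y-hat:  B_x = -dA_y/dz  and  B_z = +dA_y/dx
B_x = -sp.diff(A_y, z)
B_z = sp.diff(A_y, x)

print(sp.simplify(B_z.subs(z, 0)))  # -> 0: no flux threads the boundary z = 0
print(sp.simplify(B_x.subs(z, 0)))  # -> -4*I*a/(c*(a**2 + x**2)): tangential field survives
```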
While there is no flux threading the boundary \(\partial\mathcal{V}\) at \(z=0\), Figure 2(b) shows that flux produced by the physical current source \(I\left(t\right)\,\hat{\mathbf{y}}\) at \(x/a=0\) and \(z/a=1\) threads the boundary and permeates \(\mathcal{V}^{*}\). Similarly, Figure 2(c) shows that flux produced by the physical current source \(-I\left(t\right)\,\hat{\mathbf{y}}\) at \(x/a=0\) and \(z/a=-1\) threads the boundary and permeates \(\mathcal{V}\). The total magnetic field \(\mathbf{B}\left(t,\mathbf{x}\right)\) for \(\mathcal{V}\) shown in Figure 2(a) entangles the magnetic field from these two physical current sources, and thus \(\mathcal{V}\) and \(\mathcal{V}^{*}\) are "communicating" in collusion to satisfy \(B_{z}=0\) at \(z=0\). This system results in the apparent non sequitur that there can be magnetic field in \(\mathcal{V}\) produced by currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\) and magnetic field in \(\mathcal{V}^{*}\) produced by currents \(\mathbf{J}\) in \(\mathcal{V}\) when no magnetic flux threads \(\partial\mathcal{V}\), the boundary between \(\mathcal{V}\) and \(\mathcal{V}^{*}\). Of course this highly idealized system is not in force balance and is likely to relax violently to a lower energy state. Nonetheless, this example serves to demonstrate that a magnetically 'closed' system is not necessarily electrodynamically isolated and may be implicitly coupled to the external universe. _A magnetically closed system may contain magnetic field produced by the external universe._

Figure 2: The entangled origins of (a) the intrinsically solenoidal magnetic field \(\mathbf{B}_{\text{cl}}\) of a magnetically closed system for \(z>0\) with \(\hat{\mathbf{n}}\cdot\mathbf{B}_{\text{cl}}|_{\partial\mathcal{V}}=0\) that is produced by two physical current systems: (b) one internal, \(I\left(t\right)\,\hat{\mathbf{y}}\) at \(x/a=0\) and \(z/a=1\), and (c) one external, \(-I\left(t\right)\,\hat{\mathbf{y}}\) at \(x/a=0\) and \(z/a=-1\). A red dot indicates a line current directed away from the observer and a blue dot indicates a line current towards the observer. The black lines are contours of the vector potential that trace magnetic field lines. The color scale along \(z=0\) corresponds to the vertical magnetic field component with red/blue corresponding to up/down.

Furthermore, the collusion between \(\mathcal{V}\) and \(\mathcal{V}^{*}\) described here is in common use in solar physics. It is analogous to the collusion between \(\mathcal{V}\) and \(\mathcal{V}^{*}\) required to impose flux preserving boundary conditions (\(\hat{\boldsymbol{n}}\cdot\partial\boldsymbol{B}/\partial t|_{\partial\mathcal{V}}=0\)) on the photosphere in photosphere-to-corona MHD simulations of the solar atmosphere (e.g., Kusano et al., 1995; Knizhnik et al., 2017; Lian et al., 2020; Bian & Jiang, 2023). We remark that the remainder of this paper is devoted to understanding the situation where two systems \(\mathcal{V}\) and \(\mathcal{V}^{*}\) are magnetically open and manifestly electrodynamically coupled. Woltjer (1958) further showed that a force-free field \(\boldsymbol{J}=\lambda\,\boldsymbol{B}\) with constant \(\lambda\) represents the lowest state of magnetic energy that a magnetically closed system containing an ideal plasma can achieve while constrained by a prescribed helicity \(H\).
However, there was no obvious pathway for an ideal plasma to relax to the Woltjer state, because the equations of motion for an ideal MHD plasma exhibit an infinite number of symmetries corresponding to dynamical invariants, by Noether's (1918) first theorem (Frenkel et al., 1982).4 Thus, while the magnetic helicity, \(H\), is preserved in an ideal plasma,5 it is not particularly unique or useful for describing ideal plasma dynamics--it is invariant, but it is just one of the infinity of invariants. The situation is different for a non-ideal plasma because Taylor (1974) conjectured that the magnetic helicity \(H\) remains invariant even in the presence of weak dissipation which destroys the conservation of the other quantities. The Taylor conjecture provided a pathway for a closed system containing a near-ideal plasma to relax to the Woltjer linear force-free state while constrained by a prescribed helicity \(H\). Helicity is a so-called "robust invariant," meaning that it is approximately preserved during a rapid plasma relaxation to a lower energy state even if that involves dissipation, reconnection, and magnetic reorganization. Thus, while challenging to quantify, the magnetic helicity is an important measure of magnetic complexity in a near-ideal plasma. Footnote 4: This is sometimes called Noether's (1918) second theorem, but see footnote 1 and Brading & Brown (2002) and Brading (2002). Footnote 5: See also Moffatt (1969) and pp. 44-45 in Moffatt (1978) for a different approach to helicity conservation.

## 4 Relative Helicity for Magnetically 'Open' Systems

The concept of helicity was then extended, by Berger & Field (1984) and Finn & Antonsen (1985), from magnetically 'closed' systems where magnetic "communication" is limited to current systems \(\boldsymbol{J}\) in \(\mathcal{V}\) and \(\boldsymbol{J}^{*}\) in \(\mathcal{V}^{*}\) that act in collusion to preserve \(\hat{\boldsymbol{n}}\cdot\boldsymbol{B}|_{\partial\mathcal{V}}=0\) to magnetically 'open' systems where magnetic communication is manifest because magnetic flux threads the boundary \(\partial\mathcal{V}\) and \(\boldsymbol{J}\) in \(\mathcal{V}\) and \(\boldsymbol{J}^{*}\) in \(\mathcal{V}^{*}\) can each independently contribute to \(\hat{\boldsymbol{n}}\cdot\boldsymbol{B}|_{\partial\mathcal{V}}\). In this context, the helicity is measured relative to a _reference magnetic field_ \(\boldsymbol{B}_{\mathrm{R}}\) which threads the boundary \(\partial\mathcal{V}\) of \(\mathcal{V}\) in the same way as the magnetic field \(\boldsymbol{B}\). Regardless, for either open or closed systems, the linkages produced by currents in the external universe \(\mathcal{V}^{*}\) can become entangled with the linkages produced by currents in the internal volume \(\mathcal{V}\). The goal of this paper is to extend the work of Woltjer (1958), Berger & Field (1984), Finn & Antonsen (1985), and Berger (1999, 2003) and provide a clear distinction between the origin of the linkages in \(\mathcal{V}\).
The relative magnetic helicity measure for systems supporting magnetic fields that thread the boundary \(\partial\mathcal{V}\) proposed by Berger & Field (1984) and Finn & Antonsen (1985) is \[\mathcal{H}=\int\limits_{\mathcal{V}}d^{3}x\left(\boldsymbol{A}+\boldsymbol{A}_{\mathrm{R}}\right)\cdot\left(\boldsymbol{B}-\boldsymbol{B}_{\mathrm{R}}\right), \tag{18}\] where the reference magnetic field \(\boldsymbol{B}_{\mathrm{R}}\) that threads the boundary \(\partial\mathcal{V}\) is defined as \[\boldsymbol{B}_{\mathrm{R}}=\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{A}_{\mathrm{R}}\qquad\text{where}\qquad\left(\boldsymbol{B}-\boldsymbol{B}_{\mathrm{R}}\right)\cdot\hat{\boldsymbol{n}}|_{\partial\mathcal{V}}=0, \tag{19}\] and the magnetic field that closes in \(\mathcal{V}\) is then defined as \[\boldsymbol{A}_{\mathrm{cl}}= \boldsymbol{A}-\boldsymbol{A}_{\mathrm{R}}, \tag{20a}\] \[\boldsymbol{B}_{\mathrm{cl}}= \boldsymbol{B}-\boldsymbol{B}_{\mathrm{R}}=\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{A}_{\mathrm{cl}}. \tag{20b}\] The reference field represents the 'open' magnetic field that threads \(\partial\mathcal{V}\) because \(\hat{\boldsymbol{n}}\cdot\boldsymbol{B}_{\mathrm{R}}\) is nonzero on \(\partial\mathcal{V}\), i.e., \(\boldsymbol{B}_{\mathrm{R}}\) has components that enter and leave \(\mathcal{V}\). The magnetic field \(\boldsymbol{B}_{\mathrm{cl}}\) is solenoidal \(\boldsymbol{\nabla}\cdot\boldsymbol{B}_{\mathrm{cl}}=0\), closes on itself in \(\mathcal{V}\), and thus exhibits no normal component on \(\partial\mathcal{V}\)--it is an _intrinsically solenoidal_ vector field in \(\mathcal{V}\) (Kemmer, 1977; Schuck & Antiochos, 2019). Note that expression (18) is gauge invariant because any gauge transformation of \(\boldsymbol{A}\) or \(\boldsymbol{A}_{\mathrm{R}}\) will involve the integral of the dot product between the gradient of a scalar \(\boldsymbol{\nabla}\Lambda\) and an intrinsically solenoidal vector \(\mathbf{B}-\mathbf{B}_{\rm R}\) that is _tangent_ to \(\partial\mathcal{V}\) and _perpendicular_ to \(\hat{\mathbf{n}}\) on \(\partial\mathcal{V}\) \[\mathcal{H}\to\mathcal{H}+\int\limits_{\mathcal{V}}d^{3}x\,\mathbf{\nabla}\Lambda\cdot(\mathbf{B}-\mathbf{B}_{\rm R})=\mathcal{H}-\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot[\Lambda\,\,(\mathbf{B}-\mathbf{B}_{\rm R})]=\mathcal{H}. \tag{21}\] All of the helicity terms developed in §5 have the analogous form and are similarly gauge invariant. The potential magnetic field \(\mathbf{P}\) is often used as a convenient reference field \(\mathbf{B}_{\rm R}\).
The potential field is harmonic and thus admits a dual representation in terms of a vector potential or in terms of the gradient of a scalar field \[\mathbf{B}_{\rm R}\equiv\mathbf{P}=\mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\rm P}=-\mathbf{\nabla}\psi\qquad\mathbf{x}\in\mathcal{V}, \tag{22a}\] which satisfies \[\mathbf{\nabla}\mathbf{\times}\mathbf{P}=\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\rm P}=-\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\psi=0\qquad\mathbf{x}\in\mathcal{V},\qquad\text{(No Currents)} \tag{22b}\] \[\mathbf{\nabla}\cdot\mathbf{P}=\mathbf{\nabla}\cdot\mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\rm P}=-\nabla^{2}\psi=0\qquad\mathbf{x}\in\mathcal{V}.\qquad\text{(No Monopoles)} \tag{22c}\] A unique solution6 for the vector potential, \(\mathbf{A}_{\rm P}\), requires an arbitrary gauge condition for which the Coulomb gauge is a convenient choice (see Theorem 3.5 and Equations (3.23)-(3.25) in Girault & Raviart 1986) Footnote 6: This uniqueness in the Coulomb gauge is only important for establishing a well-posed problem for determining \(\mathbf{A}_{\rm P}\). Once \(\mathbf{A}_{\rm P}\) is determined, it may be gauge transformed without affecting Equation (18). \[\mathbf{\nabla}\cdot\mathbf{A}_{\rm P}=0\quad\mathbf{x}\in\mathcal{V}\quad\text{and}\quad\hat{\mathbf{n}}\cdot\mathbf{A}_{\rm P}=0\quad\mathbf{x}\in\partial\mathcal{V}, \tag{22d}\] and with this gauge choice, the vector potential satisfies the vector Poisson equation \[\nabla^{2}\mathbf{A}_{\rm P}=0\qquad\mathbf{x}\in\mathcal{V}, \tag{22e}\] with boundary condition \[\hat{\mathbf{n}}\cdot\mathbf{B}=\hat{\mathbf{n}}\cdot\mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\rm P}=-\hat{\mathbf{n}}\cdot\mathbf{\nabla}\psi\qquad\mathbf{x}\in\partial\mathcal{V}. \tag{22f}\] Note that Equations (22c) and (22f) define the Neumann problem for the scalar potential \(\psi\) which is unique to within an arbitrary scalar \(\psi_{0}\) and Equations (22d)-(22f) define a unique vector potential for the same potential field \(\mathbf{P}\). The field \(\mathbf{P}\) is also the unique potential field that matches the normal component of \(\mathbf{B}\) on the boundary \(\partial\mathcal{V}\). A reference field \(\mathbf{B}_{\rm R}\) that is potential is 'convenient' because no currents are supported by the potential field \(\mathbf{P}\) in the volume of interest and thus the helicity of \(\mathbf{B}_{\rm R}=\mathbf{P}\) in a simply connected domain may be intuitively defined as zero (Berger 1999). However, as noted in the Introduction (§1), this convenience comes at the price of possibly misrepresenting the origin of the fields. Foreshadowing the development of §5, while \(\mathbf{P}\) supports no internal currents in \(\mathcal{V}\), Equation (22b) should not be interpreted to imply that \(\mathbf{P}\) is _produced_ exclusively by external currents! For example see Figure 1. Furthermore, while \(\mathbf{B}_{\rm cl}\) supports the internal currents \(\mathbf{J}\) in \(\mathcal{V}\), it is generally produced by both these internal currents and by external currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). For example see Figure 2. This will be expanded on further in §5.2. In particular, non-potential magnetic fields in \(\mathcal{V}\) can be produced by currents \(\mathbf{J}^{*}\) in the external universe \(\mathcal{V}^{*}\) when these currents thread the boundary \(\partial\mathcal{V}\) to enter and leave \(\mathcal{V}\).
We emphasize that \(\mathbf{B}=\mathbf{B}_{\rm cl}+\mathbf{P}\) is a _mathematical_ decomposition determined by the geometry of the bounding surface \(\partial\mathcal{V}\) that has no unique relationship with the origin of the magnetic field in currents. Berger & Field (1984) showed that the evolution of the relative magnetic helicity for an ideal plasma depends only on boundary terms that may be computed from observables7 Footnote 7: Note that \(\mathbf{A}_{\rm P}\) _must_ be in the Coulomb gauge in Equation (23) as the surface term is not manifestly gauge invariant (see Schuck & Antiochos 2019, for an alternative formulation). \[\frac{d\mathcal{H}}{dt}=\frac{\partial}{\partial t}\int\limits_{\mathcal{V}}d^{3}x\,(\mathbf{A}+\mathbf{A}_{\rm P})\cdot(\mathbf{B}-\mathbf{P})=-2\,c\,\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\mathbf{A}_{\rm PC}\mathbf{\times}\mathbf{E}, \tag{23}\] where \(\mathbf{\nabla}\cdot\mathbf{A}_{\rm PC}=0\) in \(\mathcal{V}\) is explicitly in the Coulomb gauge and the electric field \(\mathbf{E}=-\mathbf{v}\mathbf{\times}\mathbf{B}/c\) is determined from the ideal Ohm's law. Berger (1984) further argued that this relative helicity \(\mathcal{H}\) is a robust invariant for finite volumes such as those enclosing flaring magnetic fields in the solar corona. A linear force-free field is the absolute minimum energy state of a plasma in volume \(\mathcal{V}\) with a prescribed relative helicity \(\mathcal{H}\) and a specified or 'line-tied' magnetic boundary \(\left.\hat{\mathbf{n}}\cdot\mathbf{B}\right|_{\partial\mathcal{V}}=g\left(\mathbf{x}\right)\) condition (Berger & Field, 1984; Berger, 1985; Jensen & Chu, 1984; Dixon et al., 1989; Laurence & Avellaneda, 1991). Berger (1999, 2003) partitioned the relative helicity in Equation (18) into two further gauge invariant topological quantities: the closed-closed helicity \(\mathcal{H}_{\mathrm{cl}}^{\mathrm{cl}}\) representing the linkages of magnetic field that closes in \(\mathcal{V}\), and the open-closed helicity \(\mathcal{H}_{\mathrm{cl}}^{\mathrm{o}}\) representing the linkages between the open magnetic field that threads the boundary \(\partial\mathcal{V}\) and the magnetic field that closes inside \(\mathcal{V}\): \[\mathcal{H}=\underbrace{\int\limits_{\mathcal{V}}d^{3}x\,\overbrace{\left(\mathbf{A}-\mathbf{A}_{\mathrm{P}}\right)}^{\mathbf{A}_{\mathrm{cl}}}\cdot\overbrace{\left(\mathbf{B}-\mathbf{P}\right)}^{\mathbf{B}_{\mathrm{cl}}}}_{\mathcal{H}_{\mathrm{cl}}^{\mathrm{cl}}}+\underbrace{2\,\int\limits_{\mathcal{V}}d^{3}x\,\mathbf{A}_{\mathrm{P}}\cdot\overbrace{\left(\mathbf{B}-\mathbf{P}\right)}^{\mathbf{B}_{\mathrm{cl}}}}_{\mathcal{H}_{\mathrm{cl}}^{\mathrm{o}}}, \tag{24}\] where the Neumann potential magnetic field \(\mathbf{P}\) has been implemented as the reference field \(\mathbf{B}_{\mathrm{R}}\). Recently Pariat et al. (2017) (see also Moraitis et al., 2014; Linan et al., 2018; Zuccarello et al., 2018) have suggested that dynamic changes in the closed-closed helicity \(\mathcal{H}_{\mathrm{cl}}^{\mathrm{cl}}\) may be a useful diagnostic of latent solar eruptivity leading to flares and coronal mass ejections. Pariat et al. (2017) denotes the first term in (24) \(\mathcal{H}_{J}\) and designates it the "current carrying helicity" and denotes the second term \(\mathcal{H}_{\mathrm{P}J}\) and designates it the "mutual helicity."
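For readers working with gridded data, the decomposition in Equation (24) reduces to two volume sums. The sketch below is a minimal illustration assuming uniform cells of volume dV and field arrays of shape (nx, ny, nz, 3); the array names are placeholders, and \(\mathbf{A}_{\mathrm{P}}\) is assumed to be the Coulomb-gauge Neumann vector potential of Equations (22a)-(22f) so that the two sums carry the closed-closed and open-closed interpretations.

```python
import numpy as np

def berger_decomposition(A, A_P, B, P, dV):
    """Midpoint-rule evaluation of Eq. (24) with B_R = P.

    Returns (H_cl_cl, H_cl_o); their sum is the relative helicity of Eq. (18).
    All arrays have shape (nx, ny, nz, 3); dV is the uniform cell volume.
    """
    A_cl = A - A_P   # Eq. (20a)
    B_cl = B - P     # Eq. (20b): intrinsically solenoidal in V
    H_cl_cl = np.sum(A_cl * B_cl) * dV       # closed-closed linkages
    H_cl_o = 2.0 * np.sum(A_P * B_cl) * dV   # open-closed linkages
    return H_cl_cl, H_cl_o
```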
There is nothing physically special about the linkages that close on themselves in \(\mathcal{V}\) versus those that thread the boundary \(\partial\mathcal{V}\): these fields are defined relative to a surface \(\partial\mathcal{V}\) which often conveniently contains part of the photosphere in solar observational investigations where magnetic field data is regularly estimated from remote sensing observations, e.g., \(SDO\)/HMI. In other words, \(\mathbf{P}\) and \(\mathbf{B}_{\mathrm{cl}}\) are unique and topologically distinct only in the context of the field \(\mathbf{B}\) _and_ the volume \(\mathcal{V}\) or equivalently \(\partial\mathcal{V}\). A different volume \(\mathcal{V}^{\prime}\) bounded by a different surface \(\partial\mathcal{V}^{\prime}\) will lead to different potential fields \(\mathbf{P}^{\prime}\neq\mathbf{P}\) and magnetic fields \(\mathbf{B}^{\prime}_{\mathrm{cl}}\neq\mathbf{B}_{\mathrm{cl}}\) that close in \(\mathcal{V}^{\prime}\) bounded by \(\partial\mathcal{V}^{\prime}\), but the same total \(\mathbf{B}=\mathbf{P}+\mathbf{B}_{\mathrm{cl}}=\mathbf{P}^{\prime}+\mathbf{B}^{\prime}_{\mathrm{cl}}\), at the same location \(\mathbf{x}\in\mathcal{V}\cap\mathcal{V}^{\prime}\). The local force \(\mathbf{J}\mathbf{\times}\mathbf{B}\) on the plasma is ultimately produced by the total magnetic field which contains no information about the boundary \(\partial\mathcal{V}\), thus the magnetic field may be decomposed in whatever way is convenient to identify the magnetic topology and/or physical processes involved in the evolution of the plasma.

## 5 Relative Helicity with Attribution for Magnetically 'Open' Systems

Berger & Field (1984) and Finn & Antonsen (1985) established the relative helicity \(\mathcal{H}\) as a gauge invariant measure of magnetic complexity in magnetically open systems. The potential field \(\mathbf{P}\) is a convenient reference field as, _mathematically_, the origin of \(\mathbf{P}\) is in currents supported by the external universe \(\mathcal{V}^{*}\)--hence its helicity \(\int_{\mathcal{V}}d^{3}x\,\mathbf{A}_{\mathrm{P}}\cdot\mathbf{P}\) in \(\mathcal{V}\) may be defined as zero in a simply connected domain (Berger, 1999). However, as noted in Schuck et al. (2022) and §1 and §2 here, the _physical_ origin of the potential field may be in currents supported in \(\mathcal{V}\). Thus the potential field \(\mathbf{P}\) can misattribute the origin of flux threading the bounding surface to external current sources. This insight suggests that Berger's decomposition of helicity in Equation (24) may be further disentangled when the origin of the magnetic field in currents is considered. The Berger & Field (1984) and Finn & Antonsen (1985) formula (18) is convenient for describing the attribution of helicity because it is gauge agnostic--we are free to write \(\mathbf{A}\) and \(\mathbf{A}_{\mathrm{R}}\) in any gauge. Below in §5.1-§5.3 we extend Berger's decomposition of relative helicity in Equation (24) to include attribution of the fields to their current sources in \(\mathcal{V}\) and \(\mathcal{V}^{*}\). This motivates new definitions of internal, external, and internal-external relative helicity distinguished by the domain of the current system that produces the magnetic linkages. Our presentation is general in that it is easily extended mutatis mutandis to a coronal volume bounded by the photosphere and a boundary in the high corona or a box of length \(L\) on a side.
### Internal Relative Helicity in \(\mathcal{V}\) Produced by Internal Sources: \(\mathbf{J}\) in \(\mathcal{V}\)

To compute the internal relative helicity produced by internal sources we need to construct the field pairs \(\left(\mathbf{A},\mathbf{B}\right)\) and \(\left(\mathbf{A}_{\mathrm{P}},\mathbf{P}\right)\) produced by _internal_ sources \(\mathbf{J}\). For the current system \(\mathbf{J}\) in the domain of interest \(\mathcal{V}\) the vector potential and magnetic field follow directly from Equation (4b) \[\mathbf{A}\left(\mathbf{J};t,\mathbf{x}\right)\equiv \frac{1}{c}\,\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\frac{\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|} \mathbf{x}\in\mathbb{R}^{3}, \tag{25a}\] \[\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\equiv \mathbf{\nabla}\mathbf{\times}\mathbf{A}\left(\mathbf{J};t,\mathbf{x}\right) \mathbf{x}\in\mathbb{R}^{3}, \tag{25b}\] which completely describes the attribution of the vector potential and magnetic field produced by currents \(\mathbf{J}\) in \(\mathcal{V}\).8 This field integrand may be decomposed in the usual fashion into magnetic fields that close in \(\mathcal{V}\) and magnetic fields that thread the boundary \(\partial\mathcal{V}\) using the potential field methodology described above in Equations (22a)-(22f). Explicitly this is Footnote 8: This is the most intuitive form for \(\mathbf{B}\left(\mathbf{J};t,\mathbf{x}\right)\), but there may be more efficient techniques for computing it as described by Equations (13a)-(13c) in §2. \[\mathbf{P}\left(\mathbf{J};t,\mathbf{x}\right)= \mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right)\quad\text{and}\quad\mathbf{P}\left(\mathbf{J};t,\mathbf{x}\right)=-\mathbf{\nabla}\psi\left(\mathbf{J};t,\mathbf{x}\right) \mathbf{x}\in\mathcal{V}, \tag{26a}\] \[\mathbf{\nabla}\cdot\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right)= 0\quad\mathbf{x}\in\mathcal{V}\quad\text{and}\quad\hat{\mathbf{n}}\cdot\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right)=0 \mathbf{x}\in\partial\mathcal{V},\] (26b) \[\nabla^{2}\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right)= 0\quad\text{and}\quad\nabla^{2}\psi\left(\mathbf{J};t,\mathbf{x}\right)=0 \mathbf{x}\in\mathcal{V},\] (26c) \[\hat{\mathbf{n}}\cdot\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)= \hat{\mathbf{n}}\cdot\mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right)=-\hat{\mathbf{n}}\cdot\mathbf{\nabla}\psi\left(\mathbf{J};t,\mathbf{x}\right) \mathbf{x}\in\partial\mathcal{V}. \tag{26d}\] Note that Equations (26a)-(26d) differ from (22a), and (22d)-(22f) in that the former represents the potential field produced on the boundary by _physically_ internal sources and the latter represents the potential field produced on the boundary by _all physical_ sources (_internal and external_).9 This distinction is imposed by the boundary conditions (26d) and (22f), which in the former case is determined by the normal component of the Biot-Savart law integrated over just the _internal_ sources \(\mathbf{J}\) and in the latter case by the total field \(\mathbf{B}\). Footnote 9: Recall that the potential magnetic field \(\mathbf{P}\) mathematically represents all current sources as external regardless of their physical origin. See for example, the discussion and Figure 1 in the Introduction (§1).
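As a computational aside, Equations (25a)-(25b) discretize directly. The sketch below assumes the internal current density has been sampled at N cell centers of a grid covering \(\mathcal{V}\); the midpoint-rule quadrature and the finite-difference curl are illustrative choices, not prescriptions.

```python
import numpy as np

def vector_potential(J, cells, dV, x, c=1.0):
    """A(J; t, x) of Eq. (25a) by midpoint rule: (1/c) sum_k J_k dV / |x - x_k|.

    J : (N, 3) current density at cell centers;  cells : (N, 3) positions in V.
    (Observation points should not coincide with cell centers in this naive rule.)
    """
    r = np.linalg.norm(x - cells, axis=-1)
    return np.sum(J / r[:, None], axis=0) * dV / c

def b_biot_savart(J, cells, dV, x, h=1e-3, c=1.0):
    """B_BS(J; t, x) = curl_x A(J; t, x) of Eq. (25b) by central differences."""
    e = np.eye(3)
    dA = [(vector_potential(J, cells, dV, x + h * e[i], c)
           - vector_potential(J, cells, dV, x - h * e[i], c)) / (2.0 * h)
          for i in range(3)]  # dA[i][j] = dA_j / dx_i
    return np.array([dA[1][2] - dA[2][1],
                     dA[2][0] - dA[0][2],
                     dA[0][1] - dA[1][0]])
```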
The magnetic fields that close in \(\mathcal{V}\) and are produced by internal current sources in \(\mathcal{V}\) are then described as \[\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)\equiv \mathbf{A}\left(\mathbf{J};t,\mathbf{x}\right)-\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right), \tag{27a}\] \[\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)\equiv \mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)-\mathbf{P}\left(\mathbf{J};t,\mathbf{x}\right),\] \[= \mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)=\mathbf{\nabla}\mathbf{\times}\left[\mathbf{A}\left(\mathbf{J};t,\mathbf{x}\right)-\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right)\right]. \tag{27b}\] The internal relative helicity which corresponds to the _internal_ current sources is then \[\mathcal{H}\left(\mathbf{J},\mathbf{J}\right)=\underbrace{\int_{\mathcal{V}}d^{3}x\,\overbrace{\left[\mathbf{A}\left(\mathbf{J}\right)-\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}\right)\right]}^{\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J}\right)}\cdot\overbrace{\left[\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}\right)-\mathbf{P}\left(\mathbf{J}\right)\right]}^{\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right)}}_{\mathcal{H}_{\mathrm{cl}}^{\mathrm{cl}}\left(\mathbf{J},\mathbf{J}\right)}+\underbrace{2\int_{\mathcal{V}}d^{3}x\,\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}\right)\cdot\overbrace{\left[\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}\right)-\mathbf{P}\left(\mathbf{J}\right)\right]}^{\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right)}}_{\mathcal{H}_{\mathrm{cl}}^{\mathrm{o}}\left(\mathbf{J},\mathbf{J}\right)}, \tag{28}\] where the independent variables \(t\) and \(\mathbf{x}\) have been suppressed for brevity. Both integrals are gauge invariant because \(\mathbf{\nabla}\cdot\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right)=0\) and \(\hat{\mathbf{n}}\cdot\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right)|_{\partial\mathcal{V}}=0\) by construction (see Equation (21)). In the Berger (1999, 2003) paradigm, this expression describes the closed-closed helicity \(\mathcal{H}_{\mathrm{cl}}^{\mathrm{cl}}\left(\mathbf{J},\mathbf{J}\right)\) of the magnetic field that is produced by internal current sources and closes in \(\mathcal{V}\) and the open-closed helicity \(\mathcal{H}_{\mathrm{cl}}^{\mathrm{o}}\left(\mathbf{J},\mathbf{J}\right)\) between the magnetic field that is produced by internal current sources and closes in \(\mathcal{V}\) and the magnetic field that is produced by internal current sources and threads the boundary \(\partial\mathcal{V}\). However, in our new paradigm \(\mathcal{H}\left(\mathbf{J},\mathbf{J}\right)\) represents the total internal relative helicity in \(\mathcal{V}\) of magnetic field produced by currents \(\mathbf{J}\) in \(\mathcal{V}\). This is arguably the true self-helicity of the current system \(\mathbf{J}\) in \(\mathcal{V}\). If there were no external currents \(\mathbf{J}^{*}\), then Equations (24) and (28) would produce identical values.

### External Relative Helicity in \(\mathcal{V}\) Produced by External Sources: \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\)

To compute the external relative helicity produced by external sources we need to construct the field pairs \(\left(\mathbf{A},\mathbf{B}\right)\) and \(\left(\mathbf{A}_{\mathrm{P}},\mathbf{P}\right)\) produced by _external_ sources \(\mathbf{J}^{*}\).
The magnetic vector potential \(\mathbf{A}\) and corresponding magnetic field \(\mathbf{B}\) produced by external current sources follow directly from (4c) \[\mathbf{A}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\equiv \frac{1}{c}\int_{\mathcal{V}^{*}}d^{3}x^{\prime}\,\frac{\mathbf{J}^{*}\left(t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|} \mathbf{x}\in\mathbb{R}^{3}, \tag{29a}\] \[\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\equiv \mathbf{\nabla}\mathbf{\times}\mathbf{A}\left(\mathbf{J}^{*};t,\mathbf{x}\right) \mathbf{x}\in\mathbb{R}^{3}, \tag{29b}\] where the domain of integration is over the entire external volume \(\mathcal{V}^{*}\) that contains current sources \(\mathbf{J}^{*}\). However, in practice, we do not have access to this information. Usually, at _best_, we have information about the currents \(\mathbf{J}\) in our domain of interest \(\mathcal{V}\) and information on the boundary \(\partial\mathcal{V}\) and so while Equations (29a)-(29b) are formally correct and useful for developing insight, they are not practical for computation. However, the magnetic field due to external sources can be computed from (13a)-(13b). We emphasize again here that determining \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) in \(\mathcal{V}\) does not require performing the Biot-Savart integral over \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). Again, this field may be decomposed in the usual fashion into magnetic fields that close in \(\mathcal{V}\) and magnetic fields that thread the boundary \(\partial\mathcal{V}\) using the potential field methodology described above in Equations (22a)-(22f). Explicitly this is \[\mathbf{P}\left(\mathbf{J}^{*};t,\mathbf{x}\right)= \mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\quad\text{and}\quad\mathbf{P}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=-\mathbf{\nabla}\psi\left(\mathbf{J}^{*};t,\mathbf{x}\right) \mathbf{x}\in\mathcal{V}, \tag{30a}\] \[\mathbf{\nabla}\cdot\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)= 0\quad\mathbf{x}\in\mathcal{V}\quad\text{and}\quad\hat{\mathbf{n}}\cdot\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=0 \mathbf{x}\in\partial\mathcal{V},\] (30b) \[\nabla^{2}\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)= 0\quad\text{and}\quad\nabla^{2}\psi\left(\mathbf{J}^{*};t,\mathbf{x}\right)=0 \mathbf{x}\in\mathcal{V},\] (30c) \[\hat{\mathbf{n}}\cdot\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)= \hat{\mathbf{n}}\cdot\mathbf{\nabla}\mathbf{\times}\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=-\hat{\mathbf{n}}\cdot\mathbf{\nabla}\psi\left(\mathbf{J}^{*};t,\mathbf{x}\right) \mathbf{x}\in\partial\mathcal{V}. \tag{30d}\] Combining the results in Equations (22a), (26a) and (30a), and making use of Equations (22f), (26d), (30d), and (4a), \[\psi\left(t,\mathbf{x}\right)=\psi\left(\mathbf{J};t,\mathbf{x}\right)+\psi\left(\mathbf{J}^{*};t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathcal{V}, \tag{31}\] or \[\mathbf{P}\left(t,\mathbf{x}\right)=\mathbf{P}\left(\mathbf{J};t,\mathbf{x}\right)+\mathbf{P}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathcal{V}, \tag{32}\] and we see that the traditional Neumann potential field described by \(\mathbf{P}=-\mathbf{\nabla}\psi\) conflates the magnetic field produced by internal current sources \(\mathbf{J}\) and external current sources \(\mathbf{J}^{*}\) as discussed in the introduction.
A similar conflation occurs for the closed field \(\mathbf{B}_{\mathrm{cl}}\). The closed field produced by external currents is \[\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)-\mathbf{P}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathcal{V}, \tag{33}\] and then combining the results in Equations (20b), (27b) and (33), and making use of Equations (22a), (4a), and (32), \[\mathbf{B}_{\mathrm{cl}}\left(t,\mathbf{x}\right)=\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)+\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=\mathbf{B}\left(t,\mathbf{x}\right)-\mathbf{P}\left(t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathcal{V}. \tag{34}\] For the external current system \(\mathbf{J}^{*}\) to _contribute_ to the closed field \(\mathbf{B}_{\mathrm{cl}}\) in \(\mathcal{V}\) is perhaps not surprising given Figure 2. However, the external current system \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\) can produce closed-closed helicity \(\mathcal{H}_{\mathrm{cl}}^{\mathrm{cl}}\) in \(\mathcal{V}\) by generating closed field in \(\mathcal{V}\) on its own! The presence of current systems that pass from \(\mathcal{V}\) to \(\mathcal{V}^{*}\) or vice versa also implies that \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\neq 0\qquad\mathbf{x}\in\mathcal{V}. \tag{35}\] _The external current, \(\mathbf{J}^{*}\), in volume \(\mathcal{V}^{*}\), injects magnetic vorticity into \(\mathcal{V}\)._ This is apparent if we consider the curl of (29b) \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=\frac{1}{c}\,\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\;\frac{\mathbf{J}^{*}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{36}\] where the observation point \(\mathbf{x}\) is in \(\mathcal{V}\) not \(\mathcal{V}^{*}\). Using the vector relationship \[\mathbf{\nabla}\mathbf{\times}\mathbf{\nabla}\mathbf{\times}\mathbf{a}=\mathbf{\nabla}\left(\mathbf{\nabla}\cdot\mathbf{a}\right)-\nabla^{2}\mathbf{a}, \tag{37}\] this becomes \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=\frac{1}{c}\,\mathbf{\nabla}\left[\mathbf{\nabla}\cdot\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\;\frac{\mathbf{J}^{*}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}\right]-\frac{1}{c}\,\nabla^{2}\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\;\frac{\mathbf{J}^{*}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}\qquad\mathbf{x}\in\mathbb{R}^{3}. \tag{38}\] The kernel in the second term has the form of a delta distribution because \[\nabla^{2}\left|\mathbf{x}-\mathbf{x}^{\prime}\right|^{-1}=-4\,\pi\left\{\begin{array}{ll}\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)&\mathbf{x}\in\mathcal{V}^{*},\\ \alpha^{*}\left(\mathbf{x}\right)\,\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)&\mathbf{x}\in\partial\mathcal{V}^{*},\\ 0&\mathbf{x}\in\mathcal{V}.\end{array}\right.
\tag{39}\] leading to \[\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{B}_{\mathrm{BS}}\left( \boldsymbol{J}^{*};t,\boldsymbol{x}\right)-\frac{1}{c}\,\boldsymbol{\nabla} \left[\boldsymbol{\nabla}\cdot\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\, \frac{\boldsymbol{J}^{*}\left(t,\boldsymbol{x}^{\prime}\right)}{\left| \boldsymbol{x}-\boldsymbol{x}^{\prime}\right|}\right]=\frac{4\,\pi}{c}\, \alpha^{*}\left(\boldsymbol{x}\right)\,\boldsymbol{J}^{*}\left(t,\boldsymbol{x }\right)\qquad\boldsymbol{x}\in\mathbb{R}^{3}, \tag{40}\] where \[\alpha^{*}\left(\boldsymbol{x}\right)=1-\alpha\left(\boldsymbol{x}\right)= \left\{\begin{array}{ll}1&\boldsymbol{x}\in\mathcal{V}^{*}\\ 1/2\,\,\,\text{smooth surfaces}\\ 3/4\,\,\,\text{edges of $\mathcal{V}$}\\ 7/8\,\,\,\,\text{vertices of $\mathcal{V}$}\\ 0&\boldsymbol{x}\in\mathcal{V}\end{array}\right., \tag{41}\] follows from Equation (6b) for \(\alpha\left(\boldsymbol{x}\right)\) and for the values in braces we have assumed that \(\mathcal{V}^{*}\) encloses \(\mathcal{V}\). Passing the divergence under the integral operator, using \[\boldsymbol{\nabla}\frac{1}{\left|\boldsymbol{x}-\boldsymbol{x}^{\prime} \right|}=-\boldsymbol{\nabla}^{\prime}\frac{1}{\left|\boldsymbol{x}- \boldsymbol{x}^{\prime}\right|}, \tag{42}\] and \[\boldsymbol{\nabla}\cdot\left(\phi\,\boldsymbol{a}\right)=\boldsymbol{a} \cdot\boldsymbol{\nabla}\phi+\phi\,\boldsymbol{\nabla}\cdot\boldsymbol{a}, \tag{43}\] this simplifies to \[\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{B}_{\mathrm{BS}}\left( \boldsymbol{J}^{*};t,\boldsymbol{x}\right)+\frac{1}{c}\,\boldsymbol{\nabla} \left[\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\,\boldsymbol{\nabla}^{ \prime}\cdot\,\frac{\boldsymbol{J}^{*}\left(t,\boldsymbol{x}^{\prime}\right) }{\left|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right|}-\int\limits_{ \mathcal{V}^{*}}d^{3}x^{\prime}\,\frac{\boldsymbol{\nabla}^{\prime}\cdot \boldsymbol{J}^{*}\left(t,\boldsymbol{x}^{\prime}\right)}{\left|\boldsymbol{x} -\boldsymbol{x}^{\prime}\right|}\right]=\frac{4\,\pi}{c}\,\alpha^{*}\left( \boldsymbol{x}\right)\,\boldsymbol{J}^{*}\left(t,\boldsymbol{x}\right)\qquad \boldsymbol{x}\in\mathbb{R}^{3}. \tag{44}\] The Gauss-Ostrogradsky theorem where \(\hat{\boldsymbol{n}}\) points into \(\mathcal{V}\) \[\int\limits_{\mathcal{V}}d^{3}x\,\boldsymbol{\nabla}\cdot\boldsymbol{a}=- \oint\limits_{\partial\mathcal{V}}dS\,\hat{\boldsymbol{n}}\cdot\boldsymbol{a}, \tag{45}\] relates the volume integral of the divergence to a surface integral over the normal component at the boundary of the volume. 
Then with the Gauss-Ostrogradsky theorem, Equation (44) becomes \[\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)-\frac{1}{c}\,\overbrace{\boldsymbol{\nabla}\left[\oint\limits_{\partial\mathcal{V}^{*}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\cdot\boldsymbol{J}^{*}\left(t,\boldsymbol{x}^{\prime}\right)}{\left|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right|}+\int\limits_{\mathcal{V}^{*}}d^{3}x^{\prime}\,\frac{\boldsymbol{\nabla}^{\prime}\cdot\boldsymbol{J}^{*}\left(t,\boldsymbol{x}^{\prime}\right)}{\left|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right|}\right]}^{\partial\boldsymbol{E}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)/\partial t}=\frac{4\,\pi}{c}\,\alpha^{*}\left(\boldsymbol{x}\right)\,\boldsymbol{J}^{*}\left(t,\boldsymbol{x}\right)\qquad\boldsymbol{x}\in\mathbb{R}^{3}, \tag{46a}\] or with \(\boldsymbol{\nabla}\cdot\boldsymbol{J}^{*}=0\) for \(\boldsymbol{x}\in\mathcal{V}\) \[\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)+\frac{1}{c}\,\overbrace{\boldsymbol{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\cdot\boldsymbol{J}\left(t,\boldsymbol{x}^{\prime}\right)}{\left|\boldsymbol{x}-\boldsymbol{x}^{\prime}\right|}}^{-\partial\boldsymbol{E}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)/\partial t}=0\qquad\boldsymbol{x}\in\mathcal{V}, \tag{46b}\] where in the last expression we have taken the surface integral with respect to \(\partial\mathcal{V}\) instead of \(\partial\mathcal{V}^{*}\), used \(\boldsymbol{\nabla}\cdot\boldsymbol{J}^{*}=0\) as implied by Ampere's law (8), and assumed that the normal component of the current is continuous across the boundary \(\hat{\boldsymbol{n}}\cdot\left(\boldsymbol{J}-\boldsymbol{J}^{*}\right)=0\) for \(\boldsymbol{x}\in\partial\mathcal{V}\). Equation (46b) has the form of the Ampere-Maxwell equation \[\boldsymbol{\nabla}\boldsymbol{\times}\boldsymbol{B}_{\mathrm{BS}}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)-\frac{1}{c}\,\frac{\partial\boldsymbol{E}\left(\boldsymbol{J}^{*};t,\boldsymbol{x}\right)}{\partial t}=0\qquad\boldsymbol{x}\in\mathcal{V}, \tag{46c}\] where there is no external material current \(\boldsymbol{J}^{*}\) in \(\mathcal{V}\). Thus, the magnetic vorticity produced in \(\mathcal{V}\) is _balanced, but not generated_, by a time-dependent electric field in \(\mathcal{V}\), the so-called 'displacement current,' and both are produced by \(\boldsymbol{J}^{*}\) in \(\mathcal{V}^{*}\) or on \(\partial\mathcal{V}\). We emphasize that the displacement current \(\partial\boldsymbol{E}/\partial t\) _is not a source of magnetic field_. Closed magnetic field in the sense of \(\oint\,\mathbf{B}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\cdot d\ell\neq 0\) for \(\mathbf{x}\in\mathcal{V}\) indicates the presence of magnetic vorticity--and not the exclusive presence of a local material current \(\mathbf{J}\). The source of this magnetic vorticity in \(\mathcal{V}\) may be a non-local current source, e.g., \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). The presence of displacement currents must be reconciled with Ampere's law (8).
Consider \(\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) by following the derivation of (46a) mutatis mutandis \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)-\frac{1}{c}\,\overbrace{\mathbf{\nabla}\left[\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\cdot\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}+\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\frac{\mathbf{\nabla}^{\prime}\cdot\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}\right]}^{\partial\mathbf{E}\left(\mathbf{J};t,\mathbf{x}\right)/\partial t}=\frac{4\,\pi}{c}\,\alpha\left(\mathbf{x}\right)\,\mathbf{J}\left(t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{47a}\] or with \(\mathbf{\nabla}\cdot\mathbf{J}=0\) for \(\mathbf{x}\in\mathcal{V}\) \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)-\frac{1}{c}\,\overbrace{\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\cdot\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}}^{\partial\mathbf{E}\left(\mathbf{J};t,\mathbf{x}\right)/\partial t}=\frac{4\,\pi}{c}\,\mathbf{J}\left(t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathcal{V}, \tag{47b}\] which has a material current because \(\nabla^{2}\left|\mathbf{x}-\mathbf{x}^{\prime}\right|^{-1}=-4\,\pi\,\delta\left(\mathbf{x}-\mathbf{x}^{\prime}\right)\) is a delta distribution for \(\mathbf{x}\in\mathcal{V}\) and \(\mathbf{x}^{\prime}\in\mathcal{V}\). This also has the form of the Ampere-Maxwell equation \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)-\frac{1}{c}\,\frac{\partial\mathbf{E}\left(\mathbf{J};t,\mathbf{x}\right)}{\partial t}=\frac{4\,\pi}{c}\,\mathbf{J}\left(t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathcal{V}. \tag{47c}\] Combining Equations (46a) and (47a) \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)+\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)-\frac{1}{c}\,\overbrace{\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\cdot\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}}^{\partial\mathbf{E}\left(\mathbf{J};t,\mathbf{x}\right)/\partial t}+\frac{1}{c}\,\overbrace{\mathbf{\nabla}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\mathbf{n}}^{\prime}\cdot\mathbf{J}\left(t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}}^{-\partial\mathbf{E}\left(\mathbf{J}^{*};t,\mathbf{x}\right)/\partial t}=\frac{4\,\pi}{c}\,\left[\alpha\left(\mathbf{x}\right)\,\mathbf{J}\left(t,\mathbf{x}\right)+\alpha^{*}\left(\mathbf{x}\right)\,\mathbf{J}^{*}\left(t,\mathbf{x}\right)\right]\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{48a}\] the displacement currents cancel and \[\mathbf{\nabla}\mathbf{\times}\overbrace{\left[\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)+\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\right]}^{\mathbf{B}}=\frac{4\,\pi}{c}\,\mathbf{j}\qquad\mathbf{x}\in\mathbb{R}^{3}, \tag{48b}\] \[\mathbf{j}\left(t,\mathbf{x}\right)=\alpha\left(\mathbf{x}\right)\,\mathbf{J}\left(t,\mathbf{x}\right)+\left[1-\alpha\left(\mathbf{x}\right)\right]\,\mathbf{J}^{*}\left(t,\mathbf{x}\right)\qquad\mathbf{x}\in\mathbb{R}^{3} \tag{48c}\] recovers Ampere's law (8) with \(\mathbf{\nabla}\cdot\mathbf{j}=0\) for \(\mathbf{x}\in\mathbb{R}^{3}\). Thus, even when the _net displacement current density is zero in_ \(\mathcal{V}\), as implied by Ampere's law (8), there may be external contributions from \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\) to \(\mathbf{B}_{\mathrm{cl}}\) and displacement currents in \(\mathcal{V}\).
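For reference, the geometric weights \(\alpha\left(\mathbf{x}\right)\) and \(\alpha^{*}\left(\mathbf{x}\right)=1-\alpha\left(\mathbf{x}\right)\) of Equations (41) and (48c) are simple to tabulate; \(\alpha\) is the interior solid angle of \(\mathcal{V}\) at \(\mathbf{x}\) normalized by \(4\pi\). The toy implementation below assumes an axis-aligned box for \(\mathcal{V}\), which is an illustrative geometry only.

```python
import numpy as np

def alpha(x, lo=0.0, hi=1.0, tol=1e-12):
    """alpha(x) for V = [lo, hi]^3: 1 inside V; 1/2, 1/4, 1/8 on faces,
    edges, vertices of dV (normalized interior solid angle); 0 in V*."""
    x = np.asarray(x, dtype=float)
    if np.any(x < lo - tol) or np.any(x > hi + tol):
        return 0.0  # x in V*
    on_bdry = np.isclose(x, lo, atol=tol) | np.isclose(x, hi, atol=tol)
    return 0.5 ** int(np.count_nonzero(on_bdry))

def alpha_star(x):
    """alpha*(x) of Eq. (41): 1 in V*; 1/2, 3/4, 7/8 on dV; 0 in V."""
    return 1.0 - alpha(x)

print(alpha([0.5, 0.5, 0.5]), alpha([0.0, 0.5, 0.5]), alpha([0.0, 0.0, 0.0]))
# -> 1.0 0.5 0.125  (interior, face, vertex)
print(alpha_star([0.0, 0.0, 0.5]))  # edge of V -> 0.75
```

On smooth portions of the boundary these weights reduce the effective current \(\mathbf{j}\) of Equation (48c) to the average of \(\mathbf{J}\) and \(\mathbf{J}^{*}\).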
To summarize the results to this point, determining the helicities due to internal sources requires computation of the vector potential \(\mathbf{A}\left(\mathbf{J};t,\mathbf{x}\right)\) and magnetic field \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) via the Biot-Savart law (25a)-(25b). The magnetic field produced by external sources \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) may be computed by subtracting \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) from the total field \(\mathbf{B}\) as in Equation (13a). The decomposition of these attributed fields into components that close in \(\mathcal{V}\) and that thread the boundary \(\partial\mathcal{V}\) requires constructing field pairs \(\mathbf{A}_{\mathrm{P}}\left(\mathbf{J};t,\mathbf{x}\right)\), \(\mathbf{P}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{A}_{\mathrm{P}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\), \(\mathbf{P}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) in Equations (26a)-(26d) and Equations (30a)-(30d) which in turn may be used to construct \(\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)\), \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) in Equations (27a), (27b), and (33). The last missing piece is to compute \(\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) from what we already know. First recall that \[\mathbf{\nabla}\cdot\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=0\quad\mathbf{x}\in\mathcal{V}\quad\text{and}\quad\hat{\mathbf{n}}\cdot\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=0\quad\mathbf{x}\in\partial\mathcal{V}, \tag{49}\] is an _intrinsically solenoidal_ vector field. Thus, \(\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) may be reconstructed in the Coulomb gauge with the Biot-Savart operator \[\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)=\frac{1}{4\,\pi}\,\mathbf{\nabla}\mathbf{\times}\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\frac{\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}^{\prime}\right)}{|\mathbf{x}-\mathbf{x}^{\prime}|}\qquad\mathbf{x}\in\mathcal{V}\cup\partial\mathcal{V}. \tag{50}\] This is perhaps the conceptually simplest expression for \(\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\), but alternatives are presented in Appendix B. The external relative helicity which corresponds to the _external_ current sources is then \[\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)=\underbrace{\int_{\mathcal{V}}d^{3}x\,\overbrace{\left[\mathbf{A}\left(\mathbf{J}^{*}\right)-\mathbf{A}_{\rm P}\left(\mathbf{J}^{*}\right)\right]}^{\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*}\right)}\cdot\overbrace{\left[\mathbf{B}_{\rm BS}\left(\mathbf{J}^{*}\right)-\mathbf{P}\left(\mathbf{J}^{*}\right)\right]}^{\mathbf{B}_{\rm cl}\left(\mathbf{J}^{*}\right)}}_{\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)}+\underbrace{2\,\int_{\mathcal{V}}d^{3}x\,\mathbf{A}_{\rm P}\left(\mathbf{J}^{*}\right)\cdot\overbrace{\left[\mathbf{B}_{\rm BS}\left(\mathbf{J}^{*}\right)-\mathbf{P}\left(\mathbf{J}^{*}\right)\right]}^{\mathbf{B}_{\rm cl}\left(\mathbf{J}^{*}\right)}}_{\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)}, \tag{51}\] where we have again dropped the temporal and spatial variables for convenience.
In the Berger (1999, 2003) paradigm, this expression describes the closed-closed helicity \(\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)\) of the magnetic field that is produced by external current sources and closes in \(\mathcal{V}\) and the open-closed helicity \(\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)\) between the magnetic field that is produced by external current sources and closes in \(\mathcal{V}\) and the magnetic field that is produced by external current sources and threads the boundary \(\partial\mathcal{V}\). However, in our new paradigm \(\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)\) represents the total external relative helicity in \(\mathcal{V}\) of magnetic field produced by currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). This is arguably the true self-helicity of the current system \(\mathbf{J}^{*}\) in \(\mathcal{V}\). If there were no internal currents \(\mathbf{J}\), then Equations (24) and (51) would produce identical values.

### The Relative Helicity of the Mutual Linkages Between the Internal \(\mathbf{J}\) and External \(\mathbf{J}^{*}\) Sources

Above we have established four gauge invariant quantities that describe the relative helicity of the linkages produced by currents \(\mathbf{J}\) and \(\mathbf{J}^{*}\) in \(\mathcal{V}\) and \(\mathcal{V}^{*}\), respectively: \(\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}\right)\), \(\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J},\mathbf{J}\right)\), \(\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)\), and \(\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)\). Four other gauge invariant quantities may be constructed that describe the relative helicity of the mutual linkages between fields that have their origin in currents \(\mathbf{J}\) and \(\mathbf{J}^{*}\) in the internal \(\mathcal{V}\) and external \(\mathcal{V}^{*}\) volumes, respectively: \[\mathcal{H}\left(\mathbf{J},\mathbf{J}^{*}\right)= \underbrace{\int_{\mathcal{V}}d^{3}x\,\overbrace{\left[\mathbf{A}\left(\mathbf{J}\right)-\mathbf{A}_{\rm P}\left(\mathbf{J}\right)\right]}^{\mathbf{A}_{\rm cl}\left(\mathbf{J}\right)}\cdot\overbrace{\left[\mathbf{B}_{\rm BS}\left(\mathbf{J}^{*}\right)-\mathbf{P}\left(\mathbf{J}^{*}\right)\right]}^{\mathbf{B}_{\rm cl}\left(\mathbf{J}^{*}\right)}}_{\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}^{*}\right)}+\underbrace{2\,\int_{\mathcal{V}}d^{3}x\,\mathbf{A}_{\rm P}\left(\mathbf{J}\right)\cdot\overbrace{\left[\mathbf{B}_{\rm BS}\left(\mathbf{J}^{*}\right)-\mathbf{P}\left(\mathbf{J}^{*}\right)\right]}^{\mathbf{B}_{\rm cl}\left(\mathbf{J}^{*}\right)}}_{\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J},\mathbf{J}^{*}\right)}, \tag{52a}\] \[\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}\right)= \underbrace{\int_{\mathcal{V}}d^{3}x\,\overbrace{\left[\mathbf{A}\left(\mathbf{J}^{*}\right)-\mathbf{A}_{\rm P}\left(\mathbf{J}^{*}\right)\right]}^{\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*}\right)}\cdot\overbrace{\left[\mathbf{B}_{\rm BS}\left(\mathbf{J}\right)-\mathbf{P}\left(\mathbf{J}\right)\right]}^{\mathbf{B}_{\rm cl}\left(\mathbf{J}\right)}}_{\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}\right)}+\underbrace{2\,\int_{\mathcal{V}}d^{3}x\,\mathbf{A}_{\rm P}\left(\mathbf{J}^{*}\right)\cdot\overbrace{\left[\mathbf{B}_{\rm BS}\left(\mathbf{J}\right)-\mathbf{P}\left(\mathbf{J}\right)\right]}^{\mathbf{B}_{\rm cl}\left(\mathbf{J}\right)}}_{\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}\right)}. \tag{52b}\] Note that \(\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}^{*}\right)=\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}\right)\) by reciprocity, but \(\mathcal{H}\left(\mathbf{J},\mathbf{J}^{*}\right)\neq\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}\right)\) because \(\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J},\mathbf{J}^{*}\right)\neq\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}\right)\). To prove reciprocity, the difference between the integrands of the first term in Equations (52a) and (52b) may be expressed \[\mathbf{A}_{\rm cl}\left(\mathbf{J}\right)\cdot\mathbf{B}_{\rm cl}\left(\mathbf{J}^{*}\right)-\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*}\right)\cdot\mathbf{B}_{\rm cl}\left(\mathbf{J}\right)= \mathbf{A}_{\rm cl}\left(\mathbf{J}\right)\cdot\mathbf{\nabla}\times\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*}\right)-\mathbf{\nabla}\times\mathbf{A}_{\rm cl}\left(\mathbf{J}\right)\cdot\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*}\right), \tag{53a}\] \[= \mathbf{\nabla}\cdot\left[\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*}\right)\mathbf{\times}\mathbf{A}_{\rm cl}\left(\mathbf{J}\right)\right]. \tag{53b}\] Recalling that \(\mathcal{H}_{\rm cl}^{\rm cl}\) is gauge invariant and a vector potential that produces closed magnetic field on \(\partial\mathcal{V}\) may be expressed \(\mathbf{A}_{\rm cl}=A\,\hat{\mathbf{n}}+\mathbf{\nabla}\Lambda\) for \(\mathbf{x}\in\partial\mathcal{V}\), the integral of Equation (53b) with Equation (45) becomes \[\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}^{*}\right)-\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}\right)= -\oint_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\left[\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*}\right)\mathbf{\times}\mathbf{A}_{\rm cl}\left(\mathbf{J}\right)\right]=0. \tag{54}\]

### Discussion

The superposition of Equations (28), (51), (52a), and (52b) reconstructs the relative helicity in Equation (18), for \(\mathbf{B}_{\rm R}=\mathbf{P}\). Thus the traditional relative helicity may be decomposed into eight gauge invariant quantities that describe both the self-linking of magnetic field that closes in \(\mathcal{V}\) and the mutual linking between magnetic field that closes in \(\mathcal{V}\) and magnetic field that threads the boundary, while simultaneously distinguishing the physical origin of the magnetic field with currents \(\mathbf{J}\) in \(\mathcal{V}\) and \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\).
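Numerically, each of these helicity components reduces to a single volume quadrature, and reciprocity (54) provides a useful consistency check on discretized fields. A minimal sketch, assuming the attributed field pairs are already sampled on a uniform grid (all variable names illustrative):

```python
import numpy as np

def helicity_integral(A, B, dV):
    """Discrete volume integral int_V d^3x A . B for fields stored as
    (3, nx, ny, nz) arrays on a uniform grid with cell volume dV."""
    return float(np.sum(A * B) * dV)

# With precomputed attributed fields (illustrative names):
#   H_clcl_J_Js = helicity_integral(A_cl_J,  B_cl_Js, dV)  # first term of (52a)
#   H_clcl_Js_J = helicity_integral(A_cl_Js, B_cl_J,  dV)  # first term of (52b)
# Equation (54) predicts agreement up to discretization error:
#   assert np.isclose(H_clcl_J_Js, H_clcl_Js_J, rtol=1e-2)
```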
In the Berger (1999, 2003) paradigm these eight terms are arranged into 'self' and 'mutual' helicity _based on the magnetic field properties open, \(\mathbf{P}\), or closed, \(\mathbf{B}_{\rm cl}\), on the boundary \(\partial\mathcal{V}\),_ regardless of their origin in currents \(\mathbf{J}\) and \(\mathbf{J}^{*}\) in \(\mathcal{V}\) and \(\mathcal{V}^{*}\): \[\mathcal{H}=\underbrace{\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}\right)+\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)+\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}^{*}\right)+\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}\right)}_{\text{closed-closed }\mathcal{H}_{\rm cl}^{\rm cl}\text{ (`self')}}+\underbrace{\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J},\mathbf{J}\right)+\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)+\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J},\mathbf{J}^{*}\right)+\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}\right)}_{\text{open-closed }\mathcal{H}_{\rm cl}^{\rm o}\text{ (`mutual')}}. \tag{55}\] In our new paradigm, these terms are arranged into internal or external (self) and internal-external (mutual) helicity _based on their origin in currents \(\mathbf{J}\) and \(\mathbf{J}^{*}\) in \(\mathcal{V}\) and \(\mathcal{V}^{*}\), respectively,_ regardless of the magnetic field properties on the boundary \(\partial\mathcal{V}\): \[\mathcal{H}=\overbrace{\mathcal{H}\left(\mathbf{J},\mathbf{J}\right)}^{\text{internal}}+\overbrace{\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)}^{\text{external}}+\overbrace{\mathcal{H}\left(\mathbf{J},\mathbf{J}^{*}\right)+\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}\right)}^{\text{internal-external (mutual)}} \tag{56a}\] \[\phantom{\mathcal{H}}=\underbrace{\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}\right)+\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J},\mathbf{J}\right)}_{\text{internal (self)}}+\underbrace{\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)+\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)}_{\text{external (self)}}+\underbrace{\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}^{*}\right)+\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J},\mathbf{J}^{*}\right)+\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}\right)+\mathcal{H}_{\rm cl}^{\rm o}\left(\mathbf{J}^{*},\mathbf{J}\right)}_{\text{internal-external (mutual)}}. \tag{56b}\] Seven of these components are independent as \(\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J}^{*},\mathbf{J}\right)=\mathcal{H}_{\rm cl}^{\rm cl}\left(\mathbf{J},\mathbf{J}^{*}\right)\). We emphasize that each of the seven independent components of relative helicity in this decomposition is gauge invariant, in isolation, a quality of a valid observable also emphasized recently by Schuck & Antiochos (2019).
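The bookkeeping in (55)-(56) is easy to get wrong in practice, so a small helper that groups the eight integrals (and enforces reciprocity) may be worthwhile; this is only a sketch assuming the individual integrals have been computed, e.g., with helicity_integral above.

```python
import math

def assemble_relative_helicity(H_clcl, H_clo):
    """Group the eight gauge-invariant integrals into Eq. (56).
    H_clcl, H_clo: dicts keyed by source pair, e.g. H_clcl[("J", "J*")]."""
    # reciprocity (54): the two closed-closed cross terms must agree
    assert math.isclose(H_clcl[("J", "J*")], H_clcl[("J*", "J")],
                        rel_tol=1e-6, abs_tol=1e-9)
    groups = {
        "internal (self)": H_clcl[("J", "J")] + H_clo[("J", "J")],
        "external (self)": H_clcl[("J*", "J*")] + H_clo[("J*", "J*")],
        "internal-external (mutual)": (H_clcl[("J", "J*")] + H_clo[("J", "J*")]
                                       + H_clcl[("J*", "J")] + H_clo[("J*", "J")]),
    }
    groups["total"] = sum(groups.values())  # recovers the relative helicity (18)
    return groups
```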
This more comprehensive set of seven helicity components provides a basis for a more detailed examination of the interplay between internally and externally sourced magnetic fields involved in reconnection during solar eruptions and potentially reconnection in the tail and magnetopause during terrestrial geomagnetic storms.

## 6 The Magnetic Energy

As described above, the magnetic field in \(\mathcal{V}\) may be decomposed with the magnetic field components that simultaneously distinguish their physical origin as \[\mathbf{B}=\mathbf{P}\left(\mathbf{J}\right)+\mathbf{B}_{\text{cl}}\left(\mathbf{J}\right)+\mathbf{P}\left(\mathbf{J}^{*}\right)+\mathbf{B}_{\text{cl}}\left(\mathbf{J}^{*}\right). \tag{57}\] The _local_ magnetic energy density then comprises 10 distinct terms: \[B^{2}= \mathbf{P}^{2}\left(\mathbf{J}\right)+\mathbf{P}^{2}\left(\mathbf{J}^{*}\right)+2\,\mathbf{P}\left(\mathbf{J}\right)\cdot\mathbf{P}\left(\mathbf{J}^{*}\right)\] \[+\mathbf{B}_{\text{cl}}^{2}\left(\mathbf{J}\right)+\mathbf{B}_{\text{cl}}^{2}\left(\mathbf{J}^{*}\right)+2\,\mathbf{B}_{\text{cl}}\left(\mathbf{J}\right)\cdot\mathbf{B}_{\text{cl}}\left(\mathbf{J}^{*}\right)\] \[+2\,\mathbf{B}_{\text{cl}}\left(\mathbf{J}\right)\cdot\left[\mathbf{P}\left(\mathbf{J}\right)+\mathbf{P}\left(\mathbf{J}^{*}\right)\right]+2\,\mathbf{B}_{\text{cl}}\left(\mathbf{J}^{*}\right)\cdot\left[\mathbf{P}\left(\mathbf{J}\right)+\mathbf{P}\left(\mathbf{J}^{*}\right)\right]. \tag{58}\] The first row involves exclusively the energy density of magnetic field that threads the boundary \(\partial\mathcal{V}\). The second row of terms involves exclusively the energy density of magnetic field that closes in \(\mathcal{V}\). The bottom row describes the mutual energy density between magnetic field that threads the boundary \(\partial\mathcal{V}\) and the magnetic field that closes in \(\mathcal{V}\). The magnetic energy is \[E\equiv\frac{1}{8\,\pi}\int\limits_{\mathcal{V}}d^{3}x\,B^{2}. \tag{59}\] Note that mathematically both \(\mathbf{P}\left(\mathbf{J}\right)=-\mathbf{\nabla}\psi\left(\mathbf{J}\right)\) and \(\mathbf{P}\left(\mathbf{J}^{*}\right)=-\mathbf{\nabla}\psi\left(\mathbf{J}^{*}\right)\) may be described as the gradient of a scalar in the volume of interest \(\mathcal{V}\). Thus, the bottom row of terms in (58) does not contribute to the net magnetic energy in \(\mathcal{V}\) as with identities (43) and (45) \[\int\limits_{\mathcal{V}}d^{3}x\,\mathbf{B}_{\text{cl}}\cdot\mathbf{\nabla}\psi=\int\limits_{\mathcal{V}}d^{3}x\,\mathbf{\nabla}\cdot\left(\psi\,\mathbf{B}_{\text{cl}}\right)=-\oint\limits_{\partial\mathcal{V}}dS\,\psi\,\hat{\mathbf{n}}\cdot\mathbf{B}_{\text{cl}}=0, \tag{60}\] resulting in \[E= \overbrace{-\frac{1}{8\,\pi}\int\limits_{\mathcal{V}}d^{3}x\,\left[\mathbf{P}\left(\mathbf{J}\right)\cdot\mathbf{\nabla}\psi\left(\mathbf{J}\right)+\mathbf{P}\left(\mathbf{J}^{*}\right)\cdot\mathbf{\nabla}\psi\left(\mathbf{J}^{*}\right)+2\,\mathbf{P}\left(\mathbf{J}\right)\cdot\mathbf{\nabla}\psi\left(\mathbf{J}^{*}\right)\right]}^{E_{\text{Potential}}}\] \[+\underbrace{\frac{1}{8\,\pi}\int\limits_{\mathcal{V}}d^{3}x\,\left[\mathbf{B}_{\text{cl}}^{2}\left(\mathbf{J}\right)+\mathbf{B}_{\text{cl}}^{2}\left(\mathbf{J}^{*}\right)+2\,\mathbf{B}_{\text{cl}}\left(\mathbf{J}\right)\cdot\mathbf{B}_{\text{cl}}\left(\mathbf{J}^{*}\right)\right]}_{E_{\text{Free}}}. \tag{61}\]

### The Pre and Post-Eruptive State of the Corona: Is the 'Free Energy' Relevant?
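Because the cross terms in the bottom row of (58) integrate to zero by (60), the net split (61) requires only two quadratures once the fields are attributed and decomposed. A minimal sketch on a uniform grid, using \(\mathbf{P}=-\mathbf{\nabla}\psi\) to write \(E_{\text{Potential}}\) as \((8\,\pi)^{-1}\int P^{2}\) (fields as (3, nx, ny, nz) arrays; names illustrative):

```python
import numpy as np

def energy_split(P_J, P_Js, B_cl_J, B_cl_Js, dV):
    """Eq. (61): E_Potential from the threading (potential) fields and
    E_Free from the fields that close in V; cross terms between the two
    families vanish on integration by Eq. (60)."""
    P = P_J + P_Js               # total potential field, P = -grad(psi)
    B_cl = B_cl_J + B_cl_Js      # total closed field
    E_pot = np.sum(P * P) * dV / (8.0 * np.pi)
    E_free = np.sum(B_cl * B_cl) * dV / (8.0 * np.pi)
    return E_pot, E_free
```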
The potential field is a useful reference field for solar eruptions because only small changes in the normal component of the magnetic field are observed when comparing pre and post solar eruptions (Wang, 1992; Wang et al., 1994; Sudol and Harvey, 2005; Wang, 2006; Sun et al., 2017). Thus, the potential field \(\mathbf{P}\) is believed to remain constant during the eruption. The potential state \(E_{\mathrm{Potential}}\) with \(\mathbf{P}\) matching \(\hat{\mathbf{n}}\cdot\mathbf{B}\) on \(\partial\mathcal{V}\) is often proven to be the 'minimum energy state' for volume \(\mathcal{V}\) (see for example Priest, 2014). Consequently, the maximum 'free energy' \(E_{\mathrm{Free}}\) of the corona that is available to drive solar eruptions while holding that normal component fixed in the photosphere has been computed as the difference between the energy of the magnetic field, \(E\), in the coronal volume \(\mathcal{V}\) and the energy of this potential magnetic field, \(E_{\mathrm{Potential}}\), where (Tanaka and Nakagawa, 1973; Yang et al., 1983; Gary et al., 1987; Sakurai, 1987; Low and Lou, 1990; Klimchuk and Sturrock, 1992; Tarr et al., 2013; Zhang, 2016; Schuck and Antiochos, 2019; Liu et al., 2023) \[E_{\mathrm{Potential}}=-\frac{1}{8\,\pi}\int\limits_{\mathcal{V}}d^{3}x\,\left[\mathbf{P}\left(\mathbf{J}\right)\cdot\mathbf{\nabla}\psi\left(\mathbf{J}\right)+\mathbf{P}\left(\mathbf{J}^{*}\right)\cdot\mathbf{\nabla}\psi\left(\mathbf{J}^{*}\right)+2\,\mathbf{P}\left(\mathbf{J}\right)\cdot\mathbf{\nabla}\psi\left(\mathbf{J}^{*}\right)\right],\] (62a) and \[E_{\mathrm{Free}}=E-E_{\mathrm{Potential}}=\frac{1}{8\,\pi}\,\int\limits_{\mathcal{V}}d^{3}x\,\left[\mathbf{B}_{\mathrm{cl}}^{2}\left(\mathbf{J}\right)+\mathbf{B}_{\mathrm{cl}}^{2}\left(\mathbf{J}^{*}\right)+2\,\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right)\cdot\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*}\right)\right]. \tag{62b}\] Writing this 'potential energy' and 'free energy' explicitly in terms of magnetic fields with their origins makes it manifestly clear that 'potential energy' involves physical currents \(\mathbf{J}\) in \(\mathcal{V}\) (see Figure 1) and the 'free energy' involves physical currents \(\mathbf{J}^{*}\) in the external universe (see Figure 2). Generally, some magnetic energy must be pilfered from currents \(\mathbf{J}^{*}\) in the external universe if \(\mathbf{B}_{\mathrm{cl}}\) is completely dissipated or converted to kinetic energy in a solar eruption, and some magnetic energy must be pilfered from the external universe to replace the flux threading the boundary that is produced by coronal currents \(\mathbf{J}\) (see Figure 2b). This makes 'free energy' a dubious concept. The minimum energy state \(E_{\mathrm{Potential}}\) is achieved _if and only if all of the current sources of that potential field are external to the volume_ in \(\mathcal{V}^{*}\), which in terms of Equation (57) is: \[\mathbf{P}\left(\mathbf{J}\right)= 0 \mathbf{x}\in\mathcal{V}, \tag{63a}\] \[\mathbf{P}= \mathbf{P}\left(\mathbf{J}^{*}\right) \mathbf{x}\in\mathcal{V},\] (63b) \[\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right)= 0 \mathbf{x}\in\mathcal{V},\] (63c) \[\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*}\right)= 0 \mathbf{x}\in\mathcal{V}, \tag{63d}\] i.e., \(\mathbf{B}=\mathbf{P}\left(\mathbf{J}^{*}\right)=-\mathbf{\nabla}\psi\left(\mathbf{J}^{*}\right)\) for \(\mathbf{x}\in\mathcal{V}\).
This critical caveat elucidates important assumptions underlying the achievability of this minimum energy state for the post-eruptive state of the corona. Note that if (63c) is true then (63a) must be true--there must be a magnetic field component that closes in \(\mathcal{V}\) for there to be a potential field \(\mathbf{P}\left(\mathbf{J}\right)\) produced by currents \(\mathbf{J}\) in \(\mathcal{V}\). However, (63a) may be true when (63c) is false, e.g., the case of (core) currents \(\mathbf{J}_{\mathrm{C}}\) sheathed by the opposing (neutralizing) currents \(\mathbf{J}_{\mathrm{S}}\) that shield the boundary \(\partial\mathcal{V}\) from flux produced by any internal currents \(\mathbf{J}=\mathbf{J}_{\mathrm{C}}+\mathbf{J}_{\mathrm{S}}\) where \(\hat{\mathbf{n}}\cdot\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}_{\mathrm{C}}+\mathbf{J}_{\mathrm{S}};t,\mathbf{x}\right)|_{\partial\mathcal{V}}=0\). The traditional minimum energy proof (Priest, 2014) leaves the reader with the impression that \(\mathbf{P}\) and \(\mathbf{B}_{\mathrm{cl}}\) are independent. However, this impression is destroyed by Equations (62a)-(62b), which include the origin of these fields. These fields are only independent when there is no flux threading the boundary produced by _internal_ currents \(\mathbf{J}\) in \(\mathcal{V}\), i.e., all internal currents are perfectly shielded, and additionally no currents thread the boundary, \(\hat{\mathbf{n}}\cdot\mathbf{J}|_{\partial\mathcal{V}}=0\), \[\mathbf{P}\left(\mathbf{J}\right)= 0 \mathbf{x}\in\mathcal{V}, \tag{64a}\] \[\mathbf{P}= \mathbf{P}\left(\mathbf{J}^{*}\right) \mathbf{x}\in\mathcal{V},\] (64b) \[\mathbf{B}_{\mathrm{cl}}= \mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right) \mathbf{x}\in\mathcal{V},\] (64c) \[\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*}\right)= 0 \mathbf{x}\in\mathcal{V}. \tag{64d}\] In this case it is possible to dissipate all the closed field, \(\mathbf{B}_{\mathrm{cl}}\), and hold \(\hat{\mathbf{n}}\cdot\mathbf{B}|_{\partial\mathcal{V}}\) constant on the boundary without modifying the external universe \(\mathbf{J}^{*}\). However, \(\mathbf{P}\left(\mathbf{J}\right)=0\) for \(\mathbf{x}\in\mathcal{V}\) does not, in general, hold for the pre-eruptive state of the solar corona (Schuck et al., 2022). Suppose instead that the volume of interest \(\mathcal{V}\) starts with the initial magnetic field described by (57) with the general current systems \(\mathbf{J}_{\mathrm{i}}^{*}\) and \(\mathbf{J}_{\mathrm{i}}\). The potential field of the initial state is then \[\mathbf{P}=\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}^{*}\right)+\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}\right)\qquad\mathbf{x}\in\mathcal{V}. \tag{65}\] The important question for a coronal volume \(\mathcal{V}\) is not whether a potential field is the minimum energy state of that volume (it is!), but rather whether that state is _accessible_ from an initial state in \(\mathcal{V}\) by _only_ dissipating energy in the volume (it is possible but unlikely!). Possible examples of a rapid dissipation process where this question arises involve solar campfires (Berghmans et al., 2021), jets (Newton, 1942), flares (Carrington, 1859), and coronal mass ejections (Tousey, 1973).
If we then hold \(\mathbf{P}\) constant on the boundary (photosphere) and _only_ rapidly dissipate currents in the volume of interest \(\mathcal{V}\) (the corona), then (i) All currents through \(\partial\mathcal{V}\) must quickly rearrange so that they close in \(\mathcal{V}^{*}\) (the convection zone), which leads to \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}_{\mathrm{f}}^{*}\right)=0\) for \(\mathbf{x}\in\mathcal{V}\), where \(\mathbf{J}_{\mathrm{f}}^{*}\) and \(\mathbf{J}_{\mathrm{f}}\) are the current systems in the final state. (ii) There can be no currents \(\mathbf{J}_{\mathrm{f}}\) in \(\mathcal{V}\), i.e., \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}_{\mathrm{f}}\right)=0\) which then implies \(\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}\right)=0\). (iii) The convection zone currents must rapidly rearrange to replace the flux threading the boundary that was initially produced by currents in the coronal volume: \[\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}^{*}\right)=\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}^{*}\right)+\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}\right)\qquad\mathbf{x}\in\mathcal{V}. \tag{66}\] This scenario, where the convection zone responds nearly instantaneously to the dissipation of currents in the corona as it relaxes to a current free potential state, is at odds with the high Alfvén speeds in the corona and low Alfvén speeds in the convection zone. Furthermore, this requires new currents \(\mathbf{J}_{\mathrm{f}}^{*}\) in the convection zone to replace the flux threading the photosphere that was initially produced by coronal currents \(\mathbf{J}_{\mathrm{i}}\). In other words, the convection zone must _add energy_ to \(\mathcal{V}\) during the eruption for the solar atmosphere to achieve a potential state consistent with the initial boundary condition. In this scenario the traditional 'free energy' is not really free, nor is the energy necessary to reach the potential state completely contained in the corona prior to the eruption. As such the 'free energy' calculation is dubious for this scenario. A more likely scenario is that currents through the boundary (photosphere) change, but more importantly, coronal currents rearrange into thin chromospheric current layers \(\mathbf{K}_{\mathrm{f}}\) to minimize their energy and shield the photosphere from changes in coronal currents (see the magnetic analog to Thomson's theorem derived in Fiolhais & Providencia, 2008).
The post-eruptive state of the solar atmosphere above the photosphere is then \[\mathbf{B}=\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}^{*}\right)+\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}_{\mathrm{f}}^{*}\right)+\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}+\mathbf{K}_{\mathrm{f}}\right)+\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}_{\mathrm{f}}+\mathbf{K}_{\mathrm{f}}\right)\qquad\mathbf{x}\in\mathcal{V}.\] (67a) The potential field as inferred from the photosphere will remain constant and it continues to be physically produced both by external \(\mathbf{J}_{\mathrm{f}}^{*}\) and internal \(\mathbf{J}_{\mathrm{f}}+\mathbf{K}_{\mathrm{f}}\) currents, but it is primarily the corona/chromosphere responding to changes in coronal currents, not the convection zone \[\mathbf{B}\neq\mathbf{P}=\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}^{*}\right)+\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}+\mathbf{K}_{\mathrm{f}}\right)=\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}^{*}\right)+\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}\right)\qquad\mathbf{x}\in\mathcal{V},\] (67b) with \(\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}^{*}\right)\approx\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}^{*}\right)\) and \(\mathbf{P}\left(\mathbf{J}_{\mathrm{f}}+\mathbf{K}_{\mathrm{f}}\right)\approx\mathbf{P}\left(\mathbf{J}_{\mathrm{i}}\right)\). The corona-chromosphere system above the photosphere will be non-potential post-eruption \[\mathbf{\nabla}\mathbf{\times}\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}_{\mathrm{f}}+\mathbf{K}_{\mathrm{f}};t,\mathbf{x}\right)\neq 0\qquad\mathbf{x}\in\mathcal{V}. \tag{67c}\] Thus, the solar atmosphere cannot achieve the minimum energy state and the 'free energy' calculation is dubious for this scenario as well. The scenario where the free energy is most relevant corresponds to the limit between the two scenarios above, when all the currents in the volume \(\mathcal{V}\) are pushed to the boundary \(\partial\mathcal{V}\) in the form of current sheets, i.e., in the photosphere or at infinity \(|\mathbf{x}|\to\infty\). This is the magnetic analog of Thomson's theorem (Fiolhais & Providencia, 2008) which preserves \(\hat{\mathbf{n}}\cdot\mathbf{B}|_{\partial\mathcal{V}}\) and establishes a potential field in \(\mathcal{V}\). Then the energy that may be released in an eruption through dynamics and heating is exactly the free energy. Of course, this scenario will generate large forces in the photosphere, and in particular torsional forces which cannot be balanced by pressure gradient forces. These forces should manifest themselves as observable changes in the plasma flows and horizontal magnetic fields.
Using identities (43) and (45) on the first row of Equation (61), the magnetic energy in \(\mathcal{V}\) becomes \[E=\overbrace{\frac{1}{8\,\pi}\left[\oint\limits_{\partial\mathcal{V}}dS\,\psi\left(\boldsymbol{J}\right)\,\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\left(\boldsymbol{J}\right)+\oint\limits_{\partial\mathcal{V}}dS\,\psi\left(\boldsymbol{J}^{\ast}\right)\,\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\left(\boldsymbol{J}^{\ast}\right)+2\oint\limits_{\partial\mathcal{V}}dS\,\psi\left(\boldsymbol{J}^{\ast}\right)\,\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\left(\boldsymbol{J}\right)\right]}^{E_{\mathrm{Potential}}}\] \[\qquad+\underbrace{\frac{1}{8\,\pi}\int\limits_{\mathcal{V}}d^{3}x\,\left[\boldsymbol{B}_{\mathrm{cl}}^{2}\left(\boldsymbol{J}\right)+\boldsymbol{B}_{\mathrm{cl}}^{2}\left(\boldsymbol{J}^{\ast}\right)+2\,\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}\right)\cdot\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\ast}\right)\right]}_{E_{\mathrm{Free}}}. \tag{68}\] The volume integral computation requires modeling, MHD simulations, or very dense coronal magnetic field observations to calculate the integrals involving magnetic fields that close in \(\mathcal{V}\). However, the surface integrals may be computed from boundary observations alone. Using \(\boldsymbol{P}\left(\boldsymbol{J}\right)=\boldsymbol{P}-\boldsymbol{P}\left(\boldsymbol{J}^{\ast}\right)\) and \(\psi\left(\boldsymbol{J}\right)=\psi-\psi\left(\boldsymbol{J}^{\ast}\right)\), each surface term may be computed independently as each surface integral is invariant in isolation under the gauge transformation \(\psi\to\psi+\psi_{0}\), where \(\psi_{0}\) is a constant \[\oint\limits_{\partial\mathcal{V}}dS\,\psi\,\hat{\boldsymbol{n}}\cdot\boldsymbol{P}=\oint\limits_{\partial\mathcal{V}}dS\,\left(\psi+\psi_{0}\right)\,\hat{\boldsymbol{n}}\cdot\boldsymbol{P},\] (69a) because of the solenoidal property of magnetic fields \[\oint\limits_{\partial\mathcal{V}}dS\,\psi_{0}\,\hat{\boldsymbol{n}}\cdot\boldsymbol{P}=\psi_{0}\oint\limits_{\partial\mathcal{V}}dS\,\hat{\boldsymbol{n}}\cdot\boldsymbol{P}=0. \tag{69b}\] Consider Equation (68) in the solar context where \(\mathcal{V}\) represents the volume from the photosphere up through the corona and \(\mathcal{V}^{\ast}\) represents the convection zone below the photosphere. Then \(\boldsymbol{P}\) represents the traditional potential field computed from the normal component of the magnetic field in the photosphere that satisfies \(\boldsymbol{P}\to 0\) as \(|\boldsymbol{x}|\to\infty\). Furthermore, the three surface integrals in (68) may be computed from photospheric vector magnetograms with Carl's Indirect Coronal Current Imager (CICCI) described in Schuck et al. (2022) which computes the surface values of both \(\boldsymbol{P}\) and \(\boldsymbol{P}\left(\boldsymbol{J}^{\ast}\right)\).10 The CICCI software is released at the project gitlab ([https://git.smce.nasa.gov/cicci](https://git.smce.nasa.gov/cicci)) under a NASA open source license. The sum of the three surface integrals is simply the potential field energy in \(\mathcal{V}\). If this sum changes during eruptive phenomena then the traditional potential field, \(\boldsymbol{P}\), has changed.
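The surface terms in (68) are straightforward to evaluate from boundary data alone; the following sketch assumes the boundary has been tiled into patches with areas dS and that \(\psi\) and \(\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\) are sampled per patch (the data here are synthetic and purely illustrative), and it also demonstrates the constant-shift invariance (69a)-(69b).

```python
import numpy as np

def surface_energy_term(psi, P_n, dS):
    """One surface integral of Eq. (68): (1/8pi) * oint dS psi (n . P),
    approximated as a sum over boundary patches."""
    return np.sum(psi * P_n * dS) / (8.0 * np.pi)

# Synthetic boundary data with oint dS n.P = 0 enforced (Eq. 69b):
rng = np.random.default_rng(0)
dS = np.full(1000, 4.0 * np.pi / 1000)    # toy patch areas
psi = rng.normal(size=1000)
P_n = rng.normal(size=1000)
P_n -= np.sum(P_n * dS) / np.sum(dS)      # solenoidal: zero net flux
assert np.isclose(surface_energy_term(psi, P_n, dS),
                  surface_energy_term(psi + 42.0, P_n, dS))  # Eq. (69a)
```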
In principle \(\boldsymbol{P}\) (and \(\psi\)) may remain constant if changes in \(\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\left(\boldsymbol{J}^{\ast}\right)\) and \(\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\left(\boldsymbol{J}\right)\) cancel out or changes in \(\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\left(\boldsymbol{J}^{\ast}\right)\) and \(\hat{\boldsymbol{n}}\cdot\boldsymbol{P}\left(\boldsymbol{J}\right)\) are balanced overall by the changes in mutual energy--changes in the angle between \(\boldsymbol{P}\left(\boldsymbol{J}^{\ast}\right)\) and \(\boldsymbol{P}\left(\boldsymbol{J}\right)\) in \(\mathcal{V}\). However, the former cancellation requires detailed balance between changes in coronal \(\boldsymbol{J}\) and convection zone \(\boldsymbol{J}^{\ast}\) currents--collusion between \(\mathcal{V}\) and \(\mathcal{V}^{\ast}\)! The three individual surface integrals may be tracked in observations and simulations to determine how the _origin_ of the flux threading the photosphere changes during eruptions and how a detailed balance is maintained if the photospheric flux remains constant during explosive coronal phenomena. These surface terms provide a definitive test: is the convection zone responding to replace flux lost during the eruption or is the corona/chromosphere system responding to shield the photosphere from losing flux during the eruption? Footnote 10: \(\boldsymbol{P}\left(\boldsymbol{J}^{\ast}\right)=\boldsymbol{B}_{\mathrm{F}}^{<}\) but \(\boldsymbol{P}\left(\boldsymbol{J}\right)\neq\boldsymbol{B}_{\mathrm{P}}^{>}\) in the notation of Schuck et al. (2022).

## 7 Summary and Conclusions: Implications for Modeling and Observation

This work has described the attribution of magnetic fields to current systems for astrophysical problems. A common approach in solar physics is to decompose a general magnetic field in \(\mathcal{V}\) into a potential field \(\boldsymbol{P}\) that threads the boundary \(\partial\mathcal{V}\) with flux and a component \(\boldsymbol{B}_{\mathrm{cl}}\) that closes on itself within the volume \(\mathcal{V}\). Both of these components can have their physical origin in currents \(\boldsymbol{J}\) in the internal volume \(\mathcal{V}\) and \(\boldsymbol{J}^{\ast}\) in the external universe \(\mathcal{V}^{\ast}\). Thus, this representation \(\left(\boldsymbol{P},\boldsymbol{B}_{\mathrm{cl}}\right)\), while mathematically convenient, entangles magnetic field that has its physical origin inside the volume of interest \(\mathcal{V}\) with magnetic field that has its physical origin outside the volume of interest in the external universe \(\mathcal{V}^{\ast}\). In particular, the naive implementation of the potential magnetic field \(\boldsymbol{P}\) creates a cognitive dissonance that \(\mathbf{P}\) is potential and curl free \(\mathbf{\nabla}\mathbf{\times}\mathbf{P}=0\) for \(\mathbf{x}\in\mathcal{V}\) but is physically generated by currents \(\mathbf{J}\) in \(\mathcal{V}\) (see Figure 1 and discussion in §1, Introduction). Alternatively, there can be magnetic field in \(\mathcal{V}\) produced by currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\) and magnetic field in \(\mathcal{V}^{*}\) produced by currents \(\mathbf{J}\) in \(\mathcal{V}\) when no _net_ magnetic flux threads \(\partial\mathcal{V}\), the boundary between \(\mathcal{V}\) and \(\mathcal{V}^{*}\) (see Figure 2 and discussion in §3).
We have described how these non sequiturs may be resolved by attributing the magnetic field to its origin in \(\mathbf{J}\) or \(\mathbf{J}^{*}\) first and then decomposing these fields into potential \(\mathbf{P}\left(\mathbf{J}\right)\) and \(\mathbf{P}\left(\mathbf{J}^{*}\right)\) and closed \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}\right)\) and \(\mathbf{B}_{\mathrm{cl}}\left(\mathbf{J}^{*}\right)\) components. As presented in §2, the computation of the magnetic field produced by known internal and unknown external current sources, \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\), respectively, requires the computation of a Biot-Savart integral for \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) which establishes cause and effect between the internal current \(\mathbf{J}\) and the corresponding magnetic field. This presentation intentionally emphasized this fundamental and intuitive formulation. However, the Biot-Savart integrals presented to compute \(\mathbf{A}\left(\mathbf{J};t,\mathbf{x}\right)\) and the corresponding field \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)\) are computationally intensive for sources \(\mathbf{J}\) in \(\mathcal{V}\) and often cannot be performed for \(\mathbf{A}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) and \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) because \(\mathbf{J}^{*}\) is not known in \(\mathcal{V}^{*}\). From a practical perspective, constructing the magnetic field produced by external current sources \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) via the Helmholtz decomposition first will be more computationally efficient (see §2). This is particularly advantageous when only the magnetic energy is of interest, i.e., when the vector potentials \(\mathbf{A}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{A}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) are not needed. This approach involves the evaluation of only surface integrals instead of convolutions over the entire volume of interest \(\mathcal{V}\). Once \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\) has been computed, the magnetic field produced by internal currents may then be constructed by subtraction \(\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J};t,\mathbf{x}\right)=\mathbf{B}-\mathbf{B}_{\mathrm{BS}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\), thereby establishing attribution of the magnetic field to a current system in a particular domain, i.e., \(\mathbf{J}\) in \(\mathcal{V}\) or \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\), respectively. The potential and closed components of these magnetic fields may then be constructed by standard methods (see §4 and §5). The last computationally intensive piece is to construct the closed vector potentials \(\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J};t,\mathbf{x}\right)\) and \(\mathbf{A}_{\mathrm{cl}}\left(\mathbf{J}^{*};t,\mathbf{x}\right)\). We provide direct approaches in §5 and some additional approaches for the latter vector potential in Appendix B. Note that the general results of this work are gauge agnostic, and so our results are not tied in any way to particular computational approaches or choices of gauge. Previous work demonstrated that the relative magnetic helicity in Equation (18) from Berger and Field (1984) and Finn and Antonsen (1985) may be decomposed into the gauge invariant 'self' and 'mutual' helicities in Equation (24) from Berger (1999, 2003).
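The computational ordering recommended here can be summarized as a short pipeline; the helper names below are hypothetical stand-ins for the surface-integral Helmholtz construction and the standard potential/closed split, not routines from any published code.

```python
# Hypothetical pipeline (placeholder names, not an actual API):
# B_ext = helmholtz_external(B, grid)    # B_BS(J*) from surface integrals only
# B_int = B - B_ext                      # B_BS(J) attributed by subtraction
# P_int, B_cl_int = potential_closed_split(B_int, grid)
# P_ext, B_cl_ext = potential_closed_split(B_ext, grid)
# The vector potentials A_cl(J) and A_cl(J*) are only needed for helicities,
# not for the energy split, and can be deferred (see Section 5 and Appendix B).
```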
This decomposition uses the terms 'self' to describe the linking of closed field \(\mathbf{B}_{\mathrm{cl}}\) with itself, \(\mathbf{A}_{\mathrm{cl}}\), in \(\mathcal{V}\) and 'mutual' to describe the linking of closed field \(\mathbf{B}_{\mathrm{cl}}\) with the open field \(\mathbf{A}_{\mathrm{P}}\) in \(\mathcal{V}\). Longcope and Malanushenko (2008) point out that these definitions of 'self' and 'mutual' helicity are conceptually distinct from the 'self' and 'mutual' helicity of _isolated_ flux tubes where the 'self' helicity depends on the internal field of each isolated flux tube and the 'mutual' helicity describes how pairs of tubes are interlinked. Longcope and Malanushenko (2008) develop further definitions of "unconfined self helicity" and "additive self helicity" based on relative magnetic helicity in sub-volumes of \(\mathcal{V}\) (see also Malanushenko et al., 2009; Valori et al., 2020). The novel magnetic field decompositions described in this paper produce natural extensions of relative magnetic helicity and magnetic energy that incorporate the origin of the magnetic fields in currents \(\mathbf{J}\) in the domain of interest \(\mathcal{V}\) or \(\mathbf{J}^{*}\) in the external domain \(\mathcal{V}^{*}\) beyond the boundary \(\partial\mathcal{V}\). As such, we propose new conceptual definitions of self and mutual helicity in \(\mathcal{V}\) that are attributed to their current sources \(\mathbf{J}\) and \(\mathbf{J}^{*}\) in \(\mathcal{V}\) and \(\mathcal{V}^{*}\) respectively. We have extended Berger's (1999, 2003) representation to eight gauge invariant terms that simultaneously describe the origin of the magnetic fields in \(\mathbf{J}\) and \(\mathbf{J}^{*}\) and how the field components defined by \(\partial\mathcal{V}\), e.g., \((\mathbf{A}_{\mathrm{cl}},\mathbf{B}_{\mathrm{cl}})\) and \((\mathbf{A}_{\mathrm{P}},\mathbf{B}_{\mathrm{cl}})\), link in relative magnetic helicity. Seven of these terms are independent. The sum of the eight terms recovers previous results in Equation (24). Combinations of these terms motivate the new definitions of self helicity and mutual helicity: (i) internal relative helicity \(\mathcal{H}\left(\mathbf{J},\mathbf{J}\right)\) -- the self helicity in \(\mathcal{V}\) of magnetic fields produced by currents \(\mathbf{J}\) in \(\mathcal{V}\); (ii) external relative helicity \(\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}^{*}\right)\) -- the self helicity in \(\mathcal{V}\) of magnetic fields produced by currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\); (iii) internal-external relative helicity \(\mathcal{H}\left(\mathbf{J},\mathbf{J}^{*}\right)\)+\(\mathcal{H}\left(\mathbf{J}^{*},\mathbf{J}\right)\) -- the mutual helicity in \(\mathcal{V}\) of magnetic fields produced by currents \(\mathbf{J}\) in \(\mathcal{V}\) with magnetic fields produced by currents \(\mathbf{J}^{*}\) in \(\mathcal{V}^{*}\). Tracking the evolution of the seven independent terms will provide insight into how magnetic linkages change during fundamental stellar processes such as flux emergence, coronal heating, and eruptive phenomena. However, tracking the evolution of these terms requires access to dense magnetic field measurements presently only available in simulations and modeling (Pariat et al., 2017). Nonetheless, further consideration of the helicity transport across boundaries in terms of this framework may reveal observables that can be computed from photospheric observations alone (see the helicity transport representation developed in Schuck and Antiochos, 2019).
We have also decomposed the magnetic energy in a volume into terms that describe the origin of the magnetic fields. This representation results directly in terms that may be computed from surface observations alone, and when combined with new theoretical and computational techniques (Schuck et al., 2022), it has the potential to reveal the interplay between the photosphere, convection zone, and corona during solar eruptive phenomena. The concept of cause and effect from currents to magnetic fields outlined in this work has broad application to solar physics. Attributing changes in current systems that lead to changes in magnetic structure has the potential to reveal causality in 'sympathetic' solar eruptions (Bumba & Klvana, 1993). Furthermore, combining the attribution of currents in simulations presented here with new attribution techniques, such as CICCI, applicable to the photospheric surface has the potential to unambiguously connect the photospheric/chromospheric magnetic fingerprints of eruptive phenomena to coronal current systems, e.g., the photospheric fingerprints of the formation of the flare current sheet in the corona. We often ignore the interaction between the external universe \(\mathcal{V}^{*}\), or equivalently boundary sources on \(\partial\mathcal{V}\), and the evolution of magnetic fields in our volume of interest \(\mathcal{V}\). However, the current sources in \(\mathcal{V}^{*}\) are often major players in the evolution of the magnetic field in modeling the evolution in \(\mathcal{V}\). Connecting the magnetic field with its origin in currents provides a deeper and clearer understanding of the evolution of astrophysical plasmas and MHD simulations. The authors thank the referee. The authors acknowledge useful conversations with James Leake, Lars Daldorf, Dana Longcope, and Brian Welsch. Peter W. Schuck dedicates his work on this paper to Henry J. Schuck. The authors acknowledge support from the NASA Living with a Star (H-LWS) Focused Science Topic programs: NNH21ZDA001N-LWS "The Origin of the Photospheric Magnetic Field: Mapping Currents in the Chromosphere and Corona" (Schuck, Linton), NNH17ZDA001N-LWS "Developing Vector Magnetic Maps from SDO/HMI that can Drive Space Weather Models" (Schuck), NNH16ZDA001N-LWS "Implementing and Evaluating a Vector-Magnetogram-Driven Magnetohydrodynamic Model of the Magnetic Field in the Low Solar Atmosphere" (Linton, Schuck), and NNH17ZDA001N-LWS "Investigating Magnetic Flux Emergence with Modeling and Observations to Understand the Onset of Major Solar Eruptions" (Linton, Schuck), NNH17ZDA001N-LWS "Physics-based modeling of the magnetosphere-ionosphere-atmosphere system under Carrington-scale solar driving: response modes, missing physics and uncertainty estimates" (Schuck); the NASA Supporting Research (H-SR) programs: NNH18ZDA001N-HSR "Investigating Magnetic Flux Rope Emergence as the Source of Flaring Activity in Delta-Spot Active Regions" (Linton); and the Office of Naval Research (Linton); and from the NASA Internal Science Funding Model (H-ISFM) program "Magnetic Energy Buildup and Explosive Release in the Solar Atmosphere" (Schuck).

## Appendix A Woltjer's Boundary Condition

The Woltjer (1958) boundary condition for a magnetically closed system \(\mathcal{V}\) is \(\partial\boldsymbol{A}/\partial t|_{\partial\mathcal{V}}=0\). This boundary condition certainly preserves the helicity in Equation (16), but its complete physical consequences are not manifest and so we clarify them below.
From Equation (16), the necessary and sufficient condition for helicity invariance in the Gibbs gauge is: \[\frac{dH}{dt}=\oint\limits_{\partial\mathcal{V}}dS\,\boldsymbol{\hat{n}}\cdot\boldsymbol{A}\boldsymbol{\times}\frac{\partial\boldsymbol{A}}{\partial t}=0.\] (A1) The incomplete Gibbs gauge is defined by a transformation from a potential pair \(\left(\varphi^{\prime},\boldsymbol{A}^{\prime}\right)\) to \(\left(0,\boldsymbol{A}\right)\) via \[\boldsymbol{A}= \boldsymbol{A}^{\prime}+\boldsymbol{\nabla}\Lambda^{\prime},\] (A2a) \[\varphi= \varphi^{\prime}-\frac{1}{c}\,\frac{\partial\Lambda^{\prime}}{\partial t}=0,\] (A2b) where \[\Lambda^{\prime}=c\,\int\limits_{-\infty}^{t}dt\,\varphi^{\prime},\] (A2c) and \[\mathbf{E}= -\frac{1}{c}\,\frac{\partial\mathbf{A}^{\prime}}{\partial t}-\mathbf{\nabla}\varphi^{\prime}=-\frac{1}{c}\,\frac{\partial\mathbf{A}}{\partial t},\] (A3a) \[\mathbf{B}= \mathbf{\nabla}\mathbf{\times}\mathbf{A}^{\prime}=\mathbf{\nabla}\mathbf{\times}\mathbf{A}.\] (A3b) Rewriting Equation (A1) in an arbitrary gauge \[\frac{dH}{dt}=\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\mathbf{A}\mathbf{\times}\,\frac{\partial\mathbf{A}}{\partial t}+\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\mathbf{A}\mathbf{\times}\,\frac{\partial\mathbf{\nabla}\Lambda}{\partial t}+\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\mathbf{\nabla}\Lambda\,\times\frac{\partial\mathbf{A}}{\partial t}=0,\] (A4) and using the vector identity \[\mathbf{\nabla}\mathbf{\times}(\phi\,\mathbf{a})=\mathbf{\nabla}\phi\,\mathbf{\times}\,\mathbf{a}+\phi\,\mathbf{\nabla}\mathbf{\times}\mathbf{a}\] (A5) with Equation (1b) this becomes \[\frac{dH}{dt}=\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\mathbf{A}\mathbf{\times}\,\frac{\partial\mathbf{A}}{\partial t}+\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\mathbf{\nabla}\mathbf{\times}\left(\Lambda\,\frac{\partial\mathbf{A}}{\partial t}-\frac{\partial\Lambda}{\partial t}\,\mathbf{A}\right)+\oint\limits_{\partial\mathcal{V}}dS\,\hat{\mathbf{n}}\cdot\left(\frac{\partial\Lambda}{\partial t}\,\mathbf{B}-\Lambda\,\frac{\partial\mathbf{B}}{\partial t}\right).\] (A6) The second surface integral involving the curl is identically zero and the third surface integral is zero for an arbitrary gauge transformation if \(\hat{\mathbf{n}}\cdot\mathbf{B}|_{\partial\mathcal{V}}=0\). Thus, the boundary condition \(\hat{\mathbf{n}}\cdot\mathbf{B}|_{\partial\mathcal{V}}=0\) to ensure gauge invariance is an implicit assumption in Equation (16). The well-known jump conditions on the observable electric and magnetic fields across a boundary \(\partial\mathcal{V}\) are: (see pp. 19-20 in Jackson, 1975) \[\llbracket\hat{\mathbf{n}}\cdot\mathbf{E}\rrbracket= 4\,\pi\,\sigma, \llbracket\hat{\mathbf{n}}\mathbf{\times}\mathbf{E}\rrbracket= 0,\] (A7a) \[\llbracket\hat{\mathbf{n}}\cdot\mathbf{B}\rrbracket= 0, \llbracket\hat{\mathbf{n}}\mathbf{\times}\mathbf{B}\rrbracket= \frac{4\,\pi}{c}\,\mathbf{K},\] (A7b) where \(\sigma\) is the surface charge, \(\mathbf{K}\) is the surface current, and \(\llbracket a\rrbracket=a\big{|}_{\partial\mathcal{V}^{+}}-a^{*}\big{|}_{\partial\mathcal{V}^{-}}\) is shorthand for the jump from above the surface in \(\mathcal{V}\) (denoted with a superscript "\(+\)") to below the surface in \(\mathcal{V}^{*}\) (denoted with a superscript "\(-\)") (see p. 20 in Jackson, 1975).
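The gauge algebra in (A2)-(A6) is easy to sanity-check numerically: the magnetic field (A3b) must be unchanged by \(\boldsymbol{A}\to\boldsymbol{A}+\boldsymbol{\nabla}\Lambda\), because the discrete curl of a discrete gradient vanishes when the difference operators along different axes commute. A self-contained toy check (illustrative fields only):

```python
import numpy as np

n, d = 32, 2.0 * np.pi / 32
x = np.arange(n) * d
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
A = np.stack([np.sin(Y), np.sin(Z), np.sin(X)])   # toy vector potential
Lam = np.cos(X) * np.cos(Y)                       # toy gauge function
grad_Lam = np.stack(np.gradient(Lam, d, d, d))

def curl(F, d):
    """Finite-difference curl of a (3, n, n, n) field."""
    return np.stack([
        np.gradient(F[2], d, axis=1) - np.gradient(F[1], d, axis=2),
        np.gradient(F[0], d, axis=2) - np.gradient(F[2], d, axis=0),
        np.gradient(F[1], d, axis=0) - np.gradient(F[0], d, axis=1)])

# B = curl A is gauge invariant (A3b): curl(grad Lam) = 0 to round-off
assert np.allclose(curl(A, d), curl(A + grad_Lam, d), atol=1e-10)
```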
The jump condition on the tangential components of the magnetic field may be recast as a surface continuity equation (Arnoldus, 2006) \[\llbracket\hat{\mathbf{n}}\cdot\mathbf{\nabla}\mathbf{\times}\mathbf{B}\rrbracket=-\frac{4\,\pi}{c}\,\mathbf{\nabla}_{\perp}\cdot\mathbf{K}.\] (A7c) The kinematic boundary condition at a fluid-fluid interface is \[\llbracket\hat{\mathbf{n}}\cdot\mathbf{v}\rrbracket=0.\] (A8) The jump conditions on \(\mathbf{E}\) and \(\mathbf{B}\) rigorously correspond to jump conditions on the vector potential in the Gibbs gauge.11 Footnote 11: Note that for a Dupin (1813) surface (in particular see A3.24 in Van Bladel, 2007) \[\hat{\mathbf{n}}\times(\mathbf{\nabla}\mathbf{\times}\mathbf{A})=\mathbf{\nabla}_{\perp}A_{n}-\frac{A_{1}}{R_{1}}\,\hat{\mathbf{e}}_{1}-\frac{A_{2}}{R_{2}}\,\hat{\mathbf{e}}_{2}-\frac{\partial\mathbf{A}_{\perp}}{\partial n},\] where \(A_{1}\) and \(A_{2}\) are the components of \(\mathbf{A}\) in the principal directions \(\hat{\mathbf{e}}_{1}\) and \(\hat{\mathbf{e}}_{2}\), and \(R_{1}\) and \(R_{2}\) are the respective principal curvatures. Since \(A_{1}\) and \(A_{2}\) are continuous across \(\partial\mathcal{V}\) they do not appear in boundary conditions (A9b) derived from Equation (A7b). Moreover, \(\hat{\boldsymbol{n}}\cdot\boldsymbol{B}|_{\partial\mathcal{V}}=0\) requires either \(\hat{\boldsymbol{n}}\cdot\boldsymbol{v}^{\star}|_{\partial\mathcal{V}^{-}}=\hat{\boldsymbol{n}}\cdot\boldsymbol{v}|_{\partial\mathcal{V}^{+}}=0\) or \(\llbracket\hat{\boldsymbol{n}}\boldsymbol{\times}\boldsymbol{B}\rrbracket=0\) (no surface current \(\boldsymbol{K}\)). Since \(\boldsymbol{K}\) is explicitly considered by Woltjer (1958), the former condition \(\hat{\boldsymbol{n}}\cdot\boldsymbol{v}|_{\partial\mathcal{V}}=0\) is implied. This then implies \(\partial\boldsymbol{A}_{\perp}/\partial t|_{\partial\mathcal{V}}=0\)_on the boundary_ (not to be confused with \(\llbracket\partial\boldsymbol{A}_{\perp}/\partial t\rrbracket=0\)), which is sufficient to ensure that the surface integral in Equation (16) is zero, i.e., that the system is _sufficiently 'isolated,'_ to preserve helicity.13 This condition does not inherently preclude the existence of a surface charge \(\sigma\) or surface current \(\boldsymbol{K}\) on \(\partial\mathcal{V}\) in boundary conditions (A9a)-(A9b). Indeed, since \(B_{n}=0\) and \(v_{n}=0\) on \(\partial\mathcal{V}\), the electric field is always normal to the boundary \(\boldsymbol{E}=E_{n}\,\hat{\boldsymbol{n}}\) and the Poynting vector \(c\,\boldsymbol{E}\times\boldsymbol{B}/\left(4\,\pi\right)=\left(\boldsymbol{v}_{\perp}\boldsymbol{\times}\boldsymbol{B}_{\perp}\right)\boldsymbol{\times}\boldsymbol{B}_{\perp}/\left(4\,\pi\right)\) is always tangent to the boundary \(\partial\mathcal{V}\)--no _net_ electromagnetic energy crosses the boundary, but collusion is permitted! Regardless of the gauge condition \(\boldsymbol{\nabla}\cdot\boldsymbol{A}\), the jump condition on the tangential components \(\llbracket\boldsymbol{A}_{\perp}\rrbracket=0\) follows by analogy from the jump conditions on the tangential components of the electric field. Just as \(\llbracket\boldsymbol{E}_{\perp}\rrbracket=0\) because \(\partial\boldsymbol{B}_{\perp}/\partial t\) must be finite in the surface \(\partial\mathcal{V}\), so \(\llbracket A_{\perp}\rrbracket=0\) because \(\boldsymbol{B}_{\perp}\) must be finite in the surface \(\partial\mathcal{V}\).
However, in analogy with the jump conditions on the normal component of the electric field, the normal component of the vector potential in the Gibbs gauge may be discontinuous, \(\llbracket A_{n}\rrbracket\neq 0\). Consequently, the jump conditions for the surface current \(\boldsymbol{K}\) involve derivatives of all three components of the vector potential. Of course if \(\boldsymbol{A}^{\prime}\) is in the Coulomb gauge with \(\boldsymbol{\nabla}\cdot\boldsymbol{A}^{\prime}=0\) then it is apparent that \(\llbracket\boldsymbol{A}^{\prime}_{n}\rrbracket=0\) (see p. 242 in Griffiths 1999). Woltjer (1958) imposes quite reasonable, but more stringent, boundary conditions for a magnetically closed system, namely, \(\partial\boldsymbol{A}/\partial t|_{\partial\mathcal{V}}=0\) in the Gibbs gauge, which requires \(\sigma=0\) and \(\boldsymbol{v}\times\boldsymbol{B}|_{\partial\mathcal{V}}=0\). Under these conditions \(\llbracket\partial\boldsymbol{A}/\partial t\rrbracket=0\) and, in the absence of free charge at the boundary, the Gibbs gauge reduces to the Coulomb gauge on the boundary with \(\llbracket\boldsymbol{A}\rrbracket=0\) and \(\llbracket\partial\boldsymbol{A}_{\perp}/\partial\boldsymbol{n}\rrbracket=4\,\pi c^{-1}\,\boldsymbol{K}\). Footnote 13: We emphasize that enforcing gauge invariance is not sufficient for dynamical conservation of helicity \(H\). See footnote 1.

## Appendix B Other Expressions for \(\boldsymbol{A}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)\) from \(\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)\)

As mentioned in §5.2, the expression (50) for \(\boldsymbol{A}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)\) is conceptually simple, but involves a computationally intensive convolution integral of \(\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)\) in \(\mathcal{V}\).
However, alternative representations may be derived from the HD of \(\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)\) in Equation (7) of SS2 in terms of the internal vorticity of \(\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)\) and its tangential components on the bounding surface \(\partial\mathcal{V}\) \[\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)= \boldsymbol{\nabla}\boldsymbol{\times}\left[\frac{1}{4\,\pi}\,\int\limits_{ \mathcal{V}}d^{3}x^{\prime}\,\frac{\boldsymbol{\nabla}^{\prime}\boldsymbol{ \times}\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}^ {\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}+\frac{1}{4\,\pi} \oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{ \prime}\times\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t, \boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\right] \qquad\boldsymbol{x}\in\mathcal{V}.\] (B1) This implies \[\boldsymbol{A}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)= \frac{1}{4\,\pi}\,\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\frac{\boldsymbol{ \nabla}^{\prime}\boldsymbol{\times}\boldsymbol{B}_{\mathrm{cl}}\left( \boldsymbol{J}^{\star};t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}- \boldsymbol{x}^{\prime}|}+\frac{1}{4\,\pi}\oint\limits_{\partial\mathcal{V}}dS^{ \prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\boldsymbol{\times}\boldsymbol{B}_{ \mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}^{\prime}\right)}{| \boldsymbol{x}-\boldsymbol{x}^{\prime}|}\qquad\boldsymbol{x}\in\mathcal{V}.\] (B2) However, as written, this also requires a computationally intensive Biot-Savart type convolution, but over \(\boldsymbol{\nabla}^{\prime}\boldsymbol{\times}\boldsymbol{B}_{\mathrm{cl}} \left(\boldsymbol{J}^{\star};t,\boldsymbol{x}^{\prime}\right)\). 
One alternative is to substitute (46b) into (B2) \[\boldsymbol{A}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)= -\frac{1}{4\,\pi\,c}\,\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\frac{1}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\,\boldsymbol{\nabla}^{\prime}\oint\limits_{\partial\mathcal{V}}dS^{\prime\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime\prime}\cdot\boldsymbol{J}\left(t,\boldsymbol{x}^{\prime\prime}\right)}{|\boldsymbol{x}^{\prime}-\boldsymbol{x}^{\prime\prime}|}+\frac{1}{4\,\pi}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\times\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\qquad\boldsymbol{x}\in\mathcal{V}.\] (B3) Integrating by parts \[\boldsymbol{A}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}\right)= -\frac{1}{4\,\pi\,c}\,\int\limits_{\mathcal{V}}d^{3}x^{\prime}\,\boldsymbol{\nabla}^{\prime}\,\left[\frac{1}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\,\oint\limits_{\partial\mathcal{V}}dS^{\prime\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime\prime}\cdot\boldsymbol{J}\left(t,\boldsymbol{x}^{\prime\prime}\right)}{|\boldsymbol{x}^{\prime}-\boldsymbol{x}^{\prime\prime}|}\right]+\frac{1}{4\,\pi}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime}\times\boldsymbol{B}_{\mathrm{cl}}\left(\boldsymbol{J}^{\star};t,\boldsymbol{x}^{\prime}\right)}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\] \[\qquad-\frac{1}{4\,\pi\,c}\,\boldsymbol{\nabla}\int\limits_{\mathcal{V}}\frac{d^{3}x^{\prime}}{|\boldsymbol{x}-\boldsymbol{x}^{\prime}|}\oint\limits_{\partial\mathcal{V}}dS^{\prime\prime}\,\frac{\hat{\boldsymbol{n}}^{\prime\prime}\cdot\boldsymbol{J}\left(t,\boldsymbol{x}^{\prime\prime}\right)}{|\boldsymbol{x}^{\prime}-\boldsymbol{x}^{\prime\prime}|}\qquad\boldsymbol{x}\in\mathcal{V}.\] (B4) Using the Gauss-Ostrogradsky theorem applied to the gradient of a scalar \[\int\limits_{\mathcal{V}}d^{3}x\,\boldsymbol{\nabla}\phi=-\oint\limits_{\partial\mathcal{V}}dS\,\hat{\boldsymbol{n}}\,\phi,\] (B5) \(\mathbf{A}_{\rm cl}\) may be written as a double convolution \[\mathbf{A}_{\rm cl}\left(\mathbf{J}^{*};t,\mathbf{x}\right)= \frac{1}{4\,\pi\,c}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\oint\limits_{\partial\mathcal{V}}dS^{\prime\prime}\;\frac{\hat{\mathbf{n}}^{\prime}\,\hat{\mathbf{n}}^{\prime\prime}\cdot\mathbf{J}\left(t,\mathbf{x}^{\prime\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|\left|\mathbf{x}^{\prime}-\mathbf{x}^{\prime\prime}\right|}+\frac{1}{4\,\pi}\oint\limits_{\partial\mathcal{V}}dS^{\prime}\;\frac{\hat{\mathbf{n}}^{\prime}\mathbf{\times}\mathbf{B}_{\rm cl}\left(\mathbf{J}^{*};t,\mathbf{x}^{\prime}\right)}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}\] \[-\frac{1}{4\,\pi\,c}\,\mathbf{\nabla}\int\limits_{\mathcal{V}}\frac{d^{3}x^{\prime}}{\left|\mathbf{x}-\mathbf{x}^{\prime}\right|}\oint\limits_{\partial\mathcal{V}}dS^{\prime\prime}\;\frac{\hat{\mathbf{n}}^{\prime\prime}\cdot\mathbf{J}\left(t,\mathbf{x}^{\prime\prime}\right)}{\left|\mathbf{x}^{\prime}-\mathbf{x}^{\prime\prime}\right|}\qquad\mathbf{x}\in\mathcal{V}\] (B6) over just boundary values. The last term is simply the gradient of a gauge function and may be ignored in our gauge invariant approach.
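A sketch of evaluating the two boundary terms of (B6) by direct quadrature over surface patches follows (the final gradient term is pure gauge and is omitted); inputs are boundary samples with outward normals, the normal current \(\hat{\mathbf{n}}\cdot\mathbf{J}\), and the tangential data \(\hat{\mathbf{n}}\mathbf{\times}\mathbf{B}_{\rm cl}(\mathbf{J}^{*})\), all with illustrative names.

```python
import numpy as np

def A_cl_external_boundary(x_pts, s_pts, n_hat, Jn, nxB, dS, c=1.0):
    """First two terms of Eq. (B6) at interior points x_pts (M, 3) from
    boundary patches s_pts (N, 3) with normals n_hat (N, 3), normal
    current Jn = n.J (N,), tangential data nxB = n x B_cl(J*) (N, 3),
    and patch areas dS (N,)."""
    # inner surface convolution g(x') = sum_x'' Jn(x'') dS'' / |x' - x''|
    g = np.empty(len(s_pts))
    for j, xp in enumerate(s_pts):
        r = np.linalg.norm(s_pts - xp, axis=1)
        r[j] = np.inf                  # crude handling of the singular patch
        g[j] = np.sum(Jn * dS / r)
    A = np.empty((len(x_pts), 3))
    for k, xk in enumerate(x_pts):
        r1 = np.linalg.norm(s_pts - xk, axis=1)   # |x - x'| > 0 inside V
        A[k] = (np.sum(n_hat * (g * dS / r1)[:, None], axis=0) / (4.0 * np.pi * c)
                + np.sum(nxB * (dS / r1)[:, None], axis=0) / (4.0 * np.pi))
    return A
```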
2309.03510
Gradient estimates for $Δ_pu-|\nabla u|^q+b(x)|u|^{r-1}u=0$ on a complete Riemannian manifold and Liouville type theorems
In this paper the Nash-Moser iteration method is used to study the gradient estimates of solutions to the quasilinear elliptic equation $\Delta_p u-|\nabla u|^q+b(x)|u|^{r-1}u=0$ defined on a complete Riemannian manifold $(M,g)$. When $b(x)\equiv0$, a unified Cheng-Yau type estimate of the solutions to this equation is derived. Regardless of whether this equation is defined on a manifold or a region of Euclidean space, certain technical and geometric conditions posed in \cite[Theorem E, F]{MR3261111} are weakened and hence some of the estimates due to Bidaut-V\'eron, Garcia-Huidobro and V\'eron (see \cite[Theorem E, F]{MR3261111}) are improved. In addition, we extend their results to the case $p>n=\dim(M)$. When $b(x)$ does not vanish, we can also extend some estimates for positive solutions to the above equation defined on a region of the Euclidean space due to Filippucci-Sun-Zheng \cite{filippucci2022priori} to arbitrary solutions to this equation on a complete Riemannian manifold. Even in the case of Euclidean space, the estimates for positive solutions in \cite{filippucci2022priori} and our results can not cover each other.
Dong Han, Jie He, Youde Wang
2023-09-07T06:46:21Z
http://arxiv.org/abs/2309.03510v2
Gradient estimates for \(\Delta_{p}u-|\nabla u|^{q}+b(x)|u|^{r-1}u=0\) on a complete Riemannian manifold and Liouville type theorems ###### Abstract. In this paper the Nash-Moser iteration method is used to study the gradient estimates of solutions to the quasilinear elliptic equation \(\Delta_{p}u-|\nabla u|^{q}+b(x)|u|^{r-1}u=0\) defined on a complete Riemannian manifold \((M,g)\). When \(b(x)\equiv 0\), a unified Cheng-Yau type estimate of the solutions to this equation is derived. Regardless of whether this equation is defined on a manifold or a region of Euclidean space, certain technical and geometric conditions posed in [2, Theorem E, F] are weakened and hence some of the estimates due to Bidaut-Veron, Garcia-Huidobro and Veron (see [2, Theorem E, F]) are improved. In addition, we extend their results to the case \(p>n=\dim(M)\). When \(b(x)\) does not vanish, we can also extend some estimates for positive solutions to the above equation defined on a region of the Euclidean space due to Filippucci-Sun-Zheng [17] to arbitrary solutions to this equation on a complete Riemannian manifold. Even in the case of Euclidean space, the estimates for positive solutions in [17] and our results cannot cover each other. Key words and phrases: non-linear elliptic equation, gradient estimate, \(p\)-Laplace *Corresponding author ###### Contents * 1 Introduction * 2 Preliminaries * 3 Gradient estimates for solutions to \(\Delta_{p}u-|\nabla u|^{q}=0\) * 3.1 An integral inequality * 3.2 \(L^{\alpha_{1}}\)-bound of \(f\) in a geodesic ball with radius \(3R/4\) * 3.3 Proof of Theorem 1.1 and Theorem 1.3 * 3.4 The case \(\dim(M)=n=2\). * 4 Gradient estimate for the solutions of (1.1) * 4.1 The case of \(u\in L^{k}(B_{R}(o))\) * 4.2 The case of \(u\in L^{\infty}(B_{R}(o))\) * 5 Properties of non-negative solutions to (1.1) * 5.1 \(L^{\infty}\)-estimate of non-negative solutions * 5.2 Proof of Theorem 1.8

## 1. Introduction

Gradient estimates are a fundamental and powerful tool in the analysis of partial differential equations on Riemannian manifolds. They can be used to deduce Liouville-type theorems, Harnack inequalities and other qualitative properties of solutions (see [3]). In this paper we consider the following nonlinear equation \[\Delta_{p}u-|\nabla u|^{q}+b(x)|u|^{r-1}u=0, \tag{1.1}\] defined on a complete \(n\)-dimensional Riemannian manifold \((M,g)\), where \(\Delta_{p}\) denotes the \(p\)-Laplace operator, \(p>1\), \(q>p-1\), \(r\geq 1\), and \(b(x)\) is a real function on \(M\). When \(b\equiv 0\), equation (1.1) reduces to the Hamilton-Jacobi type equation \[\Delta_{p}u-|\nabla u|^{q}=0. \tag{1.2}\] For equation (1.2) defined on a domain \(\Omega\subset\mathbb{R}^{n}\) with \(1<p\leq n\) and \(q>p-1\), Bidaut-Veron, Garcia-Huidobro and Veron [2, Theorem A] proved that any solution \(u\) satisfies \[|\nabla u(x)|\leq C(n,p,q)\left(d(x,\partial\Omega)\right)^{-\frac{1}{q-p+1}},\quad\forall x\in\Omega. \tag{1.3}\] A closely related equation, with the opposite sign of the gradient term, is \[\Delta_{p}u+N|\nabla u|^{q}+|u|^{r-1}u=0\quad\text{in}\quad\Omega\subset\mathbb{R}^{n},\qquad N>0. \tag{1.5}\] In the case \(p=2\) in the above (1.5), Bidaut-Veron, Garcia-Huidobro and Veron (see [3, 4]) also studied the following semilinear equation \[\begin{cases}\Delta u+N|\nabla u|^{q}+|u|^{r-1}u=0\quad\text{in }\Omega\subset\mathbb{R}^{n};\\ n\geq 1,\ r>1,\ q>\frac{2r}{r+1},\ N>0.\end{cases} \tag{1.6}\] By using a delicate combination of refined Bernstein techniques and the Keller-Osserman estimate, they proved that any solution \(u\) of equation (1.6) in a domain \(\Omega\subset\mathbb{R}^{n}\) satisfies \[|\nabla u(x)|\leq c(n,q,r)\left(N^{-\frac{r+1}{(r+1)q-2r}}+\left(Nd(x,\partial\Omega)\right)^{-\frac{1}{q-1}}\right),\quad\forall x\in\Omega.\] They also obtained a gradient estimate and Liouville theorems for positive solutions of (1.5) when \(n\geq 2,1<r<(n+3)/(n-1),q<(n+2)/n\) and \(N>0\). Later, in 2022, Filippucci, Sun and Zheng [17] generalized the results in [3] to the \(p\)-Laplace case of equation (1.5). In the case \(N>0\), \(r>\max\{p-1,1\}\) and \(q>pr/(r+1)\), they showed that there exists a positive constant \(c(n,p,q,r)\) such that any positive solution to (1.5) on \(\Omega\subset\mathbb{R}^{n}\) satisfies \[|\nabla u(x)|\leq c(n,p,q,r)\left(N^{-\frac{r+1}{(r+1)q-pr}}+\left(Nd(x,\partial\Omega)\right)^{-\frac{1}{q-p+1}}\right),\quad\forall x\in\Omega.\] The arguments in [17, 3] depend strongly on the translation invariance of (1.5) on \(\mathbb{R}^{n}\) and the symmetric geometric structure of Euclidean space.
Another related equation is the generalized Lane-Emden equation \[\begin{cases}\Delta_{p}u+u^{r}=0\quad\text{in}\quad\mathbb{R}^{n};\\ u>0;\\ n>p>1,\quad r>0,\end{cases} \tag{1.7}\] which plays an important role in modeling meteorological or astrophysical phenomena [10, 11, 12]. Equation (1.7) has been widely studied in the literature [19, 6, 5, 1, 7]. Serrin and Zou ([28]) proved that equation (1.7) has no solution if and only if \(r<np/(n-p)-1\). Very recently, inspired by the work of Wang and Zhang [34], Wang and Wei [36] adopted the Moser iteration to study the nonexistence of positive solutions to the semilinear elliptic equation \[\Delta u+au^{r}=0\] defined on a complete Riemannian manifold \((M,g)\) with \(\dim(M)=n\), where \(a\) is a positive constant. It is shown in [36] that, if the Ricci curvature of the manifold is nonnegative and \[r\in\left(-\infty,\quad\frac{n+1}{n-1}+\frac{2}{\sqrt{n(n-1)}}\right),\] then the above equation does not admit any positive solution. Later, He, Wang and Wei ([20]) generalized this result to the equation \(\Delta_{p}u+au^{r}=0\) and improved the range of \(r\) in which the gradient estimates and Liouville type theorems hold true. The results in [36, 20] extend to Riemannian manifolds the Liouville property of the Lane-Emden equation on Euclidean space, and some restrictive assumptions in the previous work, for instance \(r>0\) and \(p\leq n\), were removed. On the other hand, the equation (1.1) is also related to the \(L^{p}\)-log Sobolev inequality. Chung and Yau [9] showed that the critical function of the \(L^{2}\)-log-Sobolev inequality \[C_{M}\int_{M}u^{2}\log u^{2}dV\leq\int_{M}|\nabla u|^{2}dV\] satisfies the following elliptic equation \[\Delta u+C_{M}u\log u^{2}=0.\] Motivated by Chung-Yau [9] (see also [14, 18]), one may also consider the critical functions which achieve the Sobolev constant \(C_{LS}\) of the \(L^{p}\)-log-Sobolev inequality \[C_{LS}\int_{M}u^{p}\log u^{p}dV\leq\int_{M}|\nabla u|^{p}dV,\quad p>1,\quad u>0.\] It is not difficult to see that the critical functions satisfy the following Euler-Lagrange equation, \[\Delta_{p}u+C_{LS}u^{p-1}\log u^{p}=0.\] By a logarithmic transformation \(v=-(p-1)\log u\), the above equation becomes \[\Delta_{p}v-|\nabla v|^{p}+bv=0, \tag{1.8}\] where \(b=p(p-1)^{p-2}C_{LS}\). In particular, Wang and Xue [37] in 2021 considered the corresponding parabolic equation of (1.8), i.e., the following equation \[\partial_{t}v=\Delta_{p}v-|\nabla v|^{p}+bv. \tag{1.9}\] They used the maximum principle to obtain a global Li-Yau type gradient estimate for solutions to equation (1.9) on compact Riemannian manifolds. It is worth mentioning that some mathematicians have studied similar questions on metric measure spaces. For instance, Coulhon-Jiang-Koskela-Sikora [13] gave a gradient estimate for heat kernels on a doubling metric measure space; Zhao-Yang [39] studied the gradient estimate of the weighted \(p\)-Laplacian Lichnerowicz equation \[\Delta_{p,f}u+cu^{\sigma}=0\] on manifolds with \(m\)-Bakry-Emery Ricci curvature bounded from below. For more studies of this topic, we refer to [22, 25, 29].
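For the reader's convenience, here is a sketch of the logarithmic transformation leading to (1.8); the computation is standard and is included only as an illustration, under the assumption \(u>0\). Writing \(u=e^{-\frac{v}{p-1}}\), i.e. \(v=-(p-1)\log u\), we have \(\nabla u=-\frac{u}{p-1}\nabla v\), and hence \[\Delta_{p}u=\operatorname{div}\left(|\nabla u|^{p-2}\nabla u\right)=-\frac{u^{p-1}}{(p-1)^{p-1}}\left(\Delta_{p}v-|\nabla v|^{p}\right),\qquad C_{LS}\,u^{p-1}\log u^{p}=-\frac{p\,C_{LS}}{p-1}\,u^{p-1}v.\] Substituting these into \(\Delta_{p}u+C_{LS}u^{p-1}\log u^{p}=0\) and multiplying by \(-(p-1)^{p-1}u^{1-p}\) gives \[\Delta_{p}v-|\nabla v|^{p}+p(p-1)^{p-2}C_{LS}\,v=0,\] which is exactly (1.8) with \(b=p(p-1)^{p-2}C_{LS}\).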
The most important motivation of the present paper is to study the gradient estimates for equation (1.1) on Riemannian manifolds and to extend some results obtained by Bidaut-Veron, Garcia-Huidobro and Veron [2, 3, 4] and by Filippucci, Sun and Zheng [17] for the equation \[\Delta_{p}u+N|\nabla u|^{q}+|u|^{r-1}u=0\quad\text{in}\quad\Omega\subset\mathbb{R}^{n}\] on a domain in Euclidean space to the case of a Riemannian manifold. Instead of the Bernstein method and the Keller-Osserman technique, which were used in [2, 17, 3, 4, 1], we use the Nash-Moser iteration method to approach the gradient estimates, inspired by Wang and Zhang [34] (see also [36, 16, 35]). The Nash-Moser iteration method is known as an integral estimate method and seems to be more suitable for equations defined on Riemannian manifolds. Indeed, when one adopts the Bernstein method to approach such problems, one always needs to employ barrier functions and hence to use comparison theorems, which may require an assumption on the sectional curvature of the Riemannian manifold (see [2, Theorem E, Theorem F], [21, Lemma 2.2]). However, the Nash-Moser iteration method only needs a lower bound on the Ricci curvature. Now we are in a position to state our results. The following gradient estimate is the main theorem of this paper. **Theorem 1.1**.: _Let \((M,g)\) be a complete Riemannian manifold with \(\mathrm{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). For any solution \(u\in C^{1}(B_{R}(o))\) of the Hamilton-Jacobi equation (1.2) with \(p>1\) and \(q>p-1\), which is defined on a geodesic ball \(B_{R}(o)\), we have_ \[\sup_{B_{\frac{R}{2}}(o)}|\nabla u|\leq C_{n,p,q}\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}\] _for some constant \(C_{n,p,q}\) depending on \(n\), \(p\) and \(q\)._ **Remark 1**.: _Theorem 1.1 covers and improves some previous results._ * _If \(q=p\), Theorem 1.1 covers the gradient estimate of Cheng and Yau [8] for \(p=2\) and of Wang and Zhang [34] for any \(p>1\)._ * _When \(p\neq q\), Theorem 1.1 improves some results of [2]. It removes the restriction \(p\leq n\) in [2, Theorem A], and drops the conditions on convexity radius and sectional curvature growth in [2, Theorem E, Theorem F]._ As a direct consequence, the following Liouville-type theorem is established. **Corollary 1.2**.: _Let \((M,g)\) be a Riemannian manifold with non-negative Ricci curvature, that is, \(\mathrm{Ric}\geq 0\). For any solution \(u\in C^{1}(B_{R}(o))\) of the Hamilton-Jacobi equation (1.2) defined on \(B_{R}(o)\), if \(p>1\) and \(q>p-1\), then there exists a constant \(C=C(n,p,q)\) such that_ \[\sup_{B_{\frac{R}{2}}(o)}|\nabla u|\leq C(n,p,q)\left(\frac{1}{R}\right)^{\frac{1}{q-p+1}}. \tag{1.10}\] _Moreover, if \(M\) is a non-compact complete manifold and \(u\) is a global solution of equation (1.2) on \(M\), then there holds \(\nabla u\equiv 0\) by letting \(R\to\infty\) in (1.10), which means that \(u\) is a constant._ **Remark 2**.: _Corollary 1.2 can be viewed as an extension and improvement of [2, Theorem A]. If \((M,g)\) is a domain \(\Omega\) in \(\mathbb{R}^{n}\), then, obviously \(B_{d}(x)\subset\Omega\) for \(d=d(x,\partial\Omega)\) and \(\kappa=0\).
We can see easily that the estimate in the above Corollary 1.2 is a considerable improvement of the corresponding estimate (1.3), i.e.,_ \[|\nabla u(x)|\leq C(n,p,q)\left(d(x,\partial\Omega)\right)^{-\frac{1}{q-p+1}},\] _stated in [2, Theorem A], and the condition \(p\leq n\) assumed in [2, Theorem A] is removed._ **Theorem 1.3**.: _Assume that \(M\) satisfies the same assumptions as in Theorem 1.1. If \(u\in C^{1}(M)\) is a global solution to the Hamilton-Jacobi equation (1.2) on \(M\), then for any fixed \(a\in M\) and any \(x\in M\), we have_ \[u(a)-c(n,p,q)\kappa^{\frac{1}{2(1-p+q)}}d(x,a)\leq u(x)\leq u(a)+c(n,p,q)\kappa^{\frac{1}{2(1-p+q)}}d(x,a), \tag{1.11}\] _where \(d(x,a)\) denotes the geodesic distance from \(x\) to \(a\). Especially when \(p=q\), equation (1.2) is just the logarithmic transformation of the \(p\)-harmonic equation. Then for any positive \(p\)-harmonic function \(v\) on \(M\), there holds true_ \[v(a)e^{-c(n,p,q)\sqrt{\kappa}d(x,a)}\leq v(x)\leq v(a)e^{c(n,p,q)\sqrt{\kappa}d(x,a)},\quad\forall x\in M. \tag{1.12}\] **Remark 3**.: _Inequality (1.11) is a logarithmic version of the Harnack inequality. The Harnack inequality of \(p\)-harmonic functions in the case \(p=2\) and \(\kappa=0\) is due to Cheng and Yau [8]. Kotschwar and Li [21] obtained a similar estimate but with an assumption on the sectional curvature. Bidaut-Veron, Garcia-Huidobro and Veron [2, Theorem G, F] obtained a similar result to (1.12) but with additional conditions on convexity radius and the growth of sectional curvature._ When \(b(x)\) does not vanish in equation (1.1), the gradient estimate for solutions of (1.1) is more complicated. If \(u\in L^{k}(B_{R}(o))\) for some \(k>2rn\), then we can deduce the following gradient estimate. **Theorem 1.4**.: _Let \((M,g)\) be a complete Riemannian manifold satisfying \(\mathrm{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). Suppose that \(b\in W^{1,\infty}(B_{R}(o))\) is a real function and the constants \(p\), \(q\) and \(r\) associated with (1.1) satisfy_ \[p>1,\quad q>\max\{1,p-1\}\quad\text{and}\quad r\geq 1. \tag{1.13}\] _If \(u\in C^{1}(B_{R}(o))\) is a solution of (1.1) in a geodesic ball \(B_{R}(o)\) and there exists some \(k>2rn\) such that \(u\in L^{k}(B_{R}(o))\), then_ \[\sup_{B_{\frac{R}{2}}(o)}|\nabla u|\leq\max\left\{1,\,C\left[\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}+\left(R\|b\|_{1,\infty}^{2}\|u\|_{k}^{\frac{2k}{k/r-n}}\right)^{\frac{1}{q-p+1}}T^{\frac{n}{k/r-n}\frac{1}{q-p+1}}\right]\right\},\] _where \(T=e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}}R^{2}\) is the Sobolev constant of Saloff-Coste's Sobolev inequality (see Lemma 2.2) and the constant \(C=C(n,p,q,r)\) depends on \(n\), \(p\), \(q\) and \(r\)._ If \(u\in L^{\infty}(\Omega)\), the gradient estimate for solutions to (1.1) can also be established. **Theorem 1.5**.: _Let \((M,g)\) be a complete Riemannian manifold with \(\mathrm{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). Suppose that \(b\in W^{1,\infty}(B_{R}(o))\) is a real function and the constants \(p\), \(q\) and \(r\) in (1.1) satisfy_ \[p>1,\quad q>\max\{1,p-1\}\quad\text{and}\quad r\geq 1.
\tag{1.14}\] _Then, for any solution \(u\in C^{1}(B_{R}(o))\cap L^{\infty}(B_{R}(o))\) to (1.1) in a geodesic ball \(B_{R}(o)\) there holds true_ \[\sup_{B_{\frac{R}{2}}(o)}|\nabla u|\leq\max\left\{1,\,C\left[\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}+N\right]\right\}\] _where the constants_ \[C=C(n,p,q,r)\quad\text{and}\quad N=\left(\|b\|_{1,\infty}^{2}+\|b\|_{1,\infty}^{2}\|u\|_{\infty}^{2r}\right)^{\frac{1}{2(q-p+1)}}.\] **Remark 4**.: _Assume that \(u\) is an entire solution to equation (1.1) on a non-compact Riemannian manifold \(M\). If \(u\in L^{\infty}\), then \(|\nabla u|\) is uniformly bounded by letting \(R\to\infty\); however, if \(u\in L^{k}\), then Theorem 1.4 cannot guarantee that \(|\nabla u|\) is globally bounded._ The completeness of the Riemannian manifold and the lower boundedness of Ricci curvature are necessary for the application of Saloff-Coste's Sobolev inequalities (Lemma 2.2, see [26]). If we consider the equation defined on a region of \(\mathbb{R}^{n}\), then the lower boundedness of the Ricci curvature and the Sobolev inequality hold naturally, and thus we can obtain the following result. **Corollary 1.6**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be a domain. Suppose that \(u\in C^{1}(\Omega)\) is a solution to the following equation on \(\Omega\)_ \[\Delta_{p}u+N|\nabla u|^{q}+|u|^{r-1}u=0 \tag{1.15}\] _with \(p\), \(q\) and \(r\) satisfying (1.14). Then, there exists a constant \(C=C(n,p,q,r,\|u\|_{L^{\infty}(\Omega)})\) such that_ \[|\nabla u(x)|\leq C\left(1+d(x,\partial\Omega)^{-\frac{1}{q-p+1}}\right),\quad\forall x\in\Omega.\] **Remark 5**.: _Filippucci-Sun-Zheng [17] employed the Keller-Osserman method to prove that, if_ \[N>0,\quad p>1,\quad r>\max\{1,p-1\}\quad\text{and}\quad q>\frac{rp}{r+1},\] _then any positive solution of (1.15) in a domain \(\Omega\subset\mathbb{R}^{n}\) satisfies_ \[|\nabla u(x)|<C(n,p,q,r)\left(1+d(x,\partial\Omega)^{-\frac{1}{q-p+1}}\right).\] _It is worth pointing out that the ranges of \(n\), \(p\), \(q\) and \(r\) are wider in Corollary 1.6, while the constant \(C(n,p,q,r)\) in the above estimate does not depend on the \(L^{\infty}\) norm of \(u\). Even in the cases of Euclidean spaces and positive solutions, the estimate of Filippucci-Sun-Zheng and Corollary 1.6 cannot cover each other._ If we consider positive solutions of equation (1.1), we can obtain a local \(L^{\infty}\)-estimate for such solutions. More precisely, we deduce an \(L^{\infty}\)-estimate for the solutions of a second-order partial differential inequality. **Proposition 1.7**.: _Let \((M,g)\) be a complete Riemannian manifold on which, for \(1<p<n\), the following Sobolev inequality holds true_ \[\left(\int_{M}|u|^{\frac{np}{n-p}}\right)^{\frac{n-p}{pn}}\leq C(M)\left(\int_{M}|u|^{p}+|\nabla u|^{p}\right)^{\frac{1}{p}},\quad\forall u\in W^{1,p}(M). \tag{1.16}\] _Denote a geodesic ball in \((M,g)\) centered at \(o\) and with radius \(R\) by \(B_{R}(o)\). If \(u\in C^{1}(B_{R}(o))\) is a positive function which satisfies_ \[\Delta_{p}u\geq-Cu^{r},\] _where \(0<r\leq p-1\), \(1<p<n\) and \(C\) is a constant, then the following \(C^{0}\)-estimate is true_ \[\sup_{B_{\frac{R}{2}}(o)}|u|\leq\max\left\{1,\,C(n,p,r,M)\|u\|_{L^{p}(B_{R}(o))}\right\}.\] The gradient estimate in Theorem 1.4 requires \(u\in L^{k}(B_{R}(o))\) for some \(k>2rn\). Combining Proposition 1.7 and Theorem 1.5, we can weaken the condition \(u\in L^{k}(B_{R}(o))\) to \(u\in L^{p}(B_{R}(o))\) for the gradient estimate for positive solutions of equation (1.1).
**Theorem 1.8**.: _Let \((M,g)\) be a complete Riemannian manifold satisfying \(\operatorname{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). Suppose that the Sobolev inequality (1.16) holds true on \(M\), \(u\in C^{1}(M)\) is a non-negative solution of (1.1) with \(p\geq 2,\ q>p-1\) and \(1\leq r\leq p-1\). If \(u\in L^{p}(B_{R}(o))\), then there exist two constants_ \[N=N\left(n,p,q,r,\|b\|_{W^{1,\infty}(B_{R}(o))},\|u\|_{L^{p}(B_{R}(o))}\right)\quad\text{and}\quad C=C(n,p,q,r,R)\] _with \(C\) uniformly bounded as \(R\to\infty\) such that_ \[\sup_{B_{\frac{R}{4}}(o)}|\nabla u|\leq\max\left\{1,\,C\left[\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}+N\right]\right\} \tag{1.17}\] _where \(V=\operatorname{Vol}(B_{R}(o))\) is the volume of the geodesic ball \(B_{R}(o)\)._ The paper is organized as follows. In Section 2 we give some fundamental definitions and notations used in this paper, establish several important lemmas on \(|\nabla u|^{2}\), and recall the Saloff-Coste Sobolev inequality on a Riemannian manifold. In Section 3 we give the proofs of Theorem 1.1 and Theorem 1.3. We show Theorem 1.4 and Theorem 1.5 in Section 4 and discuss the gradient estimates on positive solutions in Section 5. ## 2. Preliminaries Throughout this paper, let \((M,g)\) be an \(n\)-dimensional Riemannian manifold with \(\operatorname{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\) and \(\nabla\) be the corresponding Levi-Civita connection. For any function \(\varphi\in C^{1}(M)\), we denote \(\nabla\varphi\in\Gamma(T^{*}M)\) by \(\nabla\varphi(X)=\nabla_{X}\varphi\). Denote usually the volume form by \(d\mathrm{vol}=\sqrt{\det(g_{ij})}dx_{1}\wedge\ldots\wedge dx_{n}\) where \((x_{1},\ldots,x_{n})\) is a local coordinate chart, and for simplicity, we may omit the volume form in integration over \(M\). For \(p>1\), the \(p\)-Laplace operator is defined by \[\Delta_{p}\varphi:=\operatorname{div}(|\nabla\varphi|^{p-2}\nabla\varphi).\] Solutions \(\varphi\) of the equation \(\Delta_{p}\varphi=0\) are exactly the critical points of the energy functional \[E(\varphi)=\int_{M}|\nabla\varphi|^{p}.\] **Definition 2.1**.: A function \(u\) is said to be a (weak) solution of equation (1.1) if \(u\in C^{1}(M)\) and for all \(\psi\in C^{\infty}_{0}(M)\), we have \[-\int_{M}|\nabla u|^{p-2}\langle\nabla u,\nabla\psi\rangle-\int_{M}|\nabla u|^{q}\psi+\int_{M}b(x)|u|^{r-1}u\psi=0.\] It is worth mentioning that any solution \(u\) of equation (1.1) satisfies \(u\in W^{2,2}_{loc}(\Omega)\) and \(u\in C^{1,\alpha}(\Omega)\) for some \(\alpha\in(0,1)\) (for example, see [15, 31, 32]). Moreover, away from \(\{\nabla u=0\}\), \(u\) is in fact smooth. In our proof of gradient estimates of the solution to (1.1), the following Sobolev inequality due to Saloff-Coste [26] plays an important role. **Lemma 2.2** (Saloff-Coste).: _Let \((M,g)\) be a complete manifold with \(\mathrm{Ric}\geq-(n-1)\kappa\). For \(n>2\), there exists a constant \(c_{0}\) depending only on \(n\), such that for every ball \(B\subset M\) of radius \(R\) and volume \(V\) we have_ \[\|f\|_{L^{\frac{2n}{n-2}}(B)}^{2}\leq e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}}R^{2}\left(\int_{B}|\nabla f|^{2}+R^{-2}f^{2}\right)\] _for any \(f\in C_{0}^{\infty}(B)\).
For \(n=2\), the above inequality holds with \(n\) replaced by any fixed \(n^{\prime}>2\)._ **Remark 6**.: _For any open region \(\Omega\subset M\), if there exists some geodesic ball \(B\) such that \(\Omega\subset B\), we also have_ \[\|f\|_{L^{\frac{2n}{n-2}}(\Omega)}^{2}\leq e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}}R^{2}\left(\int_{\Omega}|\nabla f|^{2}+R^{-2}f^{2}\right),\] _for any \(f\in W^{1,2}(\Omega)\). This can be seen by choosing \(\{f_{n}\}\subset C_{0}^{\infty}(\Omega)\subset C_{0}^{\infty}(B)\) such that \(f_{n}\to f\) in \(W^{1,2}(\Omega)\)._ For \(p>1\), we consider the linearized operator \(\mathcal{L}\) of the \(p\)-Laplace operator at \(u\), \[\mathcal{L}(\psi)=\mathrm{div}\left(f^{\frac{p}{2}-1}\nabla\psi+(p-2)f^{\frac{p}{2}-2}\langle\nabla\psi,\nabla u\rangle\nabla u\right),\] where \(f=|\nabla u|^{2}\). We need to prove the following identity. **Lemma 2.3**.: _Let \(p>1\) and \(r\geq 1\) in (1.1). If \(u\) is a solution of (1.1) and \(f=|\nabla u|^{2}\), then the equality_ \[\begin{split}\mathcal{L}(f)=&\ \left(\frac{p}{2}-1\right)f^{\frac{p}{2}-2}|\nabla f|^{2}+2f^{\frac{p}{2}-1}(\mathrm{Ric}(\nabla u,\nabla u)+|\nabla\nabla u|^{2})\\ &+qf^{\frac{q}{2}-1}\langle\nabla f,\nabla u\rangle-2\langle\nabla(b|u|^{r-1}u),\nabla u\rangle\end{split} \tag{2.1}\] _holds pointwise in \(\{x:f(x)>0\}\)._ Proof.: By direct computations and the Bochner formula \(\frac{1}{2}\Delta f=|\nabla\nabla u|^{2}+\langle\nabla\Delta u,\nabla u\rangle+\mathrm{Ric}(\nabla u,\nabla u)\), we have \[\begin{split}&\ \mathrm{div}(f^{\frac{p}{2}-1}\nabla f)\\ =&\ \left(\frac{p}{2}-1\right)f^{\frac{p}{2}-2}|\nabla f|^{2}+f^{\frac{p}{2}-1}\Delta f\\ =&\ \left(\frac{p}{2}-1\right)f^{\frac{p}{2}-2}|\nabla f|^{2}+2f^{\frac{p}{2}-1}\left(\langle\nabla\Delta u,\nabla u\rangle+\mathrm{Ric}(\nabla u,\nabla u)+|\nabla\nabla u|^{2}\right)\end{split}\] and \[\begin{split}&\ \mathrm{div}\left((p-2)f^{\frac{p}{2}-2}\langle\nabla f,\nabla u\rangle\nabla u\right)\\ =&\ (p-2)f^{-1}\langle\nabla f,\nabla u\rangle\mathrm{div}(f^{\frac{p}{2}-1}\nabla u)+(p-2)\langle\nabla(f^{-1}\langle\nabla f,\nabla u\rangle),f^{\frac{p}{2}-1}\nabla u\rangle\\ =&\ (p-2)(f^{\frac{q}{2}}-b|u|^{r-1}u)f^{-1}\langle\nabla f,\nabla u\rangle+2\langle\nabla(f^{\frac{q-p}{2}+1}-b(x)|u|^{r-1}uf^{1-\frac{p}{2}}-\Delta u),f^{\frac{p}{2}-1}\nabla u\rangle\\ =&\ qf^{\frac{q}{2}-1}\langle\nabla f,\nabla u\rangle-2f^{\frac{p}{2}-1}\langle\nabla\Delta u,\nabla u\rangle-2\langle\nabla(b|u|^{r-1}u),\nabla u\rangle.\end{split}\] It is not difficult to obtain (2.1) from the above identities. Furthermore, we need to show the following pointwise estimate for \(\mathcal{L}(f)\). **Lemma 2.4**.: _If \(u\) is a solution of equation (1.1) with \(p>1\) and \(r\geq 1\), then we have the following pointwise estimate on \(\{f\neq 0\}\) for \(\mathcal{L}(f)\),_ \[\begin{split}\mathcal{L}(f)\geq&-\left|q-\frac{2(p-1)}{n-1}\right|f^{\frac{q-1}{2}}|\nabla f|-2(n-1)\kappa f^{\frac{p}{2}}+\frac{2f^{q-\frac{p}{2}+1}}{n-1}\\ &-2\langle\nabla(b|u|^{r-1}u),\nabla u\rangle-\frac{4b|u|^{r-1}uf^{\frac{2-p+q}{2}}}{n-1}.\end{split} \tag{2.2}\] Proof.: Let \(\{e_{1},e_{2},\cdots,e_{n}\}\) be a local orthonormal frame of \(TM\) in a domain with \(f\neq 0\) such that \(e_{1}=\frac{\nabla u}{|\nabla u|}\). Then we have \(u_{1}=f^{\frac{1}{2}}\) and \[u_{11}=\frac{1}{2}f^{-\frac{1}{2}}f_{1}=\frac{1}{2}f^{-1}\langle\nabla u,\nabla f\rangle. \tag{2.3}\] The \(p\)-Laplace operator can be expressed in terms of \(f\), \[\Delta_{p}u= f^{\frac{p}{2}-1}\left((p-1)u_{11}+\sum_{i=2}^{n}u_{ii}\right).
\tag{2.4}\] Substituting (2.4) into equation (1.1), we obtain \[(p-1)u_{11}+\sum_{i=2}^{n}u_{ii}=f^{\frac{q-p}{2}+1}-b|u|^{r-1}uf^{1-\frac{p}{2}}. \tag{2.5}\] Using the fact \(u_{1}=f^{\frac{1}{2}}\) again, we have \[\frac{|\nabla f|^{2}}{4f}=\sum_{i=1}^{n}u_{1i}^{2}. \tag{2.6}\] By equality (2.6) and the Cauchy inequality, we estimate the Hessian of \(u\) by \[|\nabla\nabla u|^{2}\geq \sum_{i=1}^{n}u_{1i}^{2}+\sum_{i=2}^{n}u_{ii}^{2}\geq\frac{|\nabla f|^{2}}{4f}+\frac{1}{n-1}\left(\sum_{i=2}^{n}u_{ii}\right)^{2}. \tag{2.7}\] In view of (2.5), we have \[\begin{split}\frac{1}{n-1}\left(\sum_{i=2}^{n}u_{ii}\right)^{2}&=\frac{1}{n-1}\left(f^{\frac{q-p}{2}+1}-b|u|^{r-1}uf^{1-\frac{p}{2}}-(p-1)u_{11}\right)^{2}\\ &\geq\frac{1}{n-1}\left(f^{q-p+2}-2b|u|^{r-1}uf^{2+\frac{q}{2}-p}-2(p-1)u_{11}f^{\frac{q-p}{2}+1}\right),\end{split} \tag{2.8}\] where we omit the square term \(\left(b|u|^{r-1}uf^{1-\frac{p}{2}}+(p-1)u_{11}\right)^{2}.\) Now, substituting (2.7), (2.8) and the Ricci condition \(\mathrm{Ric}\geq-(n-1)\kappa g\) into (2.1), we have \[\begin{split}\mathcal{L}(f)&\geq\frac{p-1}{2}f^{\frac{p}{2}-2}|\nabla f|^{2}-2(n-1)\kappa f^{\frac{p}{2}}+2\left(q-\frac{2(p-1)}{n-1}\right)f^{\frac{q}{2}}u_{11}\\ &\quad-2\langle\nabla(b|u|^{r-1}u),\nabla u\rangle+\frac{2f^{\frac{p}{2}-1}}{n-1}\Big{(}f^{q-p+2}-2b|u|^{r-1}uf^{2+\frac{q}{2}-p}\Big{)}.\end{split}\] Omitting the non-negative term \(\frac{p-1}{2}f^{\frac{p}{2}-2}|\nabla f|^{2}\) and using (2.3), we deduce from the above inequality that \[\mathcal{L}(f)\geq -2(n-1)\kappa f^{\frac{p}{2}}-\left|q-\frac{2(p-1)}{n-1}\right|f^{\frac{q-1}{2}}|\nabla f|-2\langle\nabla(b|u|^{r-1}u),\nabla u\rangle+\frac{2f^{\frac{p}{2}-1}}{n-1}\left(f^{q-p+2}-2b|u|^{r-1}uf^{2+\frac{q}{2}-p}\right).\] This is the required estimate. **Remark 7**.: _The assumption \(r\geq 1\) in Lemma 2.3 and Lemma 2.4 can guarantee that there is no singularity due to \(\langle\nabla(b|u|^{r-1}u),\nabla u\rangle\). If \(b\equiv 0\), then the estimate of \(\mathcal{L}(f)\) is given by_ \[\mathcal{L}(f)\geq -\left|q-\frac{2(p-1)}{n-1}\right|f^{\frac{q-1}{2}}|\nabla f|-2(n-1)\kappa f^{\frac{p}{2}}+\frac{2f^{q-\frac{p}{2}+1}}{n-1}, \tag{2.9}\] _and the assumption \(r\geq 1\) is no longer required._ ## 3. Gradient estimates for solutions to \(\Delta_{p}u-|\nabla u|^{q}=0\) In this section, we consider the special case \(b\equiv 0\) of (1.1), i.e., the Hamilton-Jacobi equation \[\Delta_{p}u-|\nabla u|^{q}=0 \tag{3.1}\] with \(p>1\) and \(q>p-1\). We divide the proof of Theorem 1.1 into three parts. In the first part, we derive a basic integral inequality for \(f=|\nabla u|^{2}\), which will be used in the second and third parts. In the second part, we give an \(L^{\alpha_{1}}\)-estimate of \(f\) on a geodesic ball with radius \(3R/4\), where the \(L^{\alpha_{1}}\) norm of \(f\) determines the initial state of the Nash-Moser iteration. Finally, we give a complete proof of our theorem using the Nash-Moser iteration method. From now on we always assume that \(\Omega=B_{R}(o)\subset M\) is a geodesic ball, and we always use \(a_{i}\) (\(i=1,\,2,\,3,\,\cdots\)) to denote some positive constants depending only on \(n\), \(p\), \(q\) and \(r\). We now prove the following integral inequality. **Lemma 3.1**.: _Let \(u\in C^{1}(\Omega)\) be a solution to equation (3.1) with \(p>1\) and \(q>p-1\), defined on \(\Omega\subset M\).
Then, there exist constants \(a_{1}=\min\{1,p-1\}\), \(a_{2}=\left|q-\frac{2(p-1)}{n-1}\right|\) and \(a_{3}\) such that, for any positive number \(\alpha\) satisfying_ \[\frac{a_{2}^{2}}{a_{1}\alpha}\leq\frac{1}{4(n-1)} \tag{3.2}\] _and any non-negative \(\eta\in C_{0}^{\infty}(\Omega)\), there holds true_ \[\begin{split}&\frac{4a_{1}\alpha}{(2\alpha+p)^{2}}\int_{\Omega}\left|\nabla(f^{\frac{\alpha}{2}+\frac{p}{4}}\eta)\right|^{2}+\frac{1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq&\frac{a_{3}}{\alpha}\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}.\end{split} \tag{3.3}\] Proof.: By the regularity theory, away from \(\{f=0\}\), \(u\) is smooth, so both sides of (2.9) are in fact smooth. Let \(\epsilon>0\) and \(\psi=f_{\epsilon}^{\alpha}\eta^{2}\), where \(f_{\epsilon}=(f-\epsilon)^{+}\), \(\eta\in C_{0}^{\infty}(B_{R}(o))\) is non-negative, and \(\alpha>1\) will be determined later. Multiplying both sides of (2.9) by \(\psi\) and integrating over \(\Omega\), a direct computation yields \[\begin{split}&\int_{\Omega}\alpha f^{\frac{p}{2}-1}f_{\epsilon}^{\alpha-1}\eta^{2}\left(|\nabla f|^{2}+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle^{2}\right)\\ &+\int_{\Omega}2f^{\frac{p}{2}}f_{\epsilon}^{\alpha-1}\eta\langle\nabla f+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle\nabla u,\nabla\eta\rangle+\frac{2}{n-1}\int_{\Omega}f^{q-\frac{p}{2}+1}f_{\epsilon}^{\alpha}\eta^{2}\\ \leq&\ a_{2}\int_{\Omega}f^{\frac{q-1}{2}}f_{\epsilon}^{\alpha}|\nabla f|\eta^{2}+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}}f_{\epsilon}^{\alpha}\eta^{2}.\end{split} \tag{3.4}\] Note that the two terms containing inner products in the inequality (3.4) can be controlled as follows, \[|\nabla f|^{2}+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle^{2}\geq\min\{1,p-1\}|\nabla f|^{2}=a_{1}|\nabla f|^{2},\] \[2\langle\nabla f+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle\nabla u,\nabla\eta\rangle\geq-2(p+1)|\nabla f||\nabla\eta|.\] Hence, in view of the above two inequalities, we can derive from (3.4), by letting \(\epsilon\to 0^{+}\), that \[\begin{split}& a_{1}\alpha\int_{\Omega}f^{\frac{p}{2}+\alpha-2}\eta^{2}|\nabla f|^{2}+\frac{2}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq&\ a_{2}\int_{\Omega}f^{\alpha+\frac{q-1}{2}}|\nabla f|\eta^{2}+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}+2(p+1)\int_{\Omega}f^{\frac{p}{2}+\alpha-1}\eta|\nabla f||\nabla\eta|.\end{split} \tag{3.5}\] Let \(R_{i}\) represent the \(i\)-th term on the right-hand side of (3.5).
By Cauchy inequality, we have \[\begin{split} R_{1}&\leq\,\frac{a_{2}^{2}}{a_{1} \alpha}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}+\frac{a_{1}\alpha}{4} \int_{\Omega}f^{\frac{p}{2}+\alpha-2}\eta^{2}|\nabla f|^{2},\\ R_{3}&\leq\,\frac{a_{1}\alpha}{4}\int_{\Omega}f^{ \frac{p}{2}+\alpha-2}\eta^{2}|\nabla f|^{2}+\frac{4(p+1)^{2}}{a_{1}\alpha} \int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}.\end{split}\] By the above estimates and the choice of \(\alpha\) such that \[\frac{a_{2}^{2}}{a_{1}\alpha}\leq\frac{1}{n-1},\] we can infer from (3.5) that \[\begin{split}&\frac{a_{1}\alpha}{2}\int_{\Omega}f^{\frac{p}{2}+ \alpha-2}\eta^{2}|\nabla f|^{2}+\frac{1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{ 2}+1}\eta^{2}\\ &\leq\,2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}+ \frac{4(p+1)^{2}}{a_{1}\alpha}\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta| ^{2}.\end{split} \tag{3.6}\] On the other hand, by using the inequality \((a+b)^{2}\leq 2(a^{2}+b^{2})\) we have \[\int_{\Omega}\left|\nabla\left(f^{\frac{\alpha}{2}+\frac{p}{4}}\eta\right) \right|^{2}\leq\frac{1}{2}\left(\frac{2\alpha+p}{2}\right)^{2}\int_{\Omega}f^{ \alpha+\frac{p}{2}-2}|\nabla f|^{2}\eta^{2}+2\int_{\Omega}f^{\alpha+\frac{p}{ 2}}|\nabla\eta|^{2}. \tag{3.7}\] Immediately, it follows from (3.6) and (3.7) that \[\begin{split}&\frac{4a_{1}\alpha}{(2\alpha+p)^{2}}\int_{\Omega} \left|\nabla\left(f^{\frac{\alpha}{2}+\frac{p}{4}}\eta\right)\right|^{2}+\frac {1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq&\left(\frac{4(p+1)^{2}}{a_{1}\alpha}+\frac{8a_ {1}\alpha}{(2\alpha+p)^{2}}\right)\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla \eta|^{2}+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}.\end{split} \tag{3.8}\] By choosing a suitable constant \(a_{3}\) depending only on \(p\) such that \[\frac{4(p+1)^{2}}{a_{1}\alpha}+\frac{8a_{1}\alpha}{(2\alpha+p)^{2}}\leq\frac{a_{ 3}}{\alpha},\] we finish the proof of Lemma 3.1. ### An integral inequality **Lemma 3.2**.: _Let \((M,g)\) be a complete Riemannian manifold satisfying \(\operatorname{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). Assume that \(u\) is a \(C^{1}\)-solution to equation (3.1) and \(f=|\nabla u|^{2}\). If \(p>1\) and \(q>p-1\), then there holds true_ \[\begin{split}& e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}f^{ \frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n} }+a_{6}\alpha R^{2}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq& a_{7}\left[R^{2}\int_{\Omega}f^{\frac{p}{2}+ \alpha}|\nabla\eta|^{2}+\alpha_{0}^{2}\alpha\int_{\Omega}f^{\frac{p}{2}+ \alpha}\eta^{2}\right].\end{split} \tag{3.9}\] _Here_ \[\alpha_{0}=(1+\sqrt{\kappa}R)\max\left\{c_{0}+1,\,\frac{4(n-1)a_{2}^{2}}{a_{1} }\right\}.\] Proof.: By Lemma 2.2, there holds \[\left(\int_{\Omega}f^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}} \right)^{\frac{n-2}{n}}\leq e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}}\left( R^{2}\int_{\Omega}\left|\nabla\left(f^{\frac{p}{4}+\frac{\alpha}{2}}\eta \right)\right|^{2}+\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\right),\] where the constant \(c_{0}=c_{0}(n)\) depends only on \(n\). 
It follows from (3.3) and the above Sobolev inequality that there holds true \[\begin{split}&\frac{4a_{1}\alpha}{(2\alpha+p)^{2}}e^{-c_{0}(1+\sqrt{\kappa}R)}V^{\frac{2}{n}}R^{-2}\left(\int_{\Omega}f^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq&\frac{a_{3}}{\alpha}\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}+\frac{4a_{1}\alpha}{(2\alpha+p)^{2}R^{2}}\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2},\end{split} \tag{3.10}\] where we require that \(n\neq 2\). We now choose \[c_{1}=\max\left\{c_{0}+1,\,\,\frac{4(n-1)a_{2}^{2}}{a_{1}}\right\}\] and denote \(\alpha_{0}=c_{1}(1+\sqrt{\kappa}R)\). For \(\alpha\geq\alpha_{0}\), there exist constants \(a_{4}\), depending only on \(n\) and \(p\), and \(a_{5}\), depending only on \(p\), such that \[2(n-1)\kappa+\frac{4a_{1}\alpha}{(2\alpha+p)^{2}R^{2}}\leq 2(n-1)\kappa+\frac{a_{1}}{pR^{2}}\leq\frac{a_{4}\alpha_{0}^{2}}{R^{2}}\] and \[\frac{a_{5}}{\alpha}\leq\frac{4a_{1}\alpha}{(2\alpha+p)^{2}}.\] It follows that \[\frac{a_{5}}{\alpha}e^{-\alpha_{0}}V^{\frac{2}{n}}R^{-2}\left(\int_{\Omega}f^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\leq \frac{a_{3}}{\alpha}\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}+\frac{a_{4}\alpha_{0}^{2}}{R^{2}}\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}.\] By taking some new suitable constants \(a_{6}\) and \(a_{7}\), we obtain the required result. ### \(L^{\alpha_{1}}\)-bound of \(f\) in a geodesic ball with radius \(3R/4\) **Lemma 3.3**.: _Let \((M,g)\) be a complete Riemannian manifold satisfying \(\mathrm{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). Assume that \(u\) is a \(C^{1}\)-solution to equation (3.1) and \(f=|\nabla u|^{2}\). If \(p>1\) and \(q>p-1\), then there holds true_ \[\|f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq a_{12}V^{\frac{1}{\alpha_{1}}}\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{2}{q-p+1}},\] _where \(\alpha_{1}=\frac{n}{n-2}\left(\alpha_{0}+\frac{p}{2}\right)\) and the constant \(a_{12}\) depends only on \(n\), \(p\) and \(q\)._ Proof.: We set \(\alpha=\alpha_{0}\) in (3.9) and obtain \[\begin{split}& e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}f^{\frac{n}{n-2}(\frac{p}{2}+\alpha_{0})}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+a_{6}\alpha_{0}R^{2}\int_{\Omega}f^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}\\ &\leq a_{7}\left[R^{2}\int_{\Omega}f^{\frac{p}{2}+\alpha_{0}}|\nabla\eta|^{2}+\alpha_{0}^{3}\int_{\Omega}f^{\frac{p}{2}+\alpha_{0}}\eta^{2}\right].\end{split} \tag{3.11}\] If \[f\geq\left(\frac{2a_{7}\alpha_{0}^{2}}{a_{6}R^{2}}\right)^{\frac{1}{q-p+1}},\] it is easy to see \[a_{7}\alpha_{0}^{3}f^{\frac{p}{2}+\alpha_{0}}\eta^{2}\leq\frac{1}{2}a_{6}\alpha_{0}R^{2}f^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}.\] Let \(\Omega=\Omega_{1}\cup\Omega_{2}\), where \[\Omega_{1}=\left\{x\in\Omega:f\geq\left(\frac{2a_{7}\alpha_{0}^{2}}{a_{6}R^{2}}\right)^{\frac{1}{q-p+1}}\right\}\] and \(\Omega_{2}\) is the complement of \(\Omega_{1}\).
Then, it follows \[\begin{split} a_{7}\alpha_{0}^{3}\int_{\Omega}f^{\frac{p}{2}+\alpha_{0}}\eta^{2}=&\ a_{7}\alpha_{0}^{3}\int_{\Omega_{1}}f^{\frac{p}{2}+\alpha_{0}}\eta^{2}+a_{7}\alpha_{0}^{3}\int_{\Omega_{2}}f^{\frac{p}{2}+\alpha_{0}}\eta^{2}\\ \leq&\ \frac{a_{6}\alpha_{0}R^{2}}{2}\int_{\Omega}f^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}+\frac{a_{6}\alpha_{0}R^{2}}{2}\left(\frac{2a_{7}\alpha_{0}^{2}}{a_{6}R^{2}}\right)^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}V,\end{split} \tag{3.12}\] where \(V\) is the volume of \(\Omega\). Choose \(\gamma\) such that \(0\leq\gamma\leq 1\), \(\gamma\in C_{0}^{\infty}(B_{R}(o))\), \(\gamma\equiv 1\) in \(B_{\frac{3R}{4}}(o)\) and \(|\nabla\gamma|\leq\frac{C}{R}\). Let \[\eta=\gamma^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}.\] A direct calculation shows that \[a_{7}R^{2}|\nabla\eta|^{2}\leq a_{7}C^{2}\left(\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}\right)^{2}\eta^{\frac{2\alpha_{0}+p}{\alpha_{0}+q-\frac{p}{2}+1}}\leq a_{8}\alpha_{0}^{2}\eta^{\frac{2\alpha_{0}+p}{\alpha_{0}+q-\frac{p}{2}+1}}. \tag{3.13}\] It then follows from (3.13) and the Holder and Young inequalities that \[\begin{split} a_{7}R^{2}\int_{\Omega}f^{\frac{p}{2}+\alpha_{0}}|\nabla\eta|^{2}\leq&\ a_{8}\alpha_{0}^{2}\int_{\Omega}f^{\frac{p}{2}+\alpha_{0}}\eta^{\frac{2\alpha_{0}+p}{\alpha_{0}+q-\frac{p}{2}+1}}\\ \leq&\ a_{8}\alpha_{0}^{2}\left(\int_{\Omega}f^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}\right)^{\frac{\alpha_{0}+\frac{p}{2}}{\alpha_{0}+q-\frac{p}{2}+1}}V^{\frac{a-p+1}{\alpha_{0}+q-\frac{p}{2}+1}}\\ \leq&\ \frac{a_{6}\alpha_{0}R^{2}}{2}\int_{\Omega}f^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}+\frac{a_{6}\alpha_{0}R^{2}}{2}\left(\frac{2a_{8}\alpha_{0}}{a_{6}R^{2}}\right)^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}V.\end{split} \tag{3.14}\] Now, by substituting (3.12) and (3.14) into (3.11) we obtain the following \[\begin{split}&\left(\int_{\Omega}f^{\frac{n}{n-2}\left(\frac{p}{2}+\alpha_{0}\right)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\\ \leq&\ e^{\alpha_{0}}V^{1-\frac{2}{n}}\left[\frac{a_{6}\alpha_{0}R^{2}}{2}\left(\frac{2a_{7}\alpha_{0}^{2}}{a_{6}R^{2}}\right)^{\frac{q-\frac{p}{2}+\alpha_{0}+1}{q-p+1}}+\frac{a_{6}\alpha_{0}R^{2}}{2}\left(\frac{2a_{8}\alpha_{0}}{a_{6}R^{2}}\right)^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}\right]\\ \leq&\ a_{9}e^{\alpha_{0}}V^{1-\frac{2}{n}}a_{10}^{\alpha_{0}}\alpha_{0}^{3}\left(\frac{\alpha_{0}}{R}\right)^{\frac{p+2\alpha_{0}}{q-p+1}}.\end{split} \tag{3.15}\] Taking the \(1/\left(\alpha_{0}+\frac{p}{2}\right)\) power of both sides of (3.15), we obtain \[\|f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq a_{11}V^{\frac{1}{\alpha_{1}}}\left(\frac{\alpha_{0}}{R}\right)^{\frac{2}{q-p+1}}. \tag{3.16}\] Hence, the required inequality follows. ### Proof of Theorem 1.1 and Theorem 1.3 Proof.: By discarding the second term on the left-hand side of (3.9), we obtain \[\left(\int_{\Omega}f^{\frac{n}{n-2}\left(\frac{p}{2}+\alpha\right)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\leq a_{7}e^{\alpha_{0}}V^{-\frac{2}{n}}\left[R^{2}\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}+\alpha_{0}^{2}\alpha\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\right].
\tag{3.17}\] In order to adopt the Nash-Moser iteration, we set \[\alpha_{l+1}=\frac{n}{n-2}\alpha_{l},\quad\Omega_{l}=B(o,r_{l}),\quad r_{l}=\frac{R}{2}+\frac{R}{4^{l}},\quad l=1,2,\ldots\] and choose \(\eta_{l}\in C_{0}^{\infty}(\Omega_{l})\) such that \[\eta_{l}\equiv 1\ \text{in}\ \Omega_{l+1},\quad 0\leq\eta_{l}\leq 1\quad\text{and}\quad|\nabla\eta_{l}|\leq\frac{C(n)4^{l}}{R}.\] We now choose \(\alpha\) such that \(\alpha+\frac{p}{2}=\alpha_{l}\) and take \(\eta=\eta_{l}\) in (3.17). By the fact \[\alpha_{l}\alpha_{0}^{2}\eta_{l}^{2}+R^{2}|\nabla\eta_{l}|^{2}\leq\alpha_{0}^{2}\left(\alpha_{0}+\frac{p}{2}\right)\left(\frac{n}{n-2}\right)^{l}+C^{2}(n)16^{l}\leq a_{12}^{l}\alpha_{0}^{2}\alpha_{1},\] we have \[\left(\int_{\Omega_{l+1}}f^{\alpha_{l+1}}\right)^{\frac{1}{\alpha_{l+1}}}\leq\,\left(a_{7}e^{\alpha_{0}}V^{-\frac{2}{n}}\right)^{\frac{1}{\alpha_{l}}}\left(\int_{\Omega_{l}}\left(\alpha_{l}\alpha_{0}^{2}\eta_{l}^{2}+R^{2}|\nabla\eta_{l}|^{2}\right)f^{\alpha_{l}}\right)^{\frac{1}{\alpha_{l}}}\leq\,\left(a_{7}\alpha_{0}^{2}\alpha_{1}e^{\alpha_{0}}V^{-\frac{2}{n}}\right)^{\frac{1}{\alpha_{l}}}a_{12}^{\frac{l}{\alpha_{l}}}\left(\int_{\Omega_{l}}f^{\alpha_{l}}\right)^{\frac{1}{\alpha_{l}}}.\] By the facts \[\sum_{l=1}^{\infty}\frac{1}{\alpha_{l}}=\frac{n}{2\alpha_{1}}=\frac{n-2}{2\alpha_{0}+p}\quad\text{ and }\quad\sum_{l=1}^{\infty}\frac{l}{\alpha_{l}}=\frac{n^{2}}{4\alpha_{1}}=\frac{n(n-2)}{2(2\alpha_{0}+p)}<\frac{n(n-2)}{2p},\] the following two quantities \[\left(a_{7}\alpha_{0}^{2}\alpha_{1}e^{\alpha_{0}}\right)^{\sum_{l=1}^{\infty}\frac{1}{\alpha_{l}}}\quad\text{ and }\quad a_{12}^{\sum_{l=1}^{\infty}\frac{l}{\alpha_{l}}}\] are both uniformly bounded for any \(R>0\) and \(\kappa\geq 0\). By taking a standard iteration procedure, we have \[\|f\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq\,\left(a_{7}\alpha_{0}^{2}\alpha_{1}e^{\alpha_{0}}V^{-\frac{2}{n}}\right)^{\sum_{l=1}^{\infty}\frac{1}{\alpha_{l}}}a_{12}^{\sum_{l=1}^{\infty}\frac{l}{\alpha_{l}}}\|f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq\,a_{13}V^{-\frac{1}{\alpha_{1}}}\|f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}. \tag{3.18}\] Combining (3.18) with (3.16) leads to \[\|\nabla u\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq a_{14}\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}. \tag{3.19}\] Thus we finish the proof of Theorem 1.1. **Corollary 3.4**.: _Let \(\Omega\subset\mathbb{R}^{n}\) be a region and \(u\in C^{1}(\Omega)\) be a solution to the equation_ \[\Delta_{p}u-|\nabla u|^{q}=0,\] _where \(p>1\) and \(q>p-1\). Then we have_ \[|\nabla u(x)|\leq c(n,p,q)(d(x,\partial\Omega))^{-\frac{1}{q+1-p}}\] _for any \(x\in\Omega\). In addition, if \(\Omega=\mathbb{R}^{n}\), then \(u\) is a constant._ Proof.: Denote \(R=d(x,\partial\Omega)\). Obviously, we have \(B_{R}(x)\subset\Omega\) and \[|\nabla u(x)|\leq\sup_{B_{R}(x)}|\nabla u|\leq a_{16}\left(\frac{1}{R}\right)^{\frac{1}{q-p+1}}=a_{16}d(x,\partial\Omega)^{-\frac{1}{q-p+1}}.\] Here the constant \(a_{16}\) depends only on \(n\), \(p\), and \(q\). This corollary is an improvement of [2, Theorem A]. Another important application of Theorem 1.1 is that any entire solution to \(\Delta_{p}u-|\nabla u|^{q}=0\) on a non-compact complete Riemannian manifold with \(\text{Ric}_{g}\geq-(n-1)\kappa g\) satisfies a Harnack inequality. Theorem 1.3 is just the following **Corollary 3.5**.: _Assume \((M,g)\) satisfies the same assumptions as in Theorem 1.1.
Let \(u\in C^{1}(M)\) be a global solution to the equation (1.2) on \(M\), i.e.,_ \[\Delta_{p}u-|\nabla u|^{q}=0,\quad q>p-1,\quad p>1.\] _Then, for any fixed \(a\in M\) and any \(x\in M\) we have_ \[u(a)-c(n,p,q)\kappa^{\frac{1}{2(1-p+q)}}d(x,a)\leq u(x)\leq u(a)+c(n,p,q)\kappa^{\frac{1}{2(1-p+q)}}d(x,a). \tag{3.20}\] _Especially, when \(p=q\) the equation (1.2) is just the logarithmic transformation of the \(p\)-Laplace equation; then for any positive \(p\)-harmonic function \(v\) on \(M\), there holds true_ \[v(a)e^{-c(n,p,q)\sqrt{\kappa}d(x,a)}\leq v(x)\leq v(a)e^{c(n,p,q)\sqrt{\kappa}d(x,a)},\quad\forall x\in M. \tag{3.21}\] Proof.: Letting \(R\to\infty\) in (3.19), we have \[|\nabla u|\leq c(n,p,q)\kappa^{\frac{1}{2(1-p+q)}}. \tag{3.22}\] Let \(d=d(x,a)\) be the distance between \(a\) and \(x\). For any length-minimizing geodesic segment \(\gamma(t):[0,d]\to M\) which connects \(a\) and \(x\), we have \[u(x)=u(a)+\int_{0}^{d}\frac{d}{dt}\left(u\circ\gamma(t)\right)dt. \tag{3.23}\] Since \(|\gamma^{\prime}|=1\), we infer from (3.22) that \[\left|\frac{d}{dt}\left(u\circ\gamma(t)\right)\right|\leq|\nabla u(\gamma(t))||\gamma^{\prime}(t)|\leq c(n,p,q)\kappa^{\frac{1}{2(1-p+q)}}. \tag{3.24}\] It follows from (3.23) and (3.24) that \[|u(x)-u(a)|\leq c(n,p,q)\kappa^{\frac{1}{2(1-p+q)}}d(x,a),\] which implies (3.20). Then (3.21) follows since \(u=(1-p)\ln v\) satisfies \[\Delta_{p}u-|\nabla u|^{p}=0.\] L. Veron et al. ([2, Theorem G]) obtained (3.21) under the additional condition \[\begin{cases}r_{M}=\infty,&\text{ if }1<p<2;\\ \lim_{d(a,x)\to\infty}\frac{|\text{Sec}(x)|}{d(x,a)}=0,&\text{ if }p>2,\end{cases}\] where \(r_{M}\) is the convexity radius and \(\text{Sec}(x)\) denotes the maximal sectional curvature at \(x\). Corollary 3.5 is an improvement of their results. ### The case \(\dim(M)=n=2\) In the proof of Theorem 1.1 above, we have used Saloff-Coste's Sobolev inequality for the embedding \(W^{1,2}(B)\hookrightarrow L^{\frac{2n}{n-2}}(B)\) on a manifold. Since the Sobolev exponent \(2^{*}=2n/(n-2)\) requires \(n>2\), we did not consider the case \(n=2\) in Theorem 1.1. In this subsection, we will explain briefly that Theorem 1.1 can also be established when \(n=2\). When \(\dim M=n=2\), we need the special case of Saloff-Coste's Sobolev theorem, i.e., Lemma 2.2. For any \(n^{\prime}>2\), there holds \[\|f\|^{2}_{L^{\frac{2n^{\prime}}{n^{\prime}-2}}(B)}\leq e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n^{\prime}}}R^{2}\left(\int_{B}|\nabla f|^{2}+R^{-2}f^{2}\right)\] for any \(f\in C_{0}^{\infty}(B)\). For example, we can choose \(n^{\prime}=4\); then we have \[\left(\int_{\Omega}f^{2\alpha+p}\eta^{4}\right)^{\frac{1}{2}}\leq e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{1}{2}}\left(R^{2}\int_{\Omega}\left|\nabla\left(f^{\frac{p}{4}+\frac{\alpha}{2}}\eta\right)\right|^{2}+\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\right).\] By the above inequality and (3.3), we can deduce the following integral inequality by almost the same method as in Section 3.1 \[\begin{split}& e^{-\alpha_{0}}V^{\frac{1}{2}}\left(\int_{\Omega}f^{p+2\alpha}\eta^{4}\right)^{\frac{1}{2}}+a_{6}\alpha R^{2}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ &\leq a_{7}\left[R^{2}\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}+\alpha_{0}^{2}\alpha\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\right],\end{split} \tag{3.25}\] where \(\alpha_{0}\) is the same as the \(\alpha_{0}\) defined in Section 3.1.
By repeating the same procedure as in Section 3.2, we can deduce from (3.25) the \(L^{\alpha_{1}}\)-bound of \(f\) in a geodesic ball with radius \(3R/4\) \[\|f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq a_{12}V^{\frac{1}{\alpha_{1}}}\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{2}{q-p+1}}, \tag{3.26}\] where \(\alpha_{1}=p+2\alpha_{0}\). For the Nash-Moser iteration, we set \(\alpha_{l}=2^{l}(\alpha_{0}+p/2)\) and define \(\Omega_{l}\) in a similar way to that in Section 3.3, and can obtain the following analogue of (3.18) \[\|f\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq a_{13}V^{-\frac{1}{\alpha_{1}}}\|f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}. \tag{3.27}\] Combining (3.26) and (3.27), we finally obtain the Cheng-Yau type gradient estimate. The Harnack inequality and Liouville type results follow from the Cheng-Yau type gradient estimate. ## 4. Gradient estimate for the solutions of (1.1) In this section, we consider the case where \(b(x)\) does not vanish. We will provide the proofs of Theorem 1.4 and Theorem 1.5. The structure of our proof is similar to Section 3. By an abuse of notation, we still use \(a_{i}\), \(i=1,2,\cdots\), to denote constants depending on \(n\), \(p\), \(q\) and \(r\), but the same \(a_{i}\) may represent different numbers in different sections. **Lemma 4.1**.: _Let \(u\in C^{1}(\Omega)\) be a solution to equation (1.1) with \(p>1\) and \(r\geq 1\), defined on a region \(\Omega\subset M\). Then, there exist constants \(a_{1}=\min\{1,p-1\}\), \(a_{2}=\left|q-\frac{2(p-1)}{n-1}\right|\) and \(a_{3}\) such that, for any positive number \(\alpha\) satisfying_
Multiplying both sides of (2.2) by \(\psi\), integrating over \(\Omega\), and integrating by parts, a direct computation yields \[\begin{split}&\int_{\Omega}\alpha f^{\frac{p}{2}+\alpha-2}\eta^{2}\chi\left(|\nabla f|^{2}+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle^{2}\right)\\ &+\int_{\{x\in\Omega:\,f(x)=1\}}|\nabla f|^{-1}f^{\alpha+\frac{p}{2}-1}\eta^{2}\left(|\nabla f|^{2}+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle^{2}\right)\\ &+\int_{\Omega}2f^{\frac{p}{2}+\alpha-1}\eta\chi\langle\nabla f+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle\nabla u,\nabla\eta\rangle+\frac{2}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi\\ \leq& 2\int_{\Omega}\langle\nabla(b|u|^{r-1}u),\nabla u\rangle f^{\alpha}\eta^{2}\chi+a_{2}\int_{\Omega}f^{\alpha+\frac{q-1}{2}}|\nabla f|\eta^{2}\chi\\ &+\frac{4}{n-1}\int_{\Omega}b|u|^{r-1}uf^{\alpha+1+\frac{q-p}{2}}\eta^{2}\chi+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\chi,\end{split} \tag{4.3}\] here we have used the Stokes theorem on the region \(\{x\in\Omega:f(x)\geq 1\}\) and the fact that the outward pointing normal vector of \(\{x\in\Omega:f(x)\geq 1\}\), denoted by \(\nu\), can be expressed by \(\nu=-\nabla f/|\nabla f|\). The three terms containing inner products in the inequality (4.3) can be controlled as follows: \[|\nabla f|^{2}+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle^{2}\geq\min\{1,p-1\}|\nabla f|^{2}=a_{1}|\nabla f|^{2}, \tag{4.4}\] \[-2f^{\frac{p}{2}+\alpha-1}\langle\nabla f+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle\nabla u,\nabla\eta\rangle\leq 2(p+1)f^{\frac{p}{2}+\alpha-1}|\nabla f||\nabla\eta|, \tag{4.5}\] and \[\langle\nabla(b|u|^{r-1}u),\nabla u\rangle f^{\alpha}\leq|\nabla b||u|^{r}f^{\alpha+\frac{1}{2}}+r|b||u|^{r-1}f^{\alpha+1}. \tag{4.6}\] By (4.4), we can deduce that the boundary integral term is non-negative, i.e., \[\int_{\{x\in\Omega:\,f(x)=1\}}|\nabla f|^{-1}f^{\alpha+\frac{p}{2}-1}\eta^{2}\left(|\nabla f|^{2}+(p-2)f^{-1}\langle\nabla u,\nabla f\rangle^{2}\right)\geq\int_{\{x\in\Omega:\,f(x)=1\}}a_{1}|\nabla f|f^{\alpha+\frac{p}{2}-1}\eta^{2}\geq 0. \tag{4.7}\] Now, in view of (4.4), (4.5), (4.6) and (4.7), we can derive from (4.3) that \[\begin{split}& a_{1}\alpha\int_{\Omega}f^{\frac{p}{2}+\alpha-2}\eta^{2}\chi|\nabla f|^{2}+\frac{2}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi\\ \leq& 2\int_{\Omega}|\nabla b||u|^{r-1}uf^{\alpha+\frac{1}{2}}\eta^{2}\chi+2r\int_{\Omega}|b||u|^{r-1}f^{\alpha+1}\eta^{2}\chi+a_{2}\int_{\Omega}f^{\alpha+\frac{q-1}{2}}|\nabla f|\eta^{2}\chi\\ &+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\chi+2(p+1)\int_{\Omega}f^{\frac{p}{2}+\alpha-1}\eta\chi|\nabla f||\nabla\eta|\\ &+\frac{4}{n-1}\int_{\Omega}b|u|^{r}f^{1+\alpha+\frac{q-p}{2}}\eta^{2}\chi.\end{split} \tag{4.9}\] Let \(R_{i}\) represent the \(i\)-th term on the right-hand side of (4.9).
By Cauchy inequality, we have \[\begin{split} R_{1}&\leq\frac{1}{4(n-1)}\int_{\Omega }f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi+4(n-1)\int_{\Omega}|\nabla b|^{2}|u|^{2 r}f^{\alpha-q+\frac{p}{2}}\eta^{2}\chi,\\ R_{2}&\leq\frac{1}{4(n-1)}\int_{\Omega}f^{\alpha+q- \frac{p}{2}+1}\eta^{2}\chi+4r^{2}(n-1)\int_{\Omega}b^{2}|u|^{2r-2}f^{\alpha-q+ \frac{p}{2}+1}\eta^{2}\chi,\\ R_{3}&\leq\frac{a_{2}^{2}}{a_{1}\alpha}\int_{\Omega }f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi+\frac{a_{1}\alpha}{4}\int_{\Omega}f^{ \frac{p}{2}+\alpha-2}\eta^{2}\chi|\nabla f|^{2},\\ R_{5}&\leq\frac{a_{1}\alpha}{4}\int_{\Omega}f^{ \frac{p}{2}+\alpha-2}\eta^{2}\chi|\nabla f|^{2}+\frac{4(p+1)^{2}}{a_{1}\alpha} \int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\chi,\end{split}\] and \[R_{6}\leq\frac{1}{4(n-1)}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi+ \frac{16}{n-1}\int_{\Omega}b^{2}|u|^{2r}f^{\alpha-\frac{p}{2}+1}\eta^{2}\chi.\] By picking \(\alpha\) such that \[\frac{a_{2}^{2}}{a_{1}\alpha}\leq\frac{1}{4(n-1)},\] we can infer from (4.9) and the above estimates on \(R_{i}\) that \[\begin{split}&\frac{a_{1}\alpha}{4}\int_{\Omega}f^{\frac{p}{2}+ \alpha-2}\eta^{2}\chi|\nabla f|^{2}+\frac{1}{n-1}\int_{\Omega}f^{\alpha+q- \frac{p}{2}+1}\eta^{2}\chi\\ \leq& 2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha} \eta^{2}\chi+\frac{4(p+1)^{2}}{a_{1}\alpha}\int_{\Omega}f^{\frac{p}{2}+\alpha} |\nabla\eta|^{2}\chi+\frac{16}{n-1}\int_{\Omega}f^{\alpha-\frac{p}{2}+1}b^{2} |u|^{2r}\eta^{2}\chi\\ &+4(n-1)\int_{\Omega}|\nabla b|^{2}|u|^{2r}f^{\alpha-q+\frac{p}{2 }}\eta^{2}\chi+4r^{2}(n-1)\int_{\Omega}b^{2}|u|^{2r-2}f^{\alpha-q+\frac{p}{2}+ 1}\eta^{2}\chi.\end{split} \tag{4.10}\] On the other hand, by using the inequality \((a+b)^{2}\leq 2(a^{2}+b^{2})\) we have \[\int_{\Omega}\Big{|}\chi\nabla\left(f^{\frac{\alpha}{2}+\frac{p}{4}}\eta\right) \Big{|}^{2}\leq\frac{1}{2}\left(\frac{2\alpha+p}{2}\right)^{2}\int_{\Omega}f^{ \alpha+\frac{p}{2}-2}|\nabla f|^{2}\eta^{2}\chi+2\int_{\Omega}f^{\alpha+\frac{ p}{2}}|\nabla\eta|^{2}\chi. \tag{4.11}\] It follows from (4.10) and (4.11) that \[\begin{split}&\frac{2a_{1}\alpha}{(2\alpha+p)^{2}}\int_{\Omega} \Big{|}\chi\nabla\left(f^{\frac{\alpha}{2}+\frac{p}{4}}\eta\right)\Big{|}^{2}+ \frac{1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi\\ \leq&\left(\frac{4(p+1)^{2}}{a_{1}\alpha}+\frac{4a_ {1}\alpha}{(2\alpha+p)^{2}}\right)\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla \eta|^{2}\chi+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\chi\\ &+\frac{16}{n-1}\int_{\Omega}f^{\alpha-\frac{p}{2}+1}b^{2}|u|^{2r }\eta^{2}\chi+4(n-1)\int_{\Omega}|\nabla b|^{2}|u|^{2r}f^{\alpha-q+\frac{p}{2 }}\eta^{2}\chi\\ &+4r^{2}(n-1)\int_{\Omega}b^{2}|u|^{2r-2}f^{\alpha-q+\frac{p}{2 }+1}\eta^{2}\chi.\end{split} \tag{4.12}\] By the definition of \(\chi\), we know that \[(\chi f)^{\xi}\leq(\chi f)^{\delta}\] as long as \(0<\xi\leq\delta\). Using the conditions \(p>1\) and \(q\geq 1\), we have \[\alpha-\frac{p}{2}+1\leq\alpha+\frac{p}{2},\quad\alpha-q+\frac{p}{2}<\alpha+ \frac{p}{2},\quad\alpha-q+\frac{p}{2}+1\leq\alpha+\frac{p}{2}.\] Although each of the last three terms on the right-hand side of (4.12) is of different powers of \(f\), we can raise their powers up to \(p/2+\alpha\). 
Hence \[\begin{split}&\frac{2a_{1}\alpha}{(2\alpha+p)^{2}}\int_{\Omega}\Big{|}\chi\nabla\left(f^{\frac{\alpha}{2}+\frac{p}{4}}\eta\right)\Big{|}^{2}+\frac{1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi\\ \leq&\left(\frac{4(p+1)^{2}}{a_{1}\alpha}+\frac{4a_{1}\alpha}{(2\alpha+p)^{2}}\right)\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\chi+4r^{2}(n-1)\|b\|_{1,\infty}^{2}\int_{\Omega}|u|^{2r-2}f^{\alpha+\frac{p}{2}}\eta^{2}\chi\\ &+\left(\frac{16}{n-1}+4(n-1)\right)\|b\|_{1,\infty}^{2}\int_{\Omega}f^{\alpha+\frac{p}{2}}|u|^{2r}\eta^{2}\chi+2(n-1)\kappa\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\chi.\end{split} \tag{4.13}\] According to the Young inequality, we have \[|u|^{2r-2}\leq\frac{1}{r}+\frac{r-1}{r}|u|^{2r}\leq 1+|u|^{2r},\quad\forall r\geq 1. \tag{4.14}\] Hence we can infer from (4.13) that \[\begin{split}&\frac{2a_{1}\alpha}{(2\alpha+p)^{2}}\int_{\Omega}\Big{|}\chi\nabla\left(f^{\frac{\alpha}{2}+\frac{p}{4}}\eta\right)\Big{|}^{2}+\frac{1}{n-1}\int_{\Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi\\ \leq&\frac{a_{3}}{\alpha}\int_{\Omega}f^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\chi+2(n-1)\left(\kappa+2r^{2}\|b\|_{1,\infty}^{2}\right)\int_{\Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\chi\\ &+a_{4}\|b\|_{1,\infty}^{2}\int_{\Omega}|u|^{2r}f^{\alpha+\frac{p}{2}}\eta^{2}\chi,\end{split} \tag{4.15}\] where we choose a suitable constant \(a_{3}\) depending only on \(n,p\) and \(q\) such that \[\frac{4(p+1)^{2}}{a_{1}\alpha}+\frac{4a_{1}\alpha}{(2\alpha+p)^{2}}\leq\frac{a_{3}}{\alpha},\] and \[a_{4}=\frac{16}{n-1}+4(n-1)+4r^{2}(n-1).\] **Lemma 4.2**.: _Assume the same conditions as in Lemma 4.1 are satisfied and additionally \(q\geq 1\). Then, there exist constants \(a_{4}\), \(a_{5}\) and \(a_{6}\), which depend only on \(n\), \(p\), \(q\) and \(r\), such that_ \[\frac{2a_{5}}{\alpha}e^{-c_{0}(1+\sqrt{\kappa}R)}V^{\frac{2}{n}}R^{-2}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{\Omega}(\chi f)^{\alpha+q-\frac{p}{2}+1}\eta^{2}\] \[\leq a_{6}\left(\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2}}+B\right)\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+\frac{a_{3}}{\alpha}\int_{M}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\] \[+a_{4}B\int_{M}|u|^{2r}f^{\alpha+\frac{p}{2}}\eta^{2}\chi,\] _where \(B=\|b\|_{1,\infty}^{2}\)._ Proof.: By Lemma 2.2, there holds \[\left(\int_{\Omega_{1}}f^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\leq e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}}\left(R^{2}\int_{\Omega_{1}}\left|\nabla\left(f^{\frac{p}{4}+\frac{\alpha}{2}}\eta\right)\right|^{2}+\int_{\Omega_{1}}f^{\frac{p}{2}+\alpha}\eta^{2}\right),\] where \(\Omega_{1}=\{x\in\Omega:f(x)>1\}\).
Since \(\chi^{t}=\chi,\forall t>0\), the above inequality can be re-expressed as \[\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2 n}{n-2}}\right)^{\frac{n-2}{n}}\leq e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}} \left(R^{2}\int_{\Omega}\left|\chi\nabla\left(f^{\frac{p}{4}+\frac{\alpha}{2} }\eta\right)\right|^{2}+\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2} \right).\] It follows from (4.15) and the above inequality that \[\frac{2a_{1}\alpha}{(2\alpha+p)^{2}}e^{-c_{0}(1+\sqrt{\kappa}R)} V^{\frac{2}{n}}R^{-2}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+ \alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{ \Omega}f^{\alpha+q-\frac{p}{2}+1}\eta^{2}\chi\] \[\leq\frac{a_{3}}{\alpha}\int_{\Omega}f^{\frac{p}{2}+\alpha}| \nabla\eta|^{2}\chi+2(n-1)\left(\kappa+2r^{2}\|b\|_{1,\infty}^{2}\right)\int_{ \Omega}f^{\frac{p}{2}+\alpha}\eta^{2}\chi \tag{4.16}\] \[+a_{4}\|b\|_{1,\infty}^{2}\int_{\Omega}|u|^{2r}f^{\alpha+\frac{p}{ 2}}\eta^{2}\chi+\frac{2a_{1}\alpha}{(2\alpha+p)^{2}R^{2}}\int_{\Omega}(\chi f )^{\frac{p}{2}+\alpha}\eta^{2}.\] We choose suitable \(a_{5}\) and \(a_{6}\) depending only on \(n\), \(p\), \(q\) and \(r\) such that \[\frac{2a_{5}}{\alpha}\leq\frac{2a_{1}\alpha}{(2\alpha+p)^{2}}\qquad\text{and} \qquad a_{6}\left(\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2}}+B\right)\geq 2(n-1)(\kappa+2r^{2}B)+ \frac{2a_{1}\alpha}{(2\alpha+p)^{2}}R^{-2},\] then it follows that \[\frac{2a_{5}}{\alpha}e^{-c_{0}(1+\sqrt{\kappa}R)}V^{\frac{2}{n}} R^{-2}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{ \frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{\Omega}(\chi f)^{ \alpha+q-\frac{p}{2}+1}\eta^{2}\] \[\leq a_{6}\left(\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2}}+B\right)\int_{ \Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+\frac{a_{3}}{\alpha}\int_{M}( \chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2} \tag{4.17}\] \[+a_{4}B\int_{M}|u|^{2r}f^{\alpha+\frac{p}{2}}\eta^{2}\chi.\] Thus, we finish the proof of the lemma. ### The case of \(u\in L^{k}(B_{R}(o))\) To deal with the terms involved with \(u\), i.e., the last term on the right-hand side of (4.17), we use a technique introduced by Wang-Wang([33]). The key point is that we use the first term on the left-hand side of (4.17) to absorb the high-order term resulting from the last term on the right-hand side of (4.17). **Lemma 4.3**.: _Assume \(u\) is a solution to equation (1.1) in \(B_{R}(o)\) with \(p>1,q\geq 1\) and \(r\geq 1\) and \(u\in C^{1}(B_{R}(o))\cap L^{k}(B_{R}(o))\) where \(k\geq 2rn\). Let \(f\) and \(\chi\) be the same as in the above lemma. Let \(\alpha_{0}\) be the unique positive solution of the equation_ \[\alpha_{0}^{2}=c_{1}^{2}\left(1+\sqrt{\kappa}R\right)^{2}+R^{2}\|b\|_{1,\infty }^{2}\|u\|_{k}^{\frac{2k}{k/r-n}}(T\alpha_{0})^{\frac{n}{k/r-n}},\] _where the constant \(c_{1}\) depends only on \(n,p,q\) (see (4.21)) and \(T=e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}}R^{2}\). 
Then, there holds true_ \[e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}(\chi f)^{\frac {n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+a_ {8}\alpha R^{2}\int_{\Omega}(\chi f)^{\alpha+q-\frac{p}{2}+1}\eta^{2}\] \[\leq a_{9}\left(\alpha_{0}^{2}\left(\frac{\alpha}{\alpha_{0}}\right)^ {\frac{n}{2c-n}}\alpha\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+R^{2} \int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\right).\] Proof.: For the third term on the right-hand side of (4.17), we choose \(\mu\), \(\nu\in(1,\infty)\) such that \[\frac{1}{\mu}+\frac{1}{\nu}=1,\qquad\frac{1}{\mu}+\frac{n}{\nu(n-2)}=\frac{c }{c-1},\] where \(c=\frac{k}{2r}>n\). Then, by Holder and Young inequalities we have \[\begin{split}\int_{M}|u|^{2r}(\chi f)^{\alpha+\frac{p}{2}}\eta^ {2}&\leq\|u^{2r}\|_{c}\left(\int_{\Omega}\left(\eta^{2}(\chi f)^ {\alpha+\frac{p}{2}}\right)^{\frac{c}{c-1}}\right)^{\frac{c-1}{c}}\\ &\leq\|u\|_{k}^{2r}\left(\int_{\Omega}\eta^{2}(\chi f)^{\alpha+ \frac{p}{2}}\right)^{\frac{c-1}{c\mu}}\left(\int_{\Omega}\left(\eta^{2}(\chi f )^{\alpha+\frac{p}{2}}\right)^{\frac{n}{n-2}}\right)^{\frac{c-1}{c\nu}}\\ &\leq\|u\|_{k}^{2r}\left[\epsilon^{-\frac{n}{2c-n}}\int_{\Omega }\eta^{2}(\chi f)^{\alpha+\frac{p}{2}}+\epsilon\left(\int_{\Omega}\left(\eta^ {2}(\chi f)^{\alpha+\frac{p}{2}}\right)^{\frac{n}{n-2}}\right)^{\frac{n-2}{n} }\right].\end{split} \tag{4.18}\] Now we choose \(\epsilon\) small enough such that \[a_{4}\|b\|_{1,\infty}^{2}\|u\|_{k}^{2r}\epsilon=\frac{a_{5}}{\alpha}e^{-c_{0} (1+\sqrt{\kappa}R)}V^{\frac{2}{n}}R^{-2}. \tag{4.19}\] Set \[T=T(R)=e^{c_{0}(1+\sqrt{\kappa}R)}V^{-\frac{2}{n}}R^{2},\] then we have \[\epsilon^{-\frac{n}{2c-n}}=\left(\frac{a_{4}}{a_{5}}T\alpha\|u\|_{k}^{2r} \right)^{\frac{n}{2c-n}}.\] Now, let \(\epsilon\) satisfy (4.19), by substituting (4.18) into (4.17) we obtain \[\begin{split}&\frac{a_{5}}{\alpha}e^{-c_{0}(1+\sqrt{\kappa}R)}V^{ \frac{2}{n}}R^{-2}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha )}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{\Omega}( \chi f)^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq&\ \left(a_{6}\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2}}+a _{4}\|b\|_{1,\infty}^{2}\|u\|_{k}^{\frac{4rc}{2c-n}}(T\alpha)^{\frac{n}{2c-n}} \right)\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+\frac{a_{3}}{\alpha }\int_{M}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\\ \leq&\ a_{7}\left(\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2} }+\|b\|_{1,\infty}^{2}\|u\|_{k}^{\frac{4rc}{2c-n}}(T\alpha)^{\frac{n}{2c-n}} \right)\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+\frac{a_{3}}{\alpha }\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2},\end{split} \tag{4.20}\] where \(a_{7}>\max\{a_{6},a_{4}\}\) depends only on \(n,p,q\) and \(r\). Now we define \(c_{1}\) by \[c_{1}=\max\left\{\frac{4(n-1)a_{2}^{2}}{a_{1}},\ 1+c_{0}\right\}. \tag{4.21}\] Since \(k\geq 2rn\) we have \(\frac{n}{2c-n}\leq 1\), thus the following equation \[\alpha_{0}^{2}=c_{1}^{2}\left(1+\sqrt{\kappa}R\right)^{2}+R^{2}\|b\|_{1,\infty }^{2}\|u\|_{k}^{\frac{4rc}{2c-n}}(T\alpha_{0})^{\frac{n}{2c-n}} \tag{4.22}\] has a unique positive solution \(\alpha_{0}\). Then, it is easy to see that \[\alpha_{0}\geq c_{1}(1+\sqrt{\kappa}R)\geq 1.\] It follows from \(\alpha_{0}\geq 1\) and \(\frac{n}{2c-n}\leq 1\) that \[\alpha_{0}^{2}\leq c_{1}^{2}\left(1+\sqrt{\kappa}R\right)^{2}+R^{2}\|b\|_{1, \infty}^{2}\|u\|_{k}^{\frac{4rc}{2c-n}}T^{\frac{n}{2c-n}}\alpha_{0}. 
\tag{4.23}\] Consequently, one can see from the quadratic inequality (4.23) in \(\alpha_{0}\) that \[\alpha_{0}\leq c_{1}\left(1+\sqrt{\kappa}R\right)+R^{2}\|b\|_{1,\infty}^{2}\|u\|_{k}^{\frac{4rc}{2c-n}}T^{\frac{n}{2c-n}}. \tag{4.24}\] For any \(\alpha\geq\alpha_{0}\), it follows from (4.22) that \[\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2}}+\|b\|_{1,\infty}^{2}\|u\|_{k}^{\frac{4rc}{2c-n}}(T\alpha)^{\frac{n}{2c-n}}\leq\frac{\alpha_{0}^{2}}{R^{2}}\left(\frac{\alpha}{\alpha_{0}}\right)^{\frac{n}{2c-n}}.\] Hence, it follows from (4.20) that \[\begin{split}&\frac{a_{5}}{\alpha}e^{-\alpha_{0}}V^{\frac{2}{n}}R^{-2}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{\Omega}(\chi f)^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq&\ a_{7}\frac{\alpha_{0}^{2}}{R^{2}}\left(\frac{\alpha}{\alpha_{0}}\right)^{\frac{n}{2c-n}}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+\frac{a_{3}}{\alpha}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}.\end{split} \tag{4.25}\] Immediately, we can conclude that there holds true \[\begin{split}& e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+a_{8}\alpha R^{2}\int_{\Omega}(\chi f)^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq&\ a_{9}\left(\alpha_{0}^{2}\left(\frac{\alpha}{\alpha_{0}}\right)^{\frac{n}{2c-n}}\alpha\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+R^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\right).\end{split} \tag{4.26}\] The proof of this lemma is finished. #### 4.1.1. \(L^{\alpha_{1}}\) bound of gradient in a geodesic ball with radius \(3R/4\) We first prove the following lemma. **Lemma 4.4**.: _Let \((M,g)\) be a complete Riemannian manifold satisfying \(\mathrm{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). Assume that \(u\) and \(f\) are the same as in Lemma 4.3. If \(b\in W^{1,\infty}(B_{R}(o))\) is a real function and the constants \(p\), \(q\) and \(r\) associated with (1.1) satisfy_ \[p>1,\quad q>\max\{1,p-1\}\quad\text{and}\quad r\geq 1,\] _then there exist \(\alpha_{1}=\left(\alpha_{0}+\frac{p}{2}\right)\frac{n}{n-2}\) and a constant \(a_{12}\), which depends only on \(n,p,q\) and \(r\), such that_ \[\|\chi f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq a_{12}V^{\frac{1}{\alpha_{1}}}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{1}{q-p+1}}.
\tag{4.27}\] _Here \(\alpha_{0}\) is determined in the above Lemma 4.3._ Proof.: We choose \(\alpha=\alpha_{0}\) in (4.26) and we obtain \[\begin{split}& e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}( \chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha_{0})}\eta^{\frac{2n}{n-2}}\right)^{ \frac{n-2}{n}}+a_{8}\alpha_{0}R^{2}\int_{\Omega}(\chi f)^{\alpha_{0}+q-\frac{ p}{2}+1}\eta^{2}\\ &\leq a_{9}\left(\alpha_{0}^{3}\int_{\Omega}(\chi f)^{\frac{p}{2 }+\alpha_{0}}\eta^{2}+R^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha_{0}}| \nabla\eta|^{2}\right).\end{split} \tag{4.28}\] When \[f\geq\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{\frac{1}{q-p+1}},\] obviously we have \[a_{9}\alpha_{0}^{3}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{2}\leq\frac{1}{2}a _{8}\alpha_{0}R^{2}(\chi f)^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}.\] Now, we decompose \(\Omega\) into \(\Omega=\Omega_{1}\cup\Omega_{2}\), where \[\Omega_{1}=\left\{f\geq\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{ \frac{1}{q-p+1}}\right\}\] and \(\Omega_{2}\) is the complement of \(\Omega_{1}\), then we have \[a_{9}\alpha_{0}^{3}\int_{\Omega_{1}}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{2} \leq\frac{a_{8}\alpha_{0}R^{2}}{2}\int_{\Omega}(\chi f)^{\alpha_{0}+q-\frac{p }{2}+1}\eta^{2} \tag{4.29}\] and \[a_{9}\alpha_{0}^{3}\int_{\Omega_{2}}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{2} \leq a_{9}\alpha_{0}^{3}V\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right) ^{\frac{\alpha_{0}+\frac{p}{2}}{q-p+1}}, \tag{4.30}\] where \(V\) is the volume of \(\Omega\). Plugging (4.29) and (4.30) into (4.28) leads to \[\begin{split}& e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}( \chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha_{0})}\eta^{\frac{2n}{n-2}}\right)^{ \frac{n-2}{n}}+\frac{1}{2}a_{8}\alpha_{0}R^{2}\int_{\Omega}(\chi f)^{\alpha_{0 }+q-\frac{p}{2}+1}\eta^{2}\\ \leq& a_{9}R^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+ \alpha_{0}}|\nabla\eta|^{2}+\frac{1}{2}a_{8}\alpha_{0}R^{2}V\left(\frac{2a_{9 }\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}.\end{split} \tag{4.31}\] Let the cut-off function \(\gamma\in C_{0}^{\infty}(B_{R}(o))\) satisfy \[\begin{cases}0\leq\gamma\leq 1,\quad\gamma\equiv 1\text{ in }B_{\frac{3R}{4}}(o);\\ |\nabla\gamma|\leq\frac{C}{R},\end{cases}\] and \[\eta=\gamma^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}.\] Then we have \[a_{9}R^{2}|\nabla\eta|^{2}\leq a_{9}C^{2}\left(\frac{\alpha_{0}+q-\frac{p}{2} +1}{q-p+1}\right)^{2}\eta^{\frac{2\alpha_{0}+p}{\alpha_{0}+q-\frac{p}{2}+1}} \leq a_{10}\alpha_{0}^{2}\eta^{\frac{2\alpha_{0}+p}{\alpha_{0}+q-\frac{p}{2}+1 }}. 
\tag{4.32}\] By Holder and Young inequalities, we obtain \[\begin{split} a_{9}R^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+ \alpha}|\nabla\eta|^{2}&\leq a_{10}\alpha_{0}^{2}\int_{\Omega}( \chi f)^{\frac{p}{2}+\alpha}\eta^{\frac{2\alpha+p}{\alpha_{0}+q-\frac{p}{2}+1 }}\\ &\leq a_{10}\alpha_{0}^{2}\left(\int_{\Omega}(\chi f)^{\alpha_{0} +q-\frac{p}{2}+1}\eta^{2}\right)^{\frac{\alpha_{0}+\frac{p}{2}}{\alpha_{0}+q- \frac{p}{2}+1}}V^{\frac{q-p+1}{\alpha_{0}+q-\frac{p}{2}+1}}\\ &\leq\frac{a_{8}\alpha_{0}R^{2}}{2}\left[\int_{\Omega}(\chi f)^{q -\frac{p}{2}+\alpha_{0}+1}\eta^{2}+\left(\frac{2a_{10}\alpha_{0}}{a_{8}R^{2}} \right)^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}V\right].\end{split} \tag{4.33}\] It follows from (4.31) and (4.33) that \[\begin{split}&\left(\int_{\Omega}(\chi f)^{\frac{n(\frac{p}{2}+ \alpha_{0})}{n-2}}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\\ &\leq e^{\alpha_{0}}V^{1-\frac{2}{n}}\left[\frac{a_{8}\alpha_{0} R^{2}}{2}\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{\frac{\alpha_{0}+q- \frac{p}{2}+1}{q-p+1}}+\frac{a_{8}\alpha_{0}R^{2}}{2}\left(\frac{2a_{10}\alpha _{0}}{a_{9}R^{2}}\right)^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}\right]. \end{split} \tag{4.34}\] Since \(\alpha_{0}\geq 1\), then it is not difficult to see from (4.34) that \[\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha_{0})}\eta^{\frac {2n}{n-2}}\right)^{\frac{n-2}{n}}\leq a_{11}^{\alpha_{0}}\alpha_{0}^{3}e^{ \alpha_{0}}V^{1-\frac{2}{n}}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{ \alpha_{0}+\frac{p}{2}}{q-p+1}}.\] Recalling that \(\alpha_{1}=\frac{n}{n-2}\left(\alpha_{0}+\frac{p}{2}\right)\), we have \[\|f\chi\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq a_{12}V^{\frac{1}{\alpha_ {1}}}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{1}{q-p+1}}.\] Thus, we completed the proof of the lemma. #### 4.1.2. Moser Iteration (Proof of Theorem 1.4) Now we turn to use the Moser iteration to deduce the local \(L^{\infty}\)-estimate of \(f\). Proof.: By discarding the second term on the left-hand side of (4.26), we have \[\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n }{n-2}}\right)^{\frac{n-2}{n}}\leq a_{9}e^{\alpha_{0}}V^{-\frac{2}{n}}\left[R ^{2}\int_{M}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}+K\alpha\int_{\Omega }(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}\right], \tag{4.35}\] where \[K=K(\alpha)=(1+\sqrt{\kappa}R)^{2}+R^{2}\|b\|_{1,\infty}^{2}\|u\|_{k}^{\frac{4rc}{ 2c-n}}(T\alpha)^{\frac{n}{2c-n}}.\] Define the sequence \(\alpha_{l+1}=\alpha_{l}\frac{n}{n-2}\) for \(l=1,2,\cdots\). For the sequence of geodesic balls \[\Omega_{l}=B\left(o,\frac{R}{2}+\frac{R}{4^{l}}\right),\quad l=1,2\ldots,\] we choose a sequence of cut-off functions \(\eta_{l}\in C_{0}^{\infty}(\Omega_{l})\) such that \[\eta\equiv 1\text{ in }\Omega_{l+1},\quad 0\leq\eta_{l}\leq 1\quad\text{and} \quad|\nabla\eta_{l}|\leq\frac{C4^{l}}{R}.\] Taking \(\alpha+\frac{p}{2}=\alpha_{l}\) and \(\eta=\eta_{l}\) in (4.35). 
Since \[K(\alpha_{l})\alpha_{l}\eta^{2} =\left((1+\sqrt{\kappa}R)^{2}+\|b\|_{1,\infty}^{2}\|u\|_{k}^{ \frac{4rc}{2c-n}}R^{2}\left(T\left(\alpha_{0}+\frac{p}{2}\right)\left(\frac{n }{n-2}\right)^{l}\right)^{\frac{n}{2c-n}}\right)\alpha_{l}\] \[\leq\left((1+\sqrt{\kappa}R)^{2}+\|b\|_{1,\infty}^{2}\|u\|_{k}^{ \frac{4rc}{2c-n}}R^{2}\left(T\left(\alpha_{0}+\frac{p}{2}\right)\right)^{\frac {n}{2c-n}}\right)\left(\frac{n}{n-2}\right)^{\frac{2cl}{2c-n}}\alpha_{l}\] \[\leq a_{13}\alpha_{0}^{2}\left(\frac{n}{n-2}\right)^{\frac{2cl}{2c-n}} \alpha_{l},\] we have \[R^{2}|\nabla\eta_{l}|^{2}+K(\alpha_{l})\alpha_{l}\eta^{2} \leq C^{2}16^{l}+a_{13}\alpha_{0}^{2}\left(\frac{n}{n-2}\right)^{ \frac{2cl}{2c-n}}\alpha_{l}\] \[\leq a_{14}^{l}\alpha_{0}^{2}\alpha_{1}.\] It follows from the above that \[\left(\int_{\Omega_{l+1}}(\chi f)^{\alpha_{l+1}}\right)^{\frac{1}{\alpha_{l+1 }}}\leq\ \left(a_{9}e^{\alpha_{0}}V^{-\frac{2}{n}}\alpha_{0}^{2}\alpha_{1}\right)^{ \frac{1}{\alpha_{l}}}a_{14}^{\frac{l}{\alpha_{l}}}\left(\int_{\Omega_{l}}(\chi f )^{\alpha_{l}}\right)^{\frac{1}{\alpha_{l}}}. \tag{4.36}\] By the facts \[\sum_{l=1}^{\infty}\frac{1}{\alpha_{l}}=\frac{n}{2\alpha_{1}}\quad\text{ and }\quad\sum_{l=1}^{\infty}\frac{l}{\alpha_{l}}=\frac{n^{2}}{4\alpha_{1}}<\frac{n(n -2)}{2p}\] and taking standard iteration, we obtain \[\|\chi f\|_{L^{\infty}(B_{\frac{R}{2}}(o))} \leq\ \left(a_{9}e^{\alpha_{0}}V^{-\frac{2}{n}}\alpha_{0}^{2} \alpha_{1}\right)^{\sum_{l=1}^{\infty}\frac{1}{\alpha_{l}}}a_{14}^{\sum_{l=1} ^{\infty}\frac{l}{\alpha_{l}}}\|f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))} \tag{4.37}\] \[\leq a_{15}V^{-\frac{1}{\alpha_{1}}}\|f\|_{L^{\alpha_{1}}(B_{\frac{3 R}{4}}(o))}.\] Furthermore, by Lemma 4.4 we can infer from (4.37) that \[\|\chi f\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq a_{16}\left(\frac{\alpha_{0}^ {2}}{R^{2}}\right)^{\frac{1}{q-p+1}}.\] In view of the upper bound of \(\alpha_{0}\) given by (4.24), we obtain \[\sup_{B_{\frac{R}{2}}(o)}|\chi\nabla u|\leq a_{17}\left[\left(\frac{1+\sqrt{\kappa} R}{R}\right)^{\frac{1}{q-p+1}}+\left(R\|b\|_{1,\infty}^{2}\|u\|_{k}^{\frac{4rc}{2c-n}}T^{ \frac{n}{2c-n}}\right)^{\frac{1}{q-p+1}}\right].\] Thus, we finish the proof of Theorem 1.4. ### The case of \(u\in L^{\infty}(B_{R}(o))\) **Proposition 4.5**.: _Let \((M,g)\) be a complete Riemannian manifold with \(\mathrm{Ric}_{g}\geq-(n-1)\kappa g\) for some constant \(\kappa\geq 0\). Suppose that \(b\in W^{1,\infty}(B_{R}(o))\) is a real function and the constants \(p\), \(q\) and \(r\) associated with (1.1) satisfy_ \[p>1,\quad q>\max\{1,p-1\}\quad\text{and}\quad r\geq 1. \tag{4.38}\] _If \(u\in C^{1}(B_{R}(o))\cap L^{\infty}(B_{R}(o))\) is a solution of (1.1), then there exist two constants_ \[C=C(n,p,q,r)\quad\text{and}\quad N=\left(\|b\|_{1,\infty}^{2}+\|b\|_{1,\infty }^{2}\|u\|_{\infty}^{2r}\right)^{\frac{1}{2(q-p+1)}}\] _such that there holds true_ \[\sup_{B_{\frac{R}{2}}(o)}|\nabla u|\leq\max\left\{1,\,C\left[\left(\frac{1+ \sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}+N\right]\right\}\] _where \(V=\mathrm{Vol}(B_{R}(o))\) is the volume of the geodesic ball \(B_{R}(o)\)._ In order to prove the proposition, we need to establish the following lemma. #### 4.2.1. \(L^{\alpha_{1}}\) bound of gradient in a geodesic ball with radius \(3r/4\) **Lemma 4.6**.: _Assume that \((M,g)\) satisfies the same conditions as the above proposition. Let \(f\) and \(\chi\) be the same as before. 
If \(u\in C^{1}(B_{R}(o))\cap L^{\infty}(B_{R}(o))\) is a solution to equation (1.1) with \(p,q\) and \(r\) satisfying (4.38), then there exists a positive constant \(\alpha_{1}=\left(\alpha_{0}+\frac{p}{2}\right)\frac{n}{n-2}\) such that_ \[\|\chi f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq a_{12}V^{\frac{1}{\alpha_ {1}}}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{1}{q-p+1}}.\] _Here_ \[\alpha_{0}=c_{1}\left(1+\sqrt{\kappa}R+R\|b\|_{1,\infty}\left(1+\|u\|_{\infty }^{2r}\right)^{\frac{1}{2}}\right).\] _where \(c_{1}\) is a constant depending only on \(n\), \(p\) and \(q\) (see (4.40))._ Proof.: Since \(u\in L^{\infty}(B_{R}(o))\), then we can infer from (4.17) that \[\begin{split}&\frac{a_{5}}{\alpha}e^{-c_{0}(1+\sqrt{\kappa}R)}V^{ \frac{2}{n}}R^{-2}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+ \alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{ \Omega}(\chi f)^{\alpha+q-\frac{p}{2}+1}\eta^{2}\\ \leq& a_{7}\left(\frac{(1+\sqrt{\kappa}R)^{2}}{R^{2} }+\|b\|_{1,\infty}^{2}+\|b\|_{1,\infty}^{2}\|u\|_{\infty}^{2r}\right)\int_{ \Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+\frac{a_{3}}{\alpha}\int_{M}( \chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2},\end{split} \tag{4.39}\] where \(a_{7}\) depends on \(n\), \(M\), \(p\), \(q\) and \(r\). Similarly, we choose \[c_{1}=\max\left\{c_{0}+1,\,\frac{4(n-1)a_{2}^{2}}{a_{1}}\right\} \tag{4.40}\] and set \[\alpha_{0}=c_{1}\left(1+\sqrt{\kappa}R+R\|b\|_{1,\infty}\left(1+\|u\|_{\infty}^ {2r}\right)^{\frac{1}{2}}\right).\] The equation (4.39) becomes \[\begin{split}&\frac{a_{5}}{\alpha}e^{-\alpha_{0}}V^{\frac{2}{n}}R^{-2} \left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n }{n-2}}\right)^{\frac{n-2}{n}}+\frac{1}{n-1}\int_{\Omega}(\chi f)^{\alpha+q- \frac{p}{2}+1}\eta^{2}\\ \leq& a_{7}\frac{\alpha_{0}^{2}}{R^{2}}\int_{\Omega }(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+\frac{a_{3}}{\alpha}\int_{M}(\chi f)^ {\frac{p}{2}+\alpha}|\nabla\eta|^{2}.\end{split} \tag{4.41}\] Now we denote \(\alpha_{1}=\left(\alpha_{0}+\frac{p}{2}\right)\frac{n}{n-2}\) and next we will give a \(L^{\alpha_{1}}\) bound of \(f\). By choosing \(\alpha=\alpha_{0}\) in (4.41) and multiplying the both sides of (4.41) by \(\alpha_{0}/a_{5}\), then we obtain \[\begin{split}& e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega }(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha_{0})}\eta^{\frac{2n}{n-2}}\right)^ {\frac{n-2}{n}}+a_{8}\alpha_{0}R^{2}\int_{\Omega}(\chi f)^{\alpha_{0}+q-\frac{ p}{2}+1}\eta^{2}\\ \leq& a_{9}\left(\alpha_{0}^{3}\int_{\Omega}(\chi f )^{\frac{p}{2}+\alpha_{0}}\eta^{2}+R^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+ \alpha_{0}}|\nabla\eta|^{2}\right).\end{split} \tag{4.42}\] Now, if \[f\geq\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{\frac{1}{q-p+1}},\] then we have \[a_{9}\alpha_{0}^{3}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{2}\leq\frac{1}{2}a _{8}\alpha_{0}R^{2}(\chi f)^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}.\] We decompose \(\Omega\) into two subregions \(\Omega_{1}\) and \(\Omega_{2}\), where \[\Omega_{1}=\left\{f\leq\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{ \frac{1}{q-p+1}}\right\}\] and \(\Omega_{2}\) is the complement of \(\Omega_{1}\). 
We have \[\begin{split} a_{9}\alpha_{0}^{3}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{2}&\leq a_{9}\alpha_{0}^{3}\int_{\Omega_{1}}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{2}+a_{9}\alpha_{0}^{3}\int_{\Omega_{2}}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{2}\\ &\leq a_{9}\alpha_{0}^{3}\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{\frac{\alpha_{0}+\frac{p}{2}}{q-p+1}}V+\frac{a_{8}\alpha_{0}R^{2}}{2}\int_{\Omega}(\chi f)^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2},\end{split} \tag{4.43}\] where \(V=\operatorname{Vol}(\Omega)\) denotes the volume of \(\Omega\). Choose \(\gamma\in C_{0}^{\infty}(B_{R}(o))\) such that \[\begin{cases}0\leq\gamma(x)\leq 1,\quad|\nabla\gamma(x)|\leq\frac{C}{R},&\forall x\in B_{R}(o);\\ \gamma(x)\equiv 1,&\forall x\in B_{\frac{3R}{4}}(o),\end{cases}\] and set \[\eta=\gamma^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}.\] Direct computation shows that \[|\nabla\eta|=\left|\left(\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}\right)\eta^{\frac{\alpha_{0}+\frac{p}{2}}{\alpha_{0}+q-\frac{p}{2}+1}}\nabla\gamma\right|\leq\left(\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}\right)\eta^{\frac{\alpha_{0}+\frac{p}{2}}{\alpha_{0}+q-\frac{p}{2}+1}}\frac{C}{R}. \tag{4.44}\] It follows from (4.44) and the Young inequality that \[\begin{split} a_{9}R^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha_{0}}|\nabla\eta|^{2}&\leq a_{10}\alpha_{0}^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha_{0}}\eta^{\frac{2\alpha_{0}+p}{\alpha_{0}+q-\frac{p}{2}+1}}\\ &\leq a_{10}\alpha_{0}^{2}\left(\int_{\Omega}(\chi f)^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}\right)^{\frac{\alpha_{0}+\frac{p}{2}}{\alpha_{0}+q-\frac{p}{2}+1}}V^{\frac{q-p+1}{\alpha_{0}+q-\frac{p}{2}+1}}\\ &\leq\frac{a_{8}\alpha_{0}R^{2}}{2}\int_{\Omega}(\chi f)^{\alpha_{0}+q-\frac{p}{2}+1}\eta^{2}+\frac{a_{8}\alpha_{0}R^{2}}{2}\left(\frac{2a_{10}\alpha_{0}}{a_{8}R^{2}}\right)^{\frac{\alpha_{0}+q-\frac{p}{2}+1}{q-p+1}}V.\end{split} \tag{4.45}\] Combining (4.43) with (4.45), we obtain \[e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha_{0})}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\leq a_{10}\alpha_{0}^{2}\left(\frac{2a_{10}\alpha_{0}}{a_{8}R^{2}}\right)^{\frac{\alpha_{0}+\frac{p}{2}}{q-p+1}}V+a_{9}\alpha_{0}^{3}\left(\frac{2a_{9}\alpha_{0}^{2}}{a_{8}R^{2}}\right)^{\frac{\alpha_{0}+\frac{p}{2}}{q-p+1}}V.\] Since \(\alpha_{0}\geq 1\), the above inequality implies that \[\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha_{0})}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\leq e^{\alpha_{0}}a_{11}^{\alpha_{0}}V^{1-\frac{2}{n}}\alpha_{0}^{3}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{\alpha_{0}+\frac{p}{2}}{q-p+1}}. \tag{4.46}\] Taking the \(1/\left(\frac{p}{2}+\alpha_{0}\right)\) power on both sides of (4.46), we have \[\|\chi f\|_{L^{\alpha_{1}}(B_{\frac{3R}{4}}(o))}\leq a_{12}V^{\frac{1}{\alpha_{1}}}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{1}{q-p+1}}. \tag{4.47}\] Here, \(a_{12}\) depends only on \(n\), \(p\), \(q\) and \(r\). The proof of this lemma is completed. #### 4.2.2. 
Proof of Theorem 1.5 or Proposition 4.5 (Nash-Moser iteration) Proof.: Deleting the second term on the left-hand side of the inequality (4.42), we have \[\begin{split}& e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega}(\chi f)^{\frac{n}{n-2}(\frac{p}{2}+\alpha)}\eta^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\\ &\leq a_{9}\left(\alpha_{0}^{2}\alpha\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}\eta^{2}+R^{2}\int_{\Omega}(\chi f)^{\frac{p}{2}+\alpha}|\nabla\eta|^{2}\right).\end{split} \tag{4.48}\] For \(l=1\), \(2\), \(\cdots\), we set \[\alpha_{l+1}=\alpha_{l}\frac{n}{n-2}=\alpha_{1}\left(\frac{n}{n-2}\right)^{l}\quad\text{and}\quad r_{l}=\frac{R}{2}+\frac{R}{4^{l}}.\] Let \(\Omega_{l}=B_{r_{l}}(o)\). We choose \(\eta_{l}\in C_{0}^{\infty}(\Omega_{l})\) such that \[\begin{cases}0\leq\eta_{l}(x)\leq 1,&\forall x\in\Omega_{l};\\ \eta_{l}(x)\equiv 1,&\forall x\in\Omega_{l+1};\\ |\nabla\eta_{l}(x)|\leq\frac{C(n)4^{l}}{R},&\forall x\in\Omega_{l}.\end{cases}\] By substituting \(\eta=\eta_{l}\) and \(\alpha=\alpha_{l}-p/2\) into (4.48), we deduce that \[\begin{split} e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega_{l+1}}(\chi f)^{\alpha_{l+1}}\right)^{\frac{n-2}{n}}&\leq e^{-\alpha_{0}}V^{\frac{2}{n}}\left(\int_{\Omega_{l}}(\chi f)^{\alpha_{l+1}}\eta_{l}^{\frac{2n}{n-2}}\right)^{\frac{n-2}{n}}\\ &\leq a_{9}\left(\alpha_{0}^{2}\alpha_{l}\int_{\Omega_{l}}(\chi f)^{\alpha_{l}}\eta_{l}^{2}+R^{2}\int_{\Omega_{l}}(\chi f)^{\alpha_{l}}|\nabla\eta_{l}|^{2}\right).\end{split} \tag{4.49}\] Noting the fact \[\alpha_{0}^{2}\alpha_{l}\eta_{l}^{2}+R^{2}|\nabla\eta_{l}|^{2}\leq\alpha_{0}^{2}\alpha_{1}\left(\frac{n}{n-2}\right)^{l}+C^{2}(n)16^{l}\leq\alpha_{1}^{3}a_{13}^{l},\] we obtain \[\left(\int_{\Omega_{l+1}}(\chi f)^{\alpha_{l+1}}\right)^{\frac{n-2}{n}}\leq e^{\alpha_{0}}V^{-\frac{2}{n}}a_{9}\alpha_{1}^{3}a_{13}^{l}\int_{\Omega_{l}}(\chi f)^{\alpha_{l}}. \tag{4.50}\] Taking the \(1/\alpha_{l}\) power on both sides of (4.50) yields \[\|\chi f\|_{L^{\alpha_{l+1}}(\Omega_{l+1})}\leq\left(e^{\alpha_{0}}V^{-\frac{2}{n}}a_{9}\alpha_{1}^{3}a_{13}^{l}\right)^{\frac{1}{\alpha_{l}}}\|\chi f\|_{L^{\alpha_{l}}(\Omega_{l})}.\] By iterating to \(\infty\) and using the facts \[\sum_{l=1}^{\infty}\frac{1}{\alpha_{l}}=\frac{n}{2\alpha_{1}}\quad\text{and}\quad\sum_{l=1}^{\infty}\frac{l}{\alpha_{l}}=\frac{n^{2}}{4\alpha_{1}}<\frac{n(n-2)}{2p},\] we have \[\begin{split}\|\chi f\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq&\left(e^{\alpha_{0}}V^{-\frac{2}{n}}a_{9}\alpha_{1}^{3}\right)^{\sum_{l=1}^{\infty}\frac{1}{\alpha_{l}}}a_{13}^{\sum_{l=1}^{\infty}\frac{l}{\alpha_{l}}}\|\chi f\|_{L^{\alpha_{1}}(\Omega_{1})}\\ \leq& a_{14}V^{-\frac{1}{\alpha_{1}}}\|\chi f\|_{L^{\alpha_{1}}(\Omega_{1})}.\end{split} \tag{4.51}\] Combining (4.51) with (4.47), we obtain \[\|\chi f\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq a_{14}V^{-\frac{1}{\alpha_{1}}}a_{12}V^{\frac{1}{\alpha_{1}}}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{1}{q-p+1}}\leq a_{15}\left(\frac{\alpha_{0}^{2}}{R^{2}}\right)^{\frac{1}{q-p+1}}.\] It follows that \[\sup_{B_{\frac{R}{2}}(o)}|\nabla u|\leq\max\left\{1,\,C\left[\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}+\|b\|_{1,\infty}^{\frac{1}{q-p+1}}\left(1+\|u\|_{\infty}^{2r}\right)^{\frac{1}{2(q-p+1)}}\right]\right\}.\] We have finished the proof of Theorem 1.5.
## 5. Properties of non-negative solutions to (1.1) ### \(L^{\infty}\)-estimate of non-negative solutions In this section we consider the local \(L^{\infty}\)-estimate for any non-negative function \(u\) satisfying the differential inequality \[\Delta_{p}u\geq-Cu^{r}, \tag{5.1}\] where \(0\leq r\leq p-1\) and \(C\) is a positive constant. Since \(b(x)\in L^{\infty}(B_{R}(o))\), any non-negative solution to (1.1) satisfies (5.1). We are ready to show Proposition 1.7. As before, we denote \(\Omega=B_{R}(o)\). Proof.: Choose \(\varphi=u^{s}\eta^{t}\chi\), where \(\chi=\chi_{\{u\geq 1\}}\) is the characteristic function of the set \(\{x:\,u(x)\geq 1\}\) and \(\eta\in C_{0}^{\infty}(\Omega)\) is a non-negative cut-off function which will be determined later. Note that \(\varphi\geq 0\). Multiplying both sides of (5.1) by \(\varphi\) and integrating over \(\Omega\), we have \[\begin{split}&\int_{\Omega}su^{s-1}\eta^{t}|\nabla u|^{p}\chi+\int_{\Omega}tu^{s}\eta^{t-1}|\nabla u|^{p-2}\langle\nabla\eta,\nabla u\rangle\chi\\ \leq& C\int_{\Omega}u^{r+s}\eta^{t}\chi+\int_{\Omega}\operatorname{div}\left(|\nabla u|^{p-2}u^{s}\eta^{t}\nabla u\right)\chi.\end{split} \tag{5.2}\] Since the outward pointing normal vector of the region \(\{x:u(x)=1\}\) is \(-\nabla u/|\nabla u|\), we have \[\int_{\Omega}\operatorname{div}(|\nabla u|^{p-2}u^{s}\eta^{t}\nabla u)\chi=\int_{\{u\geq 1\}}\operatorname{div}(|\nabla u|^{p-2}u^{s}\eta^{t}\nabla u)=-\int_{\{u=1\}}|\nabla u|^{p-1}u^{s}\eta^{t}\leq 0.\] Omitting the above boundary integral term and applying the Cauchy inequality to (5.2) gives \[\int_{\Omega}su^{s-1}\eta^{t}|\nabla u|^{p}\chi-\int_{\Omega}tu^{s}\eta^{t-1}|\nabla u|^{p-1}|\nabla\eta|\chi\leq C\int_{\Omega}u^{r+s}\eta^{t}\chi. \tag{5.3}\] Now, applying the Young inequality to the second term on the left-hand side (denoted by \(L^{*}_{2}\)) of (5.3), we can see that \[\begin{split} L^{*}_{2}=&-\int_{\Omega}\left(\epsilon|\nabla u|^{p-1}u^{\frac{(p-1)(s-1)}{p}}\eta^{\frac{(p-1)t}{p}}\right)\left(t\epsilon^{-1}u^{\frac{p+s-1}{p}}\eta^{\frac{t}{p}-1}|\nabla\eta|\chi\right)\\ \geq&-\frac{p-1}{p}\epsilon^{\frac{p}{p-1}}\int_{\Omega}|\nabla u|^{p}u^{s-1}\eta^{t}\chi-\frac{1}{p}t^{p}\epsilon^{-p}\int_{\Omega}u^{p+s-1}\eta^{t-p}|\nabla\eta|^{p}\chi.\end{split} \tag{5.4}\] Choosing \(\epsilon=s^{\frac{p-1}{p}}\) in (5.4) and substituting (5.4) into (5.3), we obtain \[\frac{s}{p}\int_{\Omega}u^{s-1}\eta^{t}|\nabla u|^{p}\chi-\frac{1}{p}t^{p}s^{1-p}\int_{\Omega}u^{p+s-1}\eta^{t-p}|\nabla\eta|^{p}\chi\leq C\int_{\Omega}u^{r+s}\eta^{t}\chi. \tag{5.5}\] We will now address the first term on the left-hand side of (5.5). Observing that \[(a+b)^{p}\leq 2^{p}(a^{p}+b^{p})\] for any \(a\geq 0\) and \(b\geq 0\), we have \[|\nabla(u^{\xi}\eta^{\theta})|^{p}\leq 2^{p}\left[\left(\xi u^{\xi-1}\eta^{\theta}|\nabla u|\right)^{p}+\left(\theta u^{\xi}\eta^{\theta-1}|\nabla\eta|\right)^{p}\right]. \tag{5.6}\] By picking the constants \(\xi\) and \(\theta\) in (5.6) such that \[p(\xi-1)=s-1\quad\text{and}\quad\theta p=t,\] i.e., \[\xi=\frac{p+s-1}{p}\quad\text{and}\quad\theta=\frac{t}{p},\] we can see that \[\left(\frac{p}{2(p+s-1)}\right)^{p}\left|\nabla\left(u^{\frac{p+s-1}{p}}\eta^{\frac{t}{p}}\right)\right|^{p}\leq u^{s-1}\eta^{t}|\nabla u|^{p}+\left(\frac{t}{p+s-1}\right)^{p}u^{p+s-1}\eta^{t-p}|\nabla\eta|^{p}.
\tag{5.7}\] Substituting (5.7) into (5.5), we arrive at \[\begin{split}&\frac{s}{p}\left(\frac{p}{2(p+s-1)}\right)^{p}\int_{\Omega}\left|\chi\nabla\left(u^{\frac{p+s-1}{p}}\eta^{\frac{t}{p}}\right)\right|^{p}-\frac{s}{p}\left(\frac{t}{p+s-1}\right)^{p}\int_{\Omega}u^{p+s-1}\eta^{t-p}|\nabla\eta|^{p}\chi\\ \leq&\frac{1}{p}t^{p}s^{1-p}\int_{\Omega}u^{p+s-1}\eta^{t-p}|\nabla\eta|^{p}\chi+C\int_{\Omega}u^{r+s}\eta^{t}\chi.\end{split} \tag{5.8}\] Taking \(t=p\) in (5.8), we have \[\frac{s}{p}\left(\frac{p}{2(p+s-1)}\right)^{p}\int_{\Omega}\left|\chi\nabla\left(u^{\frac{p+s-1}{p}}\eta^{\frac{t}{p}}\right)\right|^{p}\leq 2p^{p-1}s^{1-p}\int_{\Omega}u^{p+s-1}|\nabla\eta|^{p}\chi+\int_{\Omega}u^{r+s}\eta^{p}\chi,\] i.e., \[\int_{\Omega}\left|\chi\nabla\left(u^{\frac{p+s-1}{p}}\eta^{\frac{t}{p}}\right)\right|^{p}\leq C_{1}\left[\int_{\Omega}u^{p+s-1}|\nabla\eta|^{p}\chi+s^{p-1}\int_{\Omega}u^{r+s}\eta^{p}\chi\right], \tag{5.9}\] where \(C_{1}\) depends on \(p\). By the Sobolev inequality (1.16), we have \[\begin{split}\left\|u^{\frac{p+s-1}{p}}\eta\chi\right\|_{\frac{np}{n-p}}^{p}&\leq C_{2}\left(\left\|\chi\nabla\left(u^{\frac{p+s-1}{p}}\eta\right)\right\|_{p}^{p}+\left\|u^{\frac{p+s-1}{p}}\eta\chi\right\|_{p}^{p}\right)\\ &\leq C_{1}C_{2}\left(\int_{\Omega}u^{p+s-1}|\nabla\eta|^{p}\chi+s^{p-1}\int_{\Omega}u^{r+s}\eta^{p}\chi\right)+C_{2}\left\|u^{\frac{p+s-1}{p}}\eta\chi\right\|_{p}^{p}\\ &\leq C_{3}\left[\int_{\Omega}u^{p+s-1}\chi\left(|\nabla\eta|^{p}+1\right)+s^{p-1}\int_{\Omega}u^{r+s}\eta^{p}\chi\right],\end{split} \tag{5.10}\] where the constants \(C_{2}\) and \(C_{3}\) depend on \(M\), \(p\) and \(r\). Set the sequence \(\rho_{l}=\frac{R}{2}+\frac{R}{4^{l}}\) for \(l=1\), \(2\), \(\cdots\). For any ball \(B_{l}=B(o,\rho_{l})\), we choose a cut-off function \(\eta_{l}\in C_{0}^{\infty}(B_{l})\) satisfying \[\begin{cases}\eta_{l}(x)\equiv 1,&x\in B_{l+1};\\ 0\leq\eta_{l}(x)\leq 1,\ |\nabla\eta_{l}|\leq\frac{C4^{l}}{R},&x\in B_{l}.\end{cases}\] Since \(p-1\geq r\), we have \[u^{r+s}\chi\leq u^{s+p-1}\chi.\] It follows from (5.10) that \[\|u\chi\|_{L^{\frac{n(s+p-1)}{n-p}}(B_{l+1})}^{s+p-1}\leq C_{4}\left(\frac{4^{lp}}{R^{p}}+s^{p-1}+1\right)\|u\chi\|_{L^{s+p-1}(B_{l})}^{s+p-1}. \tag{5.11}\] Now we define a sequence \(s_{l}\) (\(l=0\), \(1\), \(2\), \(\cdots\)) by \[s_{l}=\left(\frac{n}{n-p}\right)^{l}p.\] We choose suitable \(s\) such that \(s+p-1=s_{l}\). It is easy to see that \[\begin{split}\|u\chi\|_{L^{s_{l+1}}(B_{l+1})}^{s_{l}}\leq&\ C_{5}\left(\frac{4^{lp}}{R^{p}}+1+s_{l}^{p-1}\right)\|u\chi\|_{L^{s_{l}}(B_{l})}^{s_{l}}\\ \leq&\ C_{5}\left[\frac{4^{lp}}{R^{p}}+1+p^{p-1}\left(\frac{n}{n-p}\right)^{l(p-1)}\right]\|u\chi\|_{L^{s_{l}}(B_{l})}^{s_{l}}.\end{split} \tag{5.12}\] We can choose some constant \[C_{6}>\max\left\{4^{p},\,\left(\frac{n}{n-p}\right)^{p-1}\right\}\] such that \[\|u\chi\|_{L^{s_{l+1}}(B_{l+1})}\leq\left(\frac{1}{R^{p}}+p^{p-1}\right)^{\frac{1}{s_{l}}}C_{6}^{\frac{l}{s_{l}}}\|u\chi\|_{L^{s_{l}}(B_{l})}.
\tag{5.13}\] Due to the facts \[\sum_{l=1}^{\infty}\frac{1}{s_{l}}=\frac{n-p}{p^{2}}\quad\text{and}\quad\sum_{l=1}^{\infty}\frac{l}{s_{l}}=\frac{n}{p^{2}},\] we can deduce from (5.13) that \[\begin{split}\|u\chi\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq&\left(\frac{1}{R^{p}}+p^{p-1}\right)^{\sum_{l=1}^{\infty}\frac{1}{s_{l}}}C_{6}^{\sum_{l=1}^{\infty}\frac{l}{s_{l}}}\|u\chi\|_{L^{s_{0}}(B_{1})}\\ &\leq C_{7}\left(\frac{1}{R^{p}}+p^{p-1}\right)^{\frac{n-p}{p^{2}}}\|u\chi\|_{L^{p}(B_{1})}\\ &\leq C_{8}\left(\frac{1}{R}+1\right)^{\frac{n-p}{p}}\|u\chi\|_{L^{p}(B_{1})}.\end{split}\] So we obtain \[\|u\|_{L^{\infty}(B_{\frac{R}{2}}(o))}\leq\max\left\{1,C_{9}\|u\|_{L^{p}(B_{R}(o))}\right\}. \tag{5.14}\] It is worth mentioning that \[C_{9}=C_{8}\left(\frac{1}{R}+1\right)^{\frac{n-p}{p}}\] depends on \(\left(\frac{1}{R}+1\right)\), but \(C_{9}\) is uniformly bounded as \(R\to\infty\). ### Proof of Theorem 1.8 Proof.: Theorem 1.8 is a direct consequence of Proposition 1.7 and Proposition 4.5. Indeed, by Proposition 1.7 we know that any non-negative solution of equation (1.1) in a geodesic ball \(B_{R}(o)\) satisfies \[\sup_{B_{\frac{R}{2}}(o)}|u|\leq C, \tag{5.15}\] where the constant \(C=C(n,p,q,r,\|u\|_{L^{p}(B_{R}(o))})\). Now, since \(u\) is also a solution of equation (1.1) in the geodesic ball \(B_{\frac{R}{2}}(o)\) with radius \(R/2\), by Proposition 4.5 and (5.15) we have \[\sup_{B_{\frac{R}{4}}(o)}|\nabla u|\leq\max\left\{1,\,C\left[\left(\frac{1+\sqrt{\kappa}R}{R}\right)^{\frac{1}{q-p+1}}+N\right]\right\}, \tag{5.16}\] where \(N=N\left(n,p,q,r,\|b\|_{1,\infty},\|u\|_{p}\right)\). Thus we complete the proof. If \(u\in L^{p}(M,g)\) is a global non-negative solution of (1.1) on a non-compact complete Riemannian manifold, then by letting \(R\to\infty\) in (5.16) we deduce that \(|\nabla u|\) is uniformly bounded, which means that \(u\) has at most linear growth. **Acknowledgements**: The authors are grateful to Jingchen Hu for the helpful discussion. The author Y. Wang is supported partially by NSFC (Grant No.11971400) and National Key Research and Development projects of China (Grant No. 2020YFA0712500).
2309.15372
Seeing Beyond the Patch: Scale-Adaptive Semantic Segmentation of High-resolution Remote Sensing Imagery based on Reinforcement Learning
In remote sensing imagery analysis, patch-based methods have limitations in capturing information beyond the sliding window. This shortcoming poses a significant challenge in processing complex and variable geo-objects, which results in semantic inconsistency in segmentation results. To address this challenge, we propose a dynamic scale perception framework, named GeoAgent, which adaptively captures appropriate scale context information outside the image patch based on the different geo-objects. In GeoAgent, each image patch's states are represented by a global thumbnail and a location mask. The global thumbnail provides context beyond the patch, and the location mask guides the perceived spatial relationships. The scale-selection actions are performed through a Scale Control Agent (SCA). A feature indexing module is proposed to enhance the ability of the agent to distinguish the current image patch's location. The action switches the patch scale and context branch of a dual-branch segmentation network that extracts and fuses the features of multi-scale patches. The GeoAgent adjusts the network parameters to perform the appropriate scale-selection action based on the reward received for the selected scale. The experimental results, using two publicly available datasets and our newly constructed dataset WUSU, demonstrate that GeoAgent outperforms previous segmentation methods, particularly for large-scale mapping applications.
Yinhe Liu, Sunan Shi, Junjue Wang, Yanfei Zhong
2023-09-27T02:48:04Z
http://arxiv.org/abs/2309.15372v1
Seeing Beyond the Patch: Scale-Adaptive Semantic Segmentation of High-resolution Remote Sensing Imagery based on Reinforcement Learning ###### Abstract In remote sensing imagery analysis, patch-based methods have limitations in capturing information beyond the sliding window. This shortcoming poses a significant challenge in processing complex and variable geo-objects, which results in semantic inconsistency in segmentation results. To address this challenge, we propose a dynamic scale perception framework, named GeoAgent, which adaptively captures appropriate scale context information outside the image patch based on the different geo-objects. In GeoAgent, each image patch's states are represented by a global thumbnail and a location mask. The global thumbnail provides context beyond the patch, and the location mask guides the perceived spatial relationships. The scale-selection actions are performed through a Scale Control Agent (SCA). A feature indexing module is proposed to enhance the ability of the agent to distinguish the current image patch's location. The action switches the patch scale and context branch of a dual-branch segmentation network that extracts and fuses the features of multi-scale patches. The GeoAgent adjusts the network parameters to perform the appropriate scale-selection action based on the reward received for the selected scale. The experimental results, using two publicly available datasets and our newly constructed dataset WUSU, demonstrate that GeoAgent outperforms previous segmentation methods, particularly for large-scale mapping applications. ## 1 Introduction The acquisition of high spatial resolution (HSR) remote sensing images has become possible with the advancement of sensors [4]. These images enable the extraction of geospatial objects, such as buildings, roads, and aircraft, through image interpretation [47, 45, 22, 53]. Furthermore, HSR remote sensing images facilitate the determination of more semantic land use/ land cover (LULC) categories, including industrial land, cropland, and residential land [8]. Such LULC mapping is critical in urban management, planning, and ecological conservation [3, 16, 30]. The task of LULC mapping requires the semantic categorization of each pixel, also known as semantic segmentation. Recently, Fully Convolutional Networks (FCNs) [25] have achieved significant progress for natural images in computer vision [43, 49, 17, 33, 50, 5]. Deep learning techniques have also shown success in the automatic interpretation of HSR remote sensing images [15, 10, 51, 21, 29]. In contrast to conventional ground-based images, HSR remote sensing images have a large length and width that can encompass thousands of objects in a single image. However, distinguishing LULC (Land Use and Land Cover) geo-objects that vary greatly at different scales using a fixed, finite size image patch is challenging. Although Fully Convolutional Networks (FCNs) can theoretically receive input of arbitrary size, when processing HSR remote sensing images, a sliding window (SW) of a specific size is often used to generate image patches [41, 40, 9]. In small patches, different LULC classes may consist of the same objects with similar patterns, making them almost indistinguishable [52, 48]. For instance, the same water may appear in rivers, lakes, and ponds, and the same dense housing may belong to rural or urban residential land. 
Moreover, some cultivated land and meadows may have similar soils [37, 38].

Figure 1: Illustration of the scale problem for HSR remote sensing image LULC segmentation. The sliding window \(\square\) is enlarged for a better view. Legend: \(\blacksquare\) river, \(\blacksquare\) lake, \(\blacksquare\) pond.

Regardless of the multi-scale module's design [5, 19, 23, 50, 13, 44, 24], the model's receptive field cannot surpass the image patch's size [1]. Consequently, the commonly used Deeplabv3+ [5] model's mapping results exhibit an apparent grid effect due to the gap between the sliding window's limited size and the river's enormous scale, as illustrated in Figure 1. The accurate segmentation of land use and land cover (LULC) in high spatial resolution (HSR) remote sensing images thus poses a challenge, because the scale variance of geo-objects is significant while the receptive field of the network is limited by the patch size. Various models aim to address this issue by tackling the multiscale problem from the perspective of input data [6, 11, 7]. While incorporating contextual information can effectively resolve the problem of identifying rivers and water bodies within limited image patches, different LULC geo-objects necessitate different scales of image patches. For instance, local image patches or slightly larger scales may suffice for identifying a pit pond, whereas lakes and continuous rivers may require larger scales. An inappropriate receptive field scale may have an adverse impact on the segmentation. Supervised learning methods have difficulty in adaptively determining the appropriate patch scales, as this would require manually labeling the appropriate context scale for each image. In contrast, reinforcement learning (RL) algorithms operate through interactions between the _agent_ and the _environment_, receiving rewards for _actions_ and using that information to update model parameters, without requiring labeled data [36]. RL techniques have achieved success in various areas, including board games, video games, and mathematics [27, 35, 14]. Thus, applying deep RL techniques to LULC segmentation networks may allow for the adaptive adjustment of the receptive field scale based on image features, dynamically controlling the data flow of HSR remote sensing images. Therefore, we propose a scale-adaptive segmentation network, called GeoAgent, to dynamically select the appropriate patch scale for different geo-objects. An HSR remote sensing segmentation dataset, WUSU, is constructed for the experiments. The main contributions include: (1) An RL-based HSR imagery segmentation method is proposed. In GeoAgent, the HSR remote sensing images are represented by different _states_. GeoAgent observes different _states_ and performs a series of scale-selection _actions_ to control the area of each patch, obtaining the final global mapping _reward_ based on the selected scales. (2) A Scale Control Agent (SCA) is proposed to dynamically control the scale of the GeoAgent. The SCA contains two parts: an 'actor' network that learns a policy to map the _states_ to a probability distribution of _actions_ and a 'critic' network that evaluates the value of each _action_ using an advantage function. A feature indexing module is proposed to enhance the _agent_'s ability to distinguish the location of the current image patch. (3) GeoAgent uses a dual-branch segmentation network for multi-scale image patch segmentation. The input scale and context branch are switched under the control of the _actions_ selected by the SCA.
The network processes the multi-scale context patches and local-scale patches separately, and the multi-scale features are aggregated based on the geo-location information to generate the final multi-scale segmentation result. ## 2 Related Works ### Multi-Scale Segmentation Networks There have been several techniques to solve multi-scale problems, ranging from multi-scale feature ensembles (e.g. Deeplab [5], FPN [19]) to context information preservation (e.g. ParseNet [23], PSPNet [50]). Through the self-attention mechanism, long-distance relationships in images can also be captured [13, 44, 24]. However, no matter how the multi-scale module is designed, the receptive field of the model cannot break through the size of the sliding window. Therefore, some models try to solve the multi-scale problem from the perspective of input data. [32] proposed a saliency-based distortion layer for convolutional neural networks to improve the spatial sampling of input data conditioned on the given task. Some methods fetch larger-scale imagery outside the patch to provide spatial context. [6] proposed GLNet, which consists of a global branch and a local branch that take the downsampled global image and the full-resolution cropped image as input, respectively, effectively preserving both the detail information and the global context information. [11] proposed a wide-context network (WiCoNet), in which a context transformer is introduced to process the downsampled global context image. On the other hand, some approaches address the problem by leveraging multi-scale processing. [7] proposed CascadePSP, which first performs segmentation on the downsampled coarse-resolution images, and then uses the original-resolution images to refine the segmentation results. [18] proposed the multi-scale framework MagNet, which uses multiple processing stages to resolve local ambiguity by examining the image at various magnification levels and propagating coarse-to-fine information. However, some of the above methods do not break the limits of image patches [32], or introduce a fixed-scale context image [6, 11] and complex processes [18, 7], and thus do not allow the model to adaptively choose an appropriate scale to zoom the sliding window for different objects. ### DRL-based Dynamic Networks Deep Reinforcement Learning (DRL) is able to autonomously decide on the next _action_ by observing the _state_ of the environment, which in turn affects the environment, and to update the model parameters by receiving the environment's _reward_ for the _action_. By introducing DRL approaches, CNNs can be made more dynamic, automatically changing the network structure based on the input or influencing the input to the network. [46] used reinforcement learning to train a policy network with a dual reward. During training and testing, the _agent_ uses the input image as a _state_ and tries to utilize the fewest blocks while maintaining recognition accuracy. [39] proposed a Patch-Drop algorithm to selectively use high-resolution satellite data only when necessary. A policy network takes low-resolution remote sensing images as the _state_ to decide whether to sample high-resolution remote sensing images as additional inputs. [2] proposed a reinforcement learning setup to acquire high-resolution satellite images conditionally. A cost-aware _reward_ function was designed to reduce satellite image acquisition costs while maintaining accuracy.
[12] proposed RAZN to achieve accurate and fast prediction of breast cancer segmentation in whole-slide images by learning a policy network to decide whether zooming is required in a given region of interest, motivated by the zoom-in operation of a pathologist using a digital microscope. To the authors' knowledge, there has yet to be work using RL to solve the problem of limited patch size in the remote sensing imagery segmentation task, so the scale-variable network GeoAgent is proposed. ## 3 Methodology ### Overview The GeoAgent for HSR remote sensing image segmentation comprises two sub-networks: the Scale Control Agent (SCA) \(\pi(a|s)\) and the multi-scale segmentation network \(\pi(Y|X)\). The entire HSR image is considered as the segmentation target, and the segmentation process is modeled as a Markov Decision Process (MDP). The SCA selects an appropriate scale based on the _state_ \(s\), composed of the global thumbnail \(X^{\downarrow}_{thumb}\) and the position mask \(M_{pos}\), and determines the _action_ \(a\), which controls the input scale of the segmentation network. The dual-branch segmentation network \(\pi(Y|X)\) takes the local image patches \(X_{loc}\) and the multi-scale context image patch \(X^{\downarrow}_{context}\) to assign categories to each pixel of the current patch. The sequence of _states_, _actions_, and _rewards_ forms a complete _trajectory_ \(\tau\), representing the entire mapping process of image \(X_{i}\). The large-scale segmentation result \(\hat{Y}_{i}\) is obtained and a final mapping _reward_ is given. The objective of GeoAgent is to learn an _agent_ \(\pi(a|s)\) and a segmentation network \(\pi(Y|X)\) that maximize the expected cumulative _reward_ \(J(\tau)\). ### Scale Control Agent Module #### 3.2.1 Patch Scale Selection The High Spatial Resolution (HSR) remote sensing image segmentation dataset, denoted as \(\mathcal{D}=\{(X_{i},Y_{i})\}_{i=1}^{N}\) and comprising \(N\) pairs of HSR remote sensing images \(X_{i}\) and the corresponding training labels \(Y_{i}\), serves as the RL environment in this study. Each _trajectory_ \(\tau\) begins with the environment's initialization, achieved by randomly selecting an image \(X_{i}\) of size \(H\times W\) at its original resolution, which is subsequently divided into \(T\) fixed-size image patches. Each image patch's segmentation defines a timestep \(t\) in the _trajectory_.

Figure 2: The overall flowchart of the proposed reinforcement learning based scale-adaptive segmentation network GeoAgent.

At each timestep, the _state_ \(s_{t}\) is represented by the global thumbnail \(X_{thumb}^{\downarrow}\) and the position mask \(M_{pos}\) of the current patch. The thumbnail \(X_{thumb}^{\downarrow}\) is a downsampled HSR image of specific size \(h\times w\) (e.g., \(512\times 512\)), containing the global-scale context under coarse resolution. The position mask \(M_{pos}\) is a binary mask of size \(h\times w\), indicating the current patch's position, with the value of the current extent set to 1 and the rest of the pixels set to 0. As shown in Fig.2, the Scale Control Agent (SCA) \(\pi(a|s)\) is comprised of a policy network, namely the 'actor', that determines the appropriate _action_ \(a_{t}\) for each timestep (image patch) \(t\). The _action_ space \(a_{t}\in\{1,2,\ldots,N\}\) represents the scale of the context. In addition to the local image patches \(X_{loc}\) of size \(h\times w\) at the original resolution, the context image patch \(X_{context}^{\downarrow}\) of the same size \(h\times w\) with coarse resolution, downsampled by a factor of \(a_{t}\), is acquired. The segmentation network \(\pi(Y|X)\) assigns categories to each pixel of the current patch based on \(X_{loc}\) and \(X_{context}^{\downarrow}\). The _action_ \(a_{t}\) receives an immediate _reward_ \(r_{t}\) based on the segmentation result and the _environment_ \(Y_{i}\). Following the _action_, the _environment_ transitions to the next _state_, where the sliding window moves to the next image patch. This sequence of _actions_ is repeated until the entire image is observed, and the final mapping results \(\hat{Y}_{i}\) are obtained. Since the difference between the _states_ at different positions lies only in the position mask \(M_{pos}\), the discriminability of the different _states_ is very small. When the _states_ are directly used as input, it is difficult for the SCA to recognize these weak differences, and it thus tends to select similar _actions_. To improve the learning capability of the SCA, a feature indexing module is proposed to handle the subtle differences between _states_. As shown in Fig.2, the SCA is not directly trained to learn the mapping from similar _states_ to different _actions_. First, the thumbnail \(X_{thumb}^{\downarrow}\) is fed to a CNN-based backbone for whole-image feature extraction. Then, the features relevant to the current patch are indexed by the position mask \(M_{pos}\), and the irrelevant features outside the patch are masked. The features of the current patch are extracted through a \(3\times 3\) window and transformed into feature vectors by global average pooling. Finally, an actor head with multiple fully connected layers outputs the _action_ probabilities. #### 3.2.2 Segmentation Rewards and Values of Actions After formulating the MDP of GeoAgent, the _agent_'s parameters must be optimized using DRL methods to maximize the cumulative _reward_ \(J(\tau)\). The goal of the SCA is to select the appropriate scale to obtain contextual information outside the patch and improve mapping accuracy. To penalize inappropriate _actions_, the immediate _reward_ for each patch is defined as the accuracy gain of the final result based on the _action_ \(a_{t}\) chosen by the _agent_, compared to the result obtained only at the local scale, i.e., \(a_{t}=1\): \[\mathcal{R}_{patch}(a_{t},y)=Score(y,\hat{y}(a_{t}))-Score(y,\hat{y}(1)) \tag{1}\] where \(Score(y,\hat{y})=mIoU(y,\hat{y})+mF_{1}(y,\hat{y})\), and \(mIoU\) and \(mF_{1}\) are the mean Intersection-over-Union and mean F\({}_{1}\) score, respectively. The _reward_ value based only on the local image is 0, and the value is positive if the _action_ leads to an accuracy improvement and negative if it results in a decline in accuracy. However, rewarding only the current single patch may cause the _agent_ to overlook the potential long-term benefits of whole-image mapping. Therefore, after completing the episode, the accuracy of the predicted results for the entire image is evaluated, and the final global mapping _reward_ is given: \[\mathcal{R}_{map}=T*(Score(Y_{i},\hat{Y}_{i})-Score(Y_{i},\hat{Y}_{i}^{1})) \tag{2}\] where \(T\) is the number of generated patches, used for normalization, and \(\hat{Y}_{i}^{1}\) is the mapping result of the whole image \(X_{i}\) based only on the local scale.
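Concretely, the two reward signals above reduce to a few array comparisons per patch. The following is a minimal NumPy sketch of Eqs. (1) and (2); the helper names and the convention of averaging only over classes that actually occur are our own illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def mean_iou(y_true, y_pred, num_classes):
    """Mean Intersection-over-Union, averaged over classes present in the patch."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(y_true == c, y_pred == c).sum()
        union = np.logical_or(y_true == c, y_pred == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious)) if ious else 0.0

def mean_f1(y_true, y_pred, num_classes):
    """Mean F1 score, averaged over classes present in the patch."""
    f1s = []
    for c in range(num_classes):
        tp = np.logical_and(y_true == c, y_pred == c).sum()
        fp = np.logical_and(y_true != c, y_pred == c).sum()
        fn = np.logical_and(y_true == c, y_pred != c).sum()
        if tp + fp + fn > 0:
            f1s.append(2 * tp / (2 * tp + fp + fn))
    return float(np.mean(f1s)) if f1s else 0.0

def score(y_true, y_pred, num_classes):
    # Score(y, y_hat) = mIoU + mF1, as used in Eqs. (1) and (2)
    return mean_iou(y_true, y_pred, num_classes) + mean_f1(y_true, y_pred, num_classes)

def patch_reward(y, pred_at_scale, pred_local, num_classes):
    # Eq. (1): accuracy gain of the chosen scale a_t over the local-only result (a_t = 1)
    return score(y, pred_at_scale, num_classes) - score(y, pred_local, num_classes)

def map_reward(Y, map_at_scale, map_local, num_patches, num_classes):
    # Eq. (2): final global mapping reward, scaled by the number of patches T
    return num_patches * (score(Y, map_at_scale, num_classes) - score(Y, map_local, num_classes))
```

By construction, a trajectory that always picks \(a_{t}=1\) collects zero reward everywhere, so any positive return the agent earns must come from context scales that actually improve the map.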
In addition to the local image patch \(X_{loc}\) of size \(h\times w\) at the original resolution, the context image patch \(X_{context}^{\downarrow}\) of the same size \(h\times w\) at coarse resolution, downsampled by a factor of \(a_{t}\), is acquired. The segmentation network \(\pi(Y|X)\) assigns categories to each pixel of the current patch based on \(X_{loc}\) and \(X_{context}^{\downarrow}\). The _action_ \(a_{t}\) receives an immediate _reward_ \(r_{t}\) based on the segmentation result and the _environment_ \(Y_{i}\). Following the _action_, the _environment_ transitions to the next _state_, where the sliding window moves to the next image patch. This sequence of _actions_ is repeated until the entire image has been observed, and the final mapping result \(\hat{Y}_{i}\) is obtained.

Since the difference between the _states_ at different positions lies only in the position mask \(M_{pos}\), the discriminability between different _states_ is very small. When the _states_ are used directly as input, it is difficult for the SCA to recognize such weak differences, and it thus selects similar _actions_. To improve the learning capability of the SCA, a feature indexing module is proposed to handle the subtle differences between _states_. As shown in Fig. 2, the SCA is not directly trained to learn the mapping from similar _states_ to different _actions_. First, the thumbnail \(X_{thumb}^{\downarrow}\) is fed to a CNN-based backbone for whole-image feature extraction. Then, the features relevant to the current patch are indexed by the position mask \(M_{pos}\), and the irrelevant features outside the patch are masked out. The features of the current patch are extracted through a \(3\times 3\) window and transformed into feature vectors by global average pooling. Finally, an actor head with multiple fully connected layers outputs the _action_ probabilities.

#### 3.2.2 Segmentation Rewards and Values of Actions

After formulating the MDP of GeoAgent, the _agent_'s parameters must be optimized using DRL methods to maximize the cumulative _reward_ \(J(\tau)\). The goal of the SCA is to select the appropriate scale to obtain contextual information outside the patch and improve mapping accuracy. To penalize inappropriate _actions_, the immediate _reward_ for each patch is defined as the accuracy gain of the result based on the _action_ \(a_{t}\) chosen by the _agent_ over the result obtained at the local scale only, i.e., \(a_{t}=1\):

\[\mathcal{R}_{patch}(a_{t},y)=Score(y,\hat{y}(a_{t}))-Score(y,\hat{y}(1)) \tag{1}\]

where \(Score(y,\hat{y})=mIoU(y,\hat{y})+mF_{1}(y,\hat{y})\), and \(mIoU\) and \(mF_{1}\) are the mean Intersection-over-Union and mean F\({}_{1}\) score, respectively. The _reward_ value based only on the local image is 0; the value is positive if the _action_ leads to an accuracy improvement and negative if it results in a decline in accuracy. However, rewarding only the current single patch may cause the _agent_ to overlook the potential long-term benefits of whole-image mapping. Therefore, after completing the episode, the accuracy of the predicted results for the entire image is evaluated, and the final global mapping _reward_ is given:

\[\mathcal{R}_{map}=T*(Score(Y_{i},\hat{Y}_{i})-Score(Y_{i},\hat{Y}_{i}^{1})) \tag{2}\]

where \(T\) is the number of generated patches, for normalization, and \(\hat{Y}_{i}^{1}\) is the mapping result of the whole image \(X_{i}\) based only on the local scale.
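The two _rewards_ follow directly from the \(Score\) definition. Below is a minimal sketch of Eqs. (1) and (2), assuming label maps given as integer class arrays; the confusion-matrix-based `score` helper is our own illustrative implementation of \(mIoU+mF_{1}\):

```python
import numpy as np

def score(y_true, y_pred, num_classes):
    """Score(y, y_hat) = mIoU(y, y_hat) + mF1(y, y_hat), via a confusion matrix.
    (Classes absent from both maps count as 0 here; a full evaluator would
    typically mask them out of the mean.)"""
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp
    fn = cm.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1.0)
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1.0)
    return iou.mean() + f1.mean()

def patch_reward(y, pred_chosen, pred_local, num_classes):
    """Eq. (1): gain of the chosen scale over the local-only scale (a_t = 1)."""
    return score(y, pred_chosen, num_classes) - score(y, pred_local, num_classes)

def map_reward(Y, Y_hat, Y_hat_local, T, num_classes):
    """Eq. (2): whole-image gain, scaled by the number of patches T."""
    return T * (score(Y, Y_hat, num_classes) - score(Y, Y_hat_local, num_classes))
```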
To evaluate the expected cumulative future _reward_ of a _state_ or _state_-_action_ pair, the _value_ functions \(V^{\pi}(s)\) and \(Q^{\pi}(s,a)\) are introduced. \(V^{\pi}(s)\) denotes the cumulative _reward_ obtained by following policy \(\pi\) from _state_ \(s\):

\[V^{\pi}(s)=\mathrm{E}_{s_{0}=s,\tau\sim\pi}\left[\sum_{t=0}^{T}\gamma^{t}r_{t}\right] \tag{3}\]

and \(Q^{\pi}(s,a)\) denotes the cumulative _reward_ brought by performing _action_ \(a\) in _state_ \(s\) and following policy \(\pi\) thereafter:

\[Q^{\pi}(s,a)=\mathrm{E}_{s_{0}=s,a_{0}=a,\tau\sim\pi}\left[\sum_{t=0}^{T}\gamma^{t}r_{t}\right] \tag{4}\]

To estimate the _value_ of each _action_, in addition to the actor network that learns the mapping from the current _state_ to _actions_, the SCA also contains a critic network that shares weights with the actor network and learns the _value_ function for evaluating the current _states_ and _actions_. The critic network shares a CNN backbone with the actor network to extract features, but has a separate head that outputs the estimated values used later to optimize the network parameters.

### Scale-Variable Segmentation based on Dual-Branch Network

The dual-branch segmentation network, denoted as \(\pi(Y|X)\), produces the segmentation output given a local image patch \(X_{loc}\) and a multi-scale context patch \(X_{context}^{\downarrow}\). The local patch \(X_{loc}\), of size \(h\times w\), is sampled from the original-resolution image based on the patch position. The context patch \(X_{context}^{\downarrow}\) is obtained by expanding the local patch size by a factor of \(a_{t}\) centered at the current position and then downsampling it by the same factor, yielding a context patch of size \(h\times w\). As shown in Fig. 2, the \(\pi(Y|X)\) network employs a two-branch structure sharing a common segmentation network to handle multi-scale input images. The context branch takes the downsampled context patch \(X^{\downarrow}_{context}\), while the local branch receives the local patch \(X_{loc}\) of the same size. When \(a_{t}=1\), i.e., when the _agent_ believes that the local image patch alone is sufficient, the context branch is deactivated, and the dual-branch segmentation network reduces to a single-branch network. During the segmentation process, all layers in either branch share the same weights. The high-level feature maps \(f_{l}\) and \(f_{c}\) are then combined to generate the final segmentation mask using a branch aggregation layer. During the feature fusion phase, the range corresponding to the local image is cropped from the context-patch feature map based on its geographic coordinates. The two scales of feature maps are concatenated and fused by a convolutional layer of size \(3\times 3\). Finally, the segmentation probability is computed using a \(1\times 1\) convolution and a softmax layer. To facilitate training of both branches, auxiliary heads are added to output single-scale segmentation results for the local patches and context patches based on \(f_{l}\) or \(f_{c}\), respectively.

### Optimization of GeoAgent

The Advantage Actor Critic (A2C) algorithm is employed for stable learning of the _agent_ in an adaptive _environment_ [26]. As illustrated in Section 3.2, the SCA module consists of two learnable components: an 'actor' \(\theta_{A}\) that learns the parameterized policy and a 'critic' \(\theta_{C}\) that learns the _value_ function to evaluate the _state_-_action_ pair.
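A minimal sketch of such a shared-backbone actor-critic, including the feature-indexing step from Section 3.2.1, is given below. The ResNet-18 backbone matches the experimental setup, while the head widths and the single \(3\times 3\) convolution are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class ScaleControlAgent(nn.Module):
    """Actor-critic SCA sketch: a shared CNN extracts whole-thumbnail
    features, the position mask indexes the current patch, and separate
    actor/critic heads output scale probabilities and a state value."""

    def __init__(self, num_scales=6):
        super().__init__()
        backbone = resnet18(weights=None)
        # Drop the average-pooling and fc layers; output is (B, 512, h', w').
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.patch_conv = nn.Conv2d(512, 512, kernel_size=3, padding=1)
        self.actor_head = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, num_scales))
        self.critic_head = nn.Sequential(
            nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, thumb, pos_mask):
        f = self.features(thumb)                        # whole-image features
        m = F.interpolate(pos_mask, size=f.shape[-2:])  # mask at feature scale
        f = self.patch_conv(f * m)                      # index current-patch features
        v = F.adaptive_avg_pool2d(f, 1).flatten(1)      # global average pooling
        probs = torch.softmax(self.actor_head(v), -1)   # scale probabilities
        return probs, self.critic_head(v)               # actor and critic outputs
```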
The critic provides reinforcement signals to the actor by learning the advantage function \(A^{\pi}(s,a)\) [26]:

\[A^{\pi}(s,a)=Q^{\pi}(s,a)-V^{\pi}(s) \tag{5}\]

where \(V^{\pi}(s)\) and \(Q^{\pi}(s,a)\) are the _value_ functions defined in Eqs. (3) and (4). The target _value_ \(V^{\pi}_{tar}(s_{t})\) can be estimated by the temporal difference (TD) algorithm from _trajectory_ data (Sutton 1988) and used to train the critic network \(\theta_{C}\) with a mean squared error (MSE) loss [26]:

\[L_{value}(\theta_{C})=\frac{1}{T}\sum_{t=0}^{T}(\hat{V}^{\pi}(s_{t})-V^{\pi}_{tar}(s_{t}))^{2} \tag{6}\]

The actor network is optimized using the estimated advantage \(\hat{A}^{\pi}(s_{t},a_{t})\) through the policy loss (Mnih et al. 2016; Williams 1992):

\[L_{policy}(\theta_{A})=\frac{1}{T}\sum_{t=0}^{T}\left(-\hat{A}^{\pi}(s_{t},a_{t})\log\pi_{\theta_{A}}(a_{t}|s_{t})\right) \tag{7}\]

The parameters of the segmentation network \(\pi(Y|X)\) can be updated through supervised learning with a loss function calculated directly from the labels. The commonly used cross-entropy loss is employed to optimize the segmentation network. In addition to the loss computed from the final outputs, the additional segmentation head outputs in the context and local branches are also used to compute auxiliary losses. When the entire GeoAgent is trained, the segmentation network \(\pi(Y|X)\) is first pre-trained by taking random _actions_ \(a_{t}\). After completing the pre-training, the SCA \(\pi(a|s)\) is trained to learn to select the appropriate scale dynamically. Finally, joint learning is performed to optimize the segmentation network and the SCA together. Since the data-reading pipelines of the two training stages differ greatly, the SCA parameters \(\theta_{scale}\) and the segmentation parameters \(\theta_{cls}\) are updated asynchronously at specific intervals to avoid the instability caused by simultaneous optimization.

## 4 Experiments

### Experimental Setup

Three semantic segmentation datasets for HSR remote sensing images were used for the experiments:

**Gaofen Image Dataset (GID)** [37]: GID is a high-resolution semantic segmentation dataset based on Gaofen-2 (GF-2) satellite images. The satellite collects spectral reflectance in the red-green-blue and near-infrared ranges to create 4-band images, with a spatial resolution of 4 m from the multispectral camera. The dataset includes 10 training and testing images, each labeled with 15 categories and with a size of \(7200\times 6800\). The GID dataset is split into 50% for training and 50% for testing.

**Five-Billion-Pixels (FBP)** [38]: FBP is a large-scale LULC semantic segmentation dataset. Similar to the GID dataset, the FBP dataset is built on 4-m resolution GF-2 images annotated in a 24-category system. The FBP dataset contains 150 images with 5 billion labeled pixels, and the size of each image is also \(7200\times 6800\).

**Wuhan urban semantic understanding dataset (WUSU)**: We create a high-resolution LULC semantic segmentation dataset for Wuhan using GF-2 satellite imagery with finer annotations. The spatial resolution of the WUSU dataset was improved to 1 m through the fusion of panchromatic data. The categories were refined to differentiate between building instances and label them as high- or low-rise buildings, as well as to annotate excavations and structures within the city.
The dataset annotates six satellite images with a total of 67,764 instances, with image heights and widths ranging from 5500 to 7025 pixels. The dataset can be found at [https://github.com/AngieNikki/openWUSU](https://github.com/AngieNikki/openWUSU).

GeoAgent was implemented based on PyTorch and Stable-Baselines3 [31]. ResNet-18 is used for the policy network, and the backbones of the segmentation networks are ResNet-50. The training patches were randomly sampled with a patch size of \(512\times 512\) and a batch size of 16. The stochastic gradient descent optimizer was used with the learning rate initialized to 0.001 with decay. The hyperparameters of A2C follow the default settings of Stable-Baselines3, except that the batch size is adjusted to 16. The models were trained for 20,000 steps on the GID and WUSU datasets and 60,000 steps on the FBP dataset. The segmentation _agents_ are pre-trained for 10,000 steps on the GID and WUSU datasets and 20,000 steps on the FBP dataset. In the joint training phase, the two components perform asynchronous parameter updates at 100-step intervals. The mIoU and mean F\({}_{1}\) score were used for evaluation.

### Comparison With Segmentation Methods

The proposed GeoAgent is a generic framework in which the segmentation backbone can be an arbitrary FCN model. GeoAgent is compared with commonly used HSR remote sensing imagery semantic segmentation networks, including UNet [33], UNet++ [54], FPN [19], PSPNet [50], and Deeplabv3+ [5]. Additionally, global-scale methods such as GLNet [6], MagNet [18], RAZN [12], WiCoNet [11], and CascadePSP [7] are also compared.

The results listed in Table 1 show that GeoAgent significantly outperforms the different patch-based segmentation models on the three challenging HSR remote sensing image segmentation datasets. With the introduction of GeoAgent, a substantial accuracy improvement is obtained for all models. When compared to fixed global-local scale patch-based methods such as GLNet, WiCoNet, and CascadePSP, which employ FPN, ResNet50, and PSPNet as backbones, respectively, GeoAgent also demonstrates accuracy advantages on all three datasets. Examples of Deeplabv3+ on the three datasets are given in Fig. 3. For some artificial objects, such as roads, buildings, and residential areas, Deeplabv3+ can distinguish these categories well.
However, larger natural objects, such as "rivers" in the WUSU and FBP datasets, have almost the same features as lakes and ponds within a finite \(512\times 512\) image patch, and the "irrigated land" and "dry cropland" classes in GID are also easily confused. The fixed-scale image patch leads all existing models to identify different parts of a water body as different classes, resulting in fragmented segmentation. In contrast, the proposed GeoAgent can obtain more global information by capturing context outside the image patch at a suitable size for different objects, so the large-scale LULC geo-objects are correctly identified and the segmentation results are consistent.

\begin{table}
\begin{tabular}{l l c c c c c c}
\hline \hline
\multirow{2}{*}{Method} & \multirow{2}{*}{Backbone} & \multicolumn{2}{c}{WUSU} & \multicolumn{2}{c}{GID} & \multicolumn{2}{c}{FBP} \\
\cline{3-8}
 & & IoU(\%) & F\({}_{1}\)(\%) & IoU(\%) & F\({}_{1}\)(\%) & IoU(\%) & F\({}_{1}\)(\%) \\
\hline
\multicolumn{8}{c}{Local scale patch based methods} \\
\hline
- & UNet [33] & 54.10 & 59.52 & 58.76 & 68.93 & 53.02 & 59.67 \\
- & PSPNet [50] & 60.37 & 69.29 & 59.04 & 69.34 & 59.69 & 64.27 \\
- & FPN [19] & 64.85 & 72.68 & 55.31 & 65.79 & 61.88 & 65.50 \\
- & Deeplabv3+ [5] & 63.39 & 69.97 & 64.89 & 68.65 & 57.08 & 59.46 \\
- & UNet++ [54] & 60.26 & 66.88 & 61.71 & 71.68 & 55.31 & 58.43 \\
\hline
\multicolumn{8}{c}{Global-local scale patch based methods} \\
\hline
GLNet [6] & FPN & 51.71 & 59.09 & 58.69 & 68.79 & 21.86 & 24.34 \\
MagNet [18] & FPN & 51.47 & 63.46 & 66.98 & 55.94 & 45.05 & 49.09 \\
RAZN [12] & Deeplabv3+ & 58.62 & 66.27 & 63.52 & 68.49 & 57.63 & 57.73 \\
WiCoNet [11] & ResNet50 & 61.83 & 75.68 & 65.80 & 76.72 & 58.83 & 70.28 \\
CascadePSP [7] & PSPNet & 67.48 & 77.42 & 72.45 & 77.24 & 67.35 & 70.03 \\
\hline
\multicolumn{8}{c}{Scale-variable segmentation network GeoAgent} \\
\hline
GeoAgent & UNet & 70.33\({}_{(+16.23)}\) & 77.11\({}_{(+17.59)}\) & 75.88\({}_{(+17.12)}\) & 75.23\({}_{(+6.30)}\) & 57.77\({}_{(+4.75)}\) & 61.37\({}_{(+1.70)}\) \\
GeoAgent & UNet++ & 74.93\({}_{(+14.67)}\) & 79.17\({}_{(+12.29)}\) & 77.30\({}_{(+15.59)}\) & 78.62\({}_{(+6.94)}\) & 61.84\({}_{(+6.53)}\) & 60.01\({}_{(+1.58)}\) \\
GeoAgent & Deeplabv3+ & 76.19\({}_{(+12.80)}\) & 80.56\({}_{(+10.59)}\) & 75.51\({}_{(+10.62)}\) & 76.07\({}_{(+7.42)}\) & 62.25\({}_{(+5.17)}\) & 63.21\({}_{(+3.75)}\) \\
GeoAgent & PSPNet & 75.19\({}_{(+14.82)}\) & 79.62\({}_{(+10.33)}\) & 75.57\({}_{(+16.53)}\) & 79.34\({}_{(+10.00)}\) & 69.67\({}_{(+9.98)}\) & 74.36\({}_{(+10.09)}\) \\
GeoAgent & FPN & 76.00\({}_{(+11.15)}\) & 82.46\({}_{(+9.78)}\) & 78.16\({}_{(+22.85)}\) & 77.56\({}_{(+11.77)}\) & 73.95\({}_{(+12.07)}\) & 74.11\({}_{(+8.61)}\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Segmentation accuracies of different models on the three datasets (subscripts give the gain over the corresponding local-scale baseline).

Figure 3: Segmentation results of Deeplabv3+ and modified GeoAgent models on the three datasets.

### Ablation Experiment

In this section, an ablation study was conducted to verify the validity of the proposed GeoAgent. Different scale-selection policies of the SCA are compared. All experiments are based on Deeplabv3+, benefiting from its balanced performance and computational complexity. GeoAgent was compared with the following strategies:

**Local only**: This baseline policy uses the traditional pipeline of FCNs with fixed \(512\times 512\) local patches as input.
**Context only**: In this policy, the datasets were downsampled at fixed scales of 2-6 times, and the \(512\times 512\) downsampled context patches were directly fed into the FCNs.

**Single-branch**: Although the SCA module is used to select the appropriate patch scale, only one branch, a normal segmentation network, is used to process the multi-scale image patches. This policy is similar to RAZN [12].

**Fixed scale**: The local and context branches were trained with the local image patches and context thumbnails, but the scale of the context branch is fixed to 2-6. This policy is similar to the fixed-scale methods GLNet [6] and WiCoNet [11].

**Random scale**: In this policy, the local and context branches were both activated, but the scales were randomly selected. Actions were randomly performed five times to report the mean and variance. This is to verify that DRL-based scale selection gives better results than random scale selection. When testing, the model performs inference at all scales, and the obtained probabilities are averaged to obtain the final results.

The results for the WUSU and FBP datasets are shown in Fig. 4. The segmentation accuracy of the simple downsampled thumbnails decreases significantly as the selected scale increases, which is reasonable because increasingly aggressive downsampling directly destroys information in the images. The global information introduced at a fixed scale performs better but negatively affects the final segmentation results in specific cases (e.g., the fixed 2\(\times\) scale on the FBP dataset). Moreover, the optimal fixed scales differ across datasets. The random scale policy performs better, with a significantly higher final score than the baseline using only local information. Our proposed GeoAgent, in contrast, dynamically decides the scale of each patch and achieves the best performance.

### Sensitivity Analysis of SCA

In this section, we investigate the impact of various reinforcement learning methods and backbones on the policy network of GeoAgent. As the FCN and DRL algorithms of the GeoAgent framework are interchangeable, we train all methods compared in Section 4.2 using different DRL algorithms on each dataset. In addition to the A2C algorithm used above [26], two representative algorithms, Deep Q-Network (DQN) [28] and Proximal Policy Optimization (PPO) [34], are employed for comparison. The means and standard deviations of the average episode _rewards_ during training with different DRL algorithms on each dataset are shown in Fig. 5. All DRL algorithms prove effective in learning scale-selection policies on all three datasets. On all datasets, the models converge rapidly due to the relatively small data volume, with A2C converging fastest and ultimately reaching the highest average episode _reward_. A2C thus demonstrates an advantage in convergence speed and final result, although the other DRL methods also perform well.

The models may prioritize short-term returns over the long-term mapping accuracy of the entire image, which is represented by \(\mathcal{R}_{map}\). To address this, the accuracy gain of the different models using GeoAgent is calculated and presented in Table 2, in addition to the average episode returns during training. GeoAgent consistently delivers significant mapping accuracy gains on all datasets. While there are some differences in performance among the DRL methods, they all result in significant accuracy gains.
Despite these differences, the final mapping results are not significantly impacted, highlighting the robustness of the GeoAgent framework to the choice of reinforcement learning algorithm.

Figure 4: Ablation experiment results on different datasets.

Figure 5: Average performance of various DRL methods on different datasets.

### Visualization Analysis

This section analyzes the _actions_ of the SCA, which are recorded by sliding a window through an image and completing an episode. The resulting _action_ maps of GeoAgent models based on A2C and Deeplabv3+ are presented in Fig. 6 for the three datasets. To evaluate the effectiveness of the proposed feature indexing method, the _action_ maps of the policy network without this module are compared. In that case, the thumbnail and position mask _states_ are concatenated to form a multi-channel image, which is then fed to the CNN for feature extraction and _action_ probability output.

The results indicate that GeoAgent can adaptively select different context scales for different _states_. For large-scale objects, such as rivers and lakes, a larger scale is chosen to obtain more background information, owing to the lack of long-range spatial relationships at smaller observation scales. The SCA mainly takes larger magnification for large-scale natural objects, while smaller scales or even no zooming are chosen for artificial features that can be easily identified. Without the feature indexing module, the policy network finds it difficult to locate the position of the current _state_ on the whole thumbnail, as the position mask is fed in only as a channel of the observation and the thumbnails of different _states_ are identical. Consequently, the policy network without feature indexing takes the same _action_ for all _states_ of the whole image. In the two examples of large-scale objects on the GID and FBP datasets, the conservative _action_ of taking the largest scale for the whole image is chosen by the _agent_ to guarantee a higher average expected return. This outcome highlights the effectiveness of the proposed feature indexing method.

In addition, to evaluate the generalization of GeoAgent, we trained an _agent_ with RGB input on the WUSU dataset and performed inference on unseen WorldView-based datasets, namely LoveDA [42], xView [20], and DeepGlobe [8]. The _action_ maps in Fig. 7 demonstrate GeoAgent's ability to learn _action_ strategies that generalize well across different datasets. GeoAgent consistently chooses larger scales for big objects to understand their context, while it selects localized scales for smaller objects, showing its ability to focus on detail. This pattern is observed regardless of the dataset's origin, demonstrating effective generalization.

## 5 Conclusion

In this paper, we propose a deep reinforcement learning-based scale-adaptive semantic segmentation network called GeoAgent, breaking the sliding-window size limitation of conventional segmentation networks. Experiments on three datasets show that the proposed GeoAgent achieves state-of-the-art results on the semantic segmentation task for high-resolution satellite images.

## 6 Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grant No. 42325105 and 42071350, and by LIESMARS Special Research Funding.
\begin{table}
\begin{tabular}{c c c c}
\hline
Methods & WUSU & GID & FBP \\
\hline
DQN [28] & 13.85\(\pm\)1.9 & 15.63\(\pm\)3.0 & **7.76\(\pm\)3.0** \\
PPO [34] & 13.51\(\pm\)1.8 & 15.77\(\pm\)3.0 & 7.76\(\pm\)3.0 \\
A2C [26] & **13.93\(\pm\)1.7** & **16.54\(\pm\)3.1** & 7.70\(\pm\)2.9 \\
\hline
\end{tabular}
\end{table}
Table 2: Accuracy improvement of various FCNs with different DRL methods on the three datasets.

Figure 6: Action map visualization of GeoAgent on different datasets.

Figure 7: Action visualization of GeoAgent on unseen datasets.